A wearable system that learns a kinematic model and finds structure in everyday manipulation by using absolute orientation sensors and a camera

Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.

Bibliographic Details
Main Author: Kemp, Charles C. (Charles Clark), 1972-
Other Authors: Rodney Brooks.
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2006
Subjects: Electrical Engineering and Computer Science
Online Access:http://hdl.handle.net/1721.1/33920
Physical Description: 220 p.
Bibliography: Includes bibliographical references (p. 215-220).
Access and Use: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See http://dspace.mit.edu/handle/1721.1/7582 for inquiries about permission.

Abstract:
This thesis presents Duo, the first wearable system to autonomously learn a kinematic model of the wearer via body-mounted absolute orientation sensors and a head-mounted camera. With Duo, we demonstrate the significant benefits of endowing a wearable system with the ability to sense the kinematic configuration of the wearer's body. We also show that a kinematic model can be autonomously estimated offline from less than an hour of recorded video and orientation data from a wearer performing unconstrained, unscripted household activities within a real, unaltered home environment. We demonstrate that our system for autonomously estimating this kinematic model places very few constraints on the wearer's body, the placement of the sensors, and the appearance of the hand, which, for example, allows it to automatically discover a left-handed kinematic model for a left-handed wearer and to automatically compensate for distinct camera mounts and sensor configurations. Furthermore, we show that this learned kinematic model efficiently and robustly predicts the location of the dominant hand within video from the head-mounted camera, even in situations where vision-based hand detectors would be likely to fail.

Additionally, we show ways in which the learned kinematic model can facilitate highly efficient processing of large databases of first-person experience. Finally, we show that the kinematic model can efficiently direct visual processing so as to acquire a large number of high-quality segments of the wearer's hand and the manipulated objects. In the course of justifying these claims, we present methods for estimating global image motion, segmenting foreground motion, segmenting manipulation events, finding and representing significant hand postures, segmenting visual regions, and detecting visual points of interest with associated shape descriptors. We also describe our architecture and user-level application for machine-augmented annotation and browsing of first-person video and absolute orientations. Additionally, we present a real-time application in which the human and wearable cooperate through tightly integrated behaviors coordinated by the wearable's kinematic perception, and together acquire high-quality visual segments of manipulable objects that interest the wearable.
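The perceptual step at the heart of the abstract, using body-mounted absolute orientation sensors and a kinematic model to predict where the wearer's hand will appear in the head-mounted camera's view, can be illustrated with a minimal sketch. This is not Duo's learned model: it assumes a hand-specified kinematic chain, and every limb length, mounting offset, and camera intrinsic below is an invented placeholder value.

```python
# Minimal sketch (not the thesis's algorithm): predict the pixel location of the
# wearer's wrist in a head-mounted camera, given absolute orientation readings
# for the torso, upper arm, forearm, and head. World frame: y up; camera frame:
# z along the optical axis. All numeric values are assumptions for illustration.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed limb segment vectors in each segment's local frame (metres).
UPPER_ARM = np.array([0.0, -0.30, 0.0])   # shoulder -> elbow
FOREARM   = np.array([0.0, -0.28, 0.0])   # elbow -> wrist

# Assumed fixed offsets in the torso frame (metres).
TORSO_TO_SHOULDER = np.array([0.20, 0.45, 0.0])
TORSO_TO_HEAD_CAM = np.array([0.0, 0.65, 0.10])

# Assumed pinhole intrinsics for the head-mounted camera (640x480 image).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def predict_hand_pixel(q_torso, q_upper_arm, q_forearm, q_head):
    """Chain absolute orientations (scalar-last quaternions in the world frame)
    through the assumed kinematic model and project the wrist into the camera."""
    r_torso = R.from_quat(q_torso)
    r_upper = R.from_quat(q_upper_arm)
    r_fore  = R.from_quat(q_forearm)
    r_head  = R.from_quat(q_head)

    # World-frame positions, taking the torso sensor as the body origin.
    shoulder = r_torso.apply(TORSO_TO_SHOULDER)
    elbow    = shoulder + r_upper.apply(UPPER_ARM)
    wrist    = elbow + r_fore.apply(FOREARM)
    cam_pos  = r_torso.apply(TORSO_TO_HEAD_CAM)

    # Express the wrist in the camera frame and project with a pinhole model.
    p_cam = r_head.inv().apply(wrist - cam_pos)
    if p_cam[2] <= 0:
        return None  # hand is behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Example: arm hanging straight down, head pitched toward the hand.
identity = [0.0, 0.0, 0.0, 1.0]
look_down = R.from_euler('x', 90, degrees=True).as_quat()
print(predict_hand_pixel(identity, identity, identity, look_down))
```

Duo, by contrast, estimates the corresponding kinematic relationships autonomously from recorded video and orientation data rather than taking them as given parameters.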