Visual classification of co-verbal gestures for gesture understanding

Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001.

Bibliographic Details
Main Author: Campbell, Lee Winston
Other Authors: Aaron F. Bobick.
Format: Thesis
Language: eng
Published: Massachusetts Institute of Technology, 2005
Subjects: Architecture. Program in Media Arts and Sciences.
Online Access: http://hdl.handle.net/1721.1/8707
Description:
Includes bibliographical references (leaves 86-92).

Abstract:
A person's communicative intent can be better understood, by either a human or a machine, if the person's gestures are understood. This thesis demonstrates an expansion of both the range of co-verbal gestures a machine can identify and the range of communicative intents it can infer. We develop an automatic system that uses real-time video as sensory input and then segments, classifies, and responds to co-verbal gestures made by users in real time as they converse with a synthetic character known as REA, which is being developed in parallel by Justine Cassell and her students at the MIT Media Lab. A set of 670 natural gestures, videotaped and visually tracked in the course of conversational interviews and then hand-segmented and annotated according to a widely used gesture classification scheme, is used in an offline process to train Hidden Markov Model (HMM) classifiers. A number of feature sets are extracted and tested in the offline training process, and the best performer is employed in an online HMM segmenter and classifier that requires no encumbering attachments to the user. Modifications made to the REA system enable REA to respond to the user's beat and deictic gestures, as well as to turn-taking requests the user may convey in gesture. The recognition results obtained are far above chance, but too low for use in a production recognition system. The results provide a measure of validity for the gesture categories chosen, and they provide positive evidence for an appealing but difficult-to-prove proposition: to the extent that a machine can recognize and use these categories of gestures to infer information not present in the words spoken, there is exploitable complementary information in the gesture stream.

Physical Description: 92 leaves
Rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See http://dspace.mit.edu/handle/1721.1/7582 for inquiries about permission.
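The abstract describes classifying gestures by training one HMM per gesture category and scoring new observation sequences against each model. As a minimal illustration of that classify-by-best-likelihood scheme only (the toy models, the binary "hand moving" feature, and the category names here are assumptions for the example, not the thesis's actual feature sets or trained models), a discrete-HMM classifier using the forward algorithm might look like:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm.
    pi: initial state probabilities; A: state transition matrix;
    B: per-state emission probabilities over observation symbols."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[t] * A[t][s] for t in range(n))
                 for s in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Return the gesture category whose HMM scores the sequence highest."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Two toy 2-state models over a single binary symbol (0 = hand still,
# 1 = hand moving) -- purely illustrative parameters.
models = {
    "beat":    ([0.5, 0.5], [[0.6, 0.4], [0.4, 0.6]], [[0.1, 0.9], [0.2, 0.8]]),
    "deictic": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.7, 0.3]]),
}
print(classify([1, 1, 0, 1, 1], models))  # mostly-moving sequence -> "beat"
```

In the thesis's setting the models would instead be trained on the hand-annotated corpus of 670 gestures, with continuous visual-tracking features rather than a binary symbol.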