Paired speech and gesture generation in embodied conversational agents
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Main Author: | Yan, Hao, 1973- |
---|---|
Other Authors: | Justine Cassell |
Format: | Thesis |
Language: | English |
Published: | Massachusetts Institute of Technology, 2012 |
Subjects: | Architecture. Program in Media Arts and Sciences. |
Online Access: | http://hdl.handle.net/1721.1/70733 |
Physical Description: | 71 p. (application/pdf) |
Bibliography: | Includes bibliographical references (p. 68-71). |
Rights: | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See http://dspace.mit.edu/handle/1721.1/7582 for inquiries about permission. |

Abstract:

Using face-to-face conversation as an interface metaphor, an embodied conversational agent is likely to be easier to use and learn than traditional graphical user interfaces. To make a believable agent that has, to some extent, the same social and conversational skills as humans, the embodied conversational agent system must be able to handle user input from different communication modalities, such as speech and gesture, and generate appropriate behaviors in those modalities. In this thesis, I address the problem of paired speech and gesture generation in embodied conversational agents. I propose a real-time generation framework capable of producing a comprehensive description of communicative actions, including speech, gesture, and intonation, in the real-estate domain. The generation of speech, gesture, and intonation is based on the same underlying representation of real-estate properties, discourse information structure, intentional and attentional structures, and a mechanism for updating the common ground between the user and the agent. Algorithms have been implemented to analyze discourse information structure, contrast, and surprising semantic features, which together determine the intonation contour of the speech utterances and where gestures occur. I also investigate, through a correlational study, the role of communicative goals in determining the distribution of semantic features across the speech and gesture modalities.
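The abstract describes, at a high level, how information structure, contrast, and surprise jointly determine accenting and gesture placement. The sketch below is a rough, hypothetical illustration of that kind of decision rule, not code from the thesis; the feature names, accent labels (H*, L+H*), and the specific conditions are assumptions made only for the example.

```python
# Hypothetical sketch (not the thesis's implementation): given semantic
# features of a real-estate property tagged with information-structure roles,
# choose a pitch accent for the spoken part and decide whether a gesture
# should co-occur with it. All labels and thresholds are assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SemanticFeature:
    name: str          # e.g. "spiral_staircase"
    role: str          # "theme" (given/topic) or "rheme" (new information)
    contrastive: bool  # contrasts with something in the prior discourse
    surprising: bool   # unexpected given the current common ground


@dataclass
class CommunicativeAction:
    feature: str
    pitch_accent: Optional[str]  # e.g. "H*", "L+H*", or None (unaccented)
    gesture: bool                # whether a gesture accompanies this feature


def plan_utterance(features: List[SemanticFeature]) -> List[CommunicativeAction]:
    actions = []
    for f in features:
        # Rhematic (new) material receives a H* accent; contrastive themes
        # receive L+H*; everything else stays unaccented.
        if f.role == "rheme":
            accent = "H*"
        elif f.contrastive:
            accent = "L+H*"
        else:
            accent = None
        # Gestures are planned for new content that is surprising or
        # contrastive, so speech and gesture share the semantic load.
        gesture = f.role == "rheme" and (f.surprising or f.contrastive)
        actions.append(CommunicativeAction(f.name, accent, gesture))
    return actions


if __name__ == "__main__":
    description = [
        SemanticFeature("kitchen", role="theme", contrastive=False, surprising=False),
        SemanticFeature("spiral_staircase", role="rheme", contrastive=False, surprising=True),
    ]
    for action in plan_utterance(description):
        print(action)
```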