Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions

HRI ’21 Companion, March 8–11, 2021, Boulder, CO, USA

Bibliographic Details
Main Authors: DePalma, Nicholas, Smith, H, Chernova, Sonia, Hodgins, Jessica
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: English
Published: ACM | Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021
Online Access: https://hdl.handle.net/1721.1/158175
Abstract: While recent work on gesture synthesis in the agent and robot literature has treated gesture as co-speech, and thus dependent on verbal utterances, we present evidence that gesture may leverage model context (i.e., the navigational task) and is not solely dependent on the verbal utterance. This effect is particularly evident within ambiguous verbal utterances. Decoupling this dependency may allow future systems to synthesize gestures that clarify an ambiguous verbal utterance, while enabling research toward a better understanding of the semantics of gesture. We bring together evidence from our own experience in this domain that allows us to see, for the first time, what kinds of end-to-end models need to be developed to synthesize gesture for one-shot interactions while still preserving user outcomes and allowing for ambiguous utterances by the robot. We discuss these issues within the context of "cardinal direction gesture plans," which represent instructions that refer to the actions the human must follow in the future.

Type: Conference Paper
Published: 2021-03-08
ISBN: 978-1-4503-8290-8
DOI: https://doi.org/10.1145/3434074.3447223
License: Creative Commons Attribution (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Deposited in DSpace@MIT: 2025-02-05
Citation: DePalma, Nicholas, Smith, H, Chernova, Sonia and Hodgins, Jessica. 2021. "Toward a One-interaction Data-driven Guide: Putting co-Speech Gesture Evidence to Work for Ambiguous Route Instructions." Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery.