When Text and Speech are Not Enough: A Multimodal Dataset of Collaboration in a Situated Task
Modeling the information exchanged in real human-human interactions requires more than speech or text alone; considering those modalities in isolation leaves out many critical channels. The channels contributing to the “making of sense” in human-human interactions include but are not limited to gesture, speech, user-interaction modeling,...
Main Authors: Ibrahim Khebour, Richard Brutti, Indrani Dey, Rachel Dickler, Kelsey Sikes, Kenneth Lai, Mariah Bradford, Brittany Cates, Paige Hansen, Changsoo Jung, Brett Wisniewski, Corbyn Terpstra, Leanne Hirshfield, Sadhana Puntambekar, Nathaniel Blanchard, James Pustejovsky, Nikhil Krishnaswamy
Format: Article
Language: English
Published: Ubiquity Press, 2024-01-01
Series: Journal of Open Humanities Data
Online Access: https://account.openhumanitiesdata.metajnl.com/index.php/up-j-johd/article/view/168
Similar Items
- Affordance embeddings for situated language understanding
  by: Nikhil Krishnaswamy, et al.
  Published: (2022-09-01)
- Grounding human-object interaction to affordance behavior in multimodal datasets
  by: Alexander Henlein, et al.
  Published: (2023-01-01)
- What is Enough and is Enough, Enough?
  by: Kaustubh D Patel
  Published: (2019-01-01)
- Enough is Enough
  by: Bill Cohen, et al.
  Published: (2014-11-01)
- Hand injury prevention in India: Are we doing enough?
  by: Nikhil Panse, et al.
  Published: (2011-09-01)