Aligning Human and Robot Representations
HRI ’24, March 11–14, 2024, Boulder, CO, USA
Main Authors: | Bobu, Andreea; Peng, Andi; Agrawal, Pulkit; Shah, Julie A; Dragan, Anca D. |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Department of Electrical Engineering and Computer Science; Department of Aeronautics and Astronautics |
Format: | Article |
Language: | English |
Published: | ACM, 2024 |
Online Access: | https://hdl.handle.net/1721.1/154054 |
Abstract: | To act in the world, robots rely on a representation of salient task aspects: for example, to carry a coffee mug, a robot may consider movement efficiency or mug orientation in its behavior. However, if we want robots to act for and with people, their representations must be not just functional but also reflective of what humans care about, i.e., they must be aligned. We observe that current learning approaches suffer from representation misalignment, where the robot's learned representation does not capture the human's representation. We suggest that because humans are the ultimate evaluators of robot performance, we must explicitly focus our efforts on aligning learned representations with humans, in addition to learning the downstream task. We advocate that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment. We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism. We conclude by suggesting future directions for exploring open challenges. |
Citation: | Bobu, Andreea, Peng, Andi, Agrawal, Pulkit, Shah, Julie A, and Dragan, Anca D. 2024. "Aligning Human and Robot Representations." In HRI '24, March 11–14, 2024, Boulder, CO, USA. ACM. |
DOI: | 10.1145/3610977.3634987 |
ISBN: | 979-8-4007-0322-5 |
License: | Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/) |