Multimodal interaction with an autonomous forklift
We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate…
Main Authors: | Correa, Andrew Thomas, Walter, Matthew R., Fletcher, Luke Sebastian, Glass, James R., Teller, Seth, Davis, Randall |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | en_US |
Published: | Institute of Electrical and Electronics Engineers (IEEE), 2012 |
Online Access: | http://hdl.handle.net/1721.1/69957 https://orcid.org/0000-0002-3097-360X https://orcid.org/0000-0001-5232-7281 |
_version_ | 1811093394568511488 |
---|---|
author | Correa, Andrew Thomas Walter, Matthew R. Fletcher, Luke Sebastian Glass, James R. Teller, Seth Davis, Randall |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
author_facet | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Correa, Andrew Thomas Walter, Matthew R. Fletcher, Luke Sebastian Glass, James R. Teller, Seth Davis, Randall |
author_sort | Correa, Andrew Thomas |
collection | MIT |
description | We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate. In contrast, our interface uses live and synthesized camera images from the forklift as a canvas, and augments them with object and obstacle information from the world. This connection enables users to "draw on the world," enabling a simpler set of sketched gestures. Our interface supports commands that include summoning the forklift and directing it to lift, transport, and place loads of palletized cargo. We describe an exploratory evaluation of the system designed to identify areas for detailed study. Our framework incorporates external signaling to interact with humans near the vehicle. The robot uses audible and visual annunciation to convey its current state and intended actions. The system also provides seamless autonomy handoff: any human can take control of the robot by entering its cabin, at which point the forklift can be operated manually until the human exits. |
first_indexed | 2024-09-23T15:44:34Z |
format | Article |
id | mit-1721.1/69957 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T15:44:34Z |
publishDate | 2012 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
record_format | dspace |
spelling | mit-1721.1/699572022-10-02T03:46:43Z Multimodal interaction with an autonomous forklift Correa, Andrew Thomas Walter, Matthew R. Fletcher, Luke Sebastian Glass, James R. Teller, Seth Davis, Randall Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science Davis, Randall Correa, Andrew Thomas Walter, Matthew R. Fletcher, Luke Sebastian Glass, James R. Teller, Seth Davis, Randall We describe a multimodal framework for interacting with an autonomous robotic forklift. A key element enabling effective interaction is a wireless, handheld tablet with which a human supervisor can command the forklift using speech and sketch. Most current sketch interfaces treat the canvas as a blank slate. In contrast, our interface uses live and synthesized camera images from the forklift as a canvas, and augments them with object and obstacle information from the world. This connection enables users to "draw on the world," enabling a simpler set of sketched gestures. Our interface supports commands that include summoning the forklift and directing it to lift, transport, and place loads of palletized cargo. We describe an exploratory evaluation of the system designed to identify areas for detailed study. Our framework incorporates external signaling to interact with humans near the vehicle. The robot uses audible and visual annunciation to convey its current state and intended actions. The system also provides seamless autonomy handoff: any human can take control of the robot by entering its cabin, at which point the forklift can be operated manually until the human exits. United States. Army. Logistics Innovation Agency United States. Army Combined Arms Support Command United States. Dept. of the Air Force (Air Force Contract FA8721-05-C-0002) 2012-04-05T17:34:52Z 2012-04-05T17:34:52Z 2010-04 Article http://purl.org/eprint/type/ConferencePaper 978-1-4244-4893-7 978-1-4244-4892-0 INSPEC Accession Number: 11261828 http://hdl.handle.net/1721.1/69957 Correa, Andrew et al. "Multimodal Interaction with an Autonomous Forklift." IEEE, 2010. 243–250. Web. 5 Apr. 2012. © 2010 Institute of Electrical and Electronics Engineers https://orcid.org/0000-0002-3097-360X https://orcid.org/0000-0001-5232-7281 en_US http://dx.doi.org/10.1109/HRI.2010.5453188 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. application/pdf Institute of Electrical and Electronics Engineers (IEEE) IEEE |
spellingShingle | Correa, Andrew Thomas Walter, Matthew R. Fletcher, Luke Sebastian Glass, James R. Teller, Seth Davis, Randall Multimodal interaction with an autonomous forklift |
title | Multimodal interaction with an autonomous forklift |
title_full | Multimodal interaction with an autonomous forklift |
title_fullStr | Multimodal interaction with an autonomous forklift |
title_full_unstemmed | Multimodal interaction with an autonomous forklift |
title_short | Multimodal interaction with an autonomous forklift |
title_sort | multimodal interaction with an autonomous forklift |
url | http://hdl.handle.net/1721.1/69957 https://orcid.org/0000-0002-3097-360X https://orcid.org/0000-0001-5232-7281 |
work_keys_str_mv | AT correaandrewthomas multimodalinteractionwithanautonomousforklift AT waltermatthewr multimodalinteractionwithanautonomousforklift AT fletcherlukesebastian multimodalinteractionwithanautonomousforklift AT glassjamesr multimodalinteractionwithanautonomousforklift AT tellerseth multimodalinteractionwithanautonomousforklift AT davisrandall multimodalinteractionwithanautonomousforklift |