Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu.
Main Authors: | Zeng, Andy; Song, Shuran; Yu, Kuan-Ting; Donlon, Elliott S; Hogan, Francois R.; Bauza Villalonga, Maria; Ma, Daolin; Taylor, Orion Thomas; Liu, Melody; Romo, Eudald; Fazeli, Nima; Alet, Ferran; Chavan Dafle, Nikhil Narsingh; Holladay, Rachel; Morena, Isabella; Qu Nair, Prem; Green, Druck; Taylor, Ian; Liu, Weber; Funkhouser, Thomas; Rodriguez, Alberto |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers (IEEE), 2020 |
Online Access: | https://hdl.handle.net/1721.1/126872 |
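The abstract describes a two-stage pipeline: category-agnostic affordance prediction that selects among four grasping primitive behaviors, followed by cross-domain matching of the observed picked object against product images. The sketch below illustrates only that control flow; it is not the authors' implementation (their released code is at http://arc.cs.princeton.edu), and the function names, primitive names, and random placeholder predictors are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four grasping primitive behaviors, per the abstract; these names are
# assumptions for illustration, not the paper's terminology.
PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def predict_affordances(rgbd_view, primitive):
    """Placeholder for the category-agnostic affordance predictor: returns a
    dense per-pixel success score for `primitive` over the input view."""
    return rng.random(rgbd_view.shape[:2])

def embed(image):
    """Placeholder for the cross-domain embedding that maps observed images
    and product images into a shared feature space."""
    return rng.random(128)

def pick_and_recognize(rgbd_view, observed_image, product_images):
    # Stage 1 (multi-affordance grasping): score the scene under every
    # primitive and pick the primitive/pixel with the highest affordance.
    maps = {p: predict_affordances(rgbd_view, p) for p in PRIMITIVES}
    primitive = max(maps, key=lambda p: maps[p].max())
    pick_pixel = np.unravel_index(np.argmax(maps[primitive]),
                                  maps[primitive].shape)

    # Stage 2 (cross-domain matching): after executing the pick and imaging
    # the grasped object, label it with the nearest product image in the
    # shared feature space.
    obs = embed(observed_image)
    label = min(product_images,
                key=lambda name: np.linalg.norm(obs - embed(product_images[name])))
    return primitive, pick_pixel, label

# Toy usage with a random 480x640 RGB-D view and two "product images".
scene = rng.random((480, 640, 4))
catalog = {"duct_tape": rng.random((64, 64, 3)),
           "scissors": rng.random((64, 64, 3))}
print(pick_and_recognize(scene, catalog["scissors"], catalog))
```

In the real system the two placeholder functions are learned networks; the point here is only the flow the abstract names: dense affordance maps ranked across primitives, then nearest-neighbor matching against product images, which is why novel objects need no additional training data.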
author | Zeng, Andy; Song, Shuran; Yu, Kuan-Ting; Donlon, Elliott S; Hogan, Francois R.; Bauza Villalonga, Maria; Ma, Daolin; Taylor, Orion Thomas; Liu, Melody; Romo, Eudald; Fazeli, Nima; Alet, Ferran; Chavan Dafle, Nikhil Narsingh; Holladay, Rachel; Morena, Isabella; Qu Nair, Prem; Green, Druck; Taylor, Ian; Liu, Weber; Funkhouser, Thomas; Rodriguez, Alberto |
---|---|
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
format | Article |
id | mit-1721.1/126872 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2020 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Department of Mechanical Engineering |
funding | NSF (Grants IIS-1251217 and VEC 1539014/1539099) |
type | Article (http://purl.org/eprint/type/ConferencePaper) |
isbn | 9781538630815 |
citation | Zeng, Andy et al. "Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching." IEEE International Conference on Robotics and Automation, May 2018, Brisbane, Australia, Institute of Electrical and Electronics Engineers, September 2018. © 2018 IEEE |
doi | http://dx.doi.org/10.1109/icra.2018.8461044 |
conference | IEEE International Conference on Robotics and Automation (ICRA) |
rights | Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/) |
source | arXiv |
title | Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching |
url | https://hdl.handle.net/1721.1/126872 |