Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

This article presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses an object-agnostic grasping framework to map from visual observations to actions: inferring dense pixel-wise probability maps of the affordances for four different grasping primitive actions. It then executes the action with the highest affordance and recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional data collection or re-training. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT–Princeton Team system that took first place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu/
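The greedy action-selection step described in the abstract can be made concrete with a minimal sketch. Assuming each primitive's dense affordance map is available as a NumPy array, the policy simply executes the primitive and pixel with the highest predicted affordance. The map values, primitive names, and `select_action` helper below are illustrative assumptions, not the authors' API.

```python
import numpy as np

# Hypothetical affordance maps: one HxW probability map per grasping
# primitive (names are illustrative, e.g., suction or parallel-jaw
# variants). In the real system these come from fully convolutional
# networks; here we draw random values purely for illustration.
H, W = 480, 640
primitives = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]
affordances = {p: np.random.rand(H, W) for p in primitives}

def select_action(affordances):
    """Return the (primitive, pixel, score) with the highest affordance
    across all maps -- the greedy policy described in the abstract."""
    best = None
    for primitive, amap in affordances.items():
        y, x = np.unravel_index(np.argmax(amap), amap.shape)
        score = amap[y, x]
        if best is None or score > best[2]:
            best = (primitive, (y, x), score)
    return best

primitive, (y, x), score = select_action(affordances)
print(f"execute {primitive} at pixel ({y}, {x}), affordance {score:.3f}")
```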

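The cross-domain recognition step likewise reduces to nearest-neighbor matching in a shared feature space. The sketch below stands in a trivial `embed` function for the learned network that maps observed images and product images into a common embedding; the function names and the product-image library are hypothetical placeholders.

```python
import numpy as np

def embed(image):
    """Stand-in for a learned cross-domain embedding network. The real
    system maps observed images and product images into a shared
    feature space; here we just flatten and L2-normalize."""
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

# Hypothetical product-image library: one reference image per object,
# e.g., scraped from the web as the abstract suggests.
names = ["duct_tape", "marbles", "scissors"]
library = {n: np.random.rand(64, 64, 3) for n in names}
library_feats = {n: embed(img) for n, img in library.items()}

def recognize(observed_image, library_feats):
    """Match a grasped object to the product image whose embedding has
    the highest cosine similarity with the observation."""
    q = embed(observed_image)
    return max(library_feats.items(), key=lambda kv: float(q @ kv[1]))[0]

print(recognize(np.random.rand(64, 64, 3), library_feats))
```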

Bibliographic Details
Main Authors: Zeng, Andy; Song, Shuran; Yu, Kuan-Ting; Donlon, Elliott S; Hogan, Francois R.; Bauza Villalonga, Maria; Ma, Daolin; Taylor, Orion Thomas; Liu, Melody; Romo, Eudald; Fazeli, Nima; Alet, Ferran; Chavan Dafle, Nikhil Narsingh; Holladay, Rachel; Morona, Isabella; Nair, Prem Qu; Green, Druck; Taylor, Ian; Liu, Weber; Funkhouser, Thomas; Rodriguez, Alberto
Other Authors: Massachusetts Institute of Technology. Department of Mechanical Engineering
Format: Article
Language: English
Published: SAGE Publications, 2021
Online Access: https://hdl.handle.net/1721.1/130311
Citation: Zeng, Andy et al. "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching." International Journal of Robotics Research (August 2019): 1-16.
Journal: International Journal of Robotics Research
DOI: 10.1177/0278364919868017
ISSN: 0278-3649, 1741-3176
Type: Journal Article
Date Issued: 2019-08
Funding: NSF (Grants IIS-1251217, VEC 1539014/1539099)
License: Creative Commons Attribution-NonCommercial-NoDerivs (http://creativecommons.org/licenses/by-nc-nd/4.0/)