Visual Reconstruction and Localization-Based Robust Robotic 6-DoF Grasping in the Wild


Bibliographic Details
Main Authors: Ji Liang, Jiguang Zhang, Bingbing Pan, Shibiao Xu, Guangheng Zhao, Ge Yu, Xiaopeng Zhang
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9427547/
Description
Summary: Intelligent grasping requires that a manipulator be able to grasp objects with a high degree of freedom in a wild (unstructured) environment. Due to their limited ability to perceive targets and environments, most industrial robots are restricted to top-down 4-DoF grasping. In this work, we propose a novel low-cost coarse-to-fine robotic grasping framework. First, we design a global-localization-based environment perception method that enables the manipulator to roughly and automatically locate the workspace. Then, constrained by this initial localization, a 3D point cloud reconstruction based 6-DoF pose estimation method is proposed so that the manipulator can further finely locate the grasping target. Finally, our framework realizes the full function of visual 6-DoF robotic grasping, including two different visual servoing and grasp planning strategies for grasping different objects. It can also integrate various state-of-the-art 6-DoF pose estimation algorithms to facilitate practical grasping applications and research. Experimental results show that our method achieves autonomous robotic grasping with a high degree of freedom in an unknown environment. In particular, for objects with occlusion, singular shape, or small scale, our method still maintains robust grasping.
ISSN: 2169-3536
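
To make the coarse-to-fine pipeline described in the abstract more concrete, below is a minimal, illustrative Python sketch. The paper does not publish code, so every class and function name here (Pose6DoF, coarse_localize_workspace, fine_estimate_pose, plan_grasp) is a hypothetical placeholder, and the geometry is deliberately simplified (bounding-box workspace localization, centroid-as-pose). The sketch only mirrors the structure of the framework: coarse workspace localization, fine 6-DoF pose estimation from a point cloud, and grasp planning toward the estimated pose.

# Illustrative sketch only: the paper does not publish code, and every function
# and class name below is hypothetical. It mirrors the coarse-to-fine structure
# described in the abstract: (1) coarse localization of the workspace,
# (2) point-cloud-based 6-DoF pose estimation of the target,
# (3) grasp planning toward the estimated pose.
from dataclasses import dataclass
import numpy as np


@dataclass
class Pose6DoF:
    """A rigid-body pose: 3x3 rotation matrix and 3-vector translation."""
    rotation: np.ndarray      # shape (3, 3)
    translation: np.ndarray   # shape (3,)


def coarse_localize_workspace(scene_cloud: np.ndarray) -> np.ndarray:
    """Coarse stage: return an axis-aligned bounding box of the workspace.

    Placeholder: simply take the min/max of the scene points; the paper's
    method uses global-localization-based environment perception instead.
    """
    return np.stack([scene_cloud.min(axis=0), scene_cloud.max(axis=0)])


def fine_estimate_pose(target_cloud: np.ndarray, workspace_bbox: np.ndarray) -> Pose6DoF:
    """Fine stage: estimate a 6-DoF pose of the target inside the workspace.

    Placeholder: use the cloud centroid as translation and identity rotation;
    a real system would plug in a reconstruction / pose-estimation algorithm.
    """
    lo, hi = workspace_bbox
    inside = target_cloud[np.all((target_cloud >= lo) & (target_cloud <= hi), axis=1)]
    centroid = inside.mean(axis=0) if len(inside) else target_cloud.mean(axis=0)
    return Pose6DoF(rotation=np.eye(3), translation=centroid)


def plan_grasp(pose: Pose6DoF, approach_offset: float = 0.10) -> np.ndarray:
    """Return a pre-grasp waypoint offset along the gripper approach axis."""
    approach_dir = pose.rotation[:, 2]   # assumption: z-axis is the approach direction
    return pose.translation - approach_offset * approach_dir


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(-0.5, 0.5, size=(2000, 3))                      # synthetic scene cloud (meters)
    target = rng.normal(loc=[0.1, 0.0, 0.2], scale=0.02, size=(300, 3)) # synthetic target cloud

    bbox = coarse_localize_workspace(scene)
    pose = fine_estimate_pose(target, bbox)
    waypoint = plan_grasp(pose)
    print("estimated target position:", np.round(pose.translation, 3))
    print("pre-grasp waypoint:       ", np.round(waypoint, 3))

In a real system, the placeholder bodies would be replaced by the paper's global localization and point cloud reconstruction modules, or by any off-the-shelf 6-DoF pose estimator; this plug-in style of integration is exactly what the abstract says the framework supports.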