6D Pose Estimation of Transparent Object From Single RGB Image for Robotic Manipulation

Grasping and manipulating transparent objects with a robot is a challenge in robot vision. To successfully perform robotic grasping, 6D object pose estimation is needed. However, transparent objects are difficult to recognize because their appearance varies with the background, and modern 3D sensors cannot collect reliable depth data from transparent surfaces, which are translucent, refractive, and specular. To address these challenges, we propose a 6D pose estimation method for transparent objects aimed at robotic manipulation. Given a single RGB image of transparent objects, 2D keypoints are estimated using a deep neural network. The PnP algorithm then takes the camera intrinsics, the object model size, and the keypoints as inputs to estimate the 6D pose of the object. Finally, the predicted poses of the transparent objects are used for grasp planning. Our experiments demonstrate that our picking system can grasp transparent objects against different backgrounds. To the best of our knowledge, this is the first time a robot has grasped transparent objects using only a single RGB image. Furthermore, the experiments show that our method outperforms 6D pose estimation baselines and generalizes to real-world images.
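
The description above outlines a keypoint-plus-PnP pipeline. As an illustration only (not the authors' released code), here is a minimal sketch of the PnP step, assuming the network predicts the eight corners of the object's 3D bounding box in the image (the exact keypoint layout is an assumption) and using OpenCV's cv2.solvePnP with known camera intrinsics and object model size.

```python
# Minimal sketch of the PnP step described in the abstract.
# Assumption: the deep network outputs the 2D pixel locations of the
# 8 corners of the object's 3D bounding box; the real keypoint layout
# used by the authors may differ.
import numpy as np
import cv2


def estimate_pose(keypoints_2d, model_size, camera_matrix, dist_coeffs=None):
    """Recover the 6D pose (rotation + translation) of an object from
    predicted 2D keypoints via the PnP algorithm.

    keypoints_2d  : (8, 2) array of pixel coordinates from the network.
    model_size    : (width, height, depth) of the object's bounding box in meters.
    camera_matrix : (3, 3) camera intrinsics.
    """
    w, h, d = model_size
    # 3D corners of the object-centered bounding box (object frame).
    keypoints_3d = np.array([
        [-w / 2, -h / 2, -d / 2],
        [-w / 2, -h / 2,  d / 2],
        [-w / 2,  h / 2, -d / 2],
        [-w / 2,  h / 2,  d / 2],
        [ w / 2, -h / 2, -d / 2],
        [ w / 2, -h / 2,  d / 2],
        [ w / 2,  h / 2, -d / 2],
        [ w / 2,  h / 2,  d / 2],
    ], dtype=np.float64)

    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image

    ok, rvec, tvec = cv2.solvePnP(
        keypoints_3d,
        np.asarray(keypoints_2d, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP failed to converge")

    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec              # pose of the object in the camera frame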

Bibliographic Details
Main Authors: Munkhtulga Byambaa, Gou Koutaki, Lodoiravsal Choimaa
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Subjects: Pose estimation, synthetic data, robot picking, transparent object
Online Access: https://ieeexplore.ieee.org/document/9931681/
DOI: 10.1109/ACCESS.2022.3217811
ISSN: 2169-3536
Published in: IEEE Access, vol. 10, pp. 114897-114906, 2022 (IEEE Xplore document 9931681)
Author affiliations:
Munkhtulga Byambaa (ORCID: 0000-0002-3560-9583), Department of Computer Science and Electrical Engineering, Kumamoto University, Kumamoto, Japan
Gou Koutaki (ORCID: 0000-0002-3414-1085), Department of Computer Science and Electrical Engineering, Kumamoto University, Kumamoto, Japan
Lodoiravsal Choimaa (ORCID: 0000-0002-1773-1059), Machine Intelligence Laboratory, National University of Mongolia, Ulaanbaatar, Mongolia