RoboTwin Metaverse Platform for Robotic Random Bin Picking
Although vision-guided robotic picking systems are commonly used in factory environments, achieving rapid changeover for diverse workpiece types can still be challenging because the manual redefinition of vision software and tedious collection and annotation of datasets consistently hinder the automation process.
Main Authors: | Cheng-Han Tsai, Eduin E. Hernandez, Xiu-Wen You, Hsin-Yi Lin, Jen-Yuan Chang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-07-01 |
Series: | Applied Sciences |
Subjects: | metaverse; auto annotation; robotic random bin picking |
Online Access: | https://www.mdpi.com/2076-3417/13/15/8779 |
_version_ | 1797587080343191552 |
---|---|
author | Cheng-Han Tsai; Eduin E. Hernandez; Xiu-Wen You; Hsin-Yi Lin; Jen-Yuan Chang |
author_facet | Cheng-Han Tsai; Eduin E. Hernandez; Xiu-Wen You; Hsin-Yi Lin; Jen-Yuan Chang |
author_sort | Cheng-Han Tsai |
collection | DOAJ |
description | Although vision-guided robotic picking systems are commonly used in factory environments, achieving rapid changeover for diverse workpiece types can still be challenging because the manual redefinition of vision software and tedious collection and annotation of datasets consistently hinder the automation process. In this paper, we present a novel approach for rapid workpiece changeover in a vision-guided robotic picking system using the proposed RoboTwin and FOVision systems. The RoboTwin system offers a realistic metaverse scene that enables tuning robot movements and gripper reactions. Additionally, it automatically generates annotated virtual images for each workpiece’s pickable point. These images serve as training datasets for an AI model and are deployed to the FOVision system, a platform that includes vision and edge computing capabilities for the robotic manipulator. The system achieves an instance segmentation mean average precision of 70% and a picking success rate of over 80% in real-world detection scenarios. The proposed approach can accelerate dataset generation by 80 times compared with manual annotation, which helps to reduce simulation-to-real gap errors and enables rapid line changeover within flexible manufacturing systems in factories. |
first_indexed | 2024-03-11T00:31:13Z |
format | Article |
id | doaj.art-49ee6e0f15fb4e52aaee87363142d987 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-11T00:31:13Z |
publishDate | 2023-07-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-49ee6e0f15fb4e52aaee87363142d987; 2023-11-18T22:37:26Z; eng; MDPI AG; Applied Sciences; 2076-3417; 2023-07-01; 13(15): 8779; 10.3390/app13158779; RoboTwin Metaverse Platform for Robotic Random Bin Picking; Cheng-Han Tsai (Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan); Eduin E. Hernandez (Mechanical and Mechatronics System Research Labs (MMSL), Industrial Technology Research Institute (ITRI), Hsinchu 310401, Taiwan); Xiu-Wen You (MMSL, ITRI, Hsinchu 310401, Taiwan); Hsin-Yi Lin (MMSL, ITRI, Hsinchu 310401, Taiwan); Jen-Yuan Chang (Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan); https://www.mdpi.com/2076-3417/13/15/8779; metaverse; auto annotation; robotic random bin picking |
spellingShingle | Cheng-Han Tsai; Eduin E. Hernandez; Xiu-Wen You; Hsin-Yi Lin; Jen-Yuan Chang; RoboTwin Metaverse Platform for Robotic Random Bin Picking; Applied Sciences; metaverse; auto annotation; robotic random bin picking |
title | RoboTwin Metaverse Platform for Robotic Random Bin Picking |
title_full | RoboTwin Metaverse Platform for Robotic Random Bin Picking |
title_fullStr | RoboTwin Metaverse Platform for Robotic Random Bin Picking |
title_full_unstemmed | RoboTwin Metaverse Platform for Robotic Random Bin Picking |
title_short | RoboTwin Metaverse Platform for Robotic Random Bin Picking |
title_sort | robotwin metaverse platform for robotic random bin picking |
topic | metaverse; auto annotation; robotic random bin picking |
url | https://www.mdpi.com/2076-3417/13/15/8779 |
work_keys_str_mv | AT chenghantsai robotwinmetaverseplatformforroboticrandombinpicking AT eduinehernandez robotwinmetaverseplatformforroboticrandombinpicking AT xiuwenyou robotwinmetaverseplatformforroboticrandombinpicking AT hsinyilin robotwinmetaverseplatformforroboticrandombinpicking AT jenyuanchang robotwinmetaverseplatformforroboticrandombinpicking |
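The abstract above notes that RoboTwin automatically generates annotated virtual images that serve as training data for an instance segmentation model. The snippet below is only a minimal, generic sketch of that auto-annotation idea, assuming a simulator that can render a per-pixel instance-ID mask for each virtual image; the function name `masks_to_coco_annotations`, the single `CATEGORY_ID` class, and the COCO-style output format are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the paper's implementation): converting a per-pixel
# instance-ID mask rendered by a simulator into COCO-style annotations.
# `label_image` and `CATEGORY_ID` are illustrative assumptions.
import json
import numpy as np

CATEGORY_ID = 1  # hypothetical single workpiece class

def masks_to_coco_annotations(label_image: np.ndarray, image_id: int):
    """Build COCO-style annotation dicts from an instance-ID image.

    `label_image` is an H x W integer array where 0 is background and
    each positive value marks the pixels of one rendered workpiece.
    """
    annotations = []
    ann_id = 0
    for instance_id in np.unique(label_image):
        if instance_id == 0:          # skip background
            continue
        ann_id += 1
        mask = label_image == instance_id
        ys, xs = np.nonzero(mask)
        x0, y0 = int(xs.min()), int(ys.min())
        w, h = int(xs.max() - x0 + 1), int(ys.max() - y0 + 1)
        annotations.append({
            "id": ann_id,
            "image_id": image_id,
            "category_id": CATEGORY_ID,
            "bbox": [x0, y0, w, h],   # COCO bbox: [x, y, width, height]
            "area": int(mask.sum()),
            "iscrowd": 0,
        })
    return annotations

if __name__ == "__main__":
    # Toy example: a 6x8 rendered mask with two fake workpiece instances.
    label_image = np.zeros((6, 8), dtype=np.int32)
    label_image[1:3, 1:4] = 1
    label_image[3:5, 5:8] = 2
    print(json.dumps(masks_to_coco_annotations(label_image, image_id=0), indent=2))
```

Because the simulator already knows which pixels belong to which object, annotations of this kind can be emitted for free with every rendered frame, which is the general mechanism behind the dataset-generation speedup the abstract reports.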