Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep-Reinforcement-Learning Approach
Industrial robot manipulators are playing a significant role in modern manufacturing industries. Though peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement-learning (RL) methods have proven to be successful in autonomously solving manipulation tasks. However, RL is still not widely adopted in real robotic systems because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method to solve peg-in-hole tasks with hole-position uncertainty. We propose the use of an off-policy, model-free reinforcement-learning method, and we bootstrapped training by using several transfer-learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated in contact-rich insertion tasks in a variety of environments.
Main Authors: | Cristian C. Beltran-Hernandez, Damien Petit, Ixchel G. Ramirez-Alpizar, Kensuke Harada |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-10-01 |
Series: | Applied Sciences |
Subjects: | reinforcement learning, compliance control, robotic assembly, sim2real, domain randomization |
Online Access: | https://www.mdpi.com/2076-3417/10/19/6923 |
author | Cristian C. Beltran-Hernandez, Damien Petit, Ixchel G. Ramirez-Alpizar, Kensuke Harada |
author_sort | Cristian C. Beltran-Hernandez |
collection | DOAJ |
description | Industrial robot manipulators are playing a significant role in modern manufacturing industries. Though peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement-learning (RL) methods have proven to be successful in autonomously solving manipulation tasks. However, RL is still not widely adopted in real robotic systems because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method to solve peg-in-hole tasks with hole-position uncertainty. We propose the use of an off-policy, model-free reinforcement-learning method, and we bootstrapped training by using several transfer-learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated in contact-rich insertion tasks in a variety of environments. |
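The abstract names the ingredients of the framework (an off-policy, model-free RL policy, domain randomization of the hole pose, and variable compliance on top of a position-controlled arm) but not their concrete form. The Python sketch below is a minimal illustration under assumed names and parameters, not the authors' implementation: `randomize_hole_pose`, `admittance_step`, and `policy` are hypothetical, the gains, noise magnitudes, and contact model are made up, and the fixed-action `policy` merely stands in for a learned actor. It only shows how a policy that outputs both a motion command and per-axis stiffness could be combined with randomized hole positions and an admittance-style layer so a position-controlled robot stays compliant during contact.

```python
# Illustrative sketch only -- not the paper's implementation.
import numpy as np

RNG = np.random.default_rng(0)

def randomize_hole_pose(nominal_xyz, pos_noise_m=0.002):
    """Domain randomization: perturb the nominal hole position (hypothetical
    noise magnitude) so training covers hole-position uncertainty."""
    return np.asarray(nominal_xyz) + RNG.uniform(-pos_noise_m, pos_noise_m, size=3)

def admittance_step(x, x_cmd, f_ext, stiffness, damping, dt=0.002):
    """One step of a simplified admittance law: the commanded pose is softened
    in proportion to the measured contact force, then tracked as a position
    target. x, x_cmd, f_ext are 3-vectors; stiffness/damping are per-axis."""
    # Displacement the virtual spring allows under the external force.
    compliance_offset = f_ext / np.maximum(stiffness, 1e-6)
    x_ref = x_cmd - compliance_offset
    # First-order tracking toward the compliant reference (stands in for the
    # robot's internal position controller).
    return x + damping * (x_ref - x) * dt

def policy(observation):
    """Placeholder for a learned off-policy RL actor. It returns a small
    downward motion and per-axis stiffness, softer along the insertion axis."""
    delta_pose = np.array([0.0, 0.0, -0.001])     # small insertion step (m)
    stiffness = np.array([500.0, 500.0, 200.0])   # N/m (illustrative values)
    return delta_pose, stiffness

# Minimal rollout: peg above a randomized hole; a fake force appears on contact.
hole = randomize_hole_pose([0.40, 0.00, 0.10])
x = np.array([0.40, 0.00, 0.15])                  # current tool position (m)
damping = np.array([40.0, 40.0, 40.0])
for _ in range(200):
    obs = np.concatenate([x, hole - x])
    delta, k = policy(obs)
    # Toy contact model: pushes back once the peg reaches the hole surface.
    f_ext = np.array([0.0, 0.0, max(0.0, 10.0 * (hole[2] - x[2]))])
    x = admittance_step(x, x + delta, f_ext, k, damping)
print("final tool position:", np.round(x, 4))
```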
first_indexed | 2024-03-10T15:53:24Z |
format | Article |
id | doaj.art-e82bdbe9a9634d56a17d101e6814b7a2 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-10T15:53:24Z |
publishDate | 2020-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-e82bdbe9a9634d56a17d101e6814b7a2. Beltran-Hernandez, Cristian C.; Petit, Damien; Ramirez-Alpizar, Ixchel G.; Harada, Kensuke. "Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep-Reinforcement-Learning Approach." Applied Sciences (MDPI AG), ISSN 2076-3417, vol. 10, no. 19, art. 6923, 2020-10-01. DOI: 10.3390/app10196923. Affiliations: Graduate School of Engineering Science, Osaka University, Osaka 560-8531, Japan (Beltran-Hernandez, Petit, Harada); Automation Research Team, Industrial CPS Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tokyo 135-0064, Japan (Ramirez-Alpizar). Keywords: reinforcement learning, compliance control, robotic assembly, sim2real, domain randomization. https://www.mdpi.com/2076-3417/10/19/6923 |
title | Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep-Reinforcement-Learning Approach |
topic | reinforcement learning, compliance control, robotic assembly, sim2real, domain randomization |
url | https://www.mdpi.com/2076-3417/10/19/6923 |