Simulating object handover between collaborative robots
Collaborative robots are adopted in the drive towards Industry 4.0 to automate manufacturing while retaining a human workforce. This area of research is known as human-robot collaboration (HRC) and focuses on understanding the interactions between the robot and a human. During HRC the robot is often programmed to perform a predefined task; however, this is not achievable when working in a dynamic and unstructured environment. To this end, machine learning is commonly employed to train the collaborative robot to autonomously execute a collaborative task. Most current research is concerned with HRC; however, when considering the smart factory of the future, investigating an autonomous collaborative task between two robots is pertinent. In this paper deep reinforcement learning (DRL) is considered to teach two collaborative robots to hand over an object in a simulated environment.
| Main Authors: | van Eden, Beatrice; Botha, Natasha |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | EDP Sciences, 2023-01-01 |
| Series: | MATEC Web of Conferences |
| Online Access: | https://www.matec-conferences.org/articles/matecconf/pdf/2023/15/matecconf_rapdasa2023_04012.pdf |
_version_ | 1797343911574765568 |
author | van Eden, Beatrice; Botha, Natasha |
author_facet | van Eden, Beatrice; Botha, Natasha |
author_sort | van Eden Beatrice |
collection | DOAJ |
description | Collaborative robots are adopted in the drive towards Industry 4.0 to automate manufacturing while retaining a human workforce. This area of research is known as human-robot collaboration (HRC) and focuses on understanding the interactions between the robot and a human. During HRC the robot is often programmed to perform a predefined task; however, this is not achievable when working in a dynamic and unstructured environment. To this end, machine learning is commonly employed to train the collaborative robot to autonomously execute a collaborative task. Most current research is concerned with HRC; however, when considering the smart factory of the future, investigating an autonomous collaborative task between two robots is pertinent. In this paper deep reinforcement learning (DRL) is considered to teach two collaborative robots to hand over an object in a simulated environment. The simulation environment was developed using PyBullet and OpenAI Gym. Three DRL algorithms and three different reward functions were investigated. The results clearly indicated that PPO was the best-performing DRL algorithm, as it provided the highest reward output, indicating that the robots were learning how to perform the task, even though they were not successful. A discrete reward function with reward shaping, to incentivise the cobot to perform the desired actions and incremental goals (picking up the object, lifting the object and transferring the object), provided the overall best performance. |
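The description above names the article's key technique: a discrete reward function with reward shaping toward three incremental goals (picking up, lifting, and transferring the object). The article's record does not state the actual reward values, so the following is a hypothetical Python sketch of how such a stage-based shaped reward could be structured; the bonus magnitudes, the distance-penalty weight, and all function and argument names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a discrete, shaped reward for a two-robot object
# handover task, with incremental-goal bonuses as described in the abstract:
# picking up the object, lifting it, and transferring it to the second cobot.
# All numeric weights are illustrative, not taken from the article.

def shaped_reward(grasped: bool, lifted: bool, transferred: bool,
                  gripper_to_object_dist: float) -> float:
    """Return a discrete, stage-based reward with a dense shaping term.

    grasped/lifted/transferred flag the three incremental goals;
    gripper_to_object_dist is the distance (in metres) from the active
    gripper to the object, penalised to encourage approaching it.
    """
    # Dense shaping term: small penalty proportional to distance, so the
    # agent is incentivised to move toward the object even before any
    # discrete goal is reached.
    reward = -0.1 * gripper_to_object_dist
    if grasped:
        reward += 1.0   # incremental goal 1: pick up the object
    if lifted:
        reward += 2.0   # incremental goal 2: lift the object
    if transferred:
        reward += 5.0   # incremental goal 3: hand over to the second cobot
    return reward
```

A common design choice in such schemes is to make later-stage bonuses larger than earlier ones, so that completing sub-goals always dominates the dense approach term and the agent is not rewarded for merely hovering near the object.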
first_indexed | 2024-03-08T10:54:41Z |
format | Article |
id | doaj.art-cbbb124f389f4708b1312e64e77439f5 |
institution | Directory Open Access Journal |
issn | 2261-236X |
language | English |
last_indexed | 2024-03-08T10:54:41Z |
publishDate | 2023-01-01 |
publisher | EDP Sciences |
record_format | Article |
series | MATEC Web of Conferences |
spelling | doaj.art-cbbb124f389f4708b1312e64e77439f5 | 2024-01-26T16:40:09Z | eng | EDP Sciences | MATEC Web of Conferences | 2261-236X | 2023-01-01 | vol. 388, art. 04012 | doi:10.1051/matecconf/202338804012 | matecconf_rapdasa2023_04012 | Simulating object handover between collaborative robots | van Eden, Beatrice; Botha, Natasha | Centre for Robotics and Future Production, Manufacturing Cluster, Council for Scientific and Industrial Research (both authors) | https://www.matec-conferences.org/articles/matecconf/pdf/2023/15/matecconf_rapdasa2023_04012.pdf |
spellingShingle | van Eden, Beatrice; Botha, Natasha; Simulating object handover between collaborative robots; MATEC Web of Conferences |
title | Simulating object handover between collaborative robots |
title_full | Simulating object handover between collaborative robots |
title_fullStr | Simulating object handover between collaborative robots |
title_full_unstemmed | Simulating object handover between collaborative robots |
title_short | Simulating object handover between collaborative robots |
title_sort | simulating object handover between collaborative robots |
url | https://www.matec-conferences.org/articles/matecconf/pdf/2023/15/matecconf_rapdasa2023_04012.pdf |
work_keys_str_mv | AT vanedenbeatrice simulatingobjecthandoverbetweencollaborativerobots AT bothanatasha simulatingobjecthandoverbetweencollaborativerobots |