Automation of unstructured production environment by applying reinforcement learning

Implementing Machine Learning (ML) to improve product and production development processes presents a significant opportunity for manufacturing industries. ML can calibrate models with considerable adaptability and high accuracy. This capability is especially promising for applications where classical production automation is too expensive, e.g., mass customization cases in which the production environment is uncertain and unstructured. To cope with the diversity of production systems and working environments, Reinforcement Learning (RL) combined with lightweight game engines can be used from the initial stages of a product and production development process. However, there are multiple challenges, such as collecting observations in a virtual environment that interacts similarly to a physical one. This project focuses on setting up RL methodologies to perform path-finding and collision detection in varying environments. One case study is the human assembly evaluation method in the automobile industry, which is currently labor-intensive to investigate digitally. For this case, a mannequin is trained to perform pick-and-place operations in varying environments, thereby automating the assembly validation process in early design phases. The second application is path-finding for mobile robots with an articulated arm performing pick-and-place operations. This application is expensive to set up with classical methods, so RL enables an automated approach for this task as well.
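
The record itself contains no code, but the RL path-finding idea summarized in the abstract can be sketched with a generic tabular Q-learning example: an agent learns to reach a goal cell on a small grid while occupied cells act as collisions. This is a minimal illustrative sketch only, not the authors' game-engine-based setup; the grid layout, reward values, and hyperparameters are assumptions chosen for the example.

```python
# Minimal tabular Q-learning sketch: grid path-finding with obstacle (collision) checks.
# Illustrative only; layout, rewards, and hyperparameters are arbitrary example choices.
import random

GRID = [                      # 0 = free cell, 1 = obstacle
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
ROWS, COLS = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2          # learning rate, discount, exploration
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(ROWS) for c in range(COLS)}

def step(state, action_idx):
    """Apply an action; collisions with walls or obstacles keep the agent in place."""
    dr, dc = ACTIONS[action_idx]
    r, c = state[0] + dr, state[1] + dc
    if not (0 <= r < ROWS and 0 <= c < COLS) or GRID[r][c] == 1:
        return state, -5.0, False               # collision penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True               # goal reward
    return (r, c), -1.0, False                  # step cost encourages short paths

for episode in range(2000):
    state, done = START, False
    while not done:
        if random.random() < EPSILON:           # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        nxt, reward, done = step(state, a)
        # Standard Q-learning update toward reward + discounted best next value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy rollout of the learned policy
state, path = START, [START]
while state != GOAL and len(path) < ROWS * COLS:
    a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    state, _, _ = step(state, a)
    path.append(state)
print("Learned path:", path)
```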

Bibliographic Details
Main Authors: Sanjay Nambiar, Anton Wiberg, Mehdi Tarkian
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-03-01
Series: Frontiers in Manufacturing Technology
Subjects: reinforcement learning; unity game engine; mobile robot; mannequin; production environment; path-finding
ISSN: 2813-0359
DOI: 10.3389/fmtec.2023.1154263
Online Access: https://www.frontiersin.org/articles/10.3389/fmtec.2023.1154263/full