Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation
Reinforcement learning has gained significant interest in modern industries for its advancements in tackling challenging control tasks compared to rule-based programs. However, the robustness aspect of this technique is still under development, limiting its widespread adoption. This problem has beco...
Main Authors: | Aidar Shakerimov, Tohid Alizadeh, Huseyin Atakan Varol |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Distributional shift problem; domain adaptation; domain randomization; reality gap; reinforcement learning; robustness |
Online Access: | https://ieeexplore.ieee.org/document/10343164/ |
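The record above describes a two-stage recipe: train a policy in simulation while randomizing uncertain physical parameters, then spend a small number of real-world episodes (roughly twenty to two hundred) fine-tuning that policy on the target system. The sketch below illustrates the idea on the cart-pole task mentioned in the abstract; the use of Gymnasium's `CartPole-v1`, Stable-Baselines3's `PPO`, the particular randomized parameters, their ranges, and the step budgets are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of the two-stage recipe: (1) simulation training with domain
# randomization, (2) short fine-tuning on a "real" environment whose parameters
# fall outside the simulated nominal values. All concrete choices are illustrative.
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor


class CartPoleDomainRandomization(gym.Wrapper):
    """Resample uncertain CartPole physics parameters at every episode reset."""

    def __init__(self, env, mass_range=(0.05, 0.5), length_range=(0.25, 1.0)):
        super().__init__(env)
        self.mass_range = mass_range      # pole mass range (nominal: 0.1)
        self.length_range = length_range  # half-pole length range (nominal: 0.5)

    def reset(self, **kwargs):
        sim = self.env.unwrapped
        sim.masspole = np.random.uniform(*self.mass_range)
        sim.length = np.random.uniform(*self.length_range)
        # CartPoleEnv caches these derived quantities, so refresh them as well.
        sim.total_mass = sim.masspole + sim.masscart
        sim.polemass_length = sim.masspole * sim.length
        return self.env.reset(**kwargs)


def make_real_env():
    """Stand-in for the physical system: fixed parameters outside the randomized
    ranges, mimicking the extra weight that was absent from the simulation."""
    env = gym.make("CartPole-v1")
    sim = env.unwrapped
    sim.masspole, sim.length = 0.7, 1.2
    sim.total_mass = sim.masspole + sim.masscart
    sim.polemass_length = sim.masspole * sim.length
    return Monitor(env)


if __name__ == "__main__":
    # Stage 1: simulation training under domain randomization.
    sim_env = CartPoleDomainRandomization(gym.make("CartPole-v1"))
    model = PPO("MlpPolicy", sim_env, verbose=0)
    model.learn(total_timesteps=100_000)

    real_env = make_real_env()
    zero_shot, _ = evaluate_policy(model, real_env, n_eval_episodes=10)

    # Stage 2: brief "real-world" fine-tuning of the same policy. CartPole-v1
    # episodes last at most 500 steps, so 25_000 steps is roughly the 20-50
    # extra episodes the abstract reports as already being beneficial.
    model.set_env(real_env)
    model.learn(total_timesteps=25_000, reset_num_timesteps=False)

    fine_tuned, _ = evaluate_policy(model, real_env, n_eval_episodes=10)
    print(f"zero-shot return: {zero_shot:.1f}, after fine-tuning: {fine_tuned:.1f}")
```

The point the sketch tries to capture is that the same `model` keeps learning across both stages, so the small real-world budget only has to correct the residual reality gap rather than learn the task from scratch; in the article the fine-tuning budget is counted in episodes, and the timestep numbers above are only a rough equivalent.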
_version_ | 1827586139883569152 |
---|---|
author | Aidar Shakerimov; Tohid Alizadeh; Huseyin Atakan Varol |
author_facet | Aidar Shakerimov; Tohid Alizadeh; Huseyin Atakan Varol |
author_sort | Aidar Shakerimov |
collection | DOAJ |
description | Reinforcement learning has gained significant interest in modern industries for its advancements in tackling challenging control tasks compared to rule-based programs. However, the robustness aspect of this technique is still under development, limiting its widespread adoption. This problem has become more pronounced as users switch to training simulations to reduce costs, resulting in a reality gap that negatively affects real-world performance. One popular method employed to mitigate this problem is randomizing uncertain parameters of the environment during training. Nevertheless, this approach requires expert knowledge to determine the appropriate range of randomization. On the other hand, there is a technique that introduces fine-tuning of agents by adapting their policies to new environments. However, it is challenging to adapt policies when the new environment requires shifting them off their initial distribution. These obstacles limit the practical utilization and popularization of both techniques. Our study proposes a hybrid approach that handles these issues by fine-tuning agents trained with domain randomization through additional real-world training. To assess the efficacy of our approach, we conducted experiments involving a rotary inverted pendulum, augmented with an extra weight not represented in the simulation. Additionally, we employed simulated environments including a cart pole, a simple pendulum, a quadruped, and an ant robot scenario. These environments were given in two distinct versions with mismatching parameters to imitate a gap between training and testing conditions. The results demonstrate that adding as few as twenty to fifty additional real-world training episodes can significantly enhance the performance of agents trained with domain randomization. Moreover, including fifty to two hundred additional episodes can elevate it to a level comparable to those fully trained in the real world. Our study concludes that achieving efficient simulation-to-reality transfer is feasible with domain randomization and relatively small amounts of real-world training. |
first_indexed | 2024-03-08T23:57:43Z |
format | Article |
id | doaj.art-60c5cbaebef8429ca255ed066c1d1c4f |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-08T23:57:43Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-60c5cbaebef8429ca255ed066c1d1c4f; 2023-12-13T00:01:23Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2023-01-01; vol. 11, pp. 136809-136824; doi: 10.1109/ACCESS.2023.3339568; article no. 10343164; Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation; Aidar Shakerimov (https://orcid.org/0000-0001-9903-1699), Tohid Alizadeh (https://orcid.org/0000-0002-9717-3009), Huseyin Atakan Varol (https://orcid.org/0000-0002-4042-425X), all with the Department of Robotics, School of Engineering and Digital Sciences, Nazarbayev University, Astana, Kazakhstan; abstract as in the description field above; https://ieeexplore.ieee.org/document/10343164/; Distributional shift problem; domain adaptation; domain randomization; reality gap; reinforcement learning; robustness |
spellingShingle | Aidar Shakerimov; Tohid Alizadeh; Huseyin Atakan Varol; Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation; IEEE Access; Distributional shift problem; domain adaptation; domain randomization; reality gap; reinforcement learning; robustness |
title | Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation |
title_full | Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation |
title_fullStr | Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation |
title_full_unstemmed | Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation |
title_short | Efficient Sim-to-Real Transfer in Reinforcement Learning Through Domain Randomization and Domain Adaptation |
title_sort | efficient sim to real transfer in reinforcement learning through domain randomization and domain adaptation |
topic | Distributional shift problem; domain adaptation; domain randomization; reality gap; reinforcement learning; robustness |
url | https://ieeexplore.ieee.org/document/10343164/ |
work_keys_str_mv | AT aidarshakerimov efficientsimtorealtransferinreinforcementlearningthroughdomainrandomizationanddomainadaptation AT tohidalizadeh efficientsimtorealtransferinreinforcementlearningthroughdomainrandomizationanddomainadaptation AT huseyinatakanvarol efficientsimtorealtransferinreinforcementlearningthroughdomainrandomizationanddomainadaptation |