Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning
In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert...
Main Authors: | Dohyun Kyoung, Yunsick Sung |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Sensors |
Subjects: | machine learning; reinforcement learning; pretraining; exploration; transformer-decoder |
Online Access: | https://www.mdpi.com/1424-8220/23/17/7411 |
_version_ | 1797581880116117504 |
---|---|
author | Dohyun Kyoung, Yunsick Sung
author_facet | Dohyun Kyoung, Yunsick Sung
author_sort | Dohyun Kyoung |
collection | DOAJ |
description | In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert data or utilizing pretrained models. Nevertheless, these methods do not effectively reduce the initial exploration range, as the exploration by the agent is limited to states adjacent to those included in the expert data. This paper proposes a method to reduce the initial exploration range in reinforcement learning through a pretrained transformer decoder on expert data. The proposed method involves pretraining a transformer decoder with massive expert data to guide the agent’s actions during the early learning stages. After achieving a certain learning threshold, the actions are determined using the epsilon-greedy strategy. An experiment was conducted in the basketball game FreeStyle1 to compare the proposed method with the traditional Deep Q-Network (DQN) using the epsilon-greedy strategy. The results indicated that the proposed method yielded approximately 2.5 times the average reward and a 26% higher win rate, proving its enhanced performance in reducing exploration range and optimizing learning times. This innovative method presents a significant improvement over traditional exploration techniques in reinforcement learning. |
first_indexed | 2024-03-10T23:13:54Z |
format | Article |
id | doaj.art-e8b1fce2847e4aaf91703996ee9d9674 |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T23:13:54Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-e8b1fce2847e4aaf91703996ee9d9674 2023-11-19T08:49:32Z eng MDPI AG Sensors 1424-8220 2023-08-01 23(17) 7411 10.3390/s23177411 Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning Dohyun Kyoung (Department of Autonomous Things Intelligence, Graduate School, Dongguk University–Seoul, Seoul 04620, Republic of Korea); Yunsick Sung (Division of AI Software Convergence, Dongguk University–Seoul, Seoul 04620, Republic of Korea) In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert data or utilizing pretrained models. Nevertheless, these methods do not effectively reduce the initial exploration range, as the exploration by the agent is limited to states adjacent to those included in the expert data. This paper proposes a method to reduce the initial exploration range in reinforcement learning through a pretrained transformer decoder on expert data. The proposed method involves pretraining a transformer decoder with massive expert data to guide the agent’s actions during the early learning stages. After achieving a certain learning threshold, the actions are determined using the epsilon-greedy strategy. An experiment was conducted in the basketball game FreeStyle1 to compare the proposed method with the traditional Deep Q-Network (DQN) using the epsilon-greedy strategy. The results indicated that the proposed method yielded approximately 2.5 times the average reward and a 26% higher win rate, proving its enhanced performance in reducing exploration range and optimizing learning times. This innovative method presents a significant improvement over traditional exploration techniques in reinforcement learning. https://www.mdpi.com/1424-8220/23/17/7411 machine learning; reinforcement learning; pretraining; exploration; transformer-decoder |
spellingShingle | Dohyun Kyoung; Yunsick Sung; Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning; Sensors; machine learning; reinforcement learning; pretraining; exploration; transformer-decoder
title | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_full | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_fullStr | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_full_unstemmed | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_short | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_sort | transformer decoder based enhanced exploration method to alleviate initial exploration problems in reinforcement learning |
topic | machine learning; reinforcement learning; pretraining; exploration; transformer-decoder
url | https://www.mdpi.com/1424-8220/23/17/7411 |
work_keys_str_mv | AT dohyunkyoung transformerdecoderbasedenhancedexplorationmethodtoalleviateinitialexplorationproblemsinreinforcementlearning AT yunsicksung transformerdecoderbasedenhancedexplorationmethodtoalleviateinitialexplorationproblemsinreinforcementlearning |
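The description above outlines the core mechanism: a transformer decoder pretrained on expert data selects the agent's actions during the early learning stage, and action selection reverts to the standard epsilon-greedy DQN policy once a learning threshold is reached. The Python sketch below illustrates only that switching logic, under assumptions; the names `expert_decoder`, `q_net`, `PRETRAIN_STEPS`, and `EPSILON` are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the exploration scheme described in the abstract:
# early actions come from a transformer decoder pretrained on expert data,
# later actions from a conventional epsilon-greedy DQN policy.
# All identifiers and values here are assumptions for illustration.
import random
import torch

PRETRAIN_STEPS = 50_000   # assumed learning threshold before epsilon-greedy takes over
EPSILON = 0.1             # assumed exploration rate for the epsilon-greedy phase

def select_action(step, state_history, expert_decoder, q_net, num_actions):
    """Pick an action for the current environment step.

    step           -- global step counter
    state_history  -- tensor of recent states, shape (1, seq_len, state_dim)
    expert_decoder -- transformer decoder pretrained on expert trajectories,
                      returning per-step action logits
    q_net          -- DQN mapping the current state to Q-values
    """
    if step < PRETRAIN_STEPS:
        # Early stage: the pretrained decoder proposes the action,
        # which narrows the initial exploration range.
        with torch.no_grad():
            logits = expert_decoder(state_history)   # (1, seq_len, num_actions)
            return int(logits[0, -1].argmax())
    # Later stage: conventional epsilon-greedy over the DQN's Q-values.
    if random.random() < EPSILON:
        return random.randrange(num_actions)
    with torch.no_grad():
        q_values = q_net(state_history[:, -1])       # use only the latest state
        return int(q_values.argmax())
```

In this sketch the threshold is a fixed step count; the abstract does not specify how "achieving a certain learning threshold" is measured, so a simple step counter stands in for that criterion.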