Proximal policy optimization with adaptive threshold for symmetric relative density ratio
Deep reinforcement learning (DRL) is one of the promising approaches for introducing robots into complicated environments. The recent remarkable progress of DRL rests on regularization of the policy, which allows the policy to be improved stably and efficiently. A popular method, so-called proximal policy optimization (PPO), and its variants constrain the density ratio between the latest and baseline policies when that ratio exceeds a given threshold. The threshold can be designed relatively intuitively, and a recommended value range has in fact been suggested. However, the density ratio is asymmetric about its center, and the possible error scale from that center, which should be close to the threshold, depends on how the baseline policy is given. To maximize the value of policy regularization, this paper proposes a new PPO derived using the relative Pearson (RPE) divergence, termed PPO-RPE, which designs the threshold adaptively. In PPO-RPE, the relative density ratio, which can be formed symmetrically, replaces the raw density ratio. Thanks to this symmetry, its error scale from the center can easily be estimated, and the threshold can therefore be adapted to the estimated error scale. Three simple benchmark simulations reveal the importance of algorithm-dependent threshold design, and four additional locomotion tasks verify that the proposed method statistically contributes to task accomplishment by appropriately restricting policy updates.
Main Author: | Taisuke Kobayashi |
---|---|
Author Affiliation: | Principles of Informatics Research Division, National Institute of Informatics, Tokyo, Japan; School of Multidisciplinary Sciences, Department of Informatics, The Graduate University for Advanced Studies (SOKENDAI), Kanagawa, Japan |
Format: | Article |
Language: | English |
Published: | Elsevier, 2023-03-01 |
Series: | Results in Control and Optimization |
ISSN: | 2666-7207 |
Collection: | DOAJ (Directory of Open Access Journals) |
Subjects: | Reinforcement learning; Deep learning; Policy regularization |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2666720722000649 |
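For readers who want to connect the abstract to code, the following is a minimal sketch of how a PPO-style clipped surrogate might look once the raw density ratio is replaced by a symmetric relative density ratio and the clipping threshold is tied to an estimated error scale. It is an illustration under stated assumptions, not the paper's implementation: the choice beta = 0.5, the batch-mean deviation used as the error-scale estimate, and the function and parameter names (`ppo_rpe_surrogate`, `eps_scale`) are all hypothetical.

```python
# Minimal sketch (not the paper's reference implementation) of a PPO-style
# clipped surrogate built on a symmetric relative density ratio with an
# adaptive threshold. beta = 0.5, the batch-mean deviation used as the
# error-scale estimate, and all names below are illustrative assumptions.
import torch


def ppo_rpe_surrogate(log_prob_new: torch.Tensor,
                      log_prob_old: torch.Tensor,
                      advantage: torch.Tensor,
                      eps_scale: float = 1.0,
                      beta: float = 0.5) -> torch.Tensor:
    """Clipped policy-gradient surrogate using the relative density ratio."""
    # Raw density ratio rho = pi_new(a|s) / pi_old(a|s).
    rho = torch.exp(log_prob_new - log_prob_old)

    # Relative density ratio r = rho / (beta * rho + 1 - beta).
    # With beta = 0.5, r = 2*rho / (rho + 1) lies in (0, 2) and satisfies
    # r(1/rho) = 2 - r(rho), i.e. it deviates symmetrically about its center 1.
    rel = rho / (beta * rho + 1.0 - beta)

    # Adaptive threshold: tie the clip range to the batch's mean deviation of
    # the relative ratio from its center (a simple stand-in for the error-scale
    # estimate described in the abstract).
    with torch.no_grad():
        eps = eps_scale * float((rel - 1.0).abs().mean())
        eps = min(max(eps, 1e-3), 1.0 - 1e-3)

    # Pessimistic PPO-style clipping applied to the relative ratio.
    unclipped = rel * advantage
    clipped = torch.clamp(rel, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```

In a full agent this loss would take the place of the standard PPO clipped objective, with log-probabilities and advantages drawn from the usual rollout buffer. The point of beta = 0.5 in this sketch is that the relative ratio is bounded in (0, 2) and symmetric about 1, so deviations in either direction of the policy update have comparable scale and a single adaptive threshold can cover both.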