Action-driven contrastive representation for reinforcement learning

In reinforcement learning, reward-driven feature learning directly from high-dimensional images faces two challenges: sample efficiency in solving control tasks and generalization to unseen observations. Prior works have addressed these issues by learning representations from pixel inputs, but those representations were either vulnerable to the high diversity inherent in environments or failed to capture the characteristics needed to solve control tasks. To mitigate these problems, we propose a novel contrastive representation method, the Action-Driven Auxiliary Task (ADAT), which forces the representation to concentrate on the features essential for deciding actions and to ignore control-irrelevant details. Using ADAT's augmented state-action dictionary, the agent learns representations that maximize agreement between observations sharing the same action. The proposed method significantly outperforms model-free and model-based algorithms on Atari and OpenAI ProcGen, widely used benchmarks for sample efficiency and generalization.

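The abstract describes the core mechanic concretely enough to sketch: observations are augmented, paired with the actions the agent chose for them, and the encoder is trained so that observations sharing an action agree in embedding space. Below is a minimal PyTorch sketch of such an action-driven contrastive loss, assuming a supervised-contrastive-style formulation with actions as the positive-pair labels; every name here (`action_driven_contrastive_loss`, `temperature`, the batch shapes) is an illustrative assumption, not the authors' released code.

```python
# Sketch of an action-driven contrastive loss: embeddings of augmented
# observations that share the same action are pulled together; all other
# pairs in the batch are pushed apart. Illustrative only, not ADAT's code.
import torch
import torch.nn.functional as F

def action_driven_contrastive_loss(embeddings, actions, temperature=0.1):
    """embeddings: (N, D) encoder outputs for a batch of augmented observations.
    actions:    (N,) discrete action associated with each observation."""
    z = F.normalize(embeddings, dim=1)            # compare in cosine space
    sim = z @ z.t() / temperature                 # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)        # never contrast with self

    # Positives: other batch members labelled with the same action.
    pos_mask = (actions.unsqueeze(0) == actions.unsqueeze(1)) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0                         # skip rows with no positive
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1)[valid] / pos_count[valid]
    return -mean_log_prob_pos.mean()

# Toy usage with random data: 8 augmented observations, 3 discrete actions.
emb = torch.randn(8, 64)
acts = torch.randint(0, 3, (8,))
loss = action_driven_contrastive_loss(emb, acts)
```

Treating the action as the label is what would distinguish such an objective from instance-level contrastive losses like SimCLR's: two visually different observations are pulled together whenever the agent responds to them identically, which matches the abstract's goal of keeping only control-relevant features.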

Bibliographic Details
Main Authors: Minbeom Kim, Kyeongha Rho, Yong-duk Kim, Kyomin Jung
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2022-01-01
Series: PLoS ONE, Vol. 17, No. 3
ISSN: 1932-6203
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8932622/?tool=EBI