Action-driven contrastive representation for reinforcement learning.
In reinforcement learning, reward-driven feature learning directly from high-dimensional images faces two challenges: sample efficiency in solving control tasks and generalization to unseen observations. In prior works, these issues have been addressed through learning representation from pixel inp...
Main Authors: Minbeom Kim, Kyeongha Rho, Yong-Duk Kim, Kyomin Jung
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2022-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0265456
Similar Items
- CST-RL: Contrastive Spatio-Temporal Representations for Reinforcement Learning
  by: Chi-Kai Ho, et al. Published: (2023-01-01)
- Deep Reinforcement Learning-Based Air-to-Air Combat Maneuver Generation in a Realistic Environment
  by: Jung Ho Bae, et al. Published: (2023-01-01)
- Iterative Learning for K-Approval Votes in Crowdsourcing Systems
  by: Joonyoung Kim, et al. Published: (2021-01-01)
- Contrastive self-supervised representation learning without negative samples for multimodal human action recognition
  by: Huaigang Yang, et al. Published: (2023-07-01)
- Visualization of Concrete Slump Flow Using the Kinect Sensor
  by: Jung-Hoon Kim, et al. Published: (2018-03-01)