Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents
Performance of state-of-the-art offline and model-based reinforcement learning (RL) algorithms deteriorates significantly under severe data scarcity and in the presence of heterogeneous agents. In this work, we propose a model-based offline RL method to approach this setting. Using all avai...
Main Author: Alumootil, Varkey
Other Authors: Shah, Devavrat
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/139143
Similar Items
- PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Latent Factor Representation
  by: Yang, Cindy X.
  Published: (2022)
- Data-efficient multi-agent reinforcement learning
  by: Wong, Reuben Yuh Sheng
  Published: (2022)
- Human-centric dialog training via offline reinforcement learning
  by: Jaques, Natasha, et al.
  Published: (2022)
- Offline Pricing and Demand Learning with Censored Data
  by: Bu, Jinzhi, et al.
  Published: (2023)
- Online and offline learning in operations
  by: Wang, Li
  Published: (2021)