PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Latent Factor Representation
Offline reinforcement learning, in which a policy is learned from a fixed dataset of trajectories without further interaction with the environment, is one of the central challenges in reinforcement learning. Despite its compelling application to large, real-world datasets, existing RL benchmarks have...
Main Author: Yang, Cindy X.
Other Authors: Shah, Devavrat
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/139130
Similar Items
- Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents
  by: Alumootil, Varkey
  Published: (2022)
- Human-centric dialog training via offline reinforcement learning
  by: Jaques, Natasha, et al.
  Published: (2022)
- SimHazard : an agent-world exception simulator
  by: Shue, David (David Dau Chuen), 1976-
  Published: (2013)
- A dual latent variable personalized dialogue agent
  by: Lee, Jing Yang, et al.
  Published: (2023)
- Incorporating the range-based method into GridSim for modeling task and resource heterogeneity
  by: Eng, Kailun, et al.
  Published: (2017)