Importance-Weighted Variational Inference Model Estimation for Offline Bayesian Model-Based Reinforcement Learning
This paper proposes a model estimation method for offline Bayesian model-based reinforcement learning (MBRL). Learning a Bayes-adaptive Markov decision process (BAMDP) model with standard variational inference often suffers from poor predictive performance due to covariate shift between the offline data...
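The core idea named in the title, importance weighting to correct covariate shift when fitting a model from offline data, can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic importance-weighted maximum-likelihood fit of a hypothetical 1-D linear dynamics model, where the density ratio between an assumed target and behavior state distribution reweights each transition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D dynamics: s' = theta * s + noise, with true theta = 0.8.
# Offline transitions are collected under a behavior state distribution,
# but we want the model to be accurate under a shifted target distribution.
s = rng.normal(0.0, 1.0, size=1000)               # states from behavior data
s_next = 0.8 * s + rng.normal(0.0, 0.1, size=1000)  # observed next states

def density_ratio(s, target_mean=1.0):
    """Assumed ratio p_target(s) / p_behavior(s) for unit-variance Gaussians."""
    p_target = np.exp(-0.5 * (s - target_mean) ** 2)
    p_behavior = np.exp(-0.5 * s ** 2)
    return p_target / p_behavior

w = density_ratio(s)
w /= w.mean()  # self-normalize the weights for numerical stability

# Importance-weighted least squares = weighted Gaussian MLE for theta:
# minimizes sum_i w_i * (s'_i - theta * s_i)^2, with the closed-form solution below.
theta_weighted = np.sum(w * s * s_next) / np.sum(w * s ** 2)
theta_unweighted = np.sum(s * s_next) / np.sum(s ** 2)
print(theta_weighted, theta_unweighted)
```

Here the dynamics are globally linear, so both estimates recover theta; with a misspecified model class, the weighted fit instead trades accuracy toward the target distribution, which is the motivation for importance weighting under covariate shift.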
Main Authors: Toru Hishinuma, Kei Senda
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10368011/
Similar Items
- Offline Meta-Reinforcement Learning with Contrastive Prediction
  by: HAN Xu, WU Feng
  Published: (2023-08-01)
- Optimizing trajectories for highway driving with offline reinforcement learning
  by: Branka Mirchevska, et al.
  Published: (2023-05-01)
- Combined Constraint on Behavior Cloning and Discriminator in Offline Reinforcement Learning
  by: Shunya Kidera, et al.
  Published: (2024-01-01)
- Corrigendum: Optimizing trajectories for highway driving with offline reinforcement learning
  by: Branka Mirchevska, et al.
  Published: (2023-12-01)
- Bayesian and variational inference for reinforcement learning
  by: Fellows, M
  Published: (2021)