Deep variational reinforcement learning for POMDPs

Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is great need for reinforcement learning methods that can tackle such problems given only a stream of incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.
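
The abstract names the central quantities (a learned generative model, an n-step approximation to the ELBO, and joint training with the policy) but the record itself contains no formulas. Purely as an illustrative sketch, not taken from the paper, such an objective is often written as follows, where the latent state z_t, generative model p_\theta, filtering posterior q_\phi, window length n, and weighting coefficient \lambda are our notation:

\[
\mathcal{L}_{\mathrm{ELBO}}^{t:t+n}(\theta, \phi) =
  \mathbb{E}_{q_\phi}\!\left[ \sum_{\tau = t}^{t+n-1}
    \log \frac{p_\theta(z_\tau, o_\tau \mid z_{\tau-1}, a_{\tau-1})}
              {q_\phi(z_\tau \mid z_{\tau-1}, a_{\tau-1}, o_\tau)} \right],
\qquad
\mathcal{L}^{t:t+n} = \mathcal{L}_{\mathrm{RL}}^{t:t+n}
  - \lambda\, \mathcal{L}_{\mathrm{ELBO}}^{t:t+n}.
\]

Minimising the combined loss over the same n-step window that the RL loss uses is what lets the latent representation be shaped by the control task, as the abstract describes.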

Bibliographic Details
Main Authors: Igl, M; Zintgraf, L; Le, T; Wood, F; Whiteson, S
Format: Conference item
Published: Journal of Machine Learning Research, 2018
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:bced68dd-f0de-4c9a-a0b2-be6b1dd6ca8b