TreeQN and ATreeC: differentiable tree planning for deep reinforcement learning
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the tree. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games. Furthermore, we present ablation studies that demonstrate the effect of different auxiliary losses on learning transition models.
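To make the recursion the abstract describes concrete, below is a minimal NumPy sketch of a TreeQN-style tree backup and an ATreeC-style softmax policy. It is an illustration under stated assumptions, not the paper's implementation: the linear transition, reward, and value functions, the tree depth, and the mixing coefficient `LAMBDA` are stand-ins for the neural-network components the paper trains end-to-end.

```python
# Minimal sketch of a TreeQN-style recursive tree backup (illustrative only).
# The linear per-action transition/reward maps and the value head below are
# assumed stand-ins for the learned neural modules in the paper.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, STATE_DIM, GAMMA, LAMBDA = 3, 8, 0.99, 0.8

# Stand-ins for the learned modules (one linear map per action).
W_trans = rng.standard_normal((N_ACTIONS, STATE_DIM, STATE_DIM)) * 0.1
w_reward = rng.standard_normal((N_ACTIONS, STATE_DIM)) * 0.1
w_value = rng.standard_normal(STATE_DIM) * 0.1

def transition(z, a):
    """Predict the next abstract state for action a (residual step, renormalised)."""
    z_next = z + W_trans[a] @ z
    return z_next / (np.linalg.norm(z_next) + 1e-8)

def reward(z, a):
    """Predicted immediate reward for taking action a in abstract state z."""
    return float(w_reward[a] @ z)

def value(z):
    """State-value estimate for abstract state z."""
    return float(w_value @ z)

def q_values(z, depth):
    """Recursively expand the tree and back up Q(z, a) for every action."""
    qs = np.empty(N_ACTIONS)
    for a in range(N_ACTIONS):
        z_next = transition(z, a)
        if depth == 1:
            backup = value(z_next)  # leaf: bootstrap with the value head
        else:
            # Interior node: mix the value estimate with the best child Q-value.
            backup = (1 - LAMBDA) * value(z_next) + LAMBDA * q_values(z_next, depth - 1).max()
        qs[a] = reward(z, a) + GAMMA * backup
    return qs

z0 = rng.standard_normal(STATE_DIM)                 # encoded abstract state
q = q_values(z0 / np.linalg.norm(z0), depth=2)      # TreeQN-style Q-values
print("Q-values:", q)

# ATreeC-style stochastic policy: softmax over the tree-backed-up action scores.
policy = np.exp(q - q.max())
policy /= policy.sum()
print("policy:", policy)
```

In the actual architecture every module above is a neural network and the whole tree is differentiable, so gradients from the Q-learning or actor-critic loss flow through the backup and optimise the model for its use in the tree, as the abstract states.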
Main Authors: | Farquhar, G; Rocktaeschel, T; Igl, M; Whiteson, S |
---|---|
Format: | Conference item |
Published: | International Conference on Learning Representations, 2018 |
author | Farquhar, G; Rocktaeschel, T; Igl, M; Whiteson, S |
id | oxford-uuid:0234a569-9860-41af-93c0-84229b4757d2 |
institution | University of Oxford |