Rapidly exploring learning trees
Inverse Reinforcement Learning (IRL) for path planning enables robots to learn cost functions for difficult tasks from demonstration, instead of hard-coding them. However, IRL methods face practical limitations that stem from the need to repeat costly planning procedures. In this paper, we propose R...
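The bottleneck the abstract points to — every update of the learned cost function requiring a fresh planning call — can be illustrated with a minimal, hypothetical sketch. This is not the paper's method (the record truncates before its description); it is a generic feature-based IRL loop on a toy grid, where the grid, the features, the learning rule, and the demonstration are all assumptions made for illustration.

```python
import heapq

GRID = 5  # hypothetical 5x5 grid world

def features(cell):
    """Two toy features per cell: a constant step cost, and a
    'danger' indicator for part of column 2 (an assumption)."""
    x, y = cell
    return (1.0, 1.0 if x == 2 and y <= 2 else 0.0)

def plan(weights, start, goal):
    """Dijkstra search with step cost = weights . features(cell).
    This is the expensive call that IRL repeats every iteration."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float('inf')):
            continue
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID:
                step = sum(w * f for w, f in zip(weights, features(nxt)))
                nd = d + max(step, 1e-6)  # keep edge costs positive
                if nd < dist.get(nxt, float('inf')):
                    dist[nxt] = nd
                    prev[nxt] = cell
                    heapq.heappush(pq, (nd, nxt))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

def feature_counts(path):
    """Sum the features of every cell the path steps into."""
    totals = [0.0, 0.0]
    for cell in path[1:]:
        totals = [t + f for t, f in zip(totals, features(cell))]
    return totals

def irl(demo, start, goal, iters=20, lr=0.1):
    """Generic IRL loop: each iteration re-plans under the current
    weights, then nudges the weights so the demonstration looks
    cheap relative to the planner's best path."""
    w = [1.0, 0.0]
    demo_f = feature_counts(demo)
    for _ in range(iters):
        plan_f = feature_counts(plan(w, start, goal))  # costly re-planning
        w = [max(wi - lr * (df - pf), 0.01)
             for wi, df, pf in zip(w, demo_f, plan_f)]
    return w

# Hypothetical demonstration that skirts the danger cells:
demo = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4),
        (1, 4), (2, 4), (3, 4), (4, 4)]
weights = irl(demo, (0, 0), (4, 4))
```

The point of the sketch is structural: the `plan` call sits inside the learning loop, so the planner runs once per weight update, which is exactly the per-iteration cost the abstract identifies as the practical limitation.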
Main authors: Shiarlis, K; Messias, J; Whiteson, S
Format: Conference item
Published: IEEE, 2017
Similar items
- Inverse reinforcement learning from failure
  by: Shiarlis, K, et al. Published: (2016)
- TACO: Learning task decomposition via temporal alignment for control
  by: Shiarlis, K, et al. Published: (2018)
- Dynamic-depth context tree weighting
  by: Messias, J, et al. Published: (2018)
- Learning from demonstration in the wild
  by: Behbahani, F, et al. Published: (2019)
- VariBAD: a very good method for Bayes-adaptive deep RL via meta-learning
  by: Zintgraf, L, et al. Published: (2020)