Rapidly exploring learning trees
Inverse Reinforcement Learning (IRL) for path planning enables robots to learn cost functions for difficult tasks from demonstration, instead of hard-coding them. However, IRL methods face practical limitations that stem from the need to repeat costly planning procedures. In this paper, we propose R...
Main authors: Shiarlis, K; Messias, J; Whiteson, S
Format: Conference item
Published: IEEE, 2017
Similar items:
- Inverse reinforcement learning from failure. Shiarlis, K, et al. (2016)
- TACO: Learning task decomposition via temporal alignment for control. Shiarlis, K, et al. (2018)
- Dynamic-depth context tree weighting. Messias, J, et al. (2018)
- Learning from demonstration in the wild. Behbahani, F, et al. (2019)
- VariBAD: a very good method for Bayes-adaptive deep RL via meta-learning. Zintgraf, L, et al. (2020)