Rapidly Exploring Learning Trees

Bibliographic Details
Main Authors: Shiarlis, K., Messias, J., Whiteson, S.
Format: Conference item
Published: IEEE 2017
Description
Abstract: Inverse Reinforcement Learning (IRL) for path planning enables robots to learn cost functions for difficult tasks from demonstration, instead of hard-coding them. However, IRL methods face practical limitations that stem from the need to repeat costly planning procedures. In this paper, we propose Rapidly Exploring Learning Trees (RLT∗), which learns the cost functions of Optimal Rapidly Exploring Random Trees (RRT∗) from demonstration, thereby making inverse learning methods applicable to more complex tasks. Our approach extends Maximum Margin Planning to work with RRT∗ cost functions. Furthermore, we propose a caching scheme that greatly reduces the computational cost of this approach. Experimental results on simulated and real-robot data from a social navigation scenario show that RLT∗ achieves better performance at lower computational cost than existing methods. We also successfully deploy control policies learned with RLT∗ on a real telepresence robot.
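To illustrate the Maximum Margin Planning idea the abstract builds on, the sketch below shows one max-margin subgradient step for learning a linear cost function from a demonstrated path. It is not the paper's RLT∗ algorithm: a simple Dijkstra grid planner stands in for RRT∗, there is no loss augmentation or caching, and all names (`mmp_update`, the feature layout) are illustrative assumptions. The key mechanism is the same, though: plan under the current cost weights, compare the plan's feature counts with the demonstration's, and shift the weights so demonstrated behavior becomes cheaper relative to the planner's output.

```python
import heapq


def dijkstra(costs, start, goal):
    """Shortest path on a 4-connected grid; costs[r][c] is the cost of
    entering cell (r, c). Stand-in for the RRT* planner in the paper."""
    rows, cols = len(costs), len(costs[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]


def feature_counts(path, features):
    """Sum the per-cell feature vectors along a path."""
    k = len(features[0][0])
    totals = [0.0] * k
    for r, c in path:
        for i in range(k):
            totals[i] += features[r][c][i]
    return totals


def mmp_update(w, demo_path, features, start, goal, lr=0.1):
    """One max-margin subgradient step: plan under the current weights,
    then move w away from the demo's feature counts and toward the
    plan's, so demonstrated behavior becomes relatively cheaper."""
    costs = [[max(1e-6, sum(wi * fi for wi, fi in zip(w, f))) for f in row]
             for row in features]
    planned = dijkstra(costs, start, goal)
    f_demo = feature_counts(demo_path, features)
    f_plan = feature_counts(planned, features)
    # Project onto nonnegative weights so cell costs stay positive.
    return [max(0.0, wi - lr * (fd - fp))
            for wi, fd, fp in zip(w, f_demo, f_plan)]
```

For example, if each cell carries a base-cost feature and a "near a person" feature, and the demonstration detours around the person while the planner (with that feature's weight at zero) cuts straight through, a single `mmp_update` call raises the person-feature weight, making the learned cost function penalize such cells, which is the social-navigation effect the paper targets.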