Summary: | Robot static obstacle avoidance has long been a central topic in robot control. The traditional approach uses a global path planner, such as A*, together with a high-precision map to automatically generate a path that avoids obstacles. However, given the difficulty of producing high-precision maps in the real world, map-free methods, such as Reinforcement Learning (RL), have attracted growing research interest. This dissertation compares several RL algorithms, including DQN, DDQN, and DDPG, against the traditional method and discusses their performance on different tasks. A new RL training platform, ROSRL, is also proposed in this dissertation, which improves training efficiency; researchers can easily deploy RL algorithms and evaluate their performance in ROSRL. The results of this dissertation are valuable for exploring state-of-the-art RL algorithms on static obstacle avoidance problems.
|