From reinforcement learning to classical path planning: motion planning with obstacle avoidance


Bibliographic Details
Main Author: Ng, Tze Minh
Other Authors: Yeo Chai Kiat
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Reinforcement learning; Motion planning
Online Access:https://hdl.handle.net/10356/181149
Description: This project investigates the comparative performance of reinforcement learning (RL) and sampling-based motion planning methods for robotic obstacle avoidance, illustrated in a 3D and a 2D environment respectively, each with a single agent and a single obstacle. The work is divided into two phases. The first phase replicates the results of a chosen research paper on Soft Actor-Critic with Prioritised Experience Replay (SACPER) in simulation software. The second phase conducts a comparative analysis of several sampling-based motion planning algorithms, yielding insight into how differing scenarios and tasks call for different methods to achieve optimal performance. In Phase 1, the SACPER implementation failed to learn: the reward curve remained stagnant, indicating that more training time and computing resources were required. Phase 2 examined how sampling-based methods performed in a 2D environment under slight changes to that environment. Overall, this project contributes to the understanding of motion planning for robotics, emphasising the strengths and limitations of learning-based and sampling-based strategies, and suggests that a hybrid approach combining the two could be pioneered in future work.
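To make the "sampling-based motion planning in a 2D environment" concrete, the sketch below implements a minimal Rapidly-exploring Random Tree (RRT), one of the standard sampling-based planners such a comparison typically includes. It is an illustrative toy under assumed parameters (a 10x10 workspace, circular obstacles, a fixed step size and goal bias), not the project's actual planners or environment.

```python
import math
import random

def rrt_2d(start, goal, obstacles, step=0.5, max_iters=5000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT sketch: grow a tree by steering toward random samples,
    discarding steps that land inside a circular obstacle (x, y, radius).
    Returns a list of waypoints from start to near the goal, or None."""
    rng = random.Random(seed)

    def collides(p):
        return any(math.dist(p, (ox, oy)) <= r for ox, oy, r in obstacles)

    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        # Goal bias: occasionally sample the goal itself to speed convergence.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the nearest existing tree node to the sample.
        near_i = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[near_i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = near_i
        if math.dist(new, goal) <= goal_tol:
            # Reached the goal region: walk parents back to the root.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```

A planner comparison of the kind described would vary the obstacle layout slightly and record path length and planning time for each algorithm; RRT's output here is feasible but not optimal, which is exactly the kind of trade-off such a study surfaces.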
Citation: Ng, T. M. (2024). From reinforcement learning to classical path planning: motion planning with obstacle avoidance. Final Year Project (FYP), Nanyang Technological University, Singapore.
Degree: Bachelor's degree
College: College of Computing and Data Science
Supervisor contact: ASCKYEO@ntu.edu.sg
Project code: SCSE23-1186
File format: application/pdf