Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot
Robot navigation involves a challenging task: path planning for a mobile robot operating in a changing environment. This work presents an enhanced Q-learning based path planning technique. For mobile robots operating in dynamic environments, an algorithm and a few heuristic searching techniques are...
Main Authors: | Fallooh Noor H., Sadiq Ahmed T., Abbas Eyad I., Hashim Ivan A. |
---|---|
Format: | Article |
Language: | English |
Published: | EDP Sciences, 2024-01-01 |
Series: | BIO Web of Conferences |
Online Access: | https://www.bio-conferences.org/articles/bioconf/pdf/2024/16/bioconf_iscku2024_00011.pdf |
_version_ | 1797213241891356672 |
---|---|
author | Fallooh Noor H. Sadiq Ahmed T. Abbas Eyad I. Hashim Ivan A. |
author_facet | Fallooh Noor H. Sadiq Ahmed T. Abbas Eyad I. Hashim Ivan A. |
author_sort | Fallooh Noor H. |
collection | DOAJ |
description | Robot navigation involves a challenging task: path planning for a mobile robot operating in a changing environment. This work presents an enhanced Q-learning based path planning technique. For mobile robots operating in dynamic environments, an algorithm and a few heuristic searching techniques are suggested. The enhanced Q-learning employs a novel exploration approach that blends Boltzmann and ε-greedy exploration. Heuristic searching techniques are also offered to constrict the orientation angle variation range and narrow the search space. Meanwhile, in terms of energy, the resulting decrease in orientation angle changes and path length is significant. A dynamic reward is suggested to help the mobile robot approach the target location, expediting the convergence of the Q-learning and shortening the computation time. The experiments cover two cases: quick and safe path planning. With quick path planning, the mobile robot can reach the goal with the best path length, and with safe path planning, it can avoid obstacles. The superior performance of the suggested strategy, quick and safe 8-connection Q-learning (Q8CQL), was validated by simulations comparing it to classical Q-learning and other planning methods in terms of time taken and path optimality. |
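The abstract describes three ingredients: an exploration rule that blends Boltzmann (softmax) sampling with ε-greedy selection, heuristic restriction of the search space, and a dynamic reward that pulls the robot toward the target. The sketch below illustrates the first and third ideas in Python. It is a minimal, assumed reconstruction rather than the authors' implementation: the function names (`blended_action`, `dynamic_reward`), the ε/temperature weighting, and the reward constants are illustrative choices, not values taken from the paper.

```python
import numpy as np

def blended_action(q_values: np.ndarray, epsilon: float, temperature: float,
                   rng: np.random.Generator) -> int:
    """Blended exploration (assumed form): with probability epsilon, sample an
    action from a Boltzmann distribution over the Q-values instead of picking
    uniformly at random; otherwise act greedily."""
    if rng.random() < epsilon:
        logits = q_values / max(temperature, 1e-8)
        logits -= logits.max()          # numerical stability before exp()
        probs = np.exp(logits)
        probs /= probs.sum()
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))     # exploitation: greedy choice

def dynamic_reward(dist_before: float, dist_after: float,
                   reached_goal: bool, hit_obstacle: bool) -> float:
    """Hypothetical distance-shaped reward: progress toward the target is
    rewarded, collisions and motion away from it are penalised."""
    if reached_goal:
        return 100.0
    if hit_obstacle:
        return -100.0
    return 1.0 if dist_after < dist_before else -1.0

# Toy usage: choose one of 8 connected moves from a single Q-table row.
rng = np.random.default_rng(0)
q_row = np.zeros(8)
q_row[3] = 1.5                          # pretend one direction currently looks best
action = blended_action(q_row, epsilon=0.3, temperature=0.5, rng=rng)
print("chosen action:", action, "reward:", dynamic_reward(4.0, 3.2, False, False))
```

In an 8-connection grid, `q_values` would be the Q-table row for the robot's current cell, and the orientation-angle heuristic mentioned in the abstract would simply mask out actions outside the allowed angular range before calling `blended_action`.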
first_indexed | 2024-04-24T10:55:09Z |
format | Article |
id | doaj.art-3f051d156a674ca99b4c229fa4e1c687 |
institution | Directory Open Access Journal |
issn | 2117-4458 |
language | English |
last_indexed | 2024-04-24T10:55:09Z |
publishDate | 2024-01-01 |
publisher | EDP Sciences |
record_format | Article |
series | BIO Web of Conferences |
spelling | doaj.art-3f051d156a674ca99b4c229fa4e1c687 | 2024-04-12T07:36:21Z | eng | EDP Sciences | BIO Web of Conferences | ISSN 2117-4458 | 2024-01-01 | vol. 97, art. 00011 | DOI 10.1051/bioconf/20249700011 | bioconf_iscku2024_00011 | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot | Fallooh Noor H. (Electrical Engineering Department, University of Technology); Sadiq Ahmed T. (Computer Science Department, University of Technology); Abbas Eyad I. (Electrical Engineering Department, University of Technology); Hashim Ivan A. (Electrical Engineering Department, University of Technology) | https://www.bio-conferences.org/articles/bioconf/pdf/2024/16/bioconf_iscku2024_00011.pdf |
spellingShingle | Fallooh Noor H. Sadiq Ahmed T. Abbas Eyad I. Hashim Ivan A. Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot BIO Web of Conferences |
title | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot |
title_full | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot |
title_fullStr | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot |
title_full_unstemmed | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot |
title_short | Dynamic Path Planning using a modification Q-Learning Algorithm for a Mobile Robot |
title_sort | dynamic path planning using a modification q learning algorithm for a mobile robot |
url | https://www.bio-conferences.org/articles/bioconf/pdf/2024/16/bioconf_iscku2024_00011.pdf |
work_keys_str_mv | AT falloohnoorh dynamicpathplanningusingamodificationqlearningalgorithmforamobilerobot AT sadiqahmedt dynamicpathplanningusingamodificationqlearningalgorithmforamobilerobot AT abbaseyadi dynamicpathplanningusingamodificationqlearningalgorithmforamobilerobot AT hashimivana dynamicpathplanningusingamodificationqlearningalgorithmforamobilerobot |