Real Time Mini-Robot Using Improved Q-learning
Task planning by a robot becomes easier when the robot has the requisite knowledge about its world and the ability to improve itself. Many artificial intelligence research areas, such as robot navigation, path planning, and autonomous systems, need to extract features from the environment precisely in order to...
Main Author: | Mohannad Abid Shehab Ahmed |
---|---|
Format: | Article |
Language: | Arabic |
Published: | Mustansiriyah University/College of Engineering, 2011-09-01 |
Series: | Journal of Engineering and Sustainable Development |
Subjects: | Reinforcement Learning; Q-Learning; Mobile Robot; 89c52 MCU |
Online Access: | https://jeasd.uomustansiriyah.edu.iq/index.php/jeasd/article/view/1330 |
author | Mohannad Abid Shehab Ahmed |
collection | DOAJ |
description |
Task planning by a robot becomes easier when the robot has the requisite knowledge about its world and the ability to improve itself. Many artificial intelligence research areas, such as robot navigation, path planning, and autonomous systems, need to extract features from the environment precisely in order to find the shortest path around obstacles and even to smooth that path. The choice of path depends on many variables, such as the random placement and movement of obstacles, changes in obstacle speed, the robot's size, and variations in the robot's speed. Scaling robots down to miniature size introduces many new challenges, including memory and program-size limitations, low processor performance, and limited power autonomy. Simplified Q-learning addresses these problems because it learns the robot's behavior online and in real time. In this paper, numerically efficient methods (a sparse reward function and a directed explorer) are presented and added to the simplified algorithm to make Q-learning self-improving with respect to the number of trials, the task time, and the hazard, so it is natural to try to reduce the number of states, actions, and the overall time. The overall analysis results in an accurate and numerically stable method for improving Q-learning.
|
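
The abstract names three ingredients, tabular Q-learning, a sparse reward function, and a directed explorer, but the record carries no code, so the C sketch below is only one plausible reading of how they fit together. The grid size, reward values, learning constants, and function names are assumptions rather than the paper's parameters, and a real 89c52 port would need a much smaller, fixed-point Q-table, since that MCU offers only 256 bytes of on-chip RAM.

```c
/*
 * Minimal sketch, assuming a tiny grid world: tabular Q-learning with a
 * sparse reward and a visit-count exploration bonus. All names and
 * constants here are illustrative, not the paper's actual parameters.
 */
#include <stdio.h>

#define N_STATES  64      /* assumed 8x8 grid world                */
#define N_ACTIONS 4       /* up, down, left, right                 */
#define GOAL      (N_STATES - 1)
#define ALPHA     0.5f    /* learning rate (assumed)               */
#define GAMMA     0.9f    /* discount factor (assumed)             */

static float    q[N_STATES][N_ACTIONS];      /* Q-table, zero-initialized */
static unsigned visits[N_STATES][N_ACTIONS]; /* counts for the explorer   */

/* Sparse reward: nonzero only at the goal and on collisions. */
static float sparse_reward(int s, int collided)
{
    if (collided)  return -1.0f;  /* hazard               */
    if (s == GOAL) return  1.0f;  /* goal reached         */
    return 0.0f;                  /* zero everywhere else */
}

/* Directed explorer: pick the action maximizing Q plus a bonus that
 * shrinks as the (state, action) pair accumulates visits. */
static int choose_action(int s)
{
    int a, best = 0;
    float best_score = q[s][0] + 1.0f / (1.0f + (float)visits[s][0]);
    for (a = 1; a < N_ACTIONS; a++) {
        float score = q[s][a] + 1.0f / (1.0f + (float)visits[s][a]);
        if (score > best_score) { best_score = score; best = a; }
    }
    return best;
}

/* One-step update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). */
static void q_update(int s, int a, float r, int s2)
{
    int a2;
    float best_next = q[s2][0];
    for (a2 = 1; a2 < N_ACTIONS; a2++)
        if (q[s2][a2] > best_next) best_next = q[s2][a2];
    q[s][a] += ALPHA * (r + GAMMA * best_next - q[s][a]);
    visits[s][a]++;
}

int main(void)
{
    /* One illustrative step: suppose the move from state 0 happened to
     * reach the goal without a collision. */
    int s = 0, a = choose_action(s), s2 = GOAL;
    q_update(s, a, sparse_reward(s2, 0), s2);
    printf("Q(%d,%d) = %.3f\n", s, a, q[s][a]);  /* 0.5 with these constants */
    return 0;
}
```

The 1/(1 + visits) bonus makes rarely tried actions look temporarily attractive, which is one standard way to direct exploration toward unvisited state-action pairs instead of picking actions at random; the sparse reward confines feedback to goal and collision events, which matches the trial, task-time, and hazard counting mentioned in the abstract.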
first_indexed | 2024-04-11T17:29:40Z |
format | Article |
id | doaj.art-200319d74a81443ba49e7c5c0a0302f2 |
institution | Directory Open Access Journal |
issn | 2520-0917; 2520-0925 |
language | Arabic |
last_indexed | 2024-04-11T17:29:40Z |
publishDate | 2011-09-01 |
publisher | Mustansiriyah University/College of Engineering |
record_format | Article |
series | Journal of Engineering and Sustainable Development |
affiliation | Electrical Engineering Department, Al-Mustansiriyah University, Baghdad, Iraq |
title | Real Time Mini-Robot Using Improved Q-learning |
topic | Reinforcement Learning; Q-Learning; Mobile Robot; 89c52 MCU |
url | https://jeasd.uomustansiriyah.edu.iq/index.php/jeasd/article/view/1330 |