Dynamic modelling and intelligent control of a mobile robot


Bibliographic Details
Main Author: Thida Khin Saw.
Other Authors: Er Meng Joo
Format: Thesis
Language:English
Published: 2009
Subjects:
Online Access:http://hdl.handle.net/10356/18780
_version_ 1811687797080195072
author Thida Khin Saw.
author2 Er Meng Joo
author_facet Er Meng Joo
Thida Khin Saw.
author_sort Thida Khin Saw.
collection NTU
description Recently, the intelligent agent has become one of the important topics in Artificial Intelligence. The intelligent agent is expected to think and act like a human to accomplish a given task in the near future. Generally, when given a goal, the intelligent agent is supposed to perform actions by itself and adapt to its environment to achieve the goal. The intelligent agent must find the optimal solutions from the given user's goals and the environments that the agent explores; many researchers have attempted to solve these kinds of problems. Reinforcement learning is a learning algorithm that has been used for intelligent agents in dynamic environments in order to satisfy the requirements of autonomy and adaptability. The reinforcement learning problem is to maximize a numerical reward signal in a given environment. Out of the many reinforcement learning algorithms, Q-learning is the most famous one; it stores the Q-values associated with each state-action pair in a look-up table. However, the tabular formulation has limited generalization capability in continuous environments. This problem is solved by adopting a fuzzy inference system (FIS) in the Fuzzy Q-Learning (FQL) algorithm proposed in [2]. However, the FQL algorithm only adjusts the parameters of the fuzzy system and does not involve structure identification. Structure identification is achieved in the Dynamic Fuzzy Q-Learning (DFQL) algorithm proposed in [3], which constructs the structure online. Although rules can be generated online, redundant rules cannot be deleted. Rules which are no longer active are pruned in the recently developed Dynamic Self-Generated Fuzzy Q-Learning (DSGFQL) algorithm. This thesis focuses on the actual implementation of these recently developed algorithms: DSGFQL, DFQL, and FQL. The experimental results, corroborated by simulation results, show the superiority of the fuzzy control algorithms over the basic control algorithm.
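The abstract describes tabular Q-learning as keeping a Q-value for every state-action pair in a look-up table and updating it toward the numerical reward signal. The sketch below illustrates that baseline update rule only; the action set, learning rate, and exploration constant are illustrative assumptions, not values taken from the thesis. The FQL/DFQL/DSGFQL variants the thesis implements replace this look-up table with a fuzzy inference system, which supplies the generalization that the table lacks in continuous state spaces.

```python
# Minimal sketch of the tabular Q-learning baseline described in the abstract.
# Q-values for each (state, action) pair live in a look-up table and are
# updated from the observed reward. All constants here are illustrative.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "up", "down"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Look-up table: Q[(state, action)] -> estimated return
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning update toward reward plus discounted best next-state value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```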
first_indexed 2024-10-01T05:22:01Z
format Thesis
id ntu-10356/18780
institution Nanyang Technological University
language English
last_indexed 2024-10-01T05:22:01Z
publishDate 2009
record_format dspace
spelling ntu-10356/187802023-07-04T16:03:42Z Dynamic modelling and intelligent control of a mobile robot Thida Khin Saw. Er Meng Joo School of Electrical and Electronic Engineering DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics Recently, the intelligent agent has become one of the important topics in Artificial Intelligence. The intelligent agent is expected to think and act like a human to accomplish a given task in the near future. Generally, when given a goal, the intelligent agent is supposed to perform actions by itself and adapt to its environment to achieve the goal. The intelligent agent must find the optimal solutions from the given user's goals and the environments that the agent explores; many researchers have attempted to solve these kinds of problems. Reinforcement learning is a learning algorithm that has been used for intelligent agents in dynamic environments in order to satisfy the requirements of autonomy and adaptability. The reinforcement learning problem is to maximize a numerical reward signal in a given environment. Out of the many reinforcement learning algorithms, Q-learning is the most famous one; it stores the Q-values associated with each state-action pair in a look-up table. However, the tabular formulation has limited generalization capability in continuous environments. This problem is solved by adopting a fuzzy inference system (FIS) in the Fuzzy Q-Learning (FQL) algorithm proposed in [2]. However, the FQL algorithm only adjusts the parameters of the fuzzy system and does not involve structure identification. Structure identification is achieved in the Dynamic Fuzzy Q-Learning (DFQL) algorithm proposed in [3], which constructs the structure online. Although rules can be generated online, redundant rules cannot be deleted. Rules which are no longer active are pruned in the recently developed Dynamic Self-Generated Fuzzy Q-Learning (DSGFQL) algorithm. This thesis focuses on the actual implementation of these recently developed algorithms: DSGFQL, DFQL, and FQL. The experimental results, corroborated by simulation results, show the superiority of the fuzzy control algorithms over the basic control algorithm. Master of Science (Computer Control and Automation) 2009-07-17T08:25:41Z 2009-07-17T08:25:41Z 2008 2008 Thesis http://hdl.handle.net/10356/18780 en 88 p. application/pdf
spellingShingle DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Thida Khin Saw.
Dynamic modelling and intelligent control of a mobile robot
title Dynamic modelling and intelligent control of a mobile robot
title_full Dynamic modelling and intelligent control of a mobile robot
title_fullStr Dynamic modelling and intelligent control of a mobile robot
title_full_unstemmed Dynamic modelling and intelligent control of a mobile robot
title_short Dynamic modelling and intelligent control of a mobile robot
title_sort dynamic modelling and intelligent control of a mobile robot
topic DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
url http://hdl.handle.net/10356/18780
work_keys_str_mv AT thidakhinsaw dynamicmodellingandintelligentcontrolofamobilerobot