Intelligent control of an autonomous vehicle
Reinforcement Learning is a learning methodology in which a learner develops its knowledge through trial-and-error interactions with a dynamic environment. Based on how it acts on the environment, the learner receives only a "reward" or "punishment" signal rather than "instructive" information. Among reinforcement learning methods, Q-Learning is the most popular algorithm owing to its simplicity and well-developed theory. However, Q-Learning cannot generalize over large state and action spaces, whereas a practical learning agent requires a compact representation that generalizes experience in continuous domains. Much research has addressed this generalization issue. The Fuzzy Q-Learning (FQL) approach was proposed in [18] as a representation of Q-Learning for continuous domains. The key achievement of FQL is that it enables the original Q-Learning to handle continuous states and actions by means of fuzzy logic, a systematic mathematical approach to emulating the human way of thinking. The design of a fuzzy system can be decomposed into two phases, namely structure identification and parameter identification. Structure identification concerns partitioning the input space and determining the number of fuzzy rules, while parameter identification determines the parameters of the premises and consequents. The FQL approach is well defined only for parameter identification and does not address structure identification.
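For context only (the thesis itself is not reproduced in this record), the sketch below illustrates the standard tabular Q-Learning update that the abstract refers to, applied to a hypothetical one-dimensional corridor task. The environment, the constants, and the helper names (`step`, `choose_action`) are illustrative assumptions, not taken from the thesis or from the FQL controller it develops.

```python
import random

# Toy 1-D corridor: states 0..4, actions 0 (left) / 1 (right); reaching state 4 ends
# the episode with reward 1, every other transition gives reward 0.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # tabular Q-value estimates

def step(state, action):
    """Environment model: returns the next state and a scalar reward ('reward' or 'punishment')."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose_action(state):
    """Epsilon-greedy trial-and-error action selection, with ties broken randomly."""
    if random.random() < EPSILON or len(set(Q[state])) == 1:
        return random.randrange(N_ACTIONS)
    return Q[state].index(max(Q[state]))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = choose_action(s)
        s_next, r = step(s, a)
        # Q-Learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # learned Q-values; the greedy policy moves right toward the goal state
```

Because each state-action pair needs its own table entry, this representation does not scale to continuous or high-dimensional inputs; that is the generalization gap the abstract describes, which FQL addresses by attaching Q-values to fuzzy rules instead of table cells.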
Main Author: | San, Linn. |
---|---|
Other Authors: | Er Meng Joo |
Format: | Thesis |
Language: | English |
Published: | 2009 |
Subjects: | DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Control engineering |
Online Access: | http://hdl.handle.net/10356/18792 |
Institution: | Nanyang Technological University |
School: | School of Electrical and Electronic Engineering |
Degree: | Master of Science (Computer Control and Automation) |
Thesis Year: | 2008 |
Physical Description: | 100 p. (application/pdf) |
Record ID: | ntu-10356/18792 |