A Comparative Analysis of Reinforcement Learning Methods

This paper analyzes the suitability of reinforcement learning (RL) for both programming and adapting situated agents. We discuss two RL algorithms: Q-learning and the Bucket Brigade. We introduce a special case of the Bucket Brigade, and analyze and compare its performance to Q in a number of experiments. Next we discuss the key problems of RL: time and space complexity, input generalization, sensitivity to parameter values, and selection of the reinforcement function. We address the tradeoffs between built-in and learned knowledge and the number of training examples required by a learning algorithm. Finally, we suggest directions for future research.


Bibliographic Details
Main Author: Mataric, Maja
Language: en_US
Published: 1991-10-01 (AIM-1322, 13 p.; deposited 2004)
Subjects: reinforcement learning; situated agents; input generalization; complexity; built-in knowledge
Online Access: http://hdl.handle.net/1721.1/5978
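For context, the Q-learning algorithm named in the abstract uses the standard tabular update Q(s,a) ← Q(s,a) + α(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal illustrative sketch follows; the states, actions, and parameter values are hypothetical and not taken from the paper:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict-of-dicts Q-table.

    Applies Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    and returns the updated value Q(s,a).
    """
    # Value of the best action in the successor state (0 if none recorded).
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Tiny two-state example with hypothetical states and actions.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}
q_learning_update(Q, "s0", "right", 1.0, "s1")
# Q["s0"]["right"] is now 0.1 * (1.0 + 0.9 * 0.0 - 0.0) = 0.1
```

Repeated updates propagate reward backward through the table, which is the behavior the paper compares against the Bucket Brigade's local credit passing.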