Optimizing robot trajectories using reinforcement learning
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Main Author: | Kollar, Thomas (Thomas Fleming) |
---|---|
Other Authors: | Nicholas Roy. |
Format: | Thesis |
Language: | eng |
Published: | Massachusetts Institute of Technology, 2008 |
Subjects: | Electrical Engineering and Computer Science. |
Online Access: | http://hdl.handle.net/1721.1/40531 |
author | Kollar, Thomas (Thomas Fleming) |
author2 | Nicholas Roy. |
collection | MIT |
description | Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. |
format | Thesis |
id | mit-1721.1/40531 |
institution | Massachusetts Institute of Technology |
language | eng |
publishDate | 2008 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/40531 (2019-04-10T15:07:08Z)
Optimizing robot trajectories using reinforcement learning. Kollar, Thomas (Thomas Fleming); Nicholas Roy, advisor. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science.
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (leaves 93-96).
The mapping problem has received considerable attention in robotics recently. Mature techniques now allow practitioners to reliably and consistently generate 2-D and 3-D maps of objects, office buildings, city blocks, and metropolitan areas with comparatively few errors. Nevertheless, the ease of construction and the quality of the resulting map depend strongly on the exploration strategy used to acquire sensor data. Most exploration strategies concentrate on selecting the next best measurement to take, trading off information gathering against regular relocalization. What has not been studied so far is the effect the robot controller has on map quality. Certain kinds of robot motion (e.g., sharp turns) are hard to estimate correctly and increase the likelihood of errors in the mapping process. We show how reinforcement learning can be used to generate better motion control. The learned policy is shown to reduce the overall map uncertainty and squared error while jointly reducing data-association errors.
by Thomas Kollar. S.M.
Available 2008-02-27T22:44:15Z; issued 2007. Thesis. http://hdl.handle.net/1721.1/40531. 191913909. eng.
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission: http://dspace.mit.edu/handle/1721.1/7582
96 leaves, application/pdf. Massachusetts Institute of Technology |
title | Optimizing robot trajectories using reinforcement learning |
topic | Electrical Engineering and Computer Science. |
url | http://hdl.handle.net/1721.1/40531 |