Task-Level Robot Learning: Ball Throwing
We are investigating how to program robots so that they learn tasks from practice. One method, task-level learning, provides advantages over simply perfecting models of the robot's lower level systems. Task-level learning can compensate for the structural modeling errors of the robot's lower level control systems and can speed up the learning process by reducing the degrees of freedom of the models to be learned. We demonstrate two general learning procedures---fixed-model learning and refined-model learning---on a ball-throwing robot system.
Main Authors: | Aboaf, Eric W.; Atkeson, Christopher G.; Reinkensmeyer, David J. |
---|---|
Language: | en_US |
Published: | 2004 |
Subjects: | robotics learning tasks |
Online Access: | http://hdl.handle.net/1721.1/6055 |
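The abstract's fixed-model procedure keeps the lower-level model fixed and corrects the task-level command from observed task errors. The sketch below is not the memo's code: the projectile model, the 10% release-speed loss standing in for structural modeling error, the unit correction gain, and all function names are assumptions chosen only to illustrate the idea for a one-dimensional throwing task.

```python
"""Illustrative sketch of fixed-model task-level learning for ball throwing.

Assumptions (not from the memo): a fixed release angle, a drag-free nominal
model, and a 10% release-speed loss plus a small offset as the unmodeled error.
"""
import math

G = 9.81               # gravity (m/s^2)
ANGLE = math.pi / 4    # fixed release angle assumed for this sketch


def nominal_inverse_model(distance):
    """Fixed (imperfect) model: release speed for a desired range,
    ignoring drag and release-timing errors."""
    return math.sqrt(max(distance, 0.0) * G / math.sin(2 * ANGLE))


def true_throw(speed):
    """Stand-in for the real robot and physics: a systematic 10% speed
    loss plus an unmodeled offset, representing structural model error."""
    effective = 0.92 * speed
    return effective ** 2 * math.sin(2 * ANGLE) / G - 0.05


def fixed_model_learning(target, trials=8, gain=1.0):
    """Task-level loop: the lower-level model stays fixed; only the
    commanded distance is adjusted from the observed landing error."""
    command = target
    for k in range(trials):
        speed = nominal_inverse_model(command)
        landed = true_throw(speed)
        error = target - landed
        print(f"trial {k}: commanded {command:.3f} m, landed {landed:.3f} m")
        command += gain * error   # correct the task-level command, not the model
    return command


if __name__ == "__main__":
    fixed_model_learning(target=2.0)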
author | Aboaf, Eric W.; Atkeson, Christopher G.; Reinkensmeyer, David J. |
collection | MIT |
description | We are investigating how to program robots so that they learn tasks from practice. One method, task-level learning, provides advantages over simply perfecting models of the robot's lower level systems. Task-level learning can compensate for the structural modeling errors of the robot's lower level control systems and can speed up the learning process by reducing the degrees of freedom of the models to be learned. We demonstrate two general learning procedures---fixed-model learning and refined-model learning---on a ball-throwing robot system. |
id | mit-1721.1/6055 |
institution | Massachusetts Institute of Technology |
language | en_US |
publishDate | 2004 |
record_format | dspace |
date_issued | 1987-12-01 |
identifier | AIM-1006 |
extent | 18 p.; 2480509 bytes; 978972 bytes |
format | application/postscript; application/pdf |
title | Task-Level Robot Learning: Ball Throwing |
topic | robotics learning tasks |
url | http://hdl.handle.net/1721.1/6055 |