Approximate dynamic programming using model-free Bellman Residual Elimination

This paper presents a modification to the method of Bellman Residual Elimination (BRE) for approximate dynamic programming. While prior work on BRE has focused on learning an approximate policy for an underlying Markov Decision Process (MDP) when the state transition model of the MDP is known, this...

Full description

Bibliographic Details
Main Authors: Bethke, Brett M., How, Jonathan P.
Other Authors: Massachusetts Institute of Technology. Aerospace Controls Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers / American Automatic Control Council 2011
Online Access: http://hdl.handle.net/1721.1/66203
https://orcid.org/0000-0001-8576-1930
_version_ 1826211427491250176
author Bethke, Brett M.
How, Jonathan P.
author2 Massachusetts Institute of Technology. Aerospace Controls Laboratory
author_facet Massachusetts Institute of Technology. Aerospace Controls Laboratory
Bethke, Brett M.
How, Jonathan P.
author_sort Bethke, Brett M.
collection MIT
description This paper presents a modification to the method of Bellman Residual Elimination (BRE) for approximate dynamic programming. While prior work on BRE has focused on learning an approximate policy for an underlying Markov Decision Process (MDP) when the state transition model of the MDP is known, this work proposes a model-free variant of BRE that does not require knowledge of the state transition model. Instead, state trajectories of the system, generated using simulation and/or observations of the real system in operation, are used to build stochastic approximations of the quantities needed to carry out the BRE algorithm. The resulting algorithm can be shown to converge to the policy produced by the nominal, model-based BRE algorithm in the limit of observing an infinite number of trajectories. To validate the performance of the approach, we compare model-based and model-free BRE against LSPI, a well-known approximate dynamic programming algorithm. Measuring performance in terms of both computational complexity and policy quality, we present results showing that BRE performs at least as well as, and sometimes significantly better than, LSPI on a standard benchmark problem.
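The core idea in the abstract, replacing model-based expectations over next states with sample averages computed from observed trajectories, can be illustrated with a small sketch. This is not the authors' algorithm or code; it is a hypothetical example (toy feature map, invented transition data) showing how sampled transitions yield empirical Bellman residuals for a linear value-function approximation, which are then driven to zero on the observed data by least squares:

```python
# Hypothetical sketch (not the paper's implementation): sample-based
# Bellman residuals for V(s) = w . phi(s) under a fixed policy, with the
# expectation over next states replaced by observed transitions (s, r, s').
import numpy as np

GAMMA = 0.95  # assumed discount factor for the example

def features(s):
    # Toy feature map for a scalar state: [1, s, s^2].
    return np.array([1.0, s, s * s])

def empirical_bellman_residuals(transitions, w, gamma=GAMMA):
    """Residuals V(s) - (r + gamma * V(s')) over sampled transitions.

    transitions: list of (s, r, s_next) tuples observed along trajectories.
    """
    res = []
    for s, r, s_next in transitions:
        v = w @ features(s)
        target = r + gamma * (w @ features(s_next))
        res.append(v - target)
    return np.array(res)

# Invented transition data standing in for simulated/observed trajectories.
transitions = [(0.0, 1.0, 0.5), (0.5, 0.5, 1.0), (1.0, 0.0, 1.0)]

# Fit w so the sampled residuals vanish -- a stochastic stand-in for the
# exact residual elimination the model-based algorithm performs.
A = np.array([features(s) - GAMMA * features(sn) for s, _, sn in transitions])
b = np.array([r for _, r, _ in transitions])
w, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With this `w`, `empirical_bellman_residuals(transitions, w)` is (numerically) zero on the observed data; the convergence claim in the abstract concerns what happens to such sample-based solutions as the number of observed trajectories grows.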
first_indexed 2024-09-23T15:05:43Z
format Article
id mit-1721.1/66203
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T15:05:43Z
publishDate 2011
publisher Institute of Electrical and Electronics Engineers / American Automatic Control Council
record_format dspace
spelling mit-1721.1/662032022-09-29T12:40:00Z Approximate dynamic programming using model-free Bellman Residual Elimination Bethke, Brett M. How, Jonathan P. Massachusetts Institute of Technology. Aerospace Controls Laboratory Massachusetts Institute of Technology. Department of Aeronautics and Astronautics How, Jonathan P. Bethke, Brett M. How, Jonathan P. This paper presents a modification to the method of Bellman Residual Elimination (BRE) for approximate dynamic programming. While prior work on BRE has focused on learning an approximate policy for an underlying Markov Decision Process (MDP) when the state transition model of the MDP is known, this work proposes a model-free variant of BRE that does not require knowledge of the state transition model. Instead, state trajectories of the system, generated using simulation and/or observations of the real system in operation, are used to build stochastic approximations of the quantities needed to carry out the BRE algorithm. The resulting algorithm can be shown to converge to the policy produced by the nominal, model-based BRE algorithm in the limit of observing an infinite number of trajectories. To validate the performance of the approach, we compare model-based and model-free BRE against LSPI, a well-known approximate dynamic programming algorithm. Measuring performance in terms of both computational complexity and policy quality, we present results showing that BRE performs at least as well as, and sometimes significantly better than, LSPI on a standard benchmark problem. Boeing Company. Phantom Works United States. Air Force Office of Scientific Research (Grant FA9550-08-1-0086) Hertz Foundation 2011-10-11T17:05:58Z 2011-10-11T17:05:58Z 2010-06 Article http://purl.org/eprint/type/ConferencePaper 978-1-4244-7426-4 0743-1619 INSPEC Accession Number: 11508712 http://hdl.handle.net/1721.1/66203 Bethke, B., and J.P. How. 
“Approximate dynamic programming using model-free Bellman Residual Elimination.” American Control Conference (ACC), 2010. 2010. 4146-4151. Print. https://orcid.org/0000-0001-8576-1930 en_US http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5530611&isnumber=5530425 American Control Conference 2010 Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. application/pdf Institute of Electrical and Electronics Engineers / American Automatic Control Council IEEE
spellingShingle Bethke, Brett M.
How, Jonathan P.
Approximate dynamic programming using model-free Bellman Residual Elimination
title Approximate dynamic programming using model-free Bellman Residual Elimination
title_full Approximate dynamic programming using model-free Bellman Residual Elimination
title_fullStr Approximate dynamic programming using model-free Bellman Residual Elimination
title_full_unstemmed Approximate dynamic programming using model-free Bellman Residual Elimination
title_short Approximate dynamic programming using model-free Bellman Residual Elimination
title_sort approximate dynamic programming using model free bellman residual elimination
url http://hdl.handle.net/1721.1/66203
https://orcid.org/0000-0001-8576-1930
work_keys_str_mv AT bethkebrettm approximatedynamicprogrammingusingmodelfreebellmanresidualelimination
AT howjonathanp approximatedynamicprogrammingusingmodelfreebellmanresidualelimination