Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression
This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression…
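Since the abstract above only outlines the approach, the following minimal sketch illustrates the general idea behind kernel-based Bellman residual elimination for policy evaluation: represent the value function as a kernel expansion over sampled states and force the Bellman residual to zero at those samples. The toy chain MDP, the RBF kernel, and all parameter values below are assumptions for illustration, not the paper's algorithm or experimental setup.

```python
# Minimal sketch: kernel-based Bellman residual elimination for policy
# evaluation on a small finite MDP with a known model. All problem data
# (chain MDP, RBF kernel, gamma) are illustrative assumptions.
import numpy as np

gamma = 0.9                          # discount factor (assumed)
n = 10                               # states in a toy chain MDP

# Known model under a fixed policy: transition matrix P[s, s'] and
# expected one-step reward R[s].
P = np.zeros((n, n))
for s in range(n):
    P[s, min(s + 1, n - 1)] = 0.8    # move right with prob 0.8
    P[s, max(s - 1, 0)] += 0.2       # slip left with prob 0.2
R = np.zeros(n)
R[n - 1] = 1.0                       # reward at the terminal-like state

# RBF kernel over scalar state indices (an assumed feature choice,
# standing in for a Gaussian process covariance function).
states = np.arange(n, dtype=float)
K = np.exp(-0.5 * (states[:, None] - states[None, :]) ** 2)

# Represent V(s) = sum_i alpha_i k(s, s_i). Requiring the Bellman
# residual V(s) - (R(s) + gamma * E[V(s')]) to vanish at every sampled
# state yields a linear system in alpha: (K - gamma * P @ K) alpha = R.
alpha = np.linalg.solve(K - gamma * P @ K, R)
V = K @ alpha                        # value estimates at the samples

# Check: the residual is (numerically) zero at the sampled states,
# which is the "elimination" property, in contrast to methods that
# merely minimize the residual.
residual = V - (R + gamma * P @ V)
print("max |Bellman residual| at samples:", np.abs(residual).max())
```

In a full policy iteration loop, this evaluation step would alternate with greedy policy improvement using the model; the sketch shows only the evaluation step for a single fixed policy.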
| Main Authors: | Bethke, Brett M., How, Jonathan P. |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics |
| Format: | Article |
| Language: | en_US |
| Published: | Institute of Electrical and Electronics Engineers, 2010 |
| Online Access: | http://hdl.handle.net/1721.1/58878, https://orcid.org/0000-0001-8576-1930 |
Similar Items

- Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression
  by: How, Jonathan P., et al.
  Published: (2010)
- Approximate dynamic programming using model-free Bellman Residual Elimination
  by: Bethke, Brett M., et al.
  Published: (2011)
- Kernel-based approximate dynamic programming using Bellman residual elimination
  by: Bethke, Brett (Brett M.)
  Published: (2010)
- Agent capability in persistent mission planning using approximate dynamic programming
  by: Bethke, Brett M., et al.
  Published: (2010)
- Piecewise constant policy approximations to Hamilton–Jacobi–Bellman equations
  by: Reisinger, C., et al.
  Published: (2016)