Approximate Dynamic Programming Using Bellman Residual Elimination and Gaussian Process Regression

This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression ...
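The abstract describes model-based approximate policy evaluation built around Bellman residuals and Gaussian process regression. As a rough illustration of that general idea only, the Python sketch below fits a Gaussian process (ordinary RBF kernel, noisy-observation form) to one-step Bellman backup targets on a hypothetical chain MDP with a fixed policy, then reports the Bellman residual remaining at the sample states. Every concrete detail here (the toy MDP, the RBF kernel, the constants GAMMA, SIGMA_N, LENGTH_SCALE, and all function names) is an assumption made for illustration; the paper's Bellman residual elimination approach instead derives the kernel from the Bellman equation so that the residuals can be driven to zero at the sample states, and that construction is not reproduced here.

```python
import numpy as np

# Illustrative sketch only (not the paper's BRE kernel): approximate policy
# evaluation on a toy chain MDP with a known model, using plain GP regression
# with an RBF kernel to fit one-step Bellman backup targets.

GAMMA = 0.95          # discount factor (assumed)
N_STATES = 20         # chain MDP with states 0..19 (assumed toy problem)
SIGMA_N = 1e-3        # GP observation-noise variance (assumed)
LENGTH_SCALE = 2.0    # RBF kernel length scale (assumed)

states = np.arange(N_STATES, dtype=float).reshape(-1, 1)

# Fixed policy "always move right"; reward 1 only at the last state.
def next_state(s: int) -> int:
    return min(s + 1, N_STATES - 1)

def reward(s: int) -> float:
    return 1.0 if s == N_STATES - 1 else 0.0

def rbf_kernel(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    return np.exp(-0.5 * (A - B.T) ** 2 / LENGTH_SCALE ** 2)

K = rbf_kernel(states, states)
K_reg_inv = np.linalg.inv(K + SIGMA_N * np.eye(N_STATES))

def gp_fit(targets: np.ndarray) -> np.ndarray:
    """GP posterior mean at the sample states (train and test sets coincide)."""
    return K @ (K_reg_inv @ targets)

def bellman_backup(V: np.ndarray) -> np.ndarray:
    """One-step Bellman backup targets under the fixed policy (model is known)."""
    return np.array([reward(s) + GAMMA * V[next_state(s)] for s in range(N_STATES)])

# Approximate policy evaluation: alternately apply the Bellman backup and
# re-fit the GP, then measure the Bellman residual left at the sample states.
V = np.zeros(N_STATES)
for sweep in range(1000):
    V_new = gp_fit(bellman_backup(V))
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

residual = np.max(np.abs(V - bellman_backup(V)))
print(f"stopped after {sweep + 1} sweeps; max Bellman residual = {residual:.3e}")
```

In this simplified setup the GP fit acts as a smoother, so a small Bellman residual generally remains at the sample states; eliminating that residue exactly at the samples is the distinguishing goal the paper's title refers to.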


Bibliographic Details
Main Authors: Bethke, Brett M. (Contributor), How, Jonathan P. (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2010-10-05.

Similar Items