Fast approximate hierarchical solution of MDPs

Bibliographic Details
Main Author: Barry, Jennifer L. (Jennifer Lynn)
Other Authors: Leslie Pack Kaelbling and Tomás Lozano-Pérez.
Format: Others
Language:English
Published: Massachusetts Institute of Technology 2010
Subjects:
Online Access:http://hdl.handle.net/1721.1/53202
Description
Summary:Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. === Cataloged from PDF version of thesis. === Includes bibliographical references (p. 89-91). === In this thesis, we present an efficient algorithm for creating and solving hierarchical models of large Markov decision processes (MDPs). As the size of an MDP grows, finding an exact solution becomes intractable, so we expect only to find an approximate solution. We also assume that the hierarchies we create are not necessarily reusable across problems, so we must be able to construct and solve the hierarchical model in less time than it would take to solve the original, flat model. Our approach works in two stages. We first create the hierarchical MDP by forming clusters of states that can transition easily among themselves. We then solve the hierarchical MDP: a quick bottom-up pass, based on a deterministic approximation of the expected cost of moving from one state to another, is followed by a top-down pass that derives the policy, which avoids solving low-level MDPs for multiple objectives. The resulting policy may be suboptimal, but it is guaranteed to reach a goal state in any problem in which a goal is reachable under the optimal policy. We have two versions of this algorithm, one for enumerated-state MDPs and one for factored MDPs. We have tested the enumerated-state algorithm on classic problems and shown that it is better than or comparable to current work in the field. Factored MDPs are a way of specifying extremely large MDPs without listing all of the states. Because the problem has a compact representation, we suspect that the solution should, in many cases, also have a compact representation. We have an implementation for factored MDPs and have shown that it can find solutions for large, factored problems. === by Jennifer L. Barry. === S.M.
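The abstract outlines a two-stage approach: cluster states that can transition easily among themselves, then use a quick, deterministic approximation of movement costs to derive a policy from the top down. The Python sketch below illustrates those two ideas on a tiny enumerated-state MDP. It is not the thesis implementation: the data layout, the probability threshold for an "easy" transition, and the unit step cost are all illustrative assumptions, and the full algorithm builds and solves an actual hierarchical model rather than working on the flat state space.

```python
# Toy sketch (not the thesis code): two ideas from the abstract on a flat,
# enumerated-state MDP.  Stage 1 clusters states that can transition easily
# among themselves; stage 2 approximates the expected cost of moving between
# states deterministically (shortest paths) and reads a policy off greedily.
import heapq
from collections import defaultdict


def cluster_states(transitions, threshold=0.5):
    """Stage 1: connected components of the 'easy transition' graph.

    transitions: dict {(state, action): [(next_state, prob), ...]}.
    Any transition with probability >= threshold links its two states.
    """
    adj = defaultdict(set)
    states = set()
    for (s, _a), outcomes in transitions.items():
        states.add(s)
        for s2, p in outcomes:
            states.add(s2)
            if p >= threshold and s2 != s:
                adj[s].add(s2)
                adj[s2].add(s)
    cluster_of, next_id = {}, 0
    for s in states:
        if s in cluster_of:
            continue
        stack = [s]
        while stack:                 # depth-first sweep of one component
            u = stack.pop()
            if u in cluster_of:
                continue
            cluster_of[u] = next_id
            stack.extend(adj[u])
        next_id += 1
    return cluster_of


def approx_costs_to_goal(transitions, goal, step_cost=1.0):
    """Stage 2 (bottom-up): pretend every transition succeeds deterministically
    and compute shortest-path cost from each state to the goal (Dijkstra on
    the reversed transition graph)."""
    rev = defaultdict(set)
    for (s, _a), outcomes in transitions.items():
        for s2, _p in outcomes:
            rev[s2].add(s)
    dist, pq = {goal: 0.0}, [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in rev[u]:
            if d + step_cost < dist.get(v, float("inf")):
                dist[v] = d + step_cost
                heapq.heappush(pq, (dist[v], v))
    return dist


def greedy_policy(transitions, costs):
    """Stage 2 (top-down): in each state pick the action whose successors
    have the smallest expected approximate cost-to-goal."""
    policy, best = {}, {}
    for (s, a), outcomes in transitions.items():
        val = sum(p * costs.get(s2, float("inf")) for s2, p in outcomes)
        if val < best.get(s, float("inf")):
            best[s], policy[s] = val, a
    return policy


if __name__ == "__main__":
    # Tiny 4-state chain 0 -> 1 -> 2 -> 3 (goal) with an unreliable shortcut.
    T = {
        (0, "right"): [(1, 0.9), (0, 0.1)],
        (1, "right"): [(2, 0.9), (1, 0.1)],
        (2, "right"): [(3, 0.9), (2, 0.1)],
        (0, "jump"): [(3, 0.2), (0, 0.8)],
    }
    print(cluster_states(T))                  # cluster id per state
    costs = approx_costs_to_goal(T, goal=3)   # deterministic cost estimates
    print(greedy_policy(T, costs))            # greedy action per state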
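```

Note how the deterministic cost approximation can prefer the unreliable shortcut in state 0: the resulting policy is suboptimal, echoing the abstract's point that the hierarchical solution trades optimality for speed while still reaching the goal whenever it is reachable.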