Point-Based Policy Transformation: Adapting Policy to Changing POMDP Models

Bibliographic Details
Main Authors: Kurniawati, Hanna (Author), Patrikalakis, Nicholas M (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Mechanical Engineering (Contributor)
Format: Article
Language: English
Published: Springer Nature America, Inc., 2019-01-04T15:08:45Z.
Subjects:
Online Access: Get fulltext
Description
Summary: Motion planning under uncertainty that can efficiently take into account changes in the environment is critical for robots to operate reliably in our living spaces. The Partially Observable Markov Decision Process (POMDP) provides a systematic and general framework for motion planning under uncertainty. Point-based POMDP methods have advanced POMDP planning tremendously over the past few years, making POMDP planning practical for many simple to moderately difficult robotics problems. However, when environmental changes alter the POMDP model, most existing POMDP planners recompute the solution from scratch, wasting the significant computational resources already spent on solving the original problem. In this paper, we propose a novel algorithm, called Point-Based Policy Transformation (PBPT), that solves the altered POMDP problem by transforming the solution of the original problem to accommodate the changes. PBPT uses the point-based POMDP approach. It transforms the original solution by modifying the set of sampled beliefs that represents the belief space B, and then uses this new set of sampled beliefs to revise the original solution. Preliminary results indicate that PBPT generates a good policy for the altered POMDP model in a matter of minutes, while recomputing the policy using the fastest offline POMDP planner available today fails to find a policy of similar quality after two hours of planning time, even when the policy for the original problem is reused as an initial policy.
Keywords: Optimal Policy, Autonomous Underwater Vehicle, Reward Function, Good Policy, State Trace
Singapore-MIT Alliance for Research and Technology. Center for Environmental Sensing and Modeling
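
Note: the abstract describes PBPT only at a high level. As a rough illustration of the point-based machinery it builds on, the sketch below shows a generic point-based value-iteration backup over a fixed set of sampled beliefs, and how an existing alpha-vector set can be reused as a warm start when the model (here, the reward function) changes. This is not the PBPT algorithm itself: the toy model, the helper names point_based_backup and solve, and the warm-starting scheme are all assumptions made for illustration.

```python
# Hypothetical sketch of the point-based idea behind PBPT-style reuse:
# a policy is a set of alpha-vectors anchored at sampled beliefs, and when
# the model changes we warm-start from the original solution instead of
# resolving from scratch. Model parameters are illustrative, not from the paper.
import numpy as np

def point_based_backup(beliefs, alphas, T, Z, R, gamma):
    """One point-based backup: for each sampled belief, build the best
    new alpha-vector using the current alpha-vector set."""
    n_s, n_a = R.shape
    n_o = Z.shape[2]
    new_alphas = []
    for b in beliefs:
        best_val, best_alpha = -np.inf, None
        for a in range(n_a):
            alpha_ao = np.zeros(n_s)
            for o in range(n_o):
                # For each observation, pick the successor alpha-vector that
                # maximises the value of the updated belief.
                vals = [np.dot(b, T[a] @ (Z[a, :, o] * alpha)) for alpha in alphas]
                alpha_ao += T[a] @ (Z[a, :, o] * alphas[int(np.argmax(vals))])
            cand = R[:, a] + gamma * alpha_ao
            val = np.dot(b, cand)
            if val > best_val:
                best_val, best_alpha = val, cand
        new_alphas.append(best_alpha)
    return np.array(new_alphas)

def solve(beliefs, T, Z, R, gamma, iters=60, alphas=None):
    """Point-based value iteration over a fixed set of sampled beliefs.
    Passing `alphas` warm-starts from an existing policy."""
    if alphas is None:
        alphas = np.zeros((1, R.shape[0]))
    for _ in range(iters):
        alphas = point_based_backup(beliefs, alphas, T, Z, R, gamma)
    return alphas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_s, n_a, n_o = 4, 3, 2
    # Random illustrative model: T[a] is an S x S transition matrix,
    # Z[a, s', o] the observation model, R[s, a] the reward.
    T = rng.random((n_a, n_s, n_s)); T /= T.sum(axis=2, keepdims=True)
    Z = rng.random((n_a, n_s, n_o)); Z /= Z.sum(axis=2, keepdims=True)
    R = rng.random((n_s, n_a))
    beliefs = rng.dirichlet(np.ones(n_s), size=20)  # sampled belief set

    original = solve(beliefs, T, Z, R, gamma=0.95)

    # Environment change: the reward function is altered. Reuse the sampled
    # beliefs and the original alpha-vectors as the starting point rather
    # than recomputing the policy from scratch.
    R_new = R.copy(); R_new[:, 0] -= 0.5
    adapted = solve(beliefs, T, Z, R_new, gamma=0.95, alphas=original, iters=20)
    print("original alpha-vectors:", original.shape)
    print("adapted alpha-vectors:", adapted.shape)
```

In this sketch the sampled belief set is kept fixed and only the alpha-vectors are revised; PBPT, as described in the abstract, goes further by modifying the set of sampled beliefs itself before revising the original solution.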