Summary: This thesis is about chance and choice, that is, decision making under uncertainty. The
desire to build an intelligent agent that performs rewarding tasks in a realistic
world calls for working models of sequential decision making and planning.
In response to this grand wish, decision-theoretic planning (DTP)
has evolved from decision theory and control theory, and has been applied to
planning in artificial intelligence. Recent interest has been directed toward
Markov Decision Processes (MDPs), which originate in operations research.
While research on fully observable MDPs has yielded fruitful results,
partially observable MDPs (POMDPs) remain too difficult to solve
once observation uncertainty is incorporated. Abstraction and approximation
techniques have therefore become the focus.
This research attempts to enhance POMDPs by applying AI techniques.
In particular, we transform the linear POMDP constructs into a structured
representation based on binary decision trees and Bayesian networks to
achieve compactness. A handful of tree-oriented operations is then developed
to perform structural belief updates and value computation. Along
with the structured representation, we explore the belief space with a heuristic
online search approach, in which a best-first search strategy with heuristic
pruning is employed.
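Concretely, the two computational ingredients named above, Bayesian belief updating and best-first search with heuristic pruning, can be sketched in flat (tabular) form. This is only an illustrative toy: the model, state names, and bound values below are hypothetical, and the thesis's contribution is performing these operations on tree-structured representations rather than on flat tables.

```python
import heapq

def belief_update(b, T, O, a, o):
    """Tabular Bayes update: b'(s') is proportional to
    O[a][s'][o] * sum_s T[a][s][s'] * b[s], then normalized.
    (The structured version operates on trees instead of tables.)"""
    unnorm = {s2: O[a][s2][o] * sum(T[a][s][s2] * p for s, p in b.items())
              for s2 in b}
    z = sum(unnorm.values())
    return {s2: p / z for s2, p in unnorm.items()}

def best_first_search(root, children, upper_bound, leaf_value):
    """Best-first search with heuristic pruning: expand nodes in order of an
    optimistic upper bound, and stop once no open node can beat the best
    leaf value found so far."""
    best = float("-inf")
    frontier = [(-upper_bound(root), root)]       # max-heap via negation
    while frontier:
        neg_ub, node = heapq.heappop(frontier)
        if -neg_ub <= best:                       # prune: bound cannot beat incumbent
            break
        kids = children(node)
        if not kids:                              # leaf: update incumbent
            best = max(best, leaf_value(node))
        else:
            for c in kids:
                heapq.heappush(frontier, (-upper_bound(c), c))
    return best

# Toy two-state model (hypothetical): listening gives an 85%-accurate signal.
T = {"listen": {"L": {"L": 1.0, "R": 0.0}, "R": {"L": 0.0, "R": 1.0}}}
O = {"listen": {"L": {"hearL": 0.85, "hearR": 0.15},
                "R": {"hearL": 0.15, "hearR": 0.85}}}
b1 = belief_update({"L": 0.5, "R": 0.5}, T, O, "listen", "hearL")

# Toy search tree (hypothetical) with optimistic bounds and leaf values.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
ub   = {"root": 10.0, "a": 8.0, "a1": 3.0, "a2": 7.0, "b": 5.0, "b1": 5.0}
val  = {"a1": 3.0, "a2": 7.0, "b1": 5.0}
best = best_first_search("root", lambda n: tree.get(n, []),
                         lambda n: ub[n], lambda n: val[n])
```

In the toy run, hearing "hearL" shifts the belief from uniform to 0.85 on state `L`, and the search returns 7.0 after pruning branch `b`, whose bound (5.0) cannot beat the incumbent.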
Experimenting with a structured testbed domain reveals the great potential
of exploiting structure and heuristics to empower POMDPs for more practical
applications.

Faculty of Science, Department of Computer Science, Graduate