A Semi-Markov Decision Model for Recognizing the Destination of a Maneuvering Agent in Real Time Strategy Games

Bibliographic Details
Main Authors: Quanjun Yin, Shiguang Yue, Yabing Zha, Peng Jiao
Format: Article
Language: English
Published: Hindawi Limited 2016-01-01
Series: Mathematical Problems in Engineering
Online Access: http://dx.doi.org/10.1155/2016/1907971
Description
Summary: Recognizing the destinations of a maneuvering agent is important in real-time strategy games. Because finding a path in an uncertain environment is essentially a sequential decision problem, we can model the maneuvering process as a Markov decision process (MDP). However, the MDP does not define an action duration. In this paper, we propose a novel semi-Markov decision model (SMDM). In the SMDM, the destination is regarded as a hidden state that affects the selection of actions, and each action is associated with a duration variable indicating whether the action has been completed. We also exploit a Rao-Blackwellised particle filter (RBPF) for inference under the dynamic Bayesian network structure of the SMDM. In experiments, we simulate agents maneuvering on a combat field and use their traces to evaluate the performance of our method. The results show that the SMDM outperforms another extension of the MDP in terms of precision, recall, and F-measure. Our method recognizes destinations efficiently whether or not they change. Additionally, the RBPF infers destinations with smaller variance and in less time than the SPF, and its average failure rate is lower when the number of particles is insufficient.
ISSN: 1024-123X, 1563-5147
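
The abstract above describes the SMDM as a dynamic Bayesian network in which the destination is a hidden state, actions are selected toward that destination, and each action carries a duration variable, with a particle filter inferring the destination from the agent's observed trace. The following Python sketch is a rough illustration of that idea, not the authors' implementation: it runs a plain (sampling-only) particle filter over hypothesized destinations rather than the paper's Rao-Blackwellised filter, which would marginalize part of the state analytically, and the candidate destinations, Gaussian observation noise, fixed step length, and geometric duration model are all assumptions made for the example.

# Minimal sketch of destination inference for a maneuvering agent, under
# assumed settings: known candidate destinations, noisy 2-D position
# observations, fixed step length, and a geometric action-duration model.
# This is a plain particle filter, NOT the paper's Rao-Blackwellised one.
import numpy as np

rng = np.random.default_rng(0)

DESTINATIONS = np.array([[0.0, 10.0], [10.0, 10.0], [10.0, 0.0]])  # assumed goals
OBS_NOISE = 0.3     # std. dev. of position observations (assumed)
STEP = 0.5          # distance covered per time step (assumed)
P_CONTINUE = 0.8    # prob. the current action's duration has not yet elapsed


def propagate(particles):
    """Advance each particle one tick of the semi-Markov dynamics:
    re-plan the heading only when the current action's duration elapses."""
    for p in particles:
        if rng.random() > P_CONTINUE:
            direction = DESTINATIONS[p["dest"]] - p["pos"]
            norm = np.linalg.norm(direction)
            p["heading"] = direction / norm if norm > 1e-9 else np.zeros(2)
        p["pos"] = p["pos"] + STEP * p["heading"]


def update(particles, weights, obs):
    """Reweight particles by the Gaussian likelihood of the observed position."""
    for i, p in enumerate(particles):
        err = np.linalg.norm(obs - p["pos"])
        weights[i] *= np.exp(-0.5 * (err / OBS_NOISE) ** 2)
    weights /= weights.sum()


def destination_posterior(particles, weights):
    """Posterior probability of each candidate destination."""
    post = np.zeros(len(DESTINATIONS))
    for p, w in zip(particles, weights):
        post[p["dest"]] += w
    return post


def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [dict(particles[i]) for i in idx], np.full(len(particles), 1.0 / len(particles))


# Initialize particles with a uniform prior over destinations.
N = 500
particles = [{"dest": int(rng.integers(len(DESTINATIONS))),
              "pos": np.zeros(2),
              "heading": np.zeros(2)} for _ in range(N)]
weights = np.full(N, 1.0 / N)

# Filter a synthetic trace of an agent moving toward the second destination.
true_goal = DESTINATIONS[1]
pos = np.zeros(2)
for t in range(20):
    pos = pos + STEP * (true_goal - pos) / np.linalg.norm(true_goal - pos)
    obs = pos + rng.normal(0.0, OBS_NOISE, size=2)
    propagate(particles)
    update(particles, weights, obs)
    particles, weights = resample(particles, weights)

print("posterior over destinations:", destination_posterior(particles, weights))

Because each particle carries a destination hypothesis, the normalized weight mass per destination directly approximates the recognition posterior at every time step, so a changed destination shows up as a shift of that mass as new observations arrive.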