Developing intelligent agents for training systems that learn their strategies from expert players
Computer-based training systems have become a mainstay in military and private institutions for teaching people how to perform complex tasks. As these tasks grow in difficulty, intelligent agents will increasingly appear as virtual teammates or tutors that assist a trainee in performing and learning the task. To develop these agents, we must obtain strategies from expert players and emulate their behavior within the agent. Past researchers have shown how difficult it is to acquire this information from expert human players and translate it into an agent; one solution is to use computer systems that assist in eliciting the expert's knowledge. In this thesis, we present an approach for developing an agent for Revised Space Fortress, a game representative of the complex tasks found in training systems. Using machine learning techniques, the agent learns its strategy for the game by observing how a human expert plays. We highlight the challenges encountered while designing and training the agent in this real-time game environment, and our solutions for handling them. We then discuss an experiment that examines whether trainees perform differently when training with a human or a virtual partner, and how expert agents that exhibit distinctive behaviors affect a human trainee's learning. Our results show that a partner agent that learns its strategy from an expert player provides the same training benefit as a programmed expert-level agent or a human partner of ability equal to the trainee's.
Main Author: | Whetzel, Jonathan Hunt |
---|---|
Other Authors: | Volz, Richard A. |
Format: | Others |
Language: | en_US |
Published: | Texas A&M University, 2005 |
Subjects: | game playing; machine learning; knowledge acquisition; data mining |
Online Access: | http://hdl.handle.net/1969.1/2662 |
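The abstract states that the agent learns its strategy by observing how a human expert plays, but it does not spell out the learning method. The sketch below is only an illustration of one common way to do this (learning from demonstration, or "behavioral cloning," via nearest-neighbor lookup over logged state-action pairs); every identifier, feature, and action in it is invented for illustration and is not taken from the thesis.

```python
# Hypothetical sketch of learning-from-demonstration ("behavioral cloning"):
# log (state, action) pairs while an expert plays, then have the agent take
# the action the expert took in the most similar logged state.
import math
from typing import List, Tuple

# A game state reduced to a numeric feature vector, e.g.
# (distance_to_fortress, ship_speed, angle_to_target) -- all invented here.
State = Tuple[float, ...]
Demonstration = List[Tuple[State, str]]  # (observed state, expert's action)


def distance(a: State, b: State) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class ClonedAgent:
    """Agent that imitates the expert by nearest-neighbor lookup."""

    def __init__(self, demos: Demonstration) -> None:
        self.demos = demos

    def act(self, state: State) -> str:
        # Pick the action whose logged state is closest to the current state.
        _, action = min(self.demos, key=lambda d: distance(d[0], state))
        return action


if __name__ == "__main__":
    # Hypothetical expert log: far from the fortress -> thrust; close -> fire.
    expert_log: Demonstration = [
        ((0.9, 0.1, 0.0), "thrust"),
        ((0.8, 0.2, 0.1), "thrust"),
        ((0.2, 0.3, 0.0), "fire"),
        ((0.1, 0.2, 0.1), "fire"),
    ]
    agent = ClonedAgent(expert_log)
    print(agent.act((0.85, 0.15, 0.05)))  # -> "thrust"
    print(agent.act((0.15, 0.25, 0.05)))  # -> "fire"
```

A real-time game such as Revised Space Fortress would in practice call for a learner that generalizes and responds faster than a raw lookup (for example a decision tree or neural network), but the observe-then-imitate loop shown here is the same.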