Latent goal models for dynamic strategic interaction.

Understanding the principles by which agents interact with both complex environments and each other is a key goal of decision neuroscience. However, most previous studies have used experimental paradigms in which choices are discrete (and few), play is static, and optimal solutions are known. Yet in natural environments, interactions between agents typically involve continuous action spaces, ongoing dynamics, and no known optimal solution. Here, we seek to bridge this divide by using a "penalty shot" task in which pairs of monkeys competed against each other in a competitive, real-time video game. We modeled monkeys' strategies as driven by stochastically evolving goals, onscreen positions that served as set points for a control model that produced observed joystick movements. We fit this goal-based dynamical system model using approximate Bayesian inference methods, using neural networks to parameterize players' goals as a dynamic mixture of Gaussian components. Our model is conceptually simple, constructed of interpretable components, and capable of generating synthetic data that capture the complexity of real player dynamics. We further characterized players' strategies using the number of change points on each trial. We found that this complexity varied more across sessions than within sessions, and that more complex strategies benefited offensive players but not defensive players. Together, our experimental paradigm and model offer a powerful combination of tools for the study of realistic social dynamics in the laboratory setting.
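The abstract's core idea, onscreen positions driven toward stochastically switching latent goals (set points), with strategy complexity summarized by the number of change points per trial, can be illustrated with a minimal simulation. This is a toy sketch, not the authors' actual model: the paper fits a neural-network-parameterized dynamic mixture of Gaussians via approximate Bayesian inference, whereas here the goal process, switch probability, control gain, and noise level are all hypothetical illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(T=100, switch_prob=0.05, gain=0.2, noise=0.01):
    """Simulate one player's onscreen position under a goal-driven
    control model: a latent goal (set point) occasionally jumps to a
    new location, and the position moves a fraction `gain` of the way
    toward it each step, plus motor noise."""
    goal = rng.uniform(-1.0, 1.0)
    pos = 0.0
    positions, goals = [], []
    for _ in range(T):
        if rng.random() < switch_prob:     # stochastic goal switch
            goal = rng.uniform(-1.0, 1.0)  # new set point
        pos += gain * (goal - pos) + noise * rng.normal()
        positions.append(pos)
        goals.append(goal)
    return np.array(positions), np.array(goals)

def count_change_points(goals, tol=1e-6):
    """Number of goal switches in a trial, used here as a simple
    proxy for strategic complexity."""
    return int(np.sum(np.abs(np.diff(goals)) > tol))

positions, goals = simulate_trial()
print(count_change_points(goals))  # goal switches this trial
```

The set-point controller (`pos += gain * (goal - pos)`) is the simplest stand-in for the paper's control model; trajectories relax exponentially toward the current goal between switches, which is what makes the change-point count a natural summary of a trial.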

Bibliographic Details
Main Authors: Shariq N Iqbal, Lun Yin, Caroline B Drucker, Qian Kuang, Jean-François Gariépy, Michael L Platt, John M Pearson
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2019-03-01
Series: PLoS Computational Biology
Online Access: http://europepmc.org/articles/PMC6472832?pdf=render
ISSN: 1553-734X, 1553-7358
DOI: 10.1371/journal.pcbi.1006895