Grid cells, place cells, and geodesic generalization for spatial reinforcement learning.

Reinforcement learning (RL) provides an influential characterization of the brain's mechanisms for learning to make advantageous choices. An important problem, though, is how complex tasks can be represented in a way that enables efficient learning. We consider this problem through the lens of...

Main Authors: Nicholas J Gustafson, Nathaniel D Daw
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2011-10-01
Series: PLoS Computational Biology
Online Access: http://europepmc.org/articles/PMC3203050?pdf=render
id: doaj-e4f55ce9d14946b3b148aa56450c1df6
record_format: Article
spelling: PLoS Computational Biology, 7(10): e1002235, 2011-10-01; doi: 10.1371/journal.pcbi.1002235
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Nicholas J Gustafson, Nathaniel D Daw
title: Grid cells, place cells, and geodesic generalization for spatial reinforcement learning.
publisher: Public Library of Science (PLoS)
series: PLoS Computational Biology
issn: 1553-734X; 1553-7358
publishDate: 2011-10-01
description: Reinforcement learning (RL) provides an influential characterization of the brain's mechanisms for learning to make advantageous choices. An important problem, though, is how complex tasks can be represented in a way that enables efficient learning. We consider this problem through the lens of spatial navigation, examining how two of the brain's location representations--hippocampal place cells and entorhinal grid cells--are adapted to serve as basis functions for approximating value over space for RL. Although much previous work has focused on these systems' roles in combining upstream sensory cues to track location, revisiting these representations with a focus on how they support this downstream decision function offers complementary insights into their characteristics. Rather than localization, the key problem in learning is generalization between past and present situations, which may not match perfectly. Accordingly, although neural populations collectively offer a precise representation of position, our simulations of navigational tasks verify the suggestion that RL gains efficiency from the more diffuse tuning of individual neurons, which allows learning about rewards to generalize over longer distances given fewer training experiences. However, work on generalization in RL suggests the underlying representation should respect the environment's layout. In particular, although it is often assumed that neurons track location in Euclidean coordinates (that a place cell's activity declines "as the crow flies" away from its peak), the relevant metric for value is geodesic: the distance along a path, around any obstacles. We formalize this intuition and present simulations showing how Euclidean, but not geodesic, representations can interfere with RL by generalizing inappropriately across barriers. Our proposal that place and grid responses should be modulated by geodesic distances suggests novel predictions about how obstacles should affect spatial firing fields, which provides a new viewpoint on data concerning both spatial codes.
url: http://europepmc.org/articles/PMC3203050?pdf=render
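The description above turns on a single technical contrast: a place-cell-like basis function tuned to Euclidean ("as the crow flies") distance generalizes learned value straight through walls, whereas one tuned to geodesic (along-the-path) distance does not. The paper's simulations are not reproduced here; the following Python sketch only illustrates that contrast, and the gridworld size, barrier layout, Gaussian tuning curve, and width sigma are arbitrary choices made for demonstration.

```python
from collections import deque
from math import exp, hypot

# Minimal illustrative sketch (not the authors' implementation): compare
# Euclidean vs geodesic place-field activation on a small gridworld with a wall.
# Grid size, barrier layout, and the tuning width sigma are arbitrary choices.

WIDTH, HEIGHT = 9, 7
# Vertical wall at x == 4 with a single gap at y == 6 (top row).
WALL = {(4, y) for y in range(0, 6)}
FREE = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)} - WALL

def geodesic_distances(center):
    """Shortest-path (geodesic) distance from `center` to every reachable free
    cell, moving in 4 directions and detouring around the wall (BFS, unit cost)."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in FREE and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist

def place_field(dist, sigma=2.0):
    """Gaussian tuning curve over a distance value."""
    return exp(-dist ** 2 / (2 * sigma ** 2))

center = (3, 3)   # place-field peak, just left of the wall
probe = (5, 3)    # location just across the wall from the peak
geo = geodesic_distances(center)

euclid_d = hypot(probe[0] - center[0], probe[1] - center[1])  # 2.0, straight through the wall
geo_d = geo[probe]                                            # 8 steps around the barrier

print(f"Euclidean activation at {probe}: {place_field(euclid_d):.3f}")
print(f"Geodesic  activation at {probe}: {place_field(geo_d):.3f}")
```

With these illustrative settings, the Euclidean activation at the probe cell stays large (about 0.61 for sigma = 2) even though a wall separates it from the field's peak, while the geodesic activation is near zero because the shortest path detours eight steps around the barrier. Used as basis functions for value approximation, that difference is exactly the inappropriate generalization across barriers the abstract describes for Euclidean, but not geodesic, representations.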