Towards a theory for the emergence of grid and place cell codes

Bibliographic Details
Main Author: Ma, Tzuhsuan.
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences.
Format: Others
Language: English
Published: Massachusetts Institute of Technology 2021
Subjects:
Online Access: https://hdl.handle.net/1721.1/138514
Description
Summary: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020. Manuscript. Includes bibliographical references (pages 227-238).

This work uses theoretical approaches to answer the question: which functions do grid and place cells perform that directly lead to their own emergence? Answering such a question requires an approach that goes beyond simple modelling, since circuit solutions other than grid or place cells might perform these functions better. With this reasoning, I adopted a systematic guideline built around an optimization principle: find the optimal solution for performing the hypothesized functions while reproducing the correct phenomenology. Within this framework, I applied both recurrent neural network (RNN) training and coding-theoretic approaches to set up appropriate optimization problems for testing a given functional hypothesis. Two descriptive functional hypotheses were adopted: 1) grid cells exist to provide a high-capacity and robust path-integrating code, and 2) place cells exist to provide a sequentially learnable and highly separable path-integrating code.

The non-converging performance when training an RNN to perform a hard navigation task suggests that attractor dynamics forbid a network from simultaneously possessing online learnability and high coding capacity. Because of this dynamical constraint on learning, a grid cell circuit has to be hardwired through some developmental process and cannot be easily modified by an experience-based synaptic rule without compromising its capacity. Conversely, a place cell circuit that can continually learn a novel environment inevitably has merely linear capacity. These results imply that the functional separation of the grid and place cell systems observed in the brain could be the result of an unavoidable dynamical constraint on their underlying RNNs.

Lastly, a fundamental principle called the tuning-learnability correspondence was uncovered in pursuit of a sequentially learnable neural implementation for place cells. It explains that the seemingly incidental conjunctive tuning property is in fact caused by the metastable attractor dynamics necessary for sequential learnability, rather than by another functional need attached to a particular tuning property. In addition, from the unique properties of metastable attractor dynamics, I also predicted that the biased place-field propensity recently observed in the CA1 subregion should originate in CA3, arising from an inevitable biased activation in the RNN as a side effect of this dynamical property. In sum, both this principle and the subsequent prediction provide a new perspective that contradicts the conventional wisdom, which often assumes that a given nonspatial tuning property exists in order to perform a relevant task.

by Tzuhsuan Ma. Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
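
To make the RNN-training approach described in the summary concrete, the sketch below sets up a toy version of the kind of optimization problem involved: a vanilla RNN receives 2D velocity inputs and is trained to path-integrate them into a position estimate. This is a hypothetical illustration of the general technique, not the thesis code; the class and function names, network size, and hyperparameters are assumptions chosen for brevity.

```python
# Minimal sketch (assumed, illustrative): train an RNN to path-integrate
# velocity inputs into position, the basic optimization problem the
# summary describes for testing functional hypotheses about spatial codes.
import torch
import torch.nn as nn

class PathIntegratorRNN(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Input is 2D velocity at each time step; hidden state integrates it.
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)  # decode (x, y) position

    def forward(self, velocities):
        states, _ = self.rnn(velocities)   # recurrent integration over time
        return self.readout(states)        # position estimate at every step

def make_batch(batch=64, steps=100, dt=0.1):
    """Random-walk trajectories: inputs are velocities, targets are positions."""
    v = torch.randn(batch, steps, 2)
    pos = torch.cumsum(v * dt, dim=1)      # ground-truth integrated path
    return v * dt, pos

model = PathIntegratorRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    v, target = make_batch()
    loss = nn.functional.mse_loss(model(v), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the thesis framing, one would then examine the trained network's hidden-unit tuning and capacity under harder task variants (larger environments, longer trajectories, online learning of new environments) to test whether grid-like or place-like solutions emerge; the snippet above only illustrates the basic training setup.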