A virtual retina for studying population coding.
At every level of the visual system - from retina to cortex - information is encoded in the activity of large populations of cells. The populations are not uniform, but contain many different types of cells, each with its own sensitivities to visual stimuli. Understanding the roles of the cell types...
| Main Authors: | Illya Bomash, Yasser Roudi, Sheila Nirenberg |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2013-01-01 |
| Series: | PLoS ONE |
| Online Access: | http://europepmc.org/articles/PMC3544815?pdf=render |
id |
doaj-d2c9b478fc324035aeb700ec4dc9da77 |
record_format |
Article |
spelling |
doaj-d2c9b478fc324035aeb700ec4dc9da77 | 2020-11-25T01:28:51Z | eng | Public Library of Science (PLoS) | PLoS ONE | 1932-6203 | 2013-01-01 | vol. 8, no. 1, e53363 | 10.1371/journal.pone.0053363 | A virtual retina for studying population coding. | Illya Bomash; Yasser Roudi; Sheila Nirenberg | http://europepmc.org/articles/PMC3544815?pdf=render |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Illya Bomash; Yasser Roudi; Sheila Nirenberg |
spellingShingle |
Illya Bomash; Yasser Roudi; Sheila Nirenberg. A virtual retina for studying population coding. PLoS ONE |
author_facet |
Illya Bomash; Yasser Roudi; Sheila Nirenberg |
author_sort |
Illya Bomash |
title |
A virtual retina for studying population coding. |
title_short |
A virtual retina for studying population coding. |
title_full |
A virtual retina for studying population coding. |
title_fullStr |
A virtual retina for studying population coding. |
title_full_unstemmed |
A virtual retina for studying population coding. |
title_sort |
virtual retina for studying population coding. |
publisher |
Public Library of Science (PLoS) |
series |
PLoS ONE |
issn |
1932-6203 |
publishDate |
2013-01-01 |
description |
At every level of the visual system - from retina to cortex - information is encoded in the activity of large populations of cells. The populations are not uniform, but contain many different types of cells, each with its own sensitivities to visual stimuli. Understanding the roles of the cell types and how they work together to form collective representations has been a long-standing goal. This goal, though, has been difficult to advance, largely because of data limitations: large numbers of stimulus/response relationships need to be explored, and obtaining enough data to examine even a fraction of them requires a great many experiments and animals. Here we describe a tool for addressing this problem at the level of the retina. The tool is a data-driven model of retinal input/output relationships that is effective on a broad range of stimuli - essentially, a virtual retina. The results show that it is highly reliable: (1) the model cells carry the same amount of information as their real cell counterparts, (2) the quality of the information is the same - that is, the posterior stimulus distributions produced by the model cells closely match those of their real cell counterparts, and (3) the model cells make very reliable predictions about the functions of the different retinal output cell types, as measured using Bayesian decoding (electrophysiology) and optomotor performance (behavior). In sum, we present a new tool for studying population coding and test it experimentally. It provides a way to rapidly probe the actions of different cell classes and to develop testable predictions. The overall aim is to build well-constrained theories about population coding while keeping the number of experiments and animals to a minimum. |
url |
http://europepmc.org/articles/PMC3544815?pdf=render |
work_keys_str_mv |
AT illyabomash avirtualretinaforstudyingpopulationcoding AT yasserroudi avirtualretinaforstudyingpopulationcoding AT sheilanirenberg avirtualretinaforstudyingpopulationcoding AT illyabomash virtualretinaforstudyingpopulationcoding AT yasserroudi virtualretinaforstudyingpopulationcoding AT sheilanirenberg virtualretinaforstudyingpopulationcoding |
_version_ |
1725099871957942272 |
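The abstract's validation hinges on Bayesian decoding: comparing the posterior stimulus distributions produced by model cells against those of real cells. A minimal sketch of such a decoder, assuming an independent-Poisson spiking model with made-up firing rates (all numbers and the `posterior` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small population of retinal ganglion cells whose
# firing rate depends on which of several discrete stimuli is shown.
n_cells, n_stimuli = 10, 4
rates = rng.uniform(1.0, 20.0, size=(n_cells, n_stimuli))  # spikes/s per (cell, stimulus)
dt = 0.5  # decoding window in seconds

def posterior(spike_counts):
    """P(stimulus | counts) under independent Poisson spiking and a
    uniform prior over stimuli, computed via Bayes' rule in log space."""
    lam = rates * dt                                    # expected counts per stimulus
    log_like = (spike_counts[:, None] * np.log(lam) - lam).sum(axis=0)
    log_post = log_like - log_like.max()                # stabilize before exponentiating
    p = np.exp(log_post)
    return p / p.sum()

# Simulate one population response to stimulus 2, then decode it.
true_stim = 2
counts = rng.poisson(rates[:, true_stim] * dt)
p = posterior(counts)
print("posterior:", np.round(p, 3), "decoded stimulus:", int(np.argmax(p)))
```

Comparing posteriors like `p` between model-driven and recorded spike counts (for the same stimuli) is one way to test the abstract's claim that the quality of the information, not just its quantity, is preserved.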