A modular theory of multisensory integration for motor control
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum...
| Main Authors: | Michele Tagliabue, Joseph McIntyre |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2014-01-01 |
| Series: | Frontiers in Computational Neuroscience |
| Subjects: | Humans; motor control; sensory integration; maximum likelihood; reference frames; sensory encoding |
| Online Access: | http://journal.frontiersin.org/Journal/10.3389/fncom.2014.00001/full |
| id | doaj-d4bc4f7c2af24a1b9a81f7947351dd3f |
|---|---|
| record_format | Article |
| spelling | Frontiers Media S.A.; Frontiers in Computational Neuroscience; ISSN 1662-5188; 2014-01-01; vol. 8; doi:10.3389/fncom.2014.00001; Michele Tagliabue and Joseph McIntyre (Université Paris Descartes); full abstract, subject terms, and URL as in the description, topic, and url fields below |
| collection | DOAJ |
| language | English |
| format | Article |
| sources | DOAJ |
| author | Michele Tagliabue; Joseph McIntyre |
| title | A modular theory of multisensory integration for motor control |
| publisher | Frontiers Media S.A. |
| series | Frontiers in Computational Neuroscience |
| issn | 1662-5188 |
| publishDate | 2014-01-01 |
| description | To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare these to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame. |
| topic | Humans; motor control; sensory integration; maximum likelihood; reference frames; sensory encoding |
| url | http://journal.frontiersin.org/Journal/10.3389/fncom.2014.00001/full |
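
The description field rests on one quantitative idea: sensory estimates are fused by maximum-likelihood (inverse-variance) weighting, and any sensorimotor transformation adds noise that lowers the weight of the transformed signal. The sketch below is a minimal illustration of that principle only; the function, variable names, and numbers are assumptions for illustration and are not taken from the article.

```python
import numpy as np

def ml_combine(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian estimates:
    each cue is weighted by its reliability (1/variance)."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)        # normalized inverse-variance weights
    fused = float(np.dot(w, estimates))    # weighted mean of the cues
    fused_var = 1.0 / np.sum(1.0 / v)      # variance of the fused estimate
    return fused, fused_var, w

# Hypothetical numbers, for illustration only.
x_vis, var_vis = 10.2, 1.0   # visual estimate of a position and its variance
x_kin, var_kin = 9.5, 2.0    # kinesthetic/proprioceptive estimate and variance

# Fusion when both signals are already expressed in a common reference frame.
print(ml_combine([x_vis, x_kin], [var_vis, var_kin]))

# If the kinesthetic signal must first be transformed into the visual frame,
# the transformation adds noise, so its weight in the fusion drops.
var_transform = 1.5
print(ml_combine([x_vis, x_kin], [var_vis, var_kin + var_transform]))
```

Raising var_transform shifts the weighting toward the modality that needs no transformation, which is the qualitative behavior the abstract attributes to the CNS when it chooses one sensory encoding, or reference frame, over another.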