Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks.
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is...
Main Authors: | Tobias Brosch, Heiko Neumann, Pieter R Roelfsema |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2015-10-01 |
Series: | PLoS Computational Biology |
Online Access: | http://europepmc.org/articles/PMC4619762?pdf=render |
id | doaj-fe8116f6fc68442fbe97381f4ece5790 |
doi | 10.1371/journal.pcbi.1004489 |
collection | DOAJ |
language | English |
format | Article |
sources | DOAJ |
author | Tobias Brosch, Heiko Neumann, Pieter R Roelfsema |
title | Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks. |
publisher | Public Library of Science (PLoS) |
series | PLoS Computational Biology |
issn | 1553-734X, 1553-7358 |
publishDate | 2015-10-01 |
description | The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency of this learning rule with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing, where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. |
url | http://europepmc.org/articles/PMC4619762?pdf=render |
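The description above states that the paper derives a learning rule by which reward shapes the flow of activity through feedforward, horizontal and feedback connections of a recurrent network. As a rough illustration only, and not the rule derived in the paper, the sketch below shows a generic reward-modulated Hebbian update in a tiny rate-based recurrent network; the tanh settling dynamics, the global reward-prediction error, the toy reward criterion, and all variable names are assumptions made for this example.

```python
# Minimal sketch (not the paper's derived rule): reward-modulated Hebbian learning
# in a small rate-based recurrent network.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec = 8, 16                          # input units (e.g. contour elements) and recurrent units
W_in = rng.normal(0, 0.1, (n_rec, n_in))     # feedforward weights
W_rec = rng.normal(0, 0.1, (n_rec, n_rec))   # horizontal/recurrent weights
np.fill_diagonal(W_rec, 0.0)                 # no self-connections
lr, n_steps = 0.01, 20
reward_baseline = 0.0                        # running estimate of the expected reward


def run_trial(x):
    """Settle the recurrent network on input x and return the final activities."""
    r = np.zeros(n_rec)
    for _ in range(n_steps):                 # recurrent settling: feedforward + horizontal input
        r = np.tanh(W_in @ x + W_rec @ r)
    return r


def update(x, r, reward):
    """Reward-modulated Hebbian update: delta_w = lr * (reward - baseline) * post * pre."""
    global W_in, W_rec, reward_baseline
    rpe = reward - reward_baseline           # global reward-prediction error (scalar)
    W_in += lr * rpe * np.outer(r, x)        # strengthen feedforward synapses active before reward
    W_rec += lr * rpe * np.outer(r, r)       # strengthen co-active horizontal synapses
    np.fill_diagonal(W_rec, 0.0)
    reward_baseline += 0.1 * rpe             # slowly track the expected reward


# Toy usage: reward is a hypothetical binary task outcome, here a simple activity criterion.
for trial in range(100):
    x = rng.random(n_in)
    r = run_trial(x)
    reward = float(r.mean() > 0.1)
    update(x, r, reward)
```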