Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs
We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feedforward connections. These are discrete thre...
Main Authors: Yali Amit, Jacob Walker
Format: Article
Language: English
Published: Frontiers Media S.A., 2012-06-01
Series: Frontiers in Computational Neuroscience
Subjects: feedforward inhibition; attractor networks; randomized classifiers
Online Access: http://journal.frontiersin.org/Journal/10.3389/fncom.2012.00039/full
id: doaj-ab96a04adf5a440bb069c894fccb88ee
record_format: Article
spelling: Frontiers Media S.A. | Frontiers in Computational Neuroscience | ISSN 1662-5188 | 2012-06-01 | vol. 6 | doi: 10.3389/fncom.2012.00039 | article 25198 | Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs | Yali Amit (University of Chicago); Jacob Walker (Michigan State University)
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Yali Amit; Jacob Walker
title: Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs
publisher: Frontiers Media S.A.
series: Frontiers in Computational Neuroscience
issn: 1662-5188
publishDate: 2012-06-01
description: We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feedforward connections. These are discrete three state synapses and are updated based on a simple field-dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until convergence to a stable state. Classification is indicated by the sub-population that is persistently activated. The contribution of this paper is twofold. First, this is the first example of competitive classification rates on real data being achieved through recurrent dynamics in the attractor layer, which is only stable if recurrent inhibition is introduced. Second, we demonstrate that employing three state synapses with feedforward inhibition is essential for achieving the competitive classification rates, due to the ability to effectively employ both positive and negative informative features.
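The test-time procedure the abstract describes (feedforward initialization of the attractor layer, then asynchronous random updating until a stable state) can be sketched as a toy simulation. Everything concrete below is an assumption for illustration, not the authors' implementation: the network sizes, the {-1, 0, +1} values for the three-state synapses, random weights standing in for the paper's field-dependent Hebbian learning, and the global inhibition strength.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUT, N_ATTRACTOR = 50, 40  # assumed toy sizes

# Discrete three-state feedforward synapses in {-1, 0, +1}; drawn at random
# here, in place of the supervised field-dependent Hebbian learning.
J_ff = rng.choice([-1, 0, 1], size=(N_ATTRACTOR, N_INPUT))

# Sparse binary recurrent excitation, symmetrized with zero diagonal so the
# asynchronous dynamics are guaranteed to settle; a global recurrent
# inhibition term (assumed strength) keeps activity from saturating.
upper = np.triu(rng.choice([0, 1], size=(N_ATTRACTOR, N_ATTRACTOR), p=[0.8, 0.2]), k=1)
J_rec = upper + upper.T
INHIB = 0.5  # assumed global inhibition strength

def run_to_convergence(x, max_sweeps=100):
    """Initialize from the feedforward input, then update units one at a
    time in random order until a full sweep changes nothing (stable state)."""
    s = (J_ff @ x > 0).astype(int)              # feedforward initialization
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(N_ATTRACTOR):  # asynchronous random order
            field = J_rec[i] @ s - INHIB * (s.sum() - s[i])
            new = int(field > 0)
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            return s                            # converged to a stable state
    return s

x = rng.integers(0, 2, N_INPUT)                 # binary input pattern
state = run_to_convergence(x)
```

In the paper's scheme, the class would then be read off from which trained subpopulation of the attractor layer stays persistently active in `state`; the symmetric recurrent weights above are a standard way to ensure asynchronous updates reach a fixed point.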
topic: feedforward inhibition; attractor networks; randomized classifiers
url: http://journal.frontiersin.org/Journal/10.3389/fncom.2012.00039/full