Reinforcement Learning With Low-Complexity Liquid State Machines
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear...
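The abstract describes a liquid state machine style architecture: a fixed, sparse, randomly connected spiking reservoir whose only trainable parameters sit in a small readout. The sketch below illustrates that general structure; the network sizes, sparsity, leaky integrate-and-fire dynamics, and the reinforcement-learning readout shown here are illustrative assumptions, not the parameters or training procedure used in the paper.

```python
import numpy as np

# Hypothetical sketch of a liquid state machine (LSM) reservoir with a small
# trainable linear readout. Sizes, sparsity, and neuron dynamics are assumed
# for illustration only; they are not taken from the paper.

rng = np.random.default_rng(0)

N_IN, N_RES, N_OUT = 4, 100, 2       # input dims, reservoir neurons, actions
SPARSITY = 0.1                       # fraction of nonzero random connections
TAU, V_TH = 20.0, 1.0                # membrane time constant (steps), threshold

# Fixed (untrained) sparse random weights: input->reservoir and recurrent.
W_in = rng.normal(0, 1.0, (N_RES, N_IN)) * (rng.random((N_RES, N_IN)) < SPARSITY)
W_rec = rng.normal(0, 0.5, (N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < SPARSITY)

# The only trainable parameters: a linear readout on the reservoir state.
W_out = np.zeros((N_OUT, N_RES))

def run_reservoir(inputs):
    """Simulate leaky integrate-and-fire reservoir neurons over the input
    sequence and return a low-pass filtered spike trace (the 'liquid state')."""
    v = np.zeros(N_RES)              # membrane potentials
    spikes = np.zeros(N_RES)         # spikes from the previous step
    trace = np.zeros(N_RES)          # filtered spike activity
    for x in inputs:                 # inputs: iterable of shape-(N_IN,) vectors
        v += (-v + W_in @ x + W_rec @ spikes) / TAU
        spikes = (v >= V_TH).astype(float)
        v[spikes > 0] = 0.0          # reset neurons that fired
        trace = 0.9 * trace + spikes
    return trace

# Example: map a short random input spike train to action preferences.
inputs = (rng.random((50, N_IN)) < 0.3).astype(float)
state = run_reservoir(inputs)
action_values = W_out @ state        # only W_out would be updated by RL
print(action_values)
```

In this sketch, reinforcement learning would adjust only `W_out` (a few hundred parameters), while the sparse random reservoir stays fixed, which is the sense in which such networks can learn with very few trainable parameters.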
| Main Authors: | Wachirawit Ponghiran, Gopalakrishnan Srinivasan, Kaushik Roy |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2019-08-01 |
| Series: | Frontiers in Neuroscience |
| Online Access: | https://www.frontiersin.org/article/10.3389/fnins.2019.00883/full |
Similar Items
- ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing
  by: Gopalakrishnan Srinivasan, et al.
  Published: (2019-03-01)
- Improving Liquid State Machines Through Iterative Refinement of the Reservoir
  by: Norton, R. David
  Published: (2008)
- Editorial: Spiking Neural Network Learning, Benchmarking, Programming and Executing
  by: Guoqi Li, et al.
  Published: (2020-04-01)
- Analysis of Liquid Ensembles for Enhancing the Performance and Accuracy of Liquid State Machines
  by: Parami Wijesinghe, et al.
  Published: (2019-05-01)
- A Spike Time-Dependent Online Learning Algorithm Derived From Biological Olfaction
  by: Ayon Borthakur, et al.
  Published: (2019-06-01)