Reinforcement Learning With Low-Complexity Liquid State Machines

We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse, randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. These random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks such as Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and to learn with partial state inputs. This spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
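At a high level, the method pairs a fixed, sparsely and randomly connected recurrent spiking network (the "liquid") with a single trainable readout: the liquid projects the input into a high-dimensional spiking representation, and only the readout weights are updated with a Q-learning rule. The Python sketch below is a minimal illustration of that structure, not the authors' implementation; the leaky integrate-and-fire dynamics, network sizes, connection probabilities, and learning-rate constants are all illustrative assumptions.

import numpy as np

class LiquidStateMachine:
    """Fixed random spiking liquid with a trainable linear readout (illustrative sketch)."""

    def __init__(self, n_in, n_liquid, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # Sparse random input and recurrent weights: initialized once, never trained.
        self.w_in = rng.normal(0.0, 1.0, (n_liquid, n_in)) * (rng.random((n_liquid, n_in)) < 0.10)
        self.w_rec = rng.normal(0.0, 0.5, (n_liquid, n_liquid)) * (rng.random((n_liquid, n_liquid)) < 0.05)
        self.w_out = np.zeros((n_actions, n_liquid))   # the only trainable parameters
        self.v = np.zeros(n_liquid)                    # membrane potentials
        self.spikes = np.zeros(n_liquid)
        self.leak, self.v_th = 0.9, 1.0

    def step(self, x):
        # Leaky integrate-and-fire update driven by the input and recurrent spikes.
        self.v = self.leak * self.v + self.w_in @ x + self.w_rec @ self.spikes
        self.spikes = (self.v >= self.v_th).astype(float)
        self.v[self.spikes > 0] = 0.0                  # reset neurons that fired
        return self.spikes

    def q_values(self, trace):
        # Linear readout over the (filtered) liquid activity gives one Q-value per action.
        return self.w_out @ trace

def q_learning_update(lsm, trace, action, reward, next_trace, alpha=1e-3, gamma=0.99):
    # One-step Q-learning update applied only to the readout weights.
    td_target = reward + gamma * np.max(lsm.q_values(next_trace))
    td_error = td_target - lsm.q_values(trace)[action]
    lsm.w_out[action] += alpha * td_error * trace

In such a setup, one would typically low-pass filter the liquid's spikes over each RL time-step and pass that trace to the readout; the paper's actual network configuration, initialization, and training schedule are described in the full article linked below.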

Bibliographic Details
Main Authors: Wachirawit Ponghiran, Gopalakrishnan Srinivasan, Kaushik Roy
Format: Article
Language: English
Published: Frontiers Media S.A., 2019-08-01
Series: Frontiers in Neuroscience
ISSN: 1662-453X
Subjects: liquid state machine; recurrent SNN; learning without stable states; spiking reinforcement learning; Q-learning
Online Access: https://www.frontiersin.org/article/10.3389/fnins.2019.00883/full