Improving Liquid State Machines Through Iterative Refinement of the Reservoir
Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, a reservoir, or liquid, is randomly created which acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid to cr...
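To illustrate the architecture the abstract describes, here is a minimal, generic liquid-state-style sketch in Python: a fixed random recurrent reservoir of leaky integrate-and-fire neurons filters an input spike train, and only a linear readout is trained on the resulting liquid state. The network sizes, the toy classification task, and all parameter names are hypothetical, and the thesis's iterative refinement methods are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, T = 4, 100, 200                   # input channels, reservoir size, time steps
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))     # fixed random input weights (untrained)
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))   # fixed random recurrent weights (untrained)
W_res *= (rng.random((N_RES, N_RES)) < 0.1)    # sparse recurrent connectivity

def run_liquid(spikes_in, tau=20.0, v_thresh=1.0):
    """Drive the untrained reservoir with an input spike train and return a
    low-pass-filtered trace of reservoir spikes (the 'liquid state')."""
    v = np.zeros(N_RES)           # membrane potentials
    s = np.zeros(N_RES)           # reservoir spikes from the previous step
    state = np.zeros(N_RES)       # filtered spike trace seen by the readout
    for t in range(T):
        v += (-v / tau) + W_in @ spikes_in[t] + W_res @ s
        s = (v >= v_thresh).astype(float)
        v[s > 0] = 0.0            # reset neurons that fired
        state = 0.95 * state + s  # exponential filter of reservoir activity
    return state

# Toy two-class task: Poisson spike trains with class-dependent channel rates.
def make_example(label):
    rates = np.array([0.2, 0.05, 0.05, 0.2]) if label else np.array([0.05, 0.2, 0.2, 0.05])
    return (rng.random((T, N_IN)) < rates).astype(float)

labels = rng.integers(0, 2, 60)
X = np.stack([run_liquid(make_example(y)) for y in labels])

# Train only the readout: ridge-regularized least squares on the liquid states.
A = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ (2 * labels - 1))
preds = (A @ w > 0).astype(int)
print("training accuracy of the linear readout:", (preds == labels).mean())
```

The reservoir weights are never updated; only the closed-form linear readout is fit, which is the separation between liquid and readout that the thesis's refinement methods build on.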
| Main Author: | Norton, R David |
|---|---|
| Format: | Others |
| Published: | BYU ScholarsArchive, 2008 |
| Subjects: | |
| Online Access: | https://scholarsarchive.byu.edu/etd/1354 https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=2353&context=etd |
Similar Items

- Adaptive SNN for Anthropomorphic Finger Control
  by: Mircea Hulea, et al.
  Published: (2021-04-01)
- Editorial: Understanding and Bridging the Gap Between Neuromorphic Computing and Machine Learning
  by: Lei Deng, et al.
  Published: (2021-03-01)
- Reinforcement Learning With Low-Complexity Liquid State Machines
  by: Wachirawit Ponghiran, et al.
  Published: (2019-08-01)
- Optimizing BCPNN Learning Rule for Memory Access
  by: Yu Yang, et al.
  Published: (2020-08-01)
- A Memristor-Based Liquid State Machine for Auditory Signal Recognition
  by: Henderson, Stephen Alexander, Jr.
  Published: (2021)