Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding
Spiking neural networks (SNNs) increasingly attract attention for their similarity to the biological neural system. Hardware implementation of spiking neural networks, however, remains a great challenge due to their excessive complexity and circuit size. This work introduces a novel optimization method for a hardware-friendly SNN architecture based on a modified rate coding scheme called Binary Streamed Rate Coding (BSRC). BSRC combines the features of both rate and temporal coding. In addition, by employing a built-in randomizer, the BSRC SNN model provides higher accuracy and faster training. We also present SNN optimization methods including structure optimization and weight quantization. Extensive evaluations with MNIST SNNs demonstrate that the structure optimization of SNN (81-30-20-10) provides a 183.19-times reduction in hardware compared with SNN (784-800-10), while providing an accuracy of 95.25%, a small loss compared with the 98.89% and 98.93% reported in previous works. Our weight quantization reduces 32-bit weights to 4-bit integers, leading to a further hardware reduction of 4 times with only 0.56% accuracy loss. Overall, the SNN model (81-30-20-10) optimized by our method shrinks the SNN's circuit area from 3089.49 mm² for SNN (784-800-10) to 4.04 mm², a reduction of 765 times.
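The abstract does not spell out the BSRC algorithm, but its core idea, representing an input intensity as a binary spike stream whose spike count encodes the value, can be illustrated with a generic stochastic rate-coding sketch. The function name and the use of a per-step Bernoulli draw are assumptions for illustration, not the paper's exact scheme (which also mixes in temporal coding and a built-in randomizer):

```python
import numpy as np

def rate_code(value, n_steps=8, rng=None):
    """Encode a normalized intensity (0..1) as a binary spike stream.

    Generic rate-coding sketch, not the paper's exact BSRC scheme:
    each time step fires a spike with probability equal to the input
    intensity, so the expected spike count over the stream is
    proportional to the encoded value.
    """
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random(n_steps) < value).astype(np.uint8)

# A pixel of intensity 0.75 yields, on average, 6 spikes over 8 steps.
stream = rate_code(0.75, n_steps=8, rng=np.random.default_rng(0))
```

Downstream spiking neurons then integrate these binary streams over the time window, so a higher intensity contributes proportionally more input current.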
Main Authors: | Ali A. Al-Hamid, HyungWon Kim |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-09-01 |
Series: | Electronics |
Subjects: | Spiking Neural Network (SNN); Spike Rate Coding; MNIST dataset; weight quantization; SNN hardware |
Online Access: | https://www.mdpi.com/2079-9292/9/10/1599 |
id |
doaj-2a2c540eb11546bb9e57babafcc3dffc |
doi |
10.3390/electronics9101599 |
affiliations |
Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea (Ali A. Al-Hamid; HyungWon Kim) |
collection |
DOAJ |
issn |
2079-9292 |
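The abstract's weight quantization step, reducing 32-bit weights to 4-bit integers, can be sketched with a minimal uniform symmetric quantizer. This is an assumed generic scheme (the paper's actual quantization method is not given in the record); the function name and scale choice are illustrative:

```python
import numpy as np

def quantize_4bit(w):
    """Uniform symmetric quantization of float32 weights to 4-bit ints.

    Minimal sketch of the kind of post-training quantization the
    abstract describes (32-bit weights -> 4-bit integers); the paper's
    actual scheme may differ. The signed 4-bit range is [-8, 7].
    """
    scale = np.max(np.abs(w)) / 7.0  # map the largest weight magnitude to +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.array([0.5, -0.25, 0.1, -0.7], dtype=np.float32)
q, scale = quantize_4bit(w)
dequant = q * scale  # approximate reconstruction of the original weights
```

Because each quantized weight needs only 4 bits instead of 32, the weight storage shrinks by 8 times; the abstract's reported 4-times hardware reduction suggests the savings also depend on other circuit components beyond weight memory.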