Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms

In recent times, the use of modern neuromorphic hardware for brain-inspired spiking neural networks (SNNs) has grown rapidly. For sparse input data, SNNs offer low power consumption on event-based neuromorphic hardware, particularly in the deeper layers. However, training spiking models through deep ANNs is still considered a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models; nevertheless, these methods require hundreds to thousands of time-steps for training and still cannot attain good SNN performance. This work proposes customized VGG- and ResNet-based architectures for training deep convolutional spiking neural networks. In this study, deep convolutional SNNs are trained with surrogate gradient descent backpropagation in a customized layer architecture similar to that of deep artificial neural networks. Moreover, this work proposes training SNNs with surrogate gradient descent over fewer time-steps. Because overfitting was encountered during training with surrogate gradient descent backpropagation, this work refines an SNN-based dropout technique for use with surrogate gradient descent. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where the customized SNN models were deployed and evaluated extensively. Processing time and inference accuracy were validated on both PC and embedded platforms, showing that the proposed customized models and training techniques are feasible and achieve better performance on datasets such as CIFAR-10, MNIST, SVHN, and private KITTI and Korean license plate datasets.
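The abstract refers to surrogate gradient descent backpropagation over a small number of time-steps and an SNN-adapted dropout scheme. The sketch below is a minimal illustration of how such a spiking layer is commonly written in PyTorch; the fast-sigmoid surrogate, the decay/threshold values, the `LIFLayer` name, and the fixed-per-sample dropout mask are assumptions for illustration, not the exact architecture or hyperparameters described in the article.

```python
# Minimal sketch of surrogate-gradient SNN training in PyTorch.
# NOT the authors' exact formulation; the surrogate shape, hyperparameters,
# and dropout scheme below are illustrative assumptions.

import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in the backward pass."""

    scale = 10.0  # assumed surrogate steepness

    @staticmethod
    def forward(ctx, membrane_minus_threshold):
        ctx.save_for_backward(membrane_minus_threshold)
        return (membrane_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Derivative of a fast sigmoid: 1 / (scale * |x| + 1)^2
        surrogate = 1.0 / (SpikeFn.scale * x.abs() + 1.0) ** 2
        return grad_output * surrogate


class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer unrolled over a small number of time-steps."""

    def __init__(self, in_features, out_features, time_steps=8, decay=0.9,
                 threshold=1.0, dropout_p=0.2):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.time_steps = time_steps
        self.decay = decay
        self.threshold = threshold
        self.dropout_p = dropout_p

    def forward(self, x):
        mem = torch.zeros(x.shape[0], self.fc.out_features, device=x.device)
        # SNN-style dropout (an assumption here): one mask sampled per forward pass
        # and reused at every time-step, so the same units stay silenced over the spike train.
        if self.training and self.dropout_p > 0:
            keep = (torch.rand_like(mem) > self.dropout_p).float() / (1.0 - self.dropout_p)
        else:
            keep = torch.ones_like(mem)

        spike_sum = torch.zeros_like(mem)
        for _ in range(self.time_steps):
            current = self.fc(x) * keep
            mem = self.decay * mem + current
            spikes = SpikeFn.apply(mem - self.threshold)
            mem = mem - spikes * self.threshold  # soft reset by subtraction
            spike_sum = spike_sum + spikes
        # Rate-coded output: average spike count over the time-steps.
        return spike_sum / self.time_steps


if __name__ == "__main__":
    layer = LIFLayer(in_features=784, out_features=10, time_steps=8)
    dummy = torch.rand(4, 784)   # e.g. flattened MNIST-sized inputs
    print(layer(dummy).shape)    # torch.Size([4, 10])
```

Stacking such layers (or convolutional counterparts) and training the rate-coded outputs with a standard cross-entropy loss is one common way to build VGG/ResNet-style spiking classifiers of the kind the abstract describes.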

Bibliographic Details
Main Authors: Tehreem Syed, Vijay Kakani, Xuenan Cui, Hakil Kim
Author Affiliations: Electrical and Computer Engineering, Inha University, 100 Inha-ro, Nam-gu, Incheon 22212, Korea (Tehreem Syed, Hakil Kim); Integrated System and Engineering, School of Global Convergence Studies, Inha University (Vijay Kakani); Information and Communication Engineering, Inha University (Xuenan Cui)
Format: Article
Language: English
Published: MDPI AG, 2021-05-01
Series: Sensors
ISSN: 1424-8220
Subjects: deep convolutional spiking neural networks; spiking neuron model; surrogate gradient descent; time-steps; embedded platform
Online Access: https://www.mdpi.com/1424-8220/21/9/3240
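The abstract also mentions validating processing time between a PC and the Jetson TX2. A minimal timing harness for that kind of comparison might look like the following sketch; the model name and input shape are placeholders, not the article's actual setup.

```python
# Hedged sketch: average per-batch inference latency for a trained model,
# usable on both a desktop GPU and an NVIDIA Jetson TX2 (assumed setup).

import time
import torch


def measure_latency(model, input_shape=(1, 3, 32, 32), runs=100, device="cuda"):
    """Return the average forward-pass time in milliseconds over `runs` repetitions."""
    model = model.to(device).eval()
    dummy = torch.rand(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(10):               # warm-up iterations
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()      # flush queued kernels before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / runs

# Example (hypothetical model): latency_ms = measure_latency(my_spiking_vgg)
```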