Neural Network Training Acceleration With RRAM-Based Hybrid Synapses

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. Implementing an energy-efficient HNN with high accuracy requires high-precision synaptic devices and fully parallel array operations. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recent work has attempted to compensate for device nonidealities by using multiple devices per weight. Although this improves precision, the existing parallel update scheme is difficult to apply to such synaptic units, which significantly increases the cost of the update process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a “big” synapse and a “small” synapse, together with a corresponding training method. Unlike previous approaches, our architecture enables array-wise, fully parallel learning with simple array-selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the software floating-point implementation (97.92%), even with only 50 conductance states per device. Our results promise efficient training and high inference accuracy using existing RRAM devices.

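The hybrid weight decomposition described in the abstract can be made concrete with a small simulation. The sketch below is a minimal, hypothetical model, assuming a weight of the form W = gain × G_big + G_small with a fixed number of conductance states per device; the class name, gain value, update rule, and transfer step are illustrative assumptions, not the paper's actual operational scheme.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact scheme): each weight is
# W = GAIN * g_big + g_small, where both conductances are limited to N_STATES
# discrete levels, mimicking finite-precision RRAM devices.

N_STATES = 50                    # assumed conductance states per device
GAIN = N_STATES                  # assumed intrinsic gain of the "big" synapse
G_MAX = 1.0                      # normalized maximum conductance
STEP = G_MAX / (N_STATES - 1)    # smallest programmable conductance change


def quantize(g):
    """Snap conductances to the nearest of the N_STATES allowed levels."""
    return np.clip(np.round(g / STEP) * STEP, 0.0, G_MAX)


class HybridSynapseArray:
    """Toy crossbar of hybrid (big + small) synapse pairs."""

    def __init__(self, rows, cols, seed=0):
        rng = np.random.default_rng(seed)
        self.g_big = quantize(rng.uniform(0.0, G_MAX, (rows, cols)))
        self.g_small = quantize(np.full((rows, cols), G_MAX / 2))

    @property
    def weights(self):
        # Effective weight seen by the network (reference column omitted).
        return GAIN * self.g_big + self.g_small

    def parallel_update(self, delta_w, lr=0.1):
        """Fully parallel gradient update routed to the small synapses only
        (a simple selection choice; the big synapses stay untouched here)."""
        self.g_small = quantize(self.g_small - lr * delta_w)

    def transfer(self):
        """Occasionally fold the small-synapse deviation from its midpoint
        into the big synapses (in coarse steps), then reset the small ones."""
        n_steps = np.round((self.g_small - G_MAX / 2) / (GAIN * STEP))
        self.g_big = quantize(self.g_big + n_steps * STEP)
        self.g_small = quantize(np.full_like(self.g_small, G_MAX / 2))


# Example: one parallel update with a dummy gradient, then a transfer pass.
arr = HybridSynapseArray(4, 3)
arr.parallel_update(np.random.default_rng(1).normal(size=(4, 3)))
arr.transfer()
print(arr.weights)
```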

Bibliographic Details
Main Authors: Wooseok Choi, Myonghoon Kwak, Seyoung Kim, Hyunsang Hwang
Format: Article
Language: English
Published: Frontiers Media S.A. 2021-06-01
Series: Frontiers in Neuroscience
ISSN: 1662-453X
Subjects: hardware neural networks; online training; resistive memory; hybrid synapse; crossbar array
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2021.690418/full