Algorithm for Training Neural Networks on Resistive Device Arrays
Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using stochastic gradient descent (SGD) and the backpropagation (BP) algorithm. The training accuracy on this imminent analog hardware, howev...
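As context for the abstract above, here is a minimal sketch of the SGD/backpropagation computation that a resistive cross-point array is meant to accelerate: the forward pass is a vector-matrix product, the backward pass a transposed product, and the weight update a rank-one outer product applied in place on the array. The layer sizes, learning rate, and data below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-layer sizes; on analog hardware, W would be stored
# as the conductances of a resistive cross-point array.
n_in, n_out = 8, 4
W = rng.standard_normal((n_out, n_in)) * 0.1
lr = 0.01  # illustrative learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One SGD step on a single (input, target) example.
x = rng.standard_normal(n_in)
target = rng.standard_normal(n_out)

# Forward pass: vector-matrix multiply (performed in place on the array).
y = sigmoid(W @ x)

# Backward pass: error propagated through the transpose of W.
err = (y - target) * y * (1.0 - y)
delta_x = W.T @ err  # would be passed to the preceding layer

# Weight update: rank-one outer product, the local update a cross-point
# array can apply without reading the weights out.
W -= lr * np.outer(err, x)
```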
| Main Authors: | Tayfun Gokmen, Wilfried Haensch |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2020-02-01 |
| Series: | Frontiers in Neuroscience |
| Subjects: | |
| Online Access: | https://www.frontiersin.org/article/10.3389/fnins.2020.00103/full |
Similar Items
- Training LSTM Networks With Resistive Cross-Point Devices
  by: Tayfun Gokmen, et al.
  Published: (2018-10-01)
- RAPA-ConvNets: Modified Convolutional Networks for Accelerated Training on Architectures With Analog Arrays
  by: Malte J. Rasch, et al.
  Published: (2019-07-01)
- Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
  by: Tayfun Gokmen, et al.
  Published: (2017-10-01)
- Plasticity in memristive devices for Spiking Neural Networks
  by: Sylvain Saïghi, et al.
  Published: (2015-03-01)
- Enabling Training of Neural Networks on Noisy Hardware
  by: Tayfun Gokmen
  Published: (2021-09-01)