Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators
© 2019 IEEE. This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators. We highlight important properties of these accelerators and the resulting design considerations using experiments conducted...
| Main Authors: | Yang, Tien-Ju (Author), Sze, Vivienne (Author) |
| --- | --- |
| Other Authors: | Massachusetts Institute of Technology. Microsystems Technology Laboratories (Contributor) |
| Format: | Article |
| Language: | English |
| Published: | Institute of Electrical and Electronics Engineers (IEEE), 2021-11-15T20:37:23Z |
| Online Access: | Get fulltext |
Similar Items

- Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators
  Published: (2021)
- Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators
  by: Chen, Yu-Hsin, et al.
  Published: (2021)
- Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
  by: Chen, Yu-Hsin, et al.
  Published: (2016)
- A method to estimate the energy consumption of deep neural networks
  by: Yang, Tien-Ju, et al.
  Published: (2021)
- An Architecture-Level Energy and Area Estimator for Processing-In-Memory Accelerator Designs
  by: Wu, Yannan Nellie, et al.
  Published: (2021)