Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators
© 2019 IEEE. This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators. We highlight important properties of these accelerators and the resulting design considerations using experiments conducted...
Format: | Article |
---|---|
Language: | English |
Published: | Institute of Electrical and Electronics Engineers (IEEE), 2021-11-03T14:10:52Z |
Similar Items
- Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators
  by: Yang, Tien-Ju, et al.
  Published: (2021)
- Automated optimization for memory‐efficient high‐performance deep neural network accelerators
  by: HyunMi Kim, et al.
  Published: (2020-07-01)
- Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations
  by: Tayfun Gokmen, et al.
  Published: (2016-07-01)
- MOSDA: On-Chip Memory Optimized Sparse Deep Neural Network Accelerator With Efficient Index Matching
  by: Hongjie Xu, et al.
  Published: (2021-01-01)
- Architecture design for highly flexible and energy-efficient deep neural network accelerators
  by: Chen, Yu-Hsin, Ph. D. Massachusetts Institute of Technology
  Published: (2018)