Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators

© 2019 IEEE. This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators. We highlight important properties of these accelerators and the resulting design considerations using experiments conducted on various state-of-the-art deep neural networks with the large-scale ImageNet dataset.


Bibliographic Details
Main Authors: Yang, Tien-Ju (Author), Sze, Vivienne (Author)
Other Authors: Massachusetts Institute of Technology. Microsystems Technology Laboratories (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021-11-15T20:37:23Z.
Online Access: Get fulltext
LEADER 01101 am a22001813u 4500
001 137180.2
042 |a dc 
100 1 0 |a Yang, Tien-Ju  |e author 
100 1 0 |a Massachusetts Institute of Technology. Microsystems Technology Laboratories  |e contributor 
700 1 0 |a Sze, Vivienne  |e author 
245 0 0 |a Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2021-11-15T20:37:23Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137180.2 
520 |a © 2019 IEEE. This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators. We highlight important properties of these accelerators and the resulting design considerations using experiments conducted on various state-of-the-art deep neural networks with the large-scale ImageNet dataset. 
520 |a NSF (Grant E2CDA 1639921) 
546 |a en 
655 7 |a Article 
773 |t Technical Digest - International Electron Devices Meeting, IEDM