McDRAM v2: In-Dynamic Random Access Memory Systolic Array Accelerator to Address the Large Model Problem in Deep Neural Networks on the Edge
The energy efficiency of accelerating deep neural networks (DNNs) hundreds of megabytes in size in a mobile environment is lower than that of a server-class big-chip accelerator because of the limited power budget, silicon area, and smaller static random access memory buffer size associated with mobile sys...
| Main Authors: | Seunghwan Cho, Haerang Choi, Eunhyeok Park, Hyunsung Shin, Sungjoo Yoo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9146167/ |
Similar Items
- AB9: A neural processor for inference acceleration
  by: Yong Cheol Peter Cho, et al.
  Published: (2020-08-01)
- CENNA: Cost-Effective Neural Network Accelerator
  by: Sang-Soo Park, et al.
  Published: (2020-01-01)
- Accelerating Neural Network Inference on FPGA-Based Platforms—A Survey
  by: Ran Wu, et al.
  Published: (2021-04-01)
- Recent Progress on Memristive Convolutional Neural Networks for Edge Intelligence
  by: Yi-Fan Qin, et al.
  Published: (2020-11-01)
- Performance analysis of local exit for distributed deep neural networks over cloud and edge computing
  by: Changsik Lee, et al.
  Published: (2020-10-01)