LEADER |
01550 am a22001573u 4500 |
001 |
122468 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Biswas, Avishek
|e author
|
100 |
1 |
0 |
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
700 |
1 |
0 |
|a Chandrakasan, Anantha P.
|e author
|
245 |
0 |
0 |
|a CONV-SRAM: An Energy-Efficient SRAM With In-Memory Dot-Product Computation for Low-Power Convolutional Neural Networks
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2019-10-08T15:44:28Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/122468
|
520 |
|
|
|a This paper presents an energy-efficient static random access memory (SRAM) with embedded dot-product computation capability for binary-weight convolutional neural networks. A 10T bit-cell-based SRAM array is used to store the 1-b filter weights. The array implements the dot-product as a weighted average of the bitline voltages, which are proportional to the digital input values. Local integrating analog-to-digital converters compute the digital convolution outputs corresponding to each filter. We have successfully demonstrated functionality (>98% accuracy) on the 10,000 test images of the MNIST hand-written digit recognition data set, using 6-b inputs/outputs. Compared with conventional fully digital implementations using small bit widths, we achieve similar or better energy efficiency by reducing data transfer, owing to the highly parallel in-memory analog computations.
|
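The dot-product scheme summarized in the abstract (bitline voltages proportional to digital inputs, averaged with signs given by the stored 1-b weights, then re-digitized by a local integrating ADC) can be sketched numerically. The model below is an illustration only; the function name, the ±1 weight encoding, and the normalization/scaling choices are assumptions, not details taken from the paper.

```python
import numpy as np

def conv_sram_dot_product(weights, inputs, n_bits=6):
    """Illustrative model of an in-memory binary-weight dot product.

    weights : 1-b filter weights stored in the SRAM array, encoded as +1/-1
              (assumed encoding, not specified in the abstract)
    inputs  : digital input activations in [0, 2**n_bits - 1]
    """
    weights = np.asarray(weights, dtype=float)
    inputs = np.asarray(inputs, dtype=float)
    full_scale = 2 ** n_bits - 1
    # Each input drives its bitline to a voltage proportional
    # to the digital value (normalized here to [0, 1]).
    v_bl = inputs / full_scale
    # The array forms a signed weighted average of the bitline voltages.
    avg = np.dot(weights, v_bl) / len(weights)
    # A local integrating ADC quantizes the average back to n_bits.
    return int(round(avg * full_scale))

# Example: 4-element filter, 6-b inputs.
# (10 - 20 + 30 + 40) / 4 = 15 after averaging and re-quantization.
print(conv_sram_dot_product([1, -1, 1, 1], [10, 20, 30, 40]))  # → 15
```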
655 |
7 |
|
|a Article
|
773 |
|
|
|t IEEE Journal of Solid-State Circuits
|