Energy Efficient CNN Inference Accelerator using Fast Fourier Transform

Bibliographic Details
Main Authors: Chung, Ya-Chin, 鍾亞晉
Other Authors: Liu, Chih-Wei
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/275wrz
Description
Summary: Master's === National Chiao Tung University === Institute of Electronics === 107 === In recent years, Deep Convolutional Neural Networks (DCNNs) have become the state of the art for various classification tasks, but they are computationally expensive due to their high-dimensional convolutions. In this thesis, we propose FFT-based convolution in the frequency domain in place of traditional direct convolution in the time domain, which considerably reduces the computational complexity. Moreover, we use conjugate symmetry and down-sampling in the frequency domain to reduce the complexity further. We also exploit weight sparsity to eliminate filter weights in CNNs; this saves computation but comes with some accuracy loss. We simulate the trend of classification accuracy in both the time domain and the frequency domain, and the results reveal that eliminating filter weights in the frequency domain is more accurate and meaningful than doing so in the time domain. We implement the proposed FFT-based design with fixed-point arithmetic and achieve 51.37% top-1 accuracy. Synthesized in TSMC 90 nm CMOS technology and operated at 330 MHz, the proposed design consumes approximately 247.5 mW of power with a latency of 69.2 ms. Compared with MIT's Eyeriss on the same workload, the last four convolution layers of AlexNet, the proposed design is considerably competitive.
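The complexity reduction claimed in the summary follows from the convolution theorem: convolution in the time (spatial) domain becomes pointwise multiplication in the frequency domain. The following NumPy sketch is illustrative only, not the thesis's fixed-point hardware design; it shows the equivalence, with `rfft2` exploiting the conjugate symmetry of real-valued inputs (only about half the spectrum is computed) that the abstract mentions.

```python
import numpy as np

def direct_conv2d(x, w):
    """Direct 2D convolution ('full' output size) via explicit shifts."""
    H, W = x.shape
    h, ww = w.shape
    out = np.zeros((H + h - 1, W + ww - 1))
    for i in range(h):
        for j in range(ww):
            # w[i, j] * x contributes to output positions shifted by (i, j)
            out[i:i + H, j:j + W] += w[i, j] * x
    return out

def fft_conv2d(x, w):
    """FFT-based 2D convolution: pointwise product in the frequency domain."""
    H, W = x.shape
    h, ww = w.shape
    s = (H + h - 1, W + ww - 1)          # zero-pad so circular == linear conv
    X = np.fft.rfft2(x, s)                # rfft2: real input -> half spectrum
    Wf = np.fft.rfft2(w, s)               # (conjugate-symmetric part implied)
    return np.fft.irfft2(X * Wf, s)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))         # toy feature map
k = rng.standard_normal((5, 5))           # toy filter
assert np.allclose(direct_conv2d(x, k), fft_conv2d(x, k))
```

For an N×N input and a k×k filter, the direct approach costs O(N²k²) multiply-accumulates per output channel, while the FFT approach costs O(N² log N) for the transforms plus O(N²) pointwise products, which is where the savings for the larger convolution layers come from.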