Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation

Master's === National Chiao Tung University === Institute of Computer Science and Engineering === 107 === Recently, convolutional neural networks (CNNs) have been widely applied to image classification and object detection. The convolution layers account for most of the arithmetic operations in a CNN model, so they play a significant role...

Full description

Bibliographic Details
Main Authors: Peng, Hsuan-Hao, 彭宣澔
Other Authors: Chen, Chien
Format: Others
Language: en_US
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/5szwn6
id ndltd-TW-107NCTU5394066
record_format oai_dc
spelling ndltd-TW-107NCTU53940662019-06-27T05:42:50Z http://ndltd.ncl.edu.tw/handle/5szwn6 Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation 基於二位元權重及激活函數之高效率卷積類神經網路設計與實現 Peng, Hsuan-Hao 彭宣澔 Master's National Chiao Tung University Institute of Computer Science and Engineering 107 Recently, convolutional neural networks (CNNs) have been widely applied to image classification and object detection. The convolution layers account for most of the arithmetic operations in a CNN model, so they play a significant role in CNN hardware. In this thesis, we propose a new CNN accelerator that reduces both the hardware area and the number of operations in the convolution layers. We adopt the binary-weights-and-activation (BWA) method to constrain the filter weights to binary values during forward propagation. With the BWA method, each multiplication can be replaced by an XNOR gate operation, which significantly lowers the computational complexity. The XNOR-based operation not only achieves low space and time complexity but is also energy efficient. In addition, we support flexible filter sizes, i.e., 1 × 1, 2 × 2, …, 7 × 7. For the storage of input feature maps and filters, we adopt a 32 × 32 based computation mechanism. Chen, Chien 陳健 2019 學位論文 ; thesis 32 en_US
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's === National Chiao Tung University === Institute of Computer Science and Engineering === 107 === Recently, convolutional neural networks (CNNs) have been widely applied to image classification and object detection. The convolution layers account for most of the arithmetic operations in a CNN model, so they play a significant role in CNN hardware. In this thesis, we propose a new CNN accelerator that reduces both the hardware area and the number of operations in the convolution layers. We adopt the binary-weights-and-activation (BWA) method to constrain the filter weights to binary values during forward propagation. With the BWA method, each multiplication can be replaced by an XNOR gate operation, which significantly lowers the computational complexity. The XNOR-based operation not only achieves low space and time complexity but is also energy efficient. In addition, we support flexible filter sizes, i.e., 1 × 1, 2 × 2, …, 7 × 7. For the storage of input feature maps and filters, we adopt a 32 × 32 based computation mechanism.
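A minimal sketch of the XNOR substitution mentioned in the abstract (this is not the accelerator's actual datapath; the function name binary_dot and the bit-packing scheme are illustrative assumptions): when weights and activations are constrained to {-1, +1} and each value is encoded as one bit, an n-element dot product equals n - 2 * popcount(a XOR b), so the multiplier array can be replaced by XNOR gates followed by a popcount.

# Illustrative BWA-style binary dot product via XNOR/popcount (Python)
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    # Assumed packing: bit i = 1 encodes +1, bit i = 0 encodes -1
    # Differing bits contribute -1, matching bits +1:
    # dot = (n - mismatches) - mismatches = n - 2 * mismatches
    mismatches = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return n - 2 * mismatches

# Example: a = [+1, -1, +1], b = [+1, +1, -1] -> (+1) + (-1) + (-1) = -1
print(binary_dot(0b101, 0b011, 3))  # prints -1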
author2 Chen, Chien
author_facet Chen, Chien
Peng, Hsuan-Hao
彭宣澔
author Peng, Hsuan-Hao
彭宣澔
spellingShingle Peng, Hsuan-Hao
彭宣澔
Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
author_sort Peng, Hsuan-Hao
title Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
title_short Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
title_full Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
title_fullStr Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
title_full_unstemmed Design and Implementation of an Efficient CNN Accelerator with Binary Weights and Activation
title_sort design and implementation of an efficient cnn accelerator with binary weights and activation
publishDate 2019
url http://ndltd.ncl.edu.tw/handle/5szwn6
work_keys_str_mv AT penghsuanhao designandimplementationofanefficientcnnacceleratorwithbinaryweightsandactivation
AT péngxuānhào designandimplementationofanefficientcnnacceleratorwithbinaryweightsandactivation
AT penghsuanhao jīyúèrwèiyuánquánzhòngjíjīhuóhánshùzhīgāoxiàolǜjuǎnjīlèishénjīngwǎnglùshèjìyǔshíxiàn
AT péngxuānhào jīyúèrwèiyuánquánzhòngjíjīhuóhánshùzhīgāoxiàolǜjuǎnjīlèishénjīngwǎnglùshèjìyǔshíxiàn
_version_ 1719213402458423296