CAxCNN: Towards the Use of Canonic Sign Digit Based Approximation for Hardware-Friendly Convolutional Neural Networks
The design of hardware-friendly architectures with low computational overhead is desirable for low-latency realization of CNNs on resource-constrained embedded platforms. In this work, we propose CAxCNN, a Canonic Sign Digit (CSD) based approximation methodology for representing the filter weights of pre-trained CNNs. The proposed CSD representation allows the use of multipliers with reduced computational complexity. The technique can be applied on top of state-of-the-art CNN quantization schemes in a complementary manner. Our experimental results on a variety of CNNs, trained on the MNIST, CIFAR-10, and ImageNet datasets, demonstrate that our methodology provides CNN designs with multiple levels of classification accuracy, without requiring any retraining, and with low area and computational overhead. Furthermore, when applied in conjunction with a state-of-the-art quantization scheme, CAxCNN allows the use of multipliers that offer a 77% logic-area reduction compared to their accurate counterparts, while incurring a Top-1 accuracy drop of just 5.63% for a VGG-16 network trained on ImageNet.
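The record contains no implementation details beyond the abstract, but the core idea, rewriting each fixed-point filter weight in Canonic Signed Digit form so that a multiplication reduces to a few shift-and-add/subtract operations, can be illustrated with a short sketch. The snippet below is not the authors' CAxCNN algorithm; the function names, `frac_bits`, `max_terms`, and the keep-the-most-significant-digits truncation rule are illustrative assumptions used only to show how a CSD-based approximation trades nonzero digits (and hence multiplier cost) against weight accuracy.

```python
def to_csd(value, frac_bits=8):
    """Encode a real number as Canonic Signed Digit (CSD) terms.

    Returns a list of (sign, power) pairs; the encoded value is
    sum(sign * 2**power). By construction no two nonzero digits are
    adjacent, which is the defining CSD property and keeps the number
    of shift-add/subtract terms small.
    """
    n = int(round(value * (1 << frac_bits)))  # scale to an integer
    terms, pos = [], 0
    while n != 0:
        if n & 1:                 # current bit is nonzero
            d = 2 - (n & 3)       # +1 if n % 4 == 1, -1 if n % 4 == 3
            terms.append((d, pos - frac_bits))
            n -= d                # leaves n divisible by 4, so the next digit is 0
        n >>= 1
        pos += 1
    return terms


def approx_csd_weight(value, max_terms=2, frac_bits=8):
    """Approximate a weight by keeping only its most significant CSD
    terms (a hypothetical truncation rule, not necessarily the paper's).
    Fewer terms means a cheaper shift-add multiplier in hardware."""
    kept = sorted(to_csd(value, frac_bits), key=lambda t: t[1], reverse=True)[:max_terms]
    return sum(s * 2.0 ** p for s, p in kept), kept


# Example: 0.6875 = 2^-1 + 2^-3 + 2^-4; CSD rewrites it as 2^0 - 2^-2 - 2^-4,
# and keeping the two most significant terms approximates it as 0.75 with a
# single subtraction.
print(approx_csd_weight(0.6875, max_terms=2))
```

Varying a knob such as `max_terms` is one plausible way to obtain the "multiple levels of classification accuracy" the abstract mentions, though the paper's actual approximation control may differ.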
Main Authors: Mohsin Riaz, Rehan Hafiz, Salman Abdul Khaliq, Muhammad Faisal, Hafiz Talha Iqbal, Mohsen Ali, Muhammad Shafique
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Convolution neural networks; dedicated accelerators; approximate computing; canonic sign digits
Online Access: https://ieeexplore.ieee.org/document/9137167/
id: doaj-4f85a81037c040e5bb332cb4fa528ae3
record_format: Article
spelling: IEEE Access, vol. 8, pp. 127014-127021, 2020-01-01. ISSN 2169-3536. DOI: 10.1109/ACCESS.2020.3008256 (IEEE document 9137167). Record timestamp: 2021-03-30T02:08:40Z.
Authors and affiliations:
Mohsin Riaz (https://orcid.org/0000-0003-0980-5772), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Rehan Hafiz (https://orcid.org/0000-0002-5062-3068), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Salman Abdul Khaliq (https://orcid.org/0000-0001-6642-4496), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Muhammad Faisal (https://orcid.org/0000-0001-5254-4833), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Hafiz Talha Iqbal (https://orcid.org/0000-0003-3594-950X), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Mohsen Ali (https://orcid.org/0000-0003-4809-8679), Computer Engineering Department, Information Technology University, Lahore, Pakistan
Muhammad Shafique (https://orcid.org/0000-0002-2607-8135), Institute of Computer Engineering, Vienna University of Technology (TU Wien), Vienna, Austria
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Mohsin Riaz; Rehan Hafiz; Salman Abdul Khaliq; Muhammad Faisal; Hafiz Talha Iqbal; Mohsen Ali; Muhammad Shafique
title: CAxCNN: Towards the Use of Canonic Sign Digit Based Approximation for Hardware-Friendly Convolutional Neural Networks
publisher: IEEE
series: IEEE Access
issn: 2169-3536
publishDate: 2020-01-01
description: The design of hardware-friendly architectures with low computational overhead is desirable for low-latency realization of CNNs on resource-constrained embedded platforms. In this work, we propose CAxCNN, a Canonic Sign Digit (CSD) based approximation methodology for representing the filter weights of pre-trained CNNs. The proposed CSD representation allows the use of multipliers with reduced computational complexity. The technique can be applied on top of state-of-the-art CNN quantization schemes in a complementary manner. Our experimental results on a variety of CNNs, trained on the MNIST, CIFAR-10, and ImageNet datasets, demonstrate that our methodology provides CNN designs with multiple levels of classification accuracy, without requiring any retraining, and with low area and computational overhead. Furthermore, when applied in conjunction with a state-of-the-art quantization scheme, CAxCNN allows the use of multipliers that offer a 77% logic-area reduction compared to their accurate counterparts, while incurring a Top-1 accuracy drop of just 5.63% for a VGG-16 network trained on ImageNet.
topic: Convolution neural networks; dedicated accelerators; approximate computing; canonic sign digits
url: https://ieeexplore.ieee.org/document/9137167/