FPGA Accelerating Core Design Based on XNOR Neural Network Algorithm

Bibliographic Details
Main Authors: Yi Su, Xiao Hu, Yongjie Sun
Format: Article
Language: English
Published: EDP Sciences, 2018-01-01
Series: MATEC Web of Conferences
Online Access: https://doi.org/10.1051/matecconf/201817301024
Description
Summary: Deep learning is being applied in an increasingly wide range of scenarios. Among computing platforms, the widely used GPU offers limited computational efficiency, while the inflexibility of dedicated processors (APUs) makes it difficult for them to keep pace with evolving algorithms; the FPGA platform balances computational flexibility and computational efficiency. At present, one of the bottlenecks limiting large-scale deep learning algorithms on FPGA platforms is large-scale floating-point computation. This article therefore studies a single-bit parameterized quantized neural network algorithm (XNOR), optimizes the algorithm according to the structural characteristics of the FPGA platform, and designs and implements an FPGA acceleration core. Experimental results show that the acceleration effect is significant.
ISSN: 2261-236X
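The abstract's core idea can be illustrated with a small sketch (not the authors' implementation). In a single-bit quantized network, weights and activations are constrained to {-1, +1}; encoding +1 as bit 1 and -1 as bit 0 turns a dot product into an XNOR followed by a popcount, which is exactly the kind of operation an FPGA acceleration core can realize with simple logic instead of floating-point multipliers. The function name and bit layout below are illustrative assumptions:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as n-bit ints.

    Encoding (assumed for this sketch): bit value 1 means +1, 0 means -1.
    """
    mask = (1 << n) - 1
    # XNOR marks positions where the two vectors agree; popcount totals them.
    matches = bin(~(a_bits ^ w_bits) & mask).count("1")
    # Each match contributes +1, each mismatch -1, to the dot product.
    return 2 * matches - n

# Example: a = [+1, -1, +1, +1] -> 0b1011, w = [+1, +1, -1, +1] -> 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # -> 0 (1 - 1 - 1 + 1)
```

On an FPGA, the XNOR and popcount map directly onto LUTs and adder trees, which is why binarization removes the floating-point bottleneck the abstract describes.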