Layer-wise Fixed Point Quantization for Deep Convolutional Neural Networks and Implementation of YOLOv3 Inference Engine
Master's Thesis === National Cheng Kung University === Institute of Computer and Communication Engineering === Academic Year 107 === With the increasing popularity of mobile devices and the effectiveness of deep learning-based algorithms, there is growing interest in deploying deep learning models on mobile devices. However, deployment is limited by computational complexity and software overhead. We propose an...
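The title refers to layer-wise fixed-point quantization. The truncated abstract does not describe the thesis's actual method, but as a rough illustrative sketch (all function names and bit-width choices here are assumptions, not taken from the thesis), per-layer fixed-point quantization typically assigns each layer its own fractional bit width based on that layer's weight range:

```python
import numpy as np

def quantize_layer(weights, total_bits=8):
    # Illustrative sketch, not the thesis's method: pick enough integer
    # bits to cover this layer's dynamic range, then spend the remaining
    # bits (minus one sign bit) on the fractional part.
    max_abs = np.max(np.abs(weights))
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = total_bits - 1 - int_bits  # reserve 1 bit for the sign
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(weights * scale), qmin, qmax).astype(np.int8)
    return q, frac_bits

def dequantize(q, frac_bits):
    # Recover an approximate floating-point value from the fixed-point code.
    return q.astype(np.float32) / (2.0 ** frac_bits)

# Each layer gets its own fractional bit width: a layer with small
# weights keeps more fractional precision than one with large weights.
layer_a = np.array([0.5, -0.25, 0.75], dtype=np.float32)
layer_b = np.array([3.2, -1.5, 2.8], dtype=np.float32)
qa, fa = quantize_layer(layer_a)  # fa = 7 fractional bits
qb, fb = quantize_layer(layer_b)  # fb = 5 fractional bits
```

The per-layer choice is the key point: a single global scale would waste precision on layers whose weights span very different ranges.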
Main Authors: | Wei-Chung Tseng, 曾微中 |
Other Authors: | Chung-Ho Chen |
Format: | Others |
Language: | Chinese (zh-TW) |
Published: | 2019 |
Online Access: | http://ndltd.ncl.edu.tw/handle/x46nq6 |
Similar Items
- Zero-Centered Fixed-Point Quantization With Iterative Retraining for Deep Convolutional Neural Network-Based Object Detectors
  by: Sungrae Kim, et al.
  Published: (2021-01-01)
- Spatial Shift Point-Wise Quantization
  by: Eunhui Kim, et al.
  Published: (2020-01-01)
- Optimizing Spatial Shift Point-Wise Quantization
  by: Eunhui Kim, et al.
  Published: (2021-01-01)
- Sensitivity-Oriented Layer-Wise Acceleration and Compression for Convolutional Neural Network
  by: Wei Zhou, et al.
  Published: (2019-01-01)
- Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression
  by: Huabin Diao, et al.
  Published: (2021-05-01)