Optimization of YOLOv3 Inference Engine for Edge Device



Bibliographic Details
Main Authors: Min-Zhi Ji, 紀旻志
Other Authors: Chung-Ho Chen
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/7kj82c
Description
Summary: Master's thesis === National Cheng Kung University === Department of Electrical Engineering === 107 === For neural networks deployed on low-end edge devices, several approaches exist, such as model compression, model quantization, and hardware accelerator design. However, the number of parameters in current NN (neural network) models keeps increasing, and current NN frameworks typically initialize the entire model at startup, so the memory requirement is very large. To reduce the memory requirement, we propose layer-wise memory management based on Darknet. However, NN models may have complex network structures with residual or routing connections for better training results, so we also propose a layer-dependency counter mechanism. We name the modified framework MDFI (Micro Darknet for Inference). According to our experimental results, the average memory consumption of MDFI is 76% lower than that of Darknet, and the average processing time is 8% lower.
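The idea of layer-wise memory management with a layer-dependency counter can be sketched as follows. This is a minimal illustrative model, not MDFI's actual implementation (the thesis modifies Darknet, which is written in C): each layer's output buffer is allocated only when that layer runs, and a per-layer counter of outstanding consumers decides when the buffer can be freed. Residual (shortcut) and routing layers consume more than one earlier output, which is exactly why a simple "free the previous layer" policy fails and a counter is needed. All names here (`Layer`, `run_network`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    inputs: list  # indices of earlier layers whose outputs this layer reads


def run_network(layers):
    """Run layers in order, freeing each output buffer as soon as its
    dependency counter drops to zero. Returns the peak number of live
    buffers (a proxy for peak activation memory)."""
    # Count how many later layers depend on each layer's output.
    refcount = [0] * len(layers)
    for layer in layers:
        for i in layer.inputs:
            refcount[i] += 1

    outputs = {}        # layer index -> allocated output buffer
    live = peak = 0
    for idx, layer in enumerate(layers):
        # Allocate this layer's output only when it is about to execute,
        # instead of initializing the whole model up front.
        outputs[idx] = f"buffer[{layer.name}]"
        live += 1
        peak = max(peak, live)
        # After consuming each input, decrement its dependency counter;
        # free the buffer once no remaining layer needs it.
        for i in layer.inputs:
            refcount[i] -= 1
            if refcount[i] == 0:
                del outputs[i]
                live -= 1
    return peak


# A small net with a residual connection: conv2's output is read by
# both conv3 and the shortcut layer, so its counter starts at 2.
net = [
    Layer("conv1", []),
    Layer("conv2", [0]),
    Layer("conv3", [1]),
    Layer("shortcut", [1, 2]),
]
print(run_network(net))  # peak of 3 live buffers, vs. 4 if all are kept
```

With whole-model initialization all four buffers would be resident at once; the counter-driven policy keeps at most three live here, and the gap widens on deep networks where most layers have a single consumer.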