Optimization of YOLOv3 Inference Engine for Edge Device
Master's === National Cheng Kung University === Department of Electrical Engineering === 107 === For neural networks deployed on low-end edge devices, there are several common approaches, such as model compression, model quantization, and dedicated hardware accelerators. However, the number of parameters in current NN (neural network) models keeps incr...
Main Authors: | Min-ZhiJi, 紀旻志 |
Other Authors: | Chung-Ho Chen, 陳中和 |
Format: | Others |
Language: | zh-TW |
Published: | 2019 |
Online Access: | http://ndltd.ncl.edu.tw/handle/7kj82c |
id | ndltd-TW-107NCKU5442015 |
record_format | oai_dc |
spelling | ndltd-TW-107NCKU5442015 2019-10-25T05:24:17Z http://ndltd.ncl.edu.tw/handle/7kj82c Optimization of YOLOv3 Inference Engine for Edge Device 優化 YOLOv3 推論引擎並實現於終端裝置 Min-ZhiJi 紀旻志 Master's, National Cheng Kung University, Department of Electrical Engineering, academic year 107. Chung-Ho Chen 陳中和 2019 學位論文 (thesis), 58 pages, zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description | Master's === National Cheng Kung University === Department of Electrical Engineering === 107 === For neural networks deployed on low-end edge devices, there are several common approaches, such as model compression, model quantization, and dedicated hardware accelerators. However, the number of parameters in current NN (neural network) models keeps increasing, and current NN frameworks typically initialize the entire model up front, so the memory requirement is very large. To reduce this memory requirement, we propose layer-wise memory management based on Darknet. However, NN models may have complex network structures with residual or routing connections for better training results, so we also propose a layer-dependency counter mechanism. We name the modified framework MDFI (Micro Darknet for Inference). In our experiments, MDFI reduces average memory consumption by 76% and average processing time by 8% compared to Darknet. |
author2 | Chung-Ho Chen |
author_facet | Chung-Ho Chen Min-ZhiJi 紀旻志 |
author | Min-ZhiJi 紀旻志 |
spellingShingle | Min-ZhiJi 紀旻志 Optimization of YOLOv3 Inference Engine for Edge Device |
author_sort | Min-ZhiJi |
title | Optimization of YOLOv3 Inference Engine for Edge Device |
title_short | Optimization of YOLOv3 Inference Engine for Edge Device |
title_full | Optimization of YOLOv3 Inference Engine for Edge Device |
title_fullStr | Optimization of YOLOv3 Inference Engine for Edge Device |
title_full_unstemmed | Optimization of YOLOv3 Inference Engine for Edge Device |
title_sort | optimization of yolov3 inference engine for edge device |
publishDate | 2019 |
url | http://ndltd.ncl.edu.tw/handle/7kj82c |
work_keys_str_mv | AT minzhiji optimizationofyolov3inferenceengineforedgedevice AT jìmínzhì optimizationofyolov3inferenceengineforedgedevice AT minzhiji yōuhuàyolov3tuīlùnyǐnqíngbìngshíxiànyúzhōngduānzhuāngzhì AT jìmínzhì yōuhuàyolov3tuīlùnyǐnqíngbìngshíxiànyúzhōngduānzhuāngzhì |
_version_ | 1719277896218968064 |