Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems

Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as an input and therefore can be more versatile for different application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
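
For context (this record does not reproduce the paper's update rules): ℓ1-based sparse coding means adding a penalty λ‖w‖1 to the training loss f(w), and proximal methods handle the non-smooth penalty by soft-thresholding the weights after each gradient step. A standard proximal gradient update, under these assumptions, is

    w_{t+1} = prox_{ηλ‖·‖1}( w_t − η ∇f(w_t) ),   where
    [prox_{τ‖·‖1}(v)]_i = sign(v_i) · max(|v_i| − τ, 0)

and η is the step size. Soft-thresholding drives small weights exactly to zero but also shrinks the survivors; debiasing, as the term is commonly used in sparse estimation, re-fits the surviving nonzero weights without the ℓ1 penalty to remove that shrinkage bias, consistent with the abstract's claim that proximal updates plus debiasing make ℓ1 compression more effective.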

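The abstract's "OpenCL-based parallel forward and backward computation in a compressed form" centers on sparse linear algebra over compressed sparse matrices. The sketch below is a minimal, hypothetical OpenCL kernel (all names are illustrative, not taken from the paper) for the core of such a forward pass: a sparse matrix-vector product y = Wx with the layer's weight matrix W stored in standard CSR (compressed sparse row) form.

    /* Minimal sketch, not the authors' code: one work-item per output row
       computes y[row] as the dot product of the nonzeros of W[row, :] with x. */
    __kernel void csr_spmv(const int n_rows,
                           __global const int   *row_ptr,  /* CSR row offsets, length n_rows+1 */
                           __global const int   *col_idx,  /* column index of each nonzero     */
                           __global const float *vals,     /* nonzero weight values            */
                           __global const float *x,        /* layer input                      */
                           __global float       *y)        /* layer output                     */
    {
        int row = get_global_id(0);
        if (row >= n_rows) return;

        float acc = 0.0f;
        for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
            acc += vals[k] * x[col_idx[k]];
        y[row] = acc;
    }

CSR keeps only the nonzero weights plus their column indices and per-row offsets, so both storage and memory traffic scale with the number of weights that survive ℓ1 pruning rather than with the dense layer size, which is what makes such models attractive on small embedded devices.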

Bibliographic Details
Main Authors: Sangkyun Lee (Computer Science, Hanyang University ERICA, Ansan 15588, Korea); Jeonghyun Lee (Computer Science and Engineering, Hanyang University ERICA, Ansan 15588, Korea)
Format: Article
Language: English
Published: MDPI AG, 2019-04-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app9081669
Subjects: compressed learning; regularization; proximal point algorithm; debiasing; embedded systems; OpenCL
Online Access: https://www.mdpi.com/2076-3417/9/8/1669