Robust CNN Compression Framework for Security-Sensitive Embedded Systems

Convolutional neural networks (CNNs) have achieved tremendous success in solving complex classification problems. Motivated by this success, various compression methods have been proposed for downsizing CNNs so that they can be deployed on resource-constrained embedded systems. However, a vulnerability of compressed CNNs to adversarial examples has recently been discovered; this is critical for security-sensitive systems because adversarial examples can cause CNNs to malfunction and are often easy to craft. In this paper, we propose a compression framework that produces compressed CNNs robust against such adversarial examples. To achieve this goal, our framework combines pruning and knowledge distillation with adversarial training. We formulate the framework as an optimization problem and provide a solution algorithm based on the proximal gradient method, which is more memory-efficient than popular ADMM-based compression approaches. In experiments, we show that our framework improves the trade-off between adversarial robustness and compression rate compared with the existing state-of-the-art adversarial pruning approach.
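The following is a minimal, illustrative sketch (not the authors' code) of the kind of training step the abstract describes: adversarial training combined with knowledge distillation, followed by a proximal gradient step that prunes weights via L1 soft-thresholding. The models, the single-step (FGSM-style) attack, and all hyperparameters (lam, alpha, T, eps) are hypothetical placeholders chosen only to show the structure of loss-gradient step plus proximal pruning step.

```python
# Illustrative sketch only: adversarial training + distillation loss,
# followed by a proximal (soft-thresholding) pruning step.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Craft a single-step adversarial example (inputs assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def soft_threshold_(param, thresh):
    """Proximal operator of the L1 norm: shrink weights toward zero in place."""
    with torch.no_grad():
        param.copy_(param.sign() * torch.clamp(param.abs() - thresh, min=0.0))

def train_step(student, teacher, x, y, opt, lam=1e-4, alpha=0.5, T=4.0):
    x_adv = fgsm_example(student, x, y)           # adversarial training input
    s_logits = student(x_adv)
    with torch.no_grad():
        t_logits = teacher(x_adv)                 # distillation targets
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(s_logits, y)
    loss = alpha * kd + (1 - alpha) * ce          # distillation + task loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                    # gradient step on the smooth loss
    lr = opt.param_groups[0]["lr"]
    for p in student.parameters():                # proximal step induces sparsity (pruning)
        soft_threshold_(p, lam * lr)
    return loss.item()
```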

Bibliographic Details
Main Authors: Jeonghyun Lee, Sangkyun Lee (School of Cybersecurity, Korea University, Seoul 02841, Korea)
Format: Article
Language: English
Published: MDPI AG, 2021-01-01
Series: Applied Sciences, vol. 11, no. 3, article 1093
ISSN: 2076-3417
DOI: 10.3390/app11031093
Subjects: model compression; adversarial robustness; weight pruning; adversarial training; distillation; embedded system
Online Access: https://www.mdpi.com/2076-3417/11/3/1093