Deep neural rejection against adversarial examples
Abstract Despite the impressive performances reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers. With respect to competing approaches, our method does not require generating adversarial examples at training time, and it is less computationally demanding. To properly evaluate our method, we define an adaptive white-box attack that is aware of the defense mechanism and aims to bypass it. Under this worst-case setting, we empirically show that our approach outperforms previously proposed methods that detect adversarial examples by only analyzing the feature representation provided by the output network layer.
Main Authors: | Angelo Sotgiu (DIEE, University of Cagliari), Ambra Demontis (DIEE, University of Cagliari), Marco Melis (DIEE, University of Cagliari), Battista Biggio (DIEE, University of Cagliari), Giorgio Fumera (DIEE, University of Cagliari), Xiaoyi Feng (Northwestern Polytechnical University), Fabio Roli (DIEE, University of Cagliari) |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2020-04-01 |
ISSN: | 2510-523X |
DOI: | 10.1186/s13635-020-00105-y |
Series: | EURASIP Journal on Information Security |
Subjects: | Adversarial machine learning; Deep neural networks; Adversarial examples |
Online Access: | http://link.springer.com/article/10.1186/s13635-020-00105-y |
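The abstract only sketches the defense at a high level: inputs whose feature representations look anomalous at several network layers are rejected instead of being classified. The snippet below is a minimal, illustrative PyTorch sketch of that general idea, not the authors' DNR implementation (which the full paper describes); the monitored layer names, the centroid-distance anomaly score, and the rejection threshold are all assumptions chosen for the example.

```python
# Minimal sketch of a layer-wise rejection defense (NOT the paper's exact method).
# Idea: record feature representations of clean training data at a few layers,
# then reject a test input whose activations lie far from every class centroid.
import torch
import torch.nn as nn


class LayerwiseRejector:
    def __init__(self, model, layer_names, threshold):
        self.model = model.eval()
        self.layer_names = layer_names
        self.threshold = threshold          # reject if anomaly score exceeds this
        self.centroids = {}                 # layer name -> (n_classes, dim) tensor
        self._feats = {}
        # Forward hooks capture the (flattened) activations of the monitored layers.
        for name, module in model.named_modules():
            if name in layer_names:
                module.register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(_module, _inp, out):
            self._feats[name] = out.detach().flatten(start_dim=1)
        return hook

    def fit(self, x_train, y_train):
        """Estimate per-class centroids of each monitored layer on clean data."""
        with torch.no_grad():
            self.model(x_train)
        for name in self.layer_names:
            feats = self._feats[name]
            self.centroids[name] = torch.stack(
                [feats[y_train == c].mean(dim=0) for c in y_train.unique(sorted=True)]
            )

    def predict(self, x):
        """Return predicted labels, with -1 meaning 'rejected as anomalous'."""
        with torch.no_grad():
            logits = self.model(x)
        preds = logits.argmax(dim=1)
        # Anomaly score: distance to the closest class centroid, averaged over layers.
        score = torch.zeros(x.shape[0])
        for name in self.layer_names:
            d = torch.cdist(self._feats[name], self.centroids[name])  # (batch, n_classes)
            score += d.min(dim=1).values
        score /= len(self.layer_names)
        preds[score > self.threshold] = -1
        return preds


# Toy usage with a small CNN on random data; in practice the threshold would be
# tuned on a validation set to trade off rejecting clean vs. adversarial inputs.
if __name__ == "__main__":
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
        nn.Linear(32, 10),
    )
    rejector = LayerwiseRejector(model, layer_names=["3", "5"], threshold=5.0)
    x_train, y_train = torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))
    rejector.fit(x_train, y_train)
    print(rejector.predict(torch.randn(8, 1, 28, 28)))
```

Note that the adaptive white-box attack mentioned in the abstract would target the rejection score as well as the classifier output, so any evaluation of a sketch like this one should optimize the attack against the combined decision, not against the unprotected network alone.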