Deep neural rejection against adversarial examples
Abstract: Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection…
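For illustration, below is a minimal Python sketch of the two ideas the abstract names: crafting a small input perturbation that changes a classifier's decision, and rejecting low-confidence samples at test time. The toy linear model, the FGSM-style attack step, and the threshold value are illustrative assumptions, not the authors' DNR implementation.

```python
# Minimal sketch (illustrative assumptions, not the paper's DNR method):
# a tiny linear classifier, an FGSM-style adversarial perturbation, and a
# confidence-based rejection rule applied at test time.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": scores = W x + b for 3 classes over 4 features.
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

def scores(x):
    return W @ x + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_with_rejection(x, threshold=0.7):
    """Predict a class, or reject (-1) when the top softmax score
    falls below `threshold` (an assumed rejection rule)."""
    p = softmax(scores(x))
    k = int(np.argmax(p))
    return (k, float(p[k])) if p[k] >= threshold else (-1, float(p[k]))

# A clean sample and its predicted class.
x = rng.normal(size=4)
y = int(np.argmax(scores(x)))

# FGSM-style perturbation: step along the sign of the gradient of the
# margin (best wrong score minus true score); for a linear model this
# gradient is simply W[wrong] - W[y].
margins = scores(x).copy()
margins[y] = -np.inf
wrong = int(np.argmax(margins))
grad = W[wrong] - W[y]
eps = 0.3
x_adv = x + eps * np.sign(grad)

print("clean:      ", classify_with_rejection(x))
print("adversarial:", classify_with_rejection(x_adv))
```

A rejection mechanism of this kind trades a small loss of coverage on clean inputs for the ability to flag perturbed samples whose scores become anomalous.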
| Main Authors: | Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SpringerOpen, 2020-04-01 |
| Series: | EURASIP Journal on Information Security |
| Online Access: | http://link.springer.com/article/10.1186/s13635-020-00105-y |
Similar Items
- Random Untargeted Adversarial Example on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018-12-01)
- SecureAS: A Vulnerability Assessment System for Deep Neural Network Based on Adversarial Examples
  by: Yan Chu, et al.
  Published: (2020-01-01)
- Generating adversarial examples without specifying a target model
  by: Gaoming Yang, et al.
  Published: (2021-09-01)
- Enhancing the Security of Deep Learning Steganography via Adversarial Examples
  by: Yueyun Shang, et al.
  Published: (2020-08-01)
- Adversarial Attack and Defense on Deep Neural Network-Based Voice Processing Systems: An Overview
  by: Xiaojiao Chen, et al.
  Published: (2021-09-01)