Deep neural rejection against adversarial examples

Abstract: Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rej...

Bibliographic Details
Main Authors: Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
Format: Article
Language: English
Published: SpringerOpen 2020-04-01
Series: EURASIP Journal on Information Security
Online Access: http://link.springer.com/article/10.1186/s13635-020-00105-y