Unsupervised Adversarial Defense through Tandem Deep Image Priors
Deep neural networks are vulnerable to adversarial examples, which are synthesized by adding imperceptible perturbations to the original image yet can fool the classifier into producing wrong predictions. This paper proposes an image restoration approach that provides a strong defense mechanism to provi...
Main Authors: Yu Shi, Cien Fan, Lian Zou, Caixia Sun, Yifeng Liu
Format: Article
Language: English
Published: MDPI AG, 2020-11-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/9/11/1957
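The abstract describes purifying adversarial images with deep image priors before classification. As a rough illustration only, the sketch below fits a single deep image prior to an adversarial input and stops early so the network reproduces the image content but not the high-frequency perturbation; the `dip_purify` function, the tiny network, and all hyperparameters are illustrative assumptions, not the paper's actual tandem architecture.

```python
# Minimal deep-image-prior (DIP) purification sketch (assumed, not the paper's method):
# fit a small conv net from fixed random noise to the adversarial image and stop early,
# so the restored output tends to drop adversarial high-frequency detail.
import torch
import torch.nn as nn

def dip_purify(adv_image: torch.Tensor, steps: int = 500, lr: float = 1e-2) -> torch.Tensor:
    """Return a restored image to feed to the classifier in place of adv_image."""
    c = adv_image.shape[1]
    net = nn.Sequential(                      # tiny stand-in prior network
        nn.Conv2d(c, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, c, 3, padding=1), nn.Sigmoid(),
    )
    z = torch.randn_like(adv_image)           # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):                    # early stopping acts as the regularizer
        opt.zero_grad()
        loss = loss_fn(net(z), adv_image)
        loss.backward()
        opt.step()
    return net(z).detach()

if __name__ == "__main__":
    x_adv = torch.rand(1, 3, 32, 32)          # stand-in adversarial image
    print(dip_purify(x_adv, steps=100).shape)
```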
Similar Items
- Deep neural rejection against adversarial examples
  by: Angelo Sotgiu, et al.
  Published: (2020-04-01)
- SecureAS: A Vulnerability Assessment System for Deep Neural Network Based on Adversarial Examples
  by: Yan Chu, et al.
  Published: (2020-01-01)
- Generating adversarial examples without specifying a target model
  by: Gaoming Yang, et al.
  Published: (2021-09-01)
- Orthogonal Deep Models as Defense Against Black-Box Attacks
  by: Mohammad A. A. K. Jalwana, et al.
  Published: (2020-01-01)
- Enhancing the Security of Deep Learning Steganography via Adversarial Examples
  by: Yueyun Shang, et al.
  Published: (2020-08-01)