The Defense of Adversarial Example with Conditional Generative Adversarial Networks

Deep neural network approaches have made remarkable progress on many machine learning tasks. However, recent research indicates that they are vulnerable to adversarial perturbations: an adversary can easily mislead a network model by adding well-designed perturbations to the input. The cause...
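The attack described above can be illustrated with a minimal sketch: a fast-gradient-sign-style perturbation (in the spirit of FGSM) applied to a toy logistic-regression model. This is not the paper's attack setup or its cGAN-based defense; the model, weights, and step size here are hypothetical, chosen only to show how a small gradient-aligned perturbation shifts a prediction.

```python
import numpy as np

# Toy "network": a fixed logistic-regression scorer (hypothetical weights).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights
x = rng.normal(size=8)   # clean input

def predict(x):
    """Sigmoid score p(y=1 | x) of the toy model."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# Gradient of the loss -log p(y=1|x) with respect to the input is (p - 1) * w.
p = predict(x)
grad_x = (p - 1.0) * w

# "Well-designed perturbation": a small step in the sign of the input gradient,
# which increases the loss and pushes the score away from the true class.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # adversarial score is lower than the clean one
```

Because the sign step moves every input coordinate against the weight vector, the adversarial score is strictly lower than the clean score for any nonzero weights, even though each coordinate changes by at most `eps`.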


Bibliographic Details
Main Authors: Fangchao Yu, Li Wang, Xianjin Fang, Youwen Zhang
Format: Article
Language: English
Published: Hindawi-Wiley 2020-01-01
Series: Security and Communication Networks
Online Access: http://dx.doi.org/10.1155/2020/3932584