Crafting Adversarial Example Using Visualization of Neural Network
Main Authors:
Other Authors:
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/8h69sx
Summary: Master's thesis === National Chiao Tung University === Institute of Computer Science and Engineering === 107 === In recent years, deep neural networks have achieved breakthrough results in image recognition, speech recognition, and language translation. On certain complex problems, their ability has even exceeded the human level.
However, recent research shows that deep neural networks also face many security threats. In 2014, the adversarial example attack proposed by Szegedy et al. [1] required only a slight perturbation of an image to make an image recognition system misclassify it completely.
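The abstract does not detail the attack itself; for reference, below is a minimal sketch of a gradient-sign adversarial perturbation (FGSM, a later and simpler variant than the L-BFGS attack of Szegedy et al. cited above). The `model`, `image`, and `label` names are hypothetical placeholders, and this is not the thesis's own method.

```python
# Minimal FGSM-style sketch of an adversarial example attack.
# Illustrative only; the thesis cites Szegedy et al.'s earlier attack.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` by epsilon in the direction of the loss gradient sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A slight perturbation along the gradient sign is often enough
    # to flip the classifier's prediction.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```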
This study proposes a method for generating adversarial examples based on the visualization and understanding of deep neural networks, and evaluates them. Experiments show that the method can reduce the perturbation of the image while maintaining the attack success rate.
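The abstract does not specify how the visualization guides the attack. One plausible reading, sketched below purely as an assumption, is that a saliency map restricts the perturbation to the most influential pixels, lowering total distortion while keeping the attack effective; the `keep` fraction is a hypothetical parameter.

```python
# Hypothetical sketch: use a plain gradient saliency map (a basic network
# visualization) to mask the perturbation, so only the most influential
# pixels are modified. Not the thesis's actual algorithm.
import torch
import torch.nn.functional as F

def saliency_guided_attack(model, image, label, epsilon=0.03, keep=0.1):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    grad = image.grad
    # Saliency: pixels with the largest gradient magnitude matter most.
    saliency = grad.abs().amax(dim=1, keepdim=True)           # (B, 1, H, W)
    thresh = torch.quantile(saliency.flatten(1), 1 - keep, dim=1)
    mask = (saliency >= thresh.view(-1, 1, 1, 1)).float()
    # Perturb only the top `keep` fraction of salient pixels,
    # reducing total perturbation relative to a full-image attack.
    adv = image + epsilon * grad.sign() * mask
    return adv.clamp(0, 1).detach()
```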