Crafting Adversarial Example Using Visualization of Neural Network

Master's thesis === National Chiao Tung University === Institute of Computer Science and Engineering === Academic year 107 === In recent years, breakthrough developments in deep neural networks have yielded strong performance in image recognition, speech recognition, and language translation; on certain complex problems, deep neural networks have even exceeded human-level ability. Recent research shows, however, that deep neural networks also face many security threats. In 2014, the adversarial example attack proposed by Szegedy et al. [1] required only a slight perturbation of an image to make an image recognition system misclassify it completely. This study proposes and evaluates a method for generating adversarial examples based on visualizing and understanding deep neural networks. Experiments show that the method reduces the perturbation of the image while maintaining the attack success rate.
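For illustration only: the kind of perturbation attack described in the abstract is often demonstrated with the fast gradient sign method (FGSM) of Goodfellow et al. The minimal PyTorch sketch below shows a generic perturbation-based attack, not the visualization-guided method proposed in this thesis; the names model, image, label, and epsilon are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Generic FGSM sketch (Goodfellow et al.), not the thesis's visualization-based method.
    # model:   any torch.nn.Module classifier returning logits (assumed)
    # image:   tensor of shape (1, C, H, W) with values in [0, 1]
    # label:   ground-truth class index tensor of shape (1,)
    # epsilon: maximum per-pixel perturbation (L-infinity budget)
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to a valid image range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()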


Bibliographic Details
Main Authors: Qiu, Zi-Xiang, 邱子翔
Other Authors: Sun, Chuen-Tsai
Format: Others
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/8h69sx
id ndltd-TW-107NCTU5394095
record_format oai_dc
spelling ndltd-TW-107NCTU5394095 2019-11-26T05:16:48Z http://ndltd.ncl.edu.tw/handle/8h69sx Crafting Adversarial Example Using Visualization of Neural Network 以神經網路可視化進行對抗樣本生成 Qiu, Zi-Xiang 邱子翔 Master's thesis National Chiao Tung University Institute of Computer Science and Engineering Academic year 107 In recent years, breakthrough developments in deep neural networks have yielded strong performance in image recognition, speech recognition, and language translation; on certain complex problems, deep neural networks have even exceeded human-level ability. Recent research shows, however, that deep neural networks also face many security threats. In 2014, the adversarial example attack proposed by Szegedy et al. [1] required only a slight perturbation of an image to make an image recognition system misclassify it completely. This study proposes and evaluates a method for generating adversarial examples based on visualizing and understanding deep neural networks. Experiments show that the method reduces the perturbation of the image while maintaining the attack success rate. Sun, Chuen-Tsai 孫春在 2019 Thesis 32 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's thesis === National Chiao Tung University === Institute of Computer Science and Engineering === Academic year 107 === In recent years, breakthrough developments in deep neural networks have yielded strong performance in image recognition, speech recognition, and language translation; on certain complex problems, deep neural networks have even exceeded human-level ability. Recent research shows, however, that deep neural networks also face many security threats. In 2014, the adversarial example attack proposed by Szegedy et al. [1] required only a slight perturbation of an image to make an image recognition system misclassify it completely. This study proposes and evaluates a method for generating adversarial examples based on visualizing and understanding deep neural networks. Experiments show that the method reduces the perturbation of the image while maintaining the attack success rate.
author2 Sun, Chuen-Tsai
author Qiu, Zi-Xiang
邱子翔
title Crafting Adversarial Example Using Visualization of Neural Network
publishDate 2019
url http://ndltd.ncl.edu.tw/handle/8h69sx