Enhancing adversarial robustness of deep neural networks

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 57-58). Abstract: Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance the adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization optimization function shown to improve upon the robust optimization framework developed by Madry et al. (2018) [14, 9], achieving state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially tuning it on CIFAR10 greatly improves adversarial robustness. In this work, we propose Adversarial Regularization, another logit-based regularization optimization framework, which surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
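To make the two ingredients in the abstract concrete, here is a minimal NumPy sketch of the kind of machinery it refers to: a PGD attack in an L-infinity ball (the inner maximization of Madry et al.'s robust optimization) and a TRADES-style loss that adds a logit-consistency (KL) regularizer to the natural loss. This is an illustrative sketch on a binary linear classifier, not the thesis's actual code; the names (`nat_loss`, `pgd_attack`, `trades_style_loss`) and the parameter values (`eps`, `alpha`, `beta`) are assumptions, and reusing the classification-loss PGD attack inside the TRADES-style term is a simplification (TRADES proper perturbs to maximize the KL term itself).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nat_loss(w, x, y):
    # Binary cross-entropy of a linear classifier with p = sigmoid(w . x).
    p = sigmoid(x @ w)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_wrt_input(w, x, y):
    # Gradient of the loss above w.r.t. the input x: (p - y) * w.
    p = sigmoid(x @ w)
    return (p - y) * w

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=10):
    # Inner maximization of robust optimization: signed gradient ascent
    # on the loss, projected back into the L-inf ball of radius eps.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_wrt_input(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

def trades_style_loss(w, x, y, beta=1.0, eps=0.3):
    # Natural loss plus beta * KL(p(x) || p(x_adv)): the shape of a
    # logit-based regularizer like TRADES, specialized to binary outputs.
    x_adv = pgd_attack(w, x, y, eps=eps)
    p, q = sigmoid(x @ w), sigmoid(x_adv @ w)
    kl = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    return nat_loss(w, x, y) + beta * kl
```

With `beta = 0` this reduces to standard training on clean examples, while replacing the natural loss with the loss evaluated at `x_adv` recovers plain adversarial training, the objective used at both stages of the pretrain-then-tune recipe.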

Bibliographic Details
Main Author: Zhang, Jeffrey, M. Eng., Massachusetts Institute of Technology.
Other Authors: Aleksander Madry.
Format: Others
Language: English
Published: Massachusetts Institute of Technology, 2019
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/122994
Thesis Supervisor: Aleksander Madry
Department: Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission (http://dspace.mit.edu/handle/1721.1/7582).
Physical Description: 58 pages, application/pdf
Identifier: 1127291827
Date Deposited: 2019-11-22