Deep Learning-Based Intrusion Detection With Adversaries

Deep neural networks have demonstrated their effectiveness in most machine learning tasks, including intrusion detection. Unfortunately, recent research has found that deep neural networks are vulnerable to adversarial examples in the image classification domain, i.e., they give an attacker the opportunity...


Bibliographic Details
Main Author: Zheng Wang
Format: Article
Language: English
Published: IEEE 2018-01-01
Series: IEEE Access
Subjects: Intrusion detection; neural networks; classification algorithms; data security
Online Access: https://ieeexplore.ieee.org/document/8408779/
id doaj-b0246d706d6d4758acaf70eadd19e877
record_format Article
spelling doaj-b0246d706d6d4758acaf70eadd19e877 2021-03-29T20:42:57Z eng IEEE
Deep Learning-Based Intrusion Detection With Adversaries
Zheng Wang (https://orcid.org/0000-0003-2744-9345), National Institute of Standards and Technology, Gaithersburg, MD, USA
IEEE Access, vol. 6, pp. 38367-38384, 2018-01-01, ISSN 2169-3536
DOI: 10.1109/ACCESS.2018.2854599 (IEEE document 8408779)
collection DOAJ
language English
format Article
sources DOAJ
author Zheng Wang
title Deep Learning-Based Intrusion Detection With Adversaries
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2018-01-01
description Deep neural networks have demonstrated their effectiveness in most machine learning tasks, including intrusion detection. Unfortunately, recent research has found that deep neural networks are vulnerable to adversarial examples in the image classification domain, i.e., they give an attacker the opportunity to fool the networks into misclassification by introducing imperceptible changes to the original pixels of an image. This vulnerability raises concerns about applying deep neural networks in security-critical areas such as intrusion detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep learning-based intrusion detection on the NSL-KDD data set. The vulnerabilities of the neural networks employed by the intrusion detection systems are experimentally validated. The roles of individual features in generating adversarial examples are explored. Based on our findings, the feasibility and applicability of the attack methodologies are discussed.
topic Intrusion detection
neural networks
classification algorithms
data security
url https://ieeexplore.ieee.org/document/8408779/
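The description above explains that an adversarial example is produced by adding small, carefully chosen perturbations to a legitimate input so that the classifier mislabels it. As a rough illustration only (not code from the article), the Python sketch below applies a fast gradient sign method (FGSM)-style perturbation to an NSL-KDD-like feature vector; the toy network, the 122-feature input size, and the epsilon value are assumptions made for the sake of the example.

import torch
import torch.nn as nn

# Hypothetical detector: 122 preprocessed NSL-KDD-style features -> 2 classes (normal / attack).
model = nn.Sequential(nn.Linear(122, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_example(x, y, epsilon=0.02):
    # Perturb feature vector x in the direction that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep features in the scaled [0, 1] range

# Stand-in data; a real experiment would use preprocessed NSL-KDD records.
x, y = torch.rand(1, 122), torch.tensor([1])
print(model(x).argmax(1).item(), model(fgsm_example(x, y)).argmax(1).item())

Unlike image pixels, many intrusion-detection features are categorical or otherwise constrained, so whether such a perturbed record still corresponds to valid network traffic is part of the feasibility question the article raises.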