DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection
Abstract Beyond the strikingly successful applications of deep learning (DL) in natural language processing, computer vision, and information retrieval, numerous Deep Neural Network (DNN) based alternatives have emerged for common security-related scenarios, with malware detection among the most pop...
Main Authors: | Chun Yang, Jinghui Xu, Shuangshuang Liang, Yanna Wu, Yu Wen, Boyang Zhang, Dan Meng |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2021-05-01 |
Series: | Cybersecurity |
Subjects: | Adversarial instruction learning, Malware, Static malware detection, Small-scale |
Online Access: | https://doi.org/10.1186/s42400-021-00079-5 |
id |
doaj-057dd1b1a29e432a8b982887cd13bcb5 |
---|---|
record_format |
Article |
spelling |
doaj-057dd1b1a29e432a8b982887cd13bcb5 2021-05-16T11:03:23Z eng SpringerOpen Cybersecurity 2523-3246 2021-05-01 4(1):1-14 10.1186/s42400-021-00079-5
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection
Chun Yang, Jinghui Xu, Shuangshuang Liang, Yanna Wu, Yu Wen, Boyang Zhang, Dan Meng (all: Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS), North of Yiyuan)
Abstract Beyond the strikingly successful applications of deep learning (DL) in natural language processing, computer vision, and information retrieval, numerous Deep Neural Network (DNN) based alternatives have emerged for common security-related scenarios, with malware detection among the most popular. Recently, adversarial learning has attracted much attention. However, unlike computer vision applications, a malware adversarial attack is expected to preserve the malware's original malicious semantics. This paper proposes DeepMal, a novel adversarial instruction learning technique for static malware detection. To the best of our knowledge, DeepMal is the first practical and systematic adversarial learning method that can directly produce adversarial samples and effectively bypass static malware detectors powered by DL and machine learning (ML) models while preserving attack functionality in the real world. Moreover, our method conducts small-scale attacks, which can evade typical malware-variant analysis (e.g., duplication checks). We evaluate DeepMal on two real-world datasets, six typical DL models, and three typical ML models. Experimental results demonstrate that, on the two datasets, DeepMal can attack typical malware detectors, decreasing the mean F1-score by up to 93.94% and 82.86%, respectively. Furthermore, three typical types of malware samples (Trojan horses, backdoors, and ransomware) are shown to preserve their original attack functionality, and the mean duplication-check ratio of malware adversarial samples is below 2.0%. In addition, DeepMal can evade dynamic detectors and can easily be enhanced by learning more dynamic features with specific constraints.
https://doi.org/10.1186/s42400-021-00079-5
Adversarial instruction learning | Malware | Static malware detection | Small-scale |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Chun Yang Jinghui Xu Shuangshuang Liang Yanna Wu Yu Wen Boyang Zhang Dan Meng |
spellingShingle |
Chun Yang Jinghui Xu Shuangshuang Liang Yanna Wu Yu Wen Boyang Zhang Dan Meng DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection Cybersecurity Adversarial instruction learning Malware Static malware detection Small-scale |
author_facet |
Chun Yang Jinghui Xu Shuangshuang Liang Yanna Wu Yu Wen Boyang Zhang Dan Meng |
author_sort |
Chun Yang |
title |
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection |
title_short |
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection |
title_full |
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection |
title_fullStr |
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection |
title_full_unstemmed |
DeepMal: maliciousness-Preserving adversarial instruction learning against static malware detection |
title_sort |
deepmal: maliciousness-preserving adversarial instruction learning against static malware detection |
publisher |
SpringerOpen |
series |
Cybersecurity |
issn |
2523-3246 |
publishDate |
2021-05-01 |
description |
Abstract Beyond the strikingly successful applications of deep learning (DL) in natural language processing, computer vision, and information retrieval, numerous Deep Neural Network (DNN) based alternatives have emerged for common security-related scenarios, with malware detection among the most popular. Recently, adversarial learning has attracted much attention. However, unlike computer vision applications, a malware adversarial attack is expected to preserve the malware's original malicious semantics. This paper proposes DeepMal, a novel adversarial instruction learning technique for static malware detection. To the best of our knowledge, DeepMal is the first practical and systematic adversarial learning method that can directly produce adversarial samples and effectively bypass static malware detectors powered by DL and machine learning (ML) models while preserving attack functionality in the real world. Moreover, our method conducts small-scale attacks, which can evade typical malware-variant analysis (e.g., duplication checks). We evaluate DeepMal on two real-world datasets, six typical DL models, and three typical ML models. Experimental results demonstrate that, on the two datasets, DeepMal can attack typical malware detectors, decreasing the mean F1-score by up to 93.94% and 82.86%, respectively. Furthermore, three typical types of malware samples (Trojan horses, backdoors, and ransomware) are shown to preserve their original attack functionality, and the mean duplication-check ratio of malware adversarial samples is below 2.0%. In addition, DeepMal can evade dynamic detectors and can easily be enhanced by learning more dynamic features with specific constraints. |
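The abstract's core constraint (a small-scale perturbation that fools a static detector without removing the malware's original behavior) can be sketched in miniature. The following is a hypothetical illustration under strong simplifications, a linear detector with hand-picked weights, not the paper's actual DeepMal algorithm: the attack only adds features, never removes them, which is the kind of insertion-only constraint needed to keep the sample's malicious semantics intact.

```python
# Hypothetical illustration (not DeepMal's actual algorithm): a greedy,
# insertion-only evasion attack on a toy linear static detector. Only
# *adding* features (e.g., appending benign-looking instructions) is
# allowed, so the original malicious features are never touched.

def evade(weights, bias, x, max_flips=10):
    """Greedily set absent (0) features to 1 until the score drops below 0."""
    x = list(x)

    def score(v):
        return sum(w * xi for w, xi in zip(weights, v)) + bias

    for _ in range(max_flips):
        if score(x) < 0:                       # detector now says "benign"
            break
        # candidate insertions: absent features with benign (negative) weight
        candidates = [i for i, xi in enumerate(x) if xi == 0 and weights[i] < 0]
        if not candidates:
            break
        best = min(candidates, key=lambda i: weights[i])
        x[best] = 1                            # "insert" the benign feature
    return x, score(x)

# Toy detector: positive score = malicious; the last two features are
# benign indicators (weights chosen purely for illustration).
w = [2.0, 1.0, -1.5, -2.5]
malware = [1, 1, 0, 0]                         # original score: 3.0 (detected)
adv, s = evade(w, 0.0, malware)                # adv keeps both original features
```

Because the perturbation is additive and small (at most `max_flips` inserted features), the resulting variant stays close to the original, loosely matching the abstract's "small-scale attack" and low duplication-check ratio.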
topic |
Adversarial instruction learning | Malware | Static malware detection | Small-scale |
url |
https://doi.org/10.1186/s42400-021-00079-5 |
work_keys_str_mv |
AT chunyang deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT jinghuixu deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT shuangshuangliang deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT yannawu deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT yuwen deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT boyangzhang deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection AT danmeng deepmalmaliciousnesspreservingadversarialinstructionlearningagainststaticmalwaredetection |
_version_ |
1721439875726573568 |