Backdoor smoothing: Demystifying backdoor attacks on deep neural networks
Backdoor attacks mislead machine-learning models into outputting an attacker-specified class when presented with a specific trigger at test time. These attacks require poisoning the training data to compromise the learning algorithm, e.g., by injecting poisoning samples containing the trigger into the training...
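The poisoning mechanism described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' method: it stamps a small trigger patch onto a fraction of training images and relabels them to an attacker-specified target class, which is the generic recipe for backdoor poisoning attacks. All names (`poison_dataset`, `target_class`, patch size, poisoning fraction) are illustrative assumptions.

```python
import numpy as np

def poison_dataset(X, y, target_class, trigger_value=1.0, patch=3, frac=0.05, seed=0):
    """Toy backdoor poisoning: stamp a trigger patch onto a random
    fraction of training images and relabel them to target_class.
    (Illustrative sketch; parameter names are assumptions.)"""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    # Stamp a bright square in the bottom-right corner of each chosen image.
    Xp[idx, -patch:, -patch:] = trigger_value
    # Relabel the poisoned samples to the attacker-specified class.
    yp[idx] = target_class
    return Xp, yp, idx

# Toy data: 200 grayscale 8x8 "images" with labels in {0, ..., 9}.
X = np.zeros((200, 8, 8))
y = np.arange(200) % 10
Xp, yp, idx = poison_dataset(X, y, target_class=7)
```

A model trained on `(Xp, yp)` would tend to associate the corner patch with class 7, so a test image carrying the same patch is misclassified into the attacker's class while clean inputs behave normally.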
Format: Article
Language: English
Published: Elsevier Ltd, 2022