Backdoor smoothing: Demystifying backdoor attacks on deep neural networks

Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time. These attacks require poisoning the training data to compromise the learning algorithm, e.g., by injecting poisoning samples containing the trigger into the training...
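The abstract describes the poisoning step of a backdoor attack: a small fraction of training samples is stamped with the trigger and relabeled to the attacker-specified class. Below is a minimal sketch of that step only; the names (poison_training_set, add_trigger, POISON_FRACTION, TARGET_CLASS) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of backdoor data poisoning on image-like training data.
# All names and parameter values are illustrative assumptions.
import numpy as np

TARGET_CLASS = 0        # attacker-specified output class (assumption)
POISON_FRACTION = 0.05  # fraction of training samples to poison (assumption)

def add_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white square (the trigger) in the bottom-right corner."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0
    return patched

def poison_training_set(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Return a poisoned copy of (X, y): a random subset of samples gets the
    trigger and has its label flipped to TARGET_CLASS."""
    rng = np.random.default_rng(seed)
    n_poison = int(POISON_FRACTION * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_p, y_p = X.copy(), y.copy()
    for i in idx:
        X_p[i] = add_trigger(X_p[i])
        y_p[i] = TARGET_CLASS
    return X_p, y_p

# Example: poison a toy set of 100 grayscale 28x28 "images".
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned = poison_training_set(X, y)
```

A model trained on (X_poisoned, y_poisoned) would then tend to predict TARGET_CLASS whenever the trigger patch appears at test time, while behaving normally on clean inputs.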

Bibliographic Details
Main Authors: Backes, M. (Author), Biggio, B. (Author), Grosse, K. (Author), Lee, T. (Author), Molloy, I. (Author), Park, Y. (Author)
Format: Article
Language: English
Published: Elsevier Ltd 2022