Deep Model Poisoning Attack on Federated Learning

Bibliographic Details
Main Authors: Xingchen Zhou, Ming Xu, Yiming Wu, Ning Zheng
Format: Article
Language: English
Published: MDPI AG 2021-03-01
Series: Future Internet
Subjects:
Online Access:https://www.mdpi.com/1999-5903/13/3/73
Description
Summary: Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively construct a deep learning model. To protect the confidentiality of the training data, the information shared between the server and the participants is limited to model parameters. However, this setting is vulnerable to model poisoning attacks, since participants have permission to modify the model parameters. In this paper, we perform a systematic investigation of such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we primarily focus on the effectiveness, persistence, and stealth of attacks. Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.
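To illustrate the vulnerability the summary describes, the following is a minimal sketch (not the paper's actual attack) of federated averaging with a single malicious participant. It assumes the common FedAvg aggregation rule and a simple "boosted update" poisoning strategy, where the attacker scales its submitted parameters so that averaging with benign updates lands the global model near an attacker-chosen target; all function names and values here are illustrative.

```python
# Hypothetical sketch: FedAvg aggregation with one boosted (poisoned) update.
# Parameters are plain Python lists of floats for clarity.

def fed_avg(updates):
    """Average the parameter vectors submitted by all participants."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

def boosted_update(target, global_params, n_participants):
    """Malicious update scaled so that, after averaging with honest updates
    that stay close to the current global model, the aggregate lands near
    the attacker's `target` parameters."""
    return [n_participants * (t - g) + g
            for t, g in zip(target, global_params)]

# Example: three honest participants submit small benign updates around the
# current global model; one attacker aims for target = [1.0, -1.0].
global_params = [0.0, 0.0]
honest = [[0.01, -0.02], [0.00, 0.01], [-0.01, 0.01]]
target = [1.0, -1.0]
attacker = boosted_update(target, global_params, n_participants=4)
new_global = fed_avg(honest + [attacker])  # lands close to the target
```

Because the server only sees parameter vectors, the boosted update is indistinguishable from a legitimate one without an explicit defense, which is why the paper emphasizes stealth against existing defense methods.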
ISSN:1999-5903