CE-Fed: Communication efficient multi-party computation enabled federated learning

Federated learning (FL) allows a number of parties to collectively train models without revealing their private datasets. Although federated learning prevents the sharing of raw data, personal or confidential data can still be extracted from the shared models. Secure Multi-Party Computation (MPC) is therefore leveraged to aggregate the locally trained models in a privacy-preserving manner; however, it incurs high communication cost and poor scalability in a decentralized environment. We design CE-Fed, a novel communication-efficient MPC-enabled federated learning mechanism. In particular, the proposed CE-Fed is a hierarchical mechanism which forms a model aggregation committee with a small number of members and aggregates the global model only among committee members, instead of among all participants. We develop a prototype and demonstrate the effectiveness of our mechanism on different datasets. Our proposed CE-Fed achieves high accuracy, communication efficiency and scalability without compromising privacy.
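
The abstract describes committee-based secure aggregation but not its internals; those are in the full article. As a rough illustrative sketch only, not the authors' code, the Python snippet below shows one common way such a scheme can work: additive secret sharing over a prime field, where each participant splits its quantized model update into one share per committee member, so no single member ever sees a raw update, and combining the members' partial sums reveals only the global sum. The names PRIME, share_vector and committee_aggregate, and the integer quantization of updates, are assumptions made for illustration.

import random

PRIME = 2**61 - 1  # illustrative field modulus; any sufficiently large prime works

def share_vector(vec, k):
    """Split a vector of field elements into k additive share vectors."""
    shares = [[0] * len(vec) for _ in range(k)]
    for i, v in enumerate(vec):
        parts = [random.randrange(PRIME) for _ in range(k - 1)]
        parts.append((v - sum(parts)) % PRIME)  # shares sum to v mod PRIME
        for m in range(k):
            shares[m][i] = parts[m]
    return shares

def committee_aggregate(updates, k):
    """Sum quantized model updates via a k-member aggregation committee.

    Each participant sends one additive share of its update to each
    committee member, so no single member sees a raw update; combining
    the members' partial sums reveals only the global sum.
    """
    dim = len(updates[0])
    partials = [[0] * dim for _ in range(k)]  # one accumulator per committee member
    for upd in updates:
        for m, sh in enumerate(share_vector(upd, k)):
            partials[m] = [(a + b) % PRIME for a, b in zip(partials[m], sh)]
    total = [0] * dim  # committee members combine partials to reveal only the sum
    for p in partials:
        total = [(a + b) % PRIME for a, b in zip(total, p)]
    return total

# Toy usage: three participants, committee of two.
updates = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
print(committee_aggregate(updates, k=2))  # -> [6, 12, 18]

Because every participant communicates only with the k committee members rather than with all N participants, per-round communication drops from O(N^2) to O(N * k), which is the scalability argument the abstract makes.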

Bibliographic Details
Main Authors: Goh, R.S.M. (Author), Kanagavelu, R. (Author), Li, Z. (Author), Samsudin, J. (Author), Wang, S. (Author), Wei, Q. (Author), Yang, Y. (Author), Zhang, H. (Author)
Format: Article
Language: English
Published: Elsevier B.V. 2022
Subjects: Aggregates; Committee selection; Communication cost; Computational efficiency; Confidential data; Edge computing; Federated learning; Learning systems; Multiparty computation; Multi-party computation; Privacy preserving; Privacy-preserving techniques; Scalability; Secure multi-party computation; Shared model; Train model
Online Access: View Fulltext in Publisher
LEADER 02293nam a2200409Ia 4500
001 10.1016-j.array.2022.100207
008 220718s2022 CNT 000 0 eng d
022 |a 2590-0056 (ISSN) 
245 1 0 |a CE-Fed: Communication efficient multi-party computation enabled federated learning 
260 0 |b Elsevier B.V.  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1016/j.array.2022.100207 
520 3 |a Federated learning (FL) allows a number of parties to collectively train models without revealing their private datasets. Although federated learning prevents the sharing of raw data, personal or confidential data can still be extracted from the shared models. Secure Multi-Party Computation (MPC) is therefore leveraged to aggregate the locally trained models in a privacy-preserving manner; however, it incurs high communication cost and poor scalability in a decentralized environment. We design CE-Fed, a novel communication-efficient MPC-enabled federated learning mechanism. In particular, the proposed CE-Fed is a hierarchical mechanism which forms a model aggregation committee with a small number of members and aggregates the global model only among committee members, instead of among all participants. We develop a prototype and demonstrate the effectiveness of our mechanism on different datasets. Our proposed CE-Fed achieves high accuracy, communication efficiency and scalability without compromising privacy. © 2022 The Author(s) 
650 0 4 |a Aggregates 
650 0 4 |a Committee selection 
650 0 4 |a Communication cost 
650 0 4 |a Computational efficiency 
650 0 4 |a Confidential data 
650 0 4 |a Edge computing 
650 0 4 |a Federated learning 
650 0 4 |a Learning systems 
650 0 4 |a Multiparty computation 
650 0 4 |a Multi-party computation 
650 0 4 |a Privacy preserving 
650 0 4 |a Privacy-preserving techniques 
650 0 4 |a Scalability 
650 0 4 |a Secure multi-party computation 
650 0 4 |a Shared model 
650 0 4 |a Train model 
700 1 |a Goh, R.S.M.  |e author 
700 1 |a Kanagavelu, R.  |e author 
700 1 |a Li, Z.  |e author 
700 1 |a Samsudin, J.  |e author 
700 1 |a Wang, S.  |e author 
700 1 |a Wei, Q.  |e author 
700 1 |a Yang, Y.  |e author 
700 1 |a Zhang, H.  |e author 
773 |t Array