Communication Cost Reduction with Partial Structure in Federated Learning
Federated learning is a distributed learning algorithm designed to train a single server model using many clients and their local data. Improving the server model requires continuous communication with the clients, and because the number of clients can be very large, the algorithm must be designed with communication cost in mind. In this paper, we propose a method that distributes models whose structure differs from that of the server model, matching each distributed model to clients with different data sizes, and trains the server model from the reconstructed models returned by the clients. The server deploys only a subset of its sequential model, collects gradient updates, and selectively applies them to the server model. Delivering the server model at lower cost to clients that only need smaller models reduces the communication cost of training the server model compared with the standard method. An image classification model was designed to verify the effectiveness of the proposed method across three data-distribution scenarios and two datasets, and training was accomplished at only 0.229 times the communication cost of the standard method.
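The abstract only sketches the mechanism. Below is a minimal, hypothetical illustration of the general idea (not the authors' implementation): the server holds a sequential model as a list of layer weights, ships only a prefix of those layers to clients that need a smaller model, and selectively averages back only the layers each client actually trained. All function names (`make_server_model`, `extract_partial`, `selective_aggregate`) are invented for this sketch.

```python
# Hypothetical sketch of partial-structure federated averaging.
# Not the paper's code; names and the toy NumPy setup are assumptions.
import numpy as np

def make_server_model(layer_shapes):
    """Server model represented as a list of weight arrays, one per layer."""
    rng = np.random.default_rng(0)
    return [rng.standard_normal(s) * 0.01 for s in layer_shapes]

def extract_partial(server_weights, n_layers):
    """Send only the first n_layers to a client (a smaller payload)."""
    return [w.copy() for w in server_weights[:n_layers]]

def selective_aggregate(server_weights, client_updates):
    """Average client deltas per layer; layers no client trained
    are left unchanged (the 'selective' update from the abstract)."""
    for i in range(len(server_weights)):
        deltas = [u[i] for u in client_updates if i < len(u)]
        if deltas:  # at least one client received and trained this layer
            server_weights[i] += np.mean(deltas, axis=0)
    return server_weights

# Usage: two small clients receive 2 layers, one large client all 3.
server = make_server_model([(4, 8), (8, 8), (8, 2)])
updates = []
for depth in (2, 2, 3):
    local = extract_partial(server, depth)
    # Stand-in for local training: a tiny random perturbation per layer.
    rng = np.random.default_rng(depth)
    updates.append([rng.standard_normal(w.shape) * 1e-3 for w in local])
server = selective_aggregate(server, updates)
```

The communication saving in this sketch comes from `extract_partial`: a client that receives fewer layers downloads and uploads proportionally less data per round.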
Main Authors: | Dongseok Kang, Chang Wook Ahn |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-08-01 |
Series: | Electronics |
Subjects: | federated learning; artificial intelligence; neural network |
Online Access: | https://www.mdpi.com/2079-9292/10/17/2081 |
id |
doaj-5f9f5c38c16a4f5a89efe6bee878e88d |
record_format |
Article |
spelling |
Dongseok Kang; Chang Wook Ahn (both: AI Graduate School, Gwangju Institute of Science and Technology, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea). Communication Cost Reduction with Partial Structure in Federated Learning. Electronics, vol. 10, no. 17, art. 2081, MDPI AG, 2021-08-01. ISSN 2079-9292. DOI: 10.3390/electronics10172081. Record updated 2021-09-09T13:42:00Z. https://www.mdpi.com/2079-9292/10/17/2081 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Dongseok Kang; Chang Wook Ahn |
title |
Communication Cost Reduction with Partial Structure in Federated Learning |
publisher |
MDPI AG |
series |
Electronics |
issn |
2079-9292 |
publishDate |
2021-08-01 |
topic |
federated learning; artificial intelligence; neural network |
url |
https://www.mdpi.com/2079-9292/10/17/2081 |