A Partition Based Gradient Compression Algorithm for Distributed Training in AIoT
Running Deep Neural Networks (DNNs) on distributed Internet of Things (IoT) nodes is a promising way to enhance the performance of IoT systems. However, because IoT nodes have limited computing and communication resources, the communication efficiency of distributed DNN training is a pressing problem. This paper proposes an adaptive compression strategy based on gradient partitioning to reduce the high communication overhead between nodes during distributed training. First, a neural network is trained to predict the gradient distribution of its parameters. Based on the characteristics of this distribution, the gradient is divided into a key region and a sparse region. Combined with the information entropy of the gradient distribution, a threshold is then selected to filter the gradient values within each partition, and only values greater than the threshold are transmitted and updated, which reduces traffic and improves distributed training efficiency. By exploiting gradient sparsity, the strategy achieves a maximum compression ratio of 37.1x.
Main Authors: | Bingjun Guo, Yazhi Liu, Chunyang Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-03-01 |
Series: | Sensors |
Subjects: | AIoT, distributed training, gradient compression, training efficiency |
Online Access: | https://www.mdpi.com/1424-8220/21/6/1943 |
DOI: | 10.3390/s21061943 |
ISSN: | 1424-8220 |
Author Affiliations: | Department of Computer Science and Technology, North China University of Science and Technology, Tangshan 063210, China (all authors) |
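The abstract describes partitioning each gradient tensor by magnitude into a key region and a sparse region, and using the information entropy of the gradient distribution to pick a filtering threshold so that only values above the threshold are transmitted. Below is a minimal, hypothetical NumPy sketch of what such a step could look like; the function names (`entropy_threshold`, `compress_gradient`), the entropy-to-keep-fraction heuristic, and the toy usage are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def entropy_threshold(grad, num_bins=256):
    """Pick a sparsification threshold from the empirical distribution of |grad|.

    Hypothetical heuristic: compute the information entropy of a histogram of
    gradient magnitudes and map higher entropy (a flatter, less sparse
    distribution) to a larger keep-fraction. This only illustrates
    entropy-guided threshold selection, not the paper's exact rule.
    """
    mags = np.abs(grad).ravel()
    hist, _ = np.histogram(mags, bins=num_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))                        # entropy of the magnitude histogram
    keep_frac = float(np.clip(entropy / np.log2(num_bins), 0.01, 0.5))
    return np.quantile(mags, 1.0 - keep_frac)                # threshold keeping roughly the top keep_frac values

def compress_gradient(grad, threshold):
    """Split the gradient into a 'key region' (|g| >= threshold), sent as
    (index, value) pairs, and a 'sparse region', dropped for this round."""
    mask = np.abs(grad) >= threshold
    indices = np.flatnonzero(mask)       # flat positions of key-region entries
    values = grad.ravel()[indices]       # the values actually transmitted
    return indices, values

# Toy usage: compress one layer's gradient and report the resulting compression ratio.
g = np.random.randn(1000, 1000).astype(np.float32)
t = entropy_threshold(g)
idx, vals = compress_gradient(g, t)
print(f"kept {len(vals)} of {g.size} values, ratio ~{g.size / max(len(vals), 1):.1f}x")
```

In a real distributed trainer, only the (index, value) pairs from the key region would be exchanged between nodes; locally accumulating the dropped sparse-region values for later rounds is a common companion technique, though the abstract does not state whether this paper uses it.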