Distributed Training and Inference of Deep Learning Models for Multi-Modal Land Cover Classification
Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction processing by incorporating this stage into unified end-to-end trainable models. Despite their modeling capabilities, training large-scale DNN models is a computation-intensive task that most single machines are incapable of accomplishing. To address this issue, different parallelization schemes have been proposed. Nevertheless, network overheads and optimal resource allocation pose major challenges, since network communication is generally slower than intra-machine communication while some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several ways to optimize its performance when training is executed on an Apache Spark cluster. We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures against an identical DNN architecture modeled after a data parallelization approach, using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has a tremendous effect on resource allocation, and that hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that the proposed model parallelization schemes achieve more efficient resource use and more accurate predictions than data parallelization approaches.
Main Authors: | Maria Aspri, Grigorios Tsagkatakis, Panagiotis Tsakalides |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2020-08-01 |
Series: | Remote Sensing |
ISSN: | 2072-4292 |
DOI: | 10.3390/rs12172670 |
Subjects: | distributed deep learning; model parallelization; convolutional neural networks; multi-modal observation classification; land cover classification |
Online Access: | https://www.mdpi.com/2072-4292/12/17/2670 |
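The abstract contrasts model parallelization (splitting one network's layers or branches across machines) with data parallelization (replicating the whole network on every worker and splitting the batch). As a purely illustrative sketch, and not the authors' Apache Spark implementation described in the linked article, the following PyTorch snippet shows the model-parallel idea for a two-branch multimodal CNN: each modality branch is pinned to its own device, so only the small fused feature vectors cross the device boundary. All names and shapes here (TwoBranchNet, dev0, dev1, the 3-channel and 1-channel inputs) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical devices: fall back to CPU when two GPUs are unavailable.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() > 0 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

class TwoBranchNet(nn.Module):
    """Illustrative multimodal CNN with model-parallel placement:
    one convolutional branch per modality, each on its own device."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Branch for modality A (e.g., a 3-band optical patch), on dev0.
        self.branch_a = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        ).to(dev0)
        # Branch for modality B (e.g., a single-band SAR patch), on dev1.
        self.branch_b = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        ).to(dev1)
        # Fusion head lives on dev0; only 16-dim features cross devices.
        self.head = nn.Linear(32, n_classes).to(dev0)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        f_a = self.branch_a(x_a.to(dev0))
        f_b = self.branch_b(x_b.to(dev1))
        # The inter-device transfer happens here: f_b moves to dev0.
        return self.head(torch.cat([f_a, f_b.to(dev0)], dim=1))

model = TwoBranchNet()
logits = model(torch.randn(4, 3, 32, 32), torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

In a data-parallel setup, by contrast, the entire model would be replicated on every worker and gradients averaged after each batch; the abstract reports that for this multimodal workload the model-parallel variants used resources more efficiently and produced more accurate predictions.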