Distributed Training and Inference of Deep Learning Models for Multi-Modal Land Cover Classification

Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction by incorporating this stage into unified, end-to-end trainable models. Despite their modeling capabilities, training large-scale DNN models is a highly computation-intensive task that a single machine is often incapable of accomplishing. To address this issue, different parallelization schemes have been proposed. Nevertheless, network overheads and optimal resource allocation remain major challenges, since network communication is generally slower than intra-machine communication and some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several ways to optimize its performance when training is executed on an Apache Spark cluster. We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures against an identical DNN architecture trained under a data parallelization approach, using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has a tremendous effect on resource allocation and that hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that the proposed model parallelization schemes achieve more efficient resource use and more accurate predictions than data parallelization approaches.
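The abstract contrasts two parallelization strategies: data parallelism (each worker holds the full model and a slice of the batch) and model parallelism (each worker holds part of the model and activations travel between workers). The following is a minimal, self-contained Python sketch of that distinction — it is not the authors' code, the function names are illustrative, and the workers are simulated sequentially rather than distributed over a Spark cluster as in the paper.

```python
# Toy sketch of data vs. model parallelism for a two-layer linear
# "network" y = W2 @ (W1 @ x). All names here are hypothetical.

def matvec(W, x):
    """Plain-Python matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W1 = [[1.0, 2.0], [0.0, 1.0]]   # layer-1 weights (worker A in model parallelism)
W2 = [[1.0, 1.0]]               # layer-2 weights (worker B in model parallelism)

def forward(x):
    """Single-machine reference: run both layers locally."""
    return matvec(W2, matvec(W1, x))

def data_parallel(batch, n_workers=2):
    # Data parallelism: every worker holds the FULL model and processes
    # a contiguous shard of the batch; results (gradients, in training)
    # are merged afterwards.
    shard_size = (len(batch) + n_workers - 1) // n_workers
    outputs = []
    for w in range(n_workers):  # one loop iteration = one simulated worker
        shard = batch[w * shard_size:(w + 1) * shard_size]
        outputs.extend(forward(x) for x in shard)
    return outputs

def model_parallel(batch):
    # Model parallelism: each worker holds ONE layer; the intermediate
    # activations cross the network instead of merged gradients.
    hidden = [matvec(W1, x) for x in batch]   # worker A computes layer 1
    return [matvec(W2, h) for h in hidden]    # worker B computes layer 2

batch = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]]
```

Both strategies compute identical outputs; they differ in what is communicated (activations vs. gradients) and where the parameters live — which is why, as the abstract notes, the choice of parallelization scheme dominates network overhead and resource allocation.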

Bibliographic Details
Main Authors: Maria Aspri, Grigorios Tsagkatakis, Panagiotis Tsakalides
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Remote Sensing, vol. 12, no. 17, article 2670
ISSN: 2072-4292
DOI: 10.3390/rs12172670
Subjects: distributed deep learning; model parallelization; convolutional neural networks; multi-modal observation classification; land cover classification
Online Access: https://www.mdpi.com/2072-4292/12/17/2670
author Maria Aspri
Grigorios Tsagkatakis
Panagiotis Tsakalides
collection DOAJ
description Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction by incorporating this stage into unified, end-to-end trainable models. Despite their modeling capabilities, training large-scale DNN models is a highly computation-intensive task that a single machine is often incapable of accomplishing. To address this issue, different parallelization schemes have been proposed. Nevertheless, network overheads and optimal resource allocation remain major challenges, since network communication is generally slower than intra-machine communication and some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several ways to optimize its performance when training is executed on an Apache Spark cluster. We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures against an identical DNN architecture trained under a data parallelization approach, using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has a tremendous effect on resource allocation and that hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that the proposed model parallelization schemes achieve more efficient resource use and more accurate predictions than data parallelization approaches.
first_indexed 2024-03-10T17:13:06Z
format Article
id doaj.art-53c1a0ce919d48bd806107ed7d75a433
institution Directory Open Access Journal
issn 2072-4292
language English
last_indexed 2024-03-10T17:13:06Z
publishDate 2020-08-01
publisher MDPI AG
record_format Article
series Remote Sensing
spelling Maria Aspri (Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH), GR70013 Heraklion, Greece); Grigorios Tsagkatakis (Computer Science Department, University of Crete, GR70013 Heraklion, Greece); Panagiotis Tsakalides (Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH), GR70013 Heraklion, Greece). Distributed Training and Inference of Deep Learning Models for Multi-Modal Land Cover Classification. Remote Sensing, vol. 12, no. 17, article 2670, 2020-08-01. MDPI AG. ISSN 2072-4292. DOI: 10.3390/rs12172670. https://www.mdpi.com/2072-4292/12/17/2670. Keywords: distributed deep learning; model parallelization; convolutional neural networks; multi-modal observation classification; land cover classification.
title Distributed Training and Inference of Deep Learning Models for Multi-Modal Land Cover Classification
topic distributed deep learning
model parallelization
convolutional neural networks
multi-modal observation classification
land cover classification
url https://www.mdpi.com/2072-4292/12/17/2670