Hardware Resource Analysis in Distributed Training with Edge Devices

When training a deep learning model with distributed training, the hardware resource utilization of each device depends on the model structure and the number of devices used for training. Distributed training has recently been applied to edge computing. Since edge devices have hardware resource limitations such as memory, there is a need for training methods that use hardware resources efficiently. Previous research focused on reducing training time by optimizing the synchronization process between edge devices or by compressing the models. In this paper, we monitored hardware resource usage as a function of the number of layers and the batch size of the model during distributed training with edge devices, and analyzed how memory usage and training time varied as the batch size and the number of layers increased. Experimental results demonstrated that the larger the batch size, the fewer synchronizations occurred between devices, which resulted in less accurate training. For shallow models, training time increased as more devices were used, because synchronization between devices took longer than the training computation itself. This paper finds that using hardware resources efficiently in distributed training requires selecting devices according to model complexity, and that fewer layers and smaller batch sizes lead to more efficient hardware use.
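
To make the measurement concrete, the sketch below is a minimal, single-process stand-in for the kind of sweep the abstract describes: it varies the number of layers and the batch size of a toy model while recording wall-clock training time and resident-memory growth. The MLP architecture, the sweep values, and the use of PyTorch and psutil are illustrative assumptions; the authors' actual synchronized multi-device edge setup is not reproduced here.

# Hypothetical, single-process sketch of the measurement described above:
# sweep the number of layers and the batch size of a toy model and record
# training time and resident-memory growth. The model, the sweep values, and
# the use of PyTorch/psutil are assumptions for illustration only.
import time

import psutil            # third-party; used for resident-memory (RSS) readings
import torch
import torch.nn as nn


def make_mlp(num_layers: int, width: int = 256) -> nn.Sequential:
    # A simple MLP whose depth we can vary, standing in for the paper's models.
    layers = [nn.Linear(784, width), nn.ReLU()]
    for _ in range(num_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)


def train_and_measure(num_layers: int, batch_size: int, steps: int = 50):
    # Train on synthetic data for a fixed number of steps and report the
    # elapsed wall-clock time and the growth in resident memory (in MiB).
    model = make_mlp(num_layers)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    process = psutil.Process()

    x = torch.randn(batch_size, 784)            # synthetic inputs
    y = torch.randint(0, 10, (batch_size,))     # synthetic labels

    rss_before = process.memory_info().rss
    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    elapsed = time.perf_counter() - start
    rss_after = process.memory_info().rss
    return elapsed, (rss_after - rss_before) / 2**20


if __name__ == "__main__":
    for num_layers in (2, 8, 32):               # hypothetical depth sweep
        for batch_size in (16, 128, 1024):      # hypothetical batch-size sweep
            seconds, extra_mib = train_and_measure(num_layers, batch_size)
            print(f"layers={num_layers:3d}  batch={batch_size:5d}  "
                  f"time={seconds:6.2f}s  extra_mem={extra_mib:7.1f} MiB")

For intuition on the synchronization finding: in synchronous data-parallel training over a fixed dataset, the number of gradient synchronizations per epoch is roughly the dataset size divided by the global batch size (per-device batch size times number of devices), so larger batches mean fewer synchronizations per epoch but also fewer parameter updates, which is consistent with the reported drop in accuracy.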

Bibliographic Details
Main Authors: Sihyeong Park, Jemin Lee, Hyungshin Kim
Format: Article
Language: English
Published: MDPI AG, 2019-12-01
Series: Electronics
ISSN: 2079-9292
Subjects: deep learning, distributed training, edge computing, internet of things, performance monitoring
Online Access: https://www.mdpi.com/2079-9292/9/1/28

Citation: Electronics, vol. 9, no. 1, article 28. DOI: 10.3390/electronics9010028

Author affiliations:
Sihyeong Park: Department of Computer Science and Engineering, Chungnam National University, Daejeon 34134, Korea
Jemin Lee: Future Computing Research Division, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea
Hyungshin Kim: Division of Computer Convergence, Chungnam National University, Daejeon 34134, Korea