Accelerating Distributed SGD With Group Hybrid Parallelism
The scale of model parameters and datasets is growing rapidly to achieve high accuracy in various areas. Training a large-scale deep neural network (DNN) model requires a huge amount of computation and memory; therefore, parallelization techniques for training large-scale DNN models have attracted attention...
| Main Authors: | Kyung-No Joo, Chan-Hyun Youn |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/9391652/ |
Similar Items
- A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning
  by: Samson B. Akintoye, et al.
  Published: (2022-01-01)
- A Scheduling Method of Moldable Parallel Tasks Considering Speedup and System Load on the Cloud
  by: Jianmin Li, et al.
  Published: (2019-01-01)
- Research of Fuzzy Control Strategy of Coaxial Parallel Hybrid Electric Vehicle
  by: Gao Longfei, et al.
  Published: (2019-01-01)
- Accelerated Synchronous Model Parallelism Using Cooperative Process for Training Compute-Intensive Models
  by: Chanhee Yu, et al.
  Published: (2023-01-01)
- PSciLab: An Unified Distributed and Parallel Software Framework for Data Analysis, Simulation and Machine Learning—Design Practice, Software Architecture, and User Experience
  by: Stefan Bosse
  Published: (2022-03-01)