Accelerating Distributed SGD With Group Hybrid Parallelism

The scale of model parameters and datasets is growing rapidly in pursuit of high accuracy in various areas. Training a large-scale deep neural network (DNN) model requires enormous computation and memory; therefore, parallelization techniques for training large-scale DNN models have attracted atte...


Bibliographic Details
Main Authors: Kyung-No Joo, Chan-Hyun Youn
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9391652/