Pipeline Parallelism With Elastic Averaging

To accelerate the training of massive DNN models on large-scale datasets, distributed training techniques, including data parallelism and model parallelism, have been studied extensively. In particular, pipeline parallelism, which derives from model parallelism, has been attracting attention...
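The elastic averaging named in the title is commonly realized by an EASGD-style update, in which each worker follows its local gradient while being pulled elastically toward a shared center variable. A minimal single-process sketch of that update rule (the function name and hyperparameters are illustrative assumptions, not taken from the article):

```python
def easgd_step(workers, center, grads, lr=0.1, rho=0.01):
    """One elastic-averaging (EASGD-style) update over scalar parameters.

    workers: list of per-worker parameter values
    center:  shared center variable
    grads:   per-worker local gradients
    lr, rho: learning rate and elastic coupling strength (illustrative values)
    """
    new_workers = []
    for x, g in zip(workers, grads):
        # Each worker descends its local gradient, plus an elastic pull
        # toward the shared center variable.
        new_workers.append(x - lr * (g + rho * (x - center)))
    # The center variable is pulled toward the average of the workers.
    new_center = center + lr * rho * sum(x - center for x in workers)
    return new_workers, new_center
```

With zero gradients and workers placed symmetrically around the center, the workers contract toward the center while the center itself stays put, which is the averaging behavior the coupling term is designed to produce.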


Bibliographic Details
Main Authors: Bongwon Jang, In-Chul Yoo, Dongsuk Yook
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10381706/