SHAT: A Novel Asynchronous Training Algorithm That Provides Fast Model Convergence in Distributed Deep Learning

The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming. To speed up the training of massive DNN models, data-parallel dis...
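As context for the data-parallel, asynchronous training setting the abstract refers to, below is a minimal illustrative sketch of generic asynchronous data-parallel SGD in a parameter-server style. It is not the SHAT algorithm described in the article; all names (worker, num_workers, the synthetic regression task) are assumptions made purely for illustration.

```python
# Minimal sketch of asynchronous data-parallel SGD (parameter-server style).
# NOT the SHAT algorithm from the paper; a generic illustration only.
import threading
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
w = np.zeros(2)              # shared model parameters ("server" state)
lock = threading.Lock()      # protects the shared update step

def worker(shard_X, shard_y, steps=200, lr=0.05):
    """Each worker computes gradients on its own data shard and pushes
    updates to the shared parameters without waiting for other workers."""
    global w
    for _ in range(steps):
        # Read a (possibly stale) snapshot of the parameters.
        w_local = w.copy()
        grad = 2.0 * shard_X.T @ (shard_X @ w_local - shard_y) / len(shard_y)
        # Push the update asynchronously: no barrier across workers.
        with lock:
            w -= lr * grad

# Synthetic linear-regression data split across 4 workers (data parallelism).
X = rng.normal(size=(400, 2))
y = X @ true_w + 0.01 * rng.normal(size=400)
threads = [threading.Thread(target=worker, args=(X[i::4], y[i::4]))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned parameters:", w)   # should approach true_w = [2, -3]
```

Because workers read and write the shared parameters at their own pace, updates can be computed from stale parameter snapshots; handling that staleness while keeping convergence fast is the general problem asynchronous training algorithms such as the one in this article address.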


Bibliographic Details
Main Authors: Yunyong Ko, Sang-Wook Kim
Format: Article
Language: English
Published: MDPI AG 2021-12-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/12/1/292