SHAT: A Novel Asynchronous Training Algorithm That Provides Fast Model Convergence in Distributed Deep Learning
The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming. To speed up the training of massive DNN models, data-parallel dis...
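The abstract is cut off before it describes SHAT itself, but the setting it refers to, data-parallel distributed training with asynchronous updates, can be illustrated with a minimal sketch. The parameter-store pattern, worker/shard structure, and all constants below are illustrative assumptions for a toy regression task, not the paper's algorithm.

```python
# Minimal sketch of asynchronous data-parallel SGD with a shared
# parameter store -- the general setting the abstract refers to.
# This is NOT the SHAT algorithm; every name and constant here is
# an illustrative assumption.
import threading

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: y = X @ true_w.
true_w = np.array([1.0, -2.0, 0.5, 3.0])
X = rng.normal(size=(400, 4))
y = X @ true_w

# Globally shared parameters (a stand-in for a parameter server).
w = np.zeros(4)
lock = threading.Lock()

# Data parallelism: each worker trains on its own shard of the data.
shards = np.array_split(np.arange(len(X)), 4)


def worker(shard, steps=300, lr=0.05, batch=16):
    for _ in range(steps):
        idx = rng.choice(shard, size=batch)
        with lock:
            w_local = w.copy()  # pull a (possibly stale) snapshot
        # Local gradient of the mean squared error on the mini-batch.
        grad = 2.0 * X[idx].T @ (X[idx] @ w_local - y[idx]) / batch
        with lock:
            w[:] -= lr * grad  # push the update without waiting for peers


threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("learned:", np.round(w, 2))  # should approach true_w
```

Because each worker computes its gradient from a snapshot that other workers may have updated in the meantime, gradients are applied stale; tolerating or correcting that staleness while keeping convergence fast is the general problem that asynchronous training algorithms such as the one in this article address.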
Main Authors: Yunyong Ko, Sang-Wook Kim
Format: Article
Language: English
Published: MDPI AG, 2021-12-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/12/1/292
Similar Items
- Bayesian Estimation of Latent Space Item Response Models with JAGS, Stan, and NIMBLE in R
  by: Jinwen Luo, et al.
  Published: (2023-05-01)
- TT-MLP: Tensor Train Decomposition on Deep MLPs
  by: Jiale Yan, et al.
  Published: (2023-01-01)
- FASTSET: A Fast Data Structure for the Representation of Sets of Integers
  by: Giuseppe Lancia, et al.
  Published: (2019-05-01)
- Evidence for tt̄γ production and measurement of σtt̄γ/σtt̄
  by: Gomez-Ceballos, Guillelmo, et al.
  Published: (2012)
- Evidence for tt̄γ production and measurement of σtt̄γ/σtt̄
  by: Aaltonen, T, et al.
  Published: (2011)