Why random reshuffling beats stochastic gradient descent
Abstract: We analyze the convergence rate of the random reshuffling (RR) method, a randomized first-order incremental algorithm for minimizing a finite sum of convex component functions. RR proceeds in cycles, picking a uniformly random order (permutation) and processing the c...
Main Authors: Gürbüzbalaban, M., Ozdaglar, A., Parrilo, P. A.
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: Springer Berlin Heidelberg, 2021
Online Access: https://hdl.handle.net/1721.1/132030
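The abstract above describes the random reshuffling method: each cycle draws a fresh uniformly random permutation of the component functions and takes one incremental gradient step per component in that order. A minimal sketch of this scheme on an illustrative least-squares finite sum follows; the problem data, step-size schedule, and epoch count are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

# Illustrative finite-sum problem: minimize (1/n) * sum_i (a_i^T x - b_i)^2.
# The data below is synthetic; only the reshuffling scheme itself mirrors RR.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def component_grad(x, i):
    """Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2."""
    return 2.0 * (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
for epoch in range(200):
    order = rng.permutation(n)      # fresh uniform permutation each cycle (RR)
    step = 0.01 / (epoch + 1)       # diminishing step size (an assumed schedule)
    for i in order:                 # one pass over every component, in that order
        x -= step * component_grad(x, i)

err = float(np.linalg.norm(x - x_true))
print(err)
```

Replacing `rng.permutation(n)` with independent uniform draws of `i` at every step recovers plain SGD with replacement; the sampling-without-replacement structure of each cycle is the only difference between the two schemes here.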
Similar Items
- When cyclic coordinate descent outperforms randomized coordinate descent
  by: Gurbuzbalaban, Mert, et al.
  Published: (2019)
- Randomness and permutations in coordinate descent methods
  by: Gürbüzbalaban, Mert, et al.
  Published: (2021)
- Jokowi's cabinet reshuffle : will it beat COVID-19?
  by: Borsuk, Richard
  Published: (2021)
- A universally optimal multistage accelerated stochastic gradient method
  by: Aybat, NS, et al.
  Published: (2021)
- Carathéodory sampling for stochastic gradient descent
  by: Cosentino, F, et al.
  Published: (2020)