Why random reshuffling beats stochastic gradient descent

Abstract
We analyze the convergence rate of the random reshuffling (RR) method, a randomized first-order incremental algorithm for minimizing a finite sum of convex component functions. RR proceeds in cycles: it picks a uniformly random order (permutation) and processes the component functions one at a time according to this order, i.e., in each cycle, each component function is sampled without replacement from the collection. Although RR has been numerically observed to outperform its with-replacement counterpart, stochastic gradient descent (SGD), characterizing its convergence rate has been a long-standing open question. In this paper, we answer this question by providing various convergence rate results for RR and its variants when the sum function is strongly convex. We first focus on quadratic component functions and show that the expected distance of the iterates generated by RR with stepsize $\alpha_k = \varTheta(1/k^s)$ for $s \in (0,1]$ converges to zero at rate $\mathcal{O}(1/k^s)$ (with $s=1$ requiring the stepsize to be adjusted to the strong convexity constant). Our main result shows that when the component functions are quadratic or smooth (with a Lipschitz assumption on the Hessian matrices), RR with iterate averaging and a diminishing stepsize $\alpha_k = \varTheta(1/k^s)$ for $s \in (1/2,1)$ converges at rate $\varTheta(1/k^{2s})$ with probability one in the suboptimality of the objective value, thus improving upon the $\varOmega(1/k)$ rate of SGD. Our analysis draws on the theory of Polyak–Ruppert averaging and relies on decoupling the dependent cycle gradient error into a term that is independent across cycles and another term dominated by $\alpha_k^2$. This allows us to apply the law of large numbers to an appropriately weighted version of the cycle gradient errors, where the weights depend on the stepsize. We also provide high-probability convergence rate estimates that show the decay rates of the different terms and allow us to propose a modification of RR with convergence rate $\mathcal{O}(1/k^2)$.
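
For readers unfamiliar with the sampling scheme analyzed above, the sketch below shows one way to implement random reshuffling with a diminishing stepsize $\alpha_k = c/k^s$ and Polyak–Ruppert iterate averaging, the variant whose $\varTheta(1/k^{2s})$ rate is the paper's main result. This is a minimal illustration on a toy least-squares objective, not the authors' code; the function names, constants, and the averaging of cycle-end iterates are assumptions chosen for clarity.

```python
import numpy as np

def rr_with_averaging(grads, x0, n_cycles, s=0.75, c=1.0, rng=None):
    """Random reshuffling with diminishing stepsize alpha_k = c / k**s
    and a running (Polyak-Ruppert-style) average of the cycle-end iterates.

    grads: list of gradient functions, one per component f_i.
    Returns the averaged iterate (illustrative sketch only).
    """
    rng = np.random.default_rng(rng)
    m = len(grads)
    x = np.asarray(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for k in range(1, n_cycles + 1):
        alpha = c / k**s                 # diminishing stepsize Theta(1/k^s)
        order = rng.permutation(m)       # fresh uniformly random permutation each cycle
        for i in order:                  # each component used exactly once per cycle
            x = x - alpha * grads[i](x)  # incremental gradient step
        x_bar += (x - x_bar) / k         # running average of cycle-end iterates
    return x_bar

# Toy strongly convex example: least squares split into m quadratic components
# f_i(x) = 0.5 * (a_i^T x - b_i)^2. Hypothetical data, only to show the call.
data_rng = np.random.default_rng(0)
A, b = data_rng.standard_normal((20, 5)), data_rng.standard_normal(20)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(20)]
x_rr = rr_with_averaging(grads, np.zeros(5), n_cycles=2000, s=0.75, c=0.05)
```

The key contrast with plain SGD is the `rng.permutation(m)` line: each cycle visits every component exactly once (sampling without replacement), whereas SGD would draw an index with replacement at every step.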


Bibliographic Details
Main Authors: Gürbüzbalaban, M., Ozdaglar, A., Parrilo, P. A.
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: Springer Berlin Heidelberg, 2021
DOI: https://doi.org/10.1007/s10107-019-01440-w
License: Creative Commons Attribution-Noncommercial-Share Alike 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Online Access: https://hdl.handle.net/1721.1/132030