A universally optimal multistage accelerated stochastic gradient method
© 2019 Neural Information Processing Systems Foundation. All rights reserved.

We study the problem of minimizing a strongly convex, smooth function when we have noisy estimates of its gradient. We propose a novel multistage accelerated algorithm that is universally optimal in the sense that it achie...
| Main Authors: | Aybat, NS, Fallah, A, Gürbüzbalaban, M, Ozdaglar, A |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
| Format: | Article |
| Language: | English |
| Published: | 2021 |
| Online Access: | https://hdl.handle.net/1721.1/137365 |
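The abstract above describes a multistage accelerated scheme for minimizing a strongly convex, smooth function from noisy gradient estimates. Below is a minimal, illustrative Python sketch of that general idea: Nesterov-style accelerated gradient run in stages, with the step size shrunk after each stage to damp the gradient noise. The stage lengths, the step-size halving schedule, and the Gaussian noise model are assumptions made for illustration, not the paper's exact parameter choices.

```python
import numpy as np

def noisy_grad(grad, x, sigma, rng):
    """Stochastic oracle: true gradient plus additive Gaussian noise (assumed model)."""
    g = grad(x)
    return g + sigma * rng.standard_normal(g.shape)

def multistage_agd(grad, x0, L, mu, sigma, n_stages=4, stage_len=200, seed=0):
    """Illustrative multistage accelerated stochastic gradient sketch.

    Runs Nesterov-style acceleration in stages and halves the step size
    after each stage to suppress gradient noise. The stage lengths and
    halving schedule here are assumptions, not the paper's scheme.
    """
    rng = np.random.default_rng(seed)
    kappa = L / mu                                      # condition number
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum parameter
    x = y = x0.astype(float)
    alpha = 1.0 / L                                     # initial (deterministic) step size
    for _ in range(n_stages):
        for _ in range(stage_len):
            g = noisy_grad(grad, y, sigma, rng)
            x_next = y - alpha * g                      # gradient step at extrapolated point
            y = x_next + beta * (x_next - x)            # momentum extrapolation
            x = x_next
        alpha *= 0.5                                    # shrink step size across stages
    return x

# Demo on a strongly convex quadratic f(x) = 0.5 * x^T A x (minimizer at 0).
A = np.diag([1.0, 10.0])                                # mu = 1, L = 10
x_hat = multistage_agd(lambda x: A @ x, np.ones(2), L=10.0, mu=1.0, sigma=0.1)
print(x_hat)  # should end up close to [0, 0]
```

The design intuition behind such multistage schemes: a large step size early on recovers the fast deterministic rate while the optimality gap dominates, and shrinking it in later stages suppresses the noise term once the iterates are near the minimizer, without requiring knowledge of the noise level in advance.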
Similar Items

- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
  by: Aybat, Necdet Serhat, et al.
  Published: (2021)
- An Optimal Multistage Stochastic Gradient Method for Minimax Problems
  by: Fallah, Alireza, et al.
  Published: (2022)
- Why random reshuffling beats stochastic gradient descent
  by: Gürbüzbalaban, M., et al.
  Published: (2021)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
  by: Vanli, Nuri Denizcan, et al.
  Published: (2019)
- Robust accelerated gradient methods for machine learning
  by: Fallah, Alireza.
  Published: (2019)