Fast global convergence of gradient methods for high-dimensional statistical recovery
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional...
Main Authors: | Agarwal, Alekh; Negahban, Sahand N.; Wainwright, Martin J. |
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Language: | en_US |
Published: | Institute of Mathematical Statistics, 2013 |
Online Access: | http://hdl.handle.net/1721.1/78602 |
Similar Items
- Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
  by: Agarwal, Alekh, et al.
  Published: (2013)
- Gradient convergence in gradient methods
  Published: (2003)
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
  by: Vanli, Nuri Denizcan, et al.
  Published: (2019)
- Global convergence rate of incremental aggregated gradient methods for nonsmooth problems
  by: Vanli, Nuri Denizcan, et al.
  Published: (2017)
- Beyond convexity—Contraction and global convergence of gradient descent
  by: Wensing, Patrick M, et al.
  Published: (2022)