Global convergence rate of incremental aggregated gradient methods for nonsmooth problems
We analyze the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions $f(x) = \sum_{i=1}^{m} f_i(x)$ and a convex function $r(x)$. Such composite optimization problems arise in a number of machine learning applications...
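The PIAG update keeps one (possibly stale) gradient per component and takes a proximal step with respect to $r$ on their running sum. Below is a minimal sketch of this scheme in Python, assuming a cyclic component order, a constant step size, and a user-supplied proximal operator; the interface (`piag`, `grads`, `prox`) is hypothetical and not code from the paper.

```python
import numpy as np

def piag(grads, prox, x0, eta, num_iters):
    """Minimal PIAG sketch: proximal steps on an aggregate of (possibly stale) gradients.

    grads     -- list of per-component gradient functions, grads[i](x) ~ grad f_i(x)
    prox      -- proximal operator of r: prox(v, eta) = argmin_x r(x) + ||x - v||^2 / (2*eta)
    x0, eta   -- initial point and constant step size
    num_iters -- number of iterations
    """
    m = len(grads)
    x = np.asarray(x0, dtype=float)
    cached = [g(x) for g in grads]    # one cached gradient per component
    agg = np.sum(cached, axis=0)      # running sum of the cached gradients
    for k in range(num_iters):
        i = k % m                     # cyclic order (one admissible schedule)
        g_new = grads[i](x)
        agg += g_new - cached[i]      # refresh only component i's contribution
        cached[i] = g_new
        x = prox(x - eta * agg, eta)  # proximal step on the aggregate
    return x

# Example (illustrative data): least-squares components plus an l1 regularizer
# r(x) = lam * ||x||_1, whose prox is soft-thresholding.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
blocks = np.array_split(np.arange(20), 4)  # split the rows into m = 4 components
grads = [lambda x, idx=idx: A[idx].T @ (A[idx] @ x - b[idx]) for idx in blocks]
lam = 0.1
prox_l1 = lambda v, eta: np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)
x_star = piag(grads, prox_l1, np.zeros(5), eta=1e-3, num_iters=5000)
```

The per-iteration cost is one component gradient plus one prox evaluation; the aggregate is updated in place by swapping out only the refreshed component's contribution, which is what makes the method attractive when $m$ is large.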
Main Authors: Vanli, Nuri Denizcan; Gurbuzbalaban, Mert; Koksal, Asuman E.
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017
Online Access: http://hdl.handle.net/1721.1/111781; https://orcid.org/0000-0002-0575-2450; https://orcid.org/0000-0002-1827-1285
Similar Items
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods, by: Vanli, Nuri Denizcan, et al. Published: (2019)
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms, by: Gurbuzbalaban, Mert, et al. Published: (2018)
- A globally convergent incremental Newton method, by: Gurbuzbalaban, Mert, et al. Published: (2016)
- When cyclic coordinate descent outperforms randomized coordinate descent, by: Gurbuzbalaban, Mert, et al. Published: (2019)
- Convergence rate of block-coordinate maximization Burer–Monteiro method for solving large SDPs, by: Erdogdu, Murat A, et al. Published: (2022)