Global convergence rate of incremental aggregated gradient methods for nonsmooth problems

We analyze the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions f(x) = Σ_{i=1}^{m} f_i(x) and a convex function r(x). Such composite optimization problems arise in a number of machine learning applications...
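To illustrate the kind of iteration the abstract refers to, below is a minimal Python sketch of a PIAG-style update under illustrative assumptions: cyclic refreshing of one component gradient per iteration, a constant step size, and a user-supplied proximal operator for r (an ℓ1 soft-thresholding prox is shown as an example). The function names and parameters here are hypothetical and are not taken from the paper itself.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of r(x) = t * ||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def piag(grad_fns, prox, x0, step, num_iters):
    """PIAG-style iteration: keep a table of the most recently computed
    component gradients (so most entries are evaluated at delayed iterates),
    refresh one component per iteration in cyclic order, and apply the
    proximal step of r to the aggregated-gradient update."""
    m = len(grad_fns)
    x = x0.copy()
    # Initialize the gradient table at the starting point.
    grad_table = [g(x) for g in grad_fns]
    aggregate = np.sum(grad_table, axis=0)
    for k in range(num_iters):
        i = k % m                      # component refreshed this iteration
        new_grad = grad_fns[i](x)      # gradient at the current iterate
        aggregate += new_grad - grad_table[i]
        grad_table[i] = new_grad
        # Proximal step with respect to the convex regularizer r.
        x = prox(x - step * aggregate, step)
    return x
```

As a usage sketch, one could take quadratic components f_i(x) = ½‖A_i x − b_i‖² (so grad_fns[i] = lambda x: A_i.T @ (A_i @ x − b_i)) and r(x) = λ‖x‖_1 with prox = lambda v, s: soft_threshold(v, s * λ); the cyclic refresh means each gradient in the table is at most m − 1 iterations old, which is the bounded-delay structure PIAG analyses rely on.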


Bibliographic Details
Main Authors: Vanli, Nuri Denizcan, Gurbuzbalaban, Mert, Koksal, Asuman E.
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE) 2017
Online Access: http://hdl.handle.net/1721.1/111781
https://orcid.org/0000-0002-0575-2450
https://orcid.org/0000-0002-1827-1285