Achieving acceleration in distributed optimization via direct discretization of the heavy-ball ODE


Bibliographic Details
Main Authors: Zhang, J, Uribe, CA, Mokhtari, A, Jadbabaie, A
Other Authors: Massachusetts Institute of Technology. Department of Civil and Environmental Engineering
Format: Article
Language: English
Published: IEEE 2023
Online Access: https://hdl.handle.net/1721.1/148596
Description
Summary: © 2019 American Automatic Control Council. We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over networks. The proposed algorithm is derived by directly discretizing the second-order heavy-ball differential equation and attains an accelerated convergence rate, i.e., faster than distributed gradient descent-based methods, for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms.
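For intuition, the summary's starting point can be illustrated with the classical (centralized, non-distributed) case: Polyak's heavy-ball method is the explicit discretization of the heavy-ball ODE x''(t) + a x'(t) + ∇f(x(t)) = 0. The sketch below is not the authors' distributed algorithm; it only shows the discretization idea on a strongly convex quadratic, with illustrative step-size and momentum parameters chosen for this example.

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.09, beta=0.9, iters=500):
    """Discretized heavy-ball iteration:
    x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1}),
    where beta*(x_k - x_{k-1}) is the momentum term arising from
    discretizing the second-order ODE (illustrative parameters)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: f(x) = 0.5 x^T A x - b^T x, with known minimizer x* = A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, b)
x_hat = heavy_ball(lambda x: A @ x - b, np.zeros(2))
```

In the distributed setting the paper addresses, each node holds one summand of the objective and the iteration is additionally coupled through network communication; the momentum structure above is what yields acceleration over plain gradient descent.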