Convergence Analysis of Distributed Subgradient Methods over Random Networks
We consider the problem of cooperatively minimizing the sum of convex functions, where each function represents the local objective function of an agent. We assume that each agent has information only about its own local function and communicates with the other agents over a time-varying network topology. For...
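The setup described in the abstract can be illustrated with a minimal sketch of a distributed subgradient iteration: each agent mixes its neighbors' estimates through a stochastic weight matrix, then takes a step along the subgradient of its own local objective. The quadratic objectives, the fixed ring topology, and the constant stepsize below are illustrative assumptions, not details taken from the paper (which treats random, time-varying networks).

```python
# Minimal sketch of a distributed subgradient method (illustrative only).
# Assumptions: scalar decision variable, quadratic local objectives
# f_i(x) = (x - c_i)^2, a fixed doubly stochastic weight matrix, and a
# small constant stepsize. The paper itself studies time-varying random
# networks; this sketch keeps the topology fixed for clarity.

def distributed_subgradient(weights, targets, steps=2000, alpha=0.01):
    """Run consensus + subgradient steps; return each agent's estimate."""
    n = len(targets)
    x = [0.0] * n  # each agent's local estimate of the minimizer
    for _ in range(steps):
        # Consensus step: average neighbors' estimates via the weights.
        mixed = [sum(weights[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Subgradient step on the local objective f_i(x) = (x - c_i)^2,
        # whose (sub)gradient at x is 2 * (x - c_i).
        x = [mixed[i] - alpha * 2.0 * (mixed[i] - targets[i]) for i in range(n)]
    return x

# Three agents on a fully mixed, doubly stochastic topology (assumed).
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
c = [1.0, 2.0, 6.0]  # sum of f_i is minimized at the mean of c, x* = 3.0
estimates = distributed_subgradient(W, c)
```

With a constant stepsize the agents converge only to a neighborhood of the optimum whose size scales with the stepsize; here all three estimates end up close to the true minimizer 3.0, but not exactly at it.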
| Main Authors: | Lobel, Ilan; Ozdaglar, Asuman E. |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
| Format: | Article |
| Language: | en_US |
| Published: | Institute of Electrical and Electronics Engineers, 2010 |
| Online Access: | http://hdl.handle.net/1721.1/60033 https://orcid.org/0000-0002-1827-1285 |
Similar Items
- Graph balancing for distributed subgradient methods over directed graphs
  by: Makhdoumi Kakhaki, Ali, et al.
  Published: (2017)
- Rate of Convergence of Learning in Social Networks
  by: Lobel, Ilan, et al.
  Published: (2011)
- Convergence Rate of Distributed ADMM over Networks
  by: Makhdoumi Kakhaki, Ali, et al.
  Published: (2019)
- On Dual Convergence of the Distributed Newton Method for Network Utility Maximization
  by: Wei, Ermin, et al.
  Published: (2012)
- Distributed multi-agent optimization with state-dependent communication
  by: Lobel, Ilan, et al.
  Published: (2012)