Convergence Analysis of Distributed Subgradient Methods over Random Networks
We consider the problem of cooperatively minimizing a sum of convex functions, where each function is the local objective of one agent. Each agent knows only its own local function and communicates with the other agents over a time-varying network topology. For...
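The iteration described in the abstract can be illustrated with a minimal sketch: each agent first averages its neighbors' iterates using doubly stochastic weights, then takes a diminishing-stepsize step along its local (sub)gradient. This is a hypothetical toy instance (quadratic local costs on a fixed 4-agent ring), not the paper's setting of random, time-varying networks; the function `distributed_subgradient`, the targets, and the weight matrix `W` are all illustrative assumptions.

```python
# Toy sketch of a consensus-based distributed subgradient method
# (hypothetical instance; the paper analyzes random time-varying networks).
# Agent i holds the private convex cost f_i(x) = (x - b_i)^2, so the
# global problem min_x sum_i f_i(x) is solved by the mean of the b_i.

def distributed_subgradient(targets, weights, steps):
    """Return each agent's final estimate of argmin_x sum_i (x - targets[i])^2."""
    n = len(targets)
    x = [0.0] * n  # all agents start from the same initial estimate
    for k in range(steps):
        # Consensus step: mix neighbors' iterates with doubly stochastic weights.
        mixed = [sum(weights[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Local gradient step with diminishing stepsize a_k -> 0.
        a_k = 0.5 / (k + 2)
        x = [mixed[i] - a_k * 2.0 * (mixed[i] - targets[i]) for i in range(n)]
    return x

# 4 agents on a ring: each keeps half its own value, a quarter per neighbor.
W = [[0.5 if i == j else (0.25 if abs(i - j) in (1, 3) else 0.0)
     for j in range(4)] for i in range(4)]
estimates = distributed_subgradient([0.0, 1.0, 2.0, 3.0], W, steps=2000)
# All agents approach the global minimizer, the mean of the targets (1.5).
```

With a constant stepsize the agents would only reach a neighborhood of the optimum; the diminishing stepsize used here drives both the disagreement between agents and the optimization error to zero, which is the regime the convergence analysis concerns.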
Main Authors: Lobel, Ilan (Contributor); Ozdaglar, Asuman E. (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor); Massachusetts Institute of Technology. Operations Research Center (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2010-11-23T19:13:45Z
Similar Items
- Graph balancing for distributed subgradient methods over directed graphs
  by: Makhdoumi Kakhaki, Ali, et al.
  Published: (2017)
- Convergence Rate of Distributed ADMM over Networks
  by: Makhdoumi Kakhaki, Ali, et al.
  Published: (2019)
- Rate of Convergence of Learning in Social Networks
  by: Lobel, Ilan, et al.
  Published: (2011)
- Distributed Constrained Stochastic Subgradient Algorithms Based on Random Projection and Asynchronous Broadcast over Networks
  by: Junlong Zhu, et al.
  Published: (2017-01-01)
- On Dual Convergence of the Distributed Newton Method for Network Utility Maximization
  by: Wei, Ermin, et al.
  Published: (2012)