Hybrid Distributed Optimization for Learning Over Networks With Heterogeneous Agents

Bibliographic Details
Main Authors: Mohammad H. Nassralla, Naeem Akl, Zaher Dawy
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10255708/
Description
Summary: This paper considers distributed optimization for learning problems over networks with heterogeneous agents that have different computational capabilities. This heterogeneity implies that a subset of the agents may run computationally intensive learning algorithms such as Newton's method or full gradient descent, while the remaining agents can only run lower-complexity algorithms such as stochastic gradient descent. This opens opportunities for designing hybrid distributed optimization algorithms that rely on cooperation among the network agents to enhance overall performance, improve the rate of convergence, and reduce the communication overhead. We show in this work that hybrid learning with cooperation among heterogeneous agents attains a stable solution. For small step-sizes $\mu$, the proposed approach yields an estimation error on the order of $O(\mu)$. We also provide a theoretical analysis of the stability of the first-, second-, and fourth-order error moments for learning over networks with heterogeneous agents. Finally, results are presented and analyzed for case-study scenarios to demonstrate the effectiveness of the proposed approach.
ISSN: 2169-3536
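
To make the hybrid scheme described in the summary concrete, the following is a minimal sketch of a diffusion-style adapt-then-combine iteration over a small network, in which half of the agents take full-gradient steps while the rest take stochastic-gradient steps, and all agents then average with their neighbors. The ring topology, combination matrix, least-squares model, and all variable names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Sketch of hybrid cooperative learning over a network (assumed
# adapt-then-combine form; setup is illustrative, not the paper's
# algorithm). All agents estimate a common minimizer w* of a
# least-squares risk; "strong" agents use the full local gradient,
# "weak" agents use a stochastic gradient from one random sample.

rng = np.random.default_rng(0)
N, M, T = 10, 5, 2000            # agents, parameter dimension, iterations
mu = 0.01                        # small step-size; error scales as O(mu)
w_star = rng.standard_normal(M)  # common model all agents estimate

# Ring network with self-loops; A is a doubly stochastic combination matrix.
A = np.zeros((N, N))
for k in range(N):
    for j in (k - 1, k, (k + 1) % N):
        A[k, j] = 1 / 3

strong = set(range(N // 2))      # agents able to run full gradient descent

# Each agent holds local data (U[k], D[k]) with D[k] = U[k] @ w* + noise.
U = [rng.standard_normal((100, M)) for _ in range(N)]
D = [U[k] @ w_star + 0.1 * rng.standard_normal(100) for k in range(N)]

W = np.zeros((N, M))             # current estimates, one row per agent
for _ in range(T):
    psi = np.empty_like(W)
    # Adapt step: heterogeneous local updates.
    for k in range(N):
        if k in strong:
            grad = U[k].T @ (U[k] @ W[k] - D[k]) / len(D[k])  # full gradient
        else:
            i = rng.integers(len(D[k]))
            grad = (U[k][i] @ W[k] - D[k][i]) * U[k][i]       # SGD sample
        psi[k] = W[k] - mu * grad
    # Combine step: cooperate by averaging over neighbors.
    W = A @ psi

print("mean-square deviation:", np.mean(np.sum((W - w_star) ** 2, axis=1)))
```

Rerunning this sketch with smaller values of mu should shrink the printed steady-state mean-square deviation roughly in proportion to mu, consistent with the $O(\mu)$ estimation-error behavior stated in the summary.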