On Consensus-Optimality Trade-offs in Collaborative Deep Learning

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm that enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to individual model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms via numerical experiments, and demonstrate significant improvements over existing methods for collaborative deep learning.
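The abstract describes two mechanisms: repeated neighbor-averaging (consensus) steps inside each SGD iteration (i-CDSGD) and a tunable interpolation between consensus-driven and purely local updates (g-CDSGD). The following toy sketch is not the authors' algorithm; it is a minimal NumPy illustration of those two knobs, assuming a fixed ring topology, a hand-picked doubly stochastic mixing matrix W, per-agent quadratic losses, and illustrative names and parameters (distributed_sgd, tau, omega, lr) that do not come from the paper.

# Hedged sketch: NOT the paper's i-CDSGD/g-CDSGD updates, just a toy
# illustration of (a) multiple consensus steps per SGD iteration and
# (b) an interpolation between consensus and purely local descent.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3

# Each agent i holds a private quadratic f_i(x) = 0.5 * ||x - c_i||^2,
# standing in for its local empirical risk on private data.
centers = rng.normal(size=(n_agents, dim))

def local_grad(i, x):
    return x - centers[i]

# Doubly stochastic mixing matrix for a fixed ring topology: each agent
# averages with its two neighbors. This plays the role of a consensus step.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def distributed_sgd(tau=1, omega=1.0, lr=0.1, iters=200):
    """tau: consensus (neighbor-averaging) steps per SGD iteration.
    omega: weight on the consensus term; omega = 1 is a fully
    consensus-driven update, omega = 0 leaves each agent running plain
    local gradient descent toward its own minimizer."""
    X = np.zeros((n_agents, dim))          # row i = agent i's parameters
    for _ in range(iters):
        mixed = X.copy()
        for _ in range(tau):               # repeated neighbor averaging
            mixed = W @ mixed
        grads = np.stack([local_grad(i, X[i]) for i in range(n_agents)])
        # Blend the consensus pull with the local gradient step.
        X = omega * mixed + (1.0 - omega) * X - lr * grads
    return X

if __name__ == "__main__":
    for omega in (1.0, 0.5, 0.0):
        X = distributed_sgd(tau=3, omega=omega)
        spread = np.max(np.linalg.norm(X - X.mean(axis=0), axis=1))
        print(f"omega={omega:.1f}: max disagreement across agents = {spread:.4f}")

Sweeping omega from 1 toward 0 moves the agents from near-agreement toward their individual minimizers, and increasing tau tightens agreement per iteration; this mirrors, in a much simplified setting, the consensus-optimality trade-off the abstract describes.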


Bibliographic Details
Main Authors: Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-09-01
Series: Frontiers in Artificial Intelligence
Subjects: distributed optimization; consensus-optimality; collaborative deep learning; SGD; convergence
Online Access:https://www.frontiersin.org/articles/10.3389/frai.2021.573731/full
author Zhanhong Jiang
Aditya Balu
Chinmay Hegde
Soumik Sarkar
collection DOAJ
description In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progresses in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm that enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to individual model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms via numerical experiments, and demonstrate significant improvements over existing methods for collaborative deep learning.
first_indexed 2024-12-22T04:34:35Z
format Article
id doaj.art-38e24df48695476585bd17ff21e1f8ab
institution Directory Open Access Journal
issn 2624-8212
language English
last_indexed 2024-12-22T04:34:35Z
publishDate 2021-09-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Artificial Intelligence
doi 10.3389/frai.2021.573731
author affiliations Zhanhong Jiang, Aditya Balu, Soumik Sarkar: Self-aware Complex Systems Lab, Department of Mechanical Engineering, Iowa State University, Ames, IA, United States; Chinmay Hegde: Tandon School of Engineering, New York University, New York, NY, United States
title On Consensus-Optimality Trade-offs in Collaborative Deep Learning
topic distributed optimization
consensus-optimality
collaborative deep learning
sgd
convergence
url https://www.frontiersin.org/articles/10.3389/frai.2021.573731/full