A Continuous-Time Analysis of Distributed Stochastic Gradient


Bibliographic Details
Main Authors: Boffi, Nicholas M, Slotine, Jean-Jacques E
Format: Article
Language: English
Published: MIT Press - Journals, 2021
Online Access: https://hdl.handle.net/1721.1/136586
collection MIT
description © 2019 Massachusetts Institute of Technology. We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model nonconvex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, the same algorithm with momentum, and the elastic averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms that allow each agent to broadcast its current measure of success and shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 data set, where we note a surprising regularizing property of EASGD even when applied to the nondistributed case. This observation suggests alternative second-order in time algorithms for nondistributed optimization that are competitive with momentum methods.
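
As an illustration of the coupled dynamics described in the abstract, a minimal sketch follows (not the authors' code): p SGD agents each take a noisy gradient step and are pulled toward the spatial mean of all agents, which plays the role of the common quorum-sensing signal. The quadratic objective, noise level, agent count p, coupling gain k, and learning rate are illustrative assumptions; EASGD, as analyzed in the article, couples each agent to an auxiliary center variable rather than directly to the instantaneous mean.

# Minimal sketch of quorum-sensing-style coupled SGD (illustrative, not the authors' code).
# Assumptions: a toy strongly convex objective, additive Gaussian gradient noise,
# and coupling of each agent toward the spatial mean of all agents.
import numpy as np

def coupled_sgd(grad, x0, p=8, k=1.0, lr=0.05, noise=0.5, steps=2000, seed=0):
    """Run p coupled SGD agents; return the trajectory of their spatial mean."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (p, 1))   # one copy of x0 per agent
    means = []
    for _ in range(steps):
        x_bar = x.mean(axis=0)                          # common signal: spatial mean
        g = np.stack([grad(xi) for xi in x])            # true gradients
        g += noise * rng.standard_normal(x.shape)       # stochastic-gradient noise
        x += -lr * g + lr * k * (x_bar - x)             # gradient step + coupling
        means.append(x_bar.copy())
    return np.array(means)

if __name__ == "__main__":
    # Toy strongly convex objective f(x) = 0.5 * ||x||^2, so grad f(x) = x.
    traj = coupled_sgd(grad=lambda x: x, x0=[5.0, -3.0])
    print("final spatial mean:", traj[-1])              # should lie near the minimizer 0

In this sketch, increasing the coupling gain k should damp the fluctuations of the spatial mean around the minimizer, mirroring the noise-reduction effect of synchronization that the article quantifies.
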
id mit-1721.1/136586
institution Massachusetts Institute of Technology
spelling mit-1721.1/136586 2021-10-28T03:54:10Z
dates 2021-10-27T20:36:07Z 2021-10-27T20:36:07Z 2020 2020-08-07T15:46:28Z
type Article http://purl.org/eprint/type/JournalArticle
handle https://hdl.handle.net/1721.1/136586
language en
doi 10.1162/NECO_A_01248
journal Neural Computation
rights Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
format application/pdf
publisher MIT Press - Journals MIT Press