Decentralised learning with distributed gradient descent and random features
We investigate the generalisation performance of Distributed Gradient Descent with implicit regularisation and random features in the homogeneous setting, where a network of agents is given data sampled independently from the same unknown distribution. Along with reducing the memory footprint, random...
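The abstract describes distributed gradient descent over a network of agents, each holding local data mapped through shared random features, with regularisation arising implicitly from early stopping. A minimal illustrative sketch of that setup (all sizes, the ring topology, and the random Fourier feature map are assumptions for this example, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous setting: every agent samples from the same distribution.
# All names and dimensions below are illustrative choices.
n_agents, n_local, d, n_features = 4, 50, 3, 100
w_true = rng.normal(size=d)

# Shared random Fourier features (a common random-features construction).
Omega = rng.normal(size=(d, n_features))
b = rng.uniform(0, 2 * np.pi, size=n_features)
phi = lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

# Each agent draws its own local sample and maps it through the features.
data = []
for _ in range(n_agents):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    data.append((phi(X), y))

# Doubly stochastic gossip matrix for a ring of agents.
P = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    P[i, i] = 0.5
    P[i, (i + 1) % n_agents] = 0.25
    P[i, (i - 1) % n_agents] = 0.25

# Distributed gradient descent: local least-squares gradient step,
# then averaging with neighbours. No explicit penalty term is used;
# regularisation comes implicitly from stopping after finitely many steps.
W = np.zeros((n_agents, n_features))
step = 0.5
for t in range(200):
    G = np.stack([Z.T @ (Z @ w - y) / n_local
                  for (Z, y), w in zip(data, W)])
    W = P @ (W - step * G)

# Training error of agent 0 on its own local data.
Z0, y0 = data[0]
mse = np.mean((Z0 @ W[0] - y0) ** 2)
```

Here each row of `W` is one agent's iterate; the gossip step `P @ (...)` drives the agents toward consensus while the gradient steps fit the shared regression task.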
Main authors: | Richards, D, Rebeschini, P, Rosasco, L |
---|---|
Material type: | Conference item |
Language: | English |
Published: | Proceedings of Machine Learning Research, 2020 |
Similar works
Carathéodory sampling for stochastic gradient descent
Author: Cosentino, F, et al.
Published: (2020)
Robust gradient descent for phase retrieval
Author: Buna-Marginean, A, et al.
Published: (2025)
Graph-dependent implicit regularisation for distributed stochastic subgradient descent
Author: Richards, D, et al.
Published: (2020)
Generalization bounds for label noise stochastic gradient descent
Author: Huh, JE, et al.
Published: (2023)
Generalization bounds for label noise stochastic gradient descent
Author: Huh, JE, et al.
Published: (2024)
Optimal statistical rates for decentralised non-parametric regression with linear speed-up
Author: Richards, D, et al.
Published: (2019)