Decentralised learning with distributed gradient descent and random features
We investigate the generalisation performance of Distributed Gradient Descent with implicit regularisation and random features in the homogeneous setting, where a network of agents is given data sampled independently from the same unknown distribution. Along with reducing the memory footprint, random...
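The abstract describes distributed gradient descent over a network of agents, each holding i.i.d. data from the same distribution, combined with random features to cut the memory cost of kernel methods. As a rough illustration of that setting only (not the paper's exact algorithm, rates, or analysis), the Python sketch below has agents fit random-Fourier-feature regressors by gossip-averaging parameters over a ring and taking local gradient steps, with regularisation left implicit in the step size and the stopping time. The ring topology, Gaussian-kernel features, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n agents, each with m local samples drawn i.i.d.
# from the same (homogeneous) distribution.
n_agents, m_local, d, n_features = 8, 50, 5, 100

def target(x):
    return np.sin(x.sum(axis=1))

X = [rng.normal(size=(m_local, d)) for _ in range(n_agents)]
y = [target(Xi) + 0.1 * rng.normal(size=m_local) for Xi in X]

# Random Fourier features approximating a Gaussian kernel:
# phi(x) = sqrt(2/M) * cos(W x + b), shared across agents.
W = rng.normal(size=(d, n_features))
b = rng.uniform(0, 2 * np.pi, size=n_features)

def features(x):
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

Phi = [features(Xi) for Xi in X]

# Doubly stochastic gossip matrix for a ring of agents:
# each agent averages itself with its two neighbours.
P = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    P[i, i] = 0.5
    P[i, (i - 1) % n_agents] = 0.25
    P[i, (i + 1) % n_agents] = 0.25

# Distributed gradient descent on the squared loss with no explicit
# penalty; regularisation is implicit, controlled by the step size
# and early stopping (the iteration count T).
theta = np.zeros((n_agents, n_features))
step, T = 0.1, 200
for _ in range(T):
    theta = P @ theta                      # gossip: average with neighbours
    for i in range(n_agents):              # local gradient step
        grad = Phi[i].T @ (Phi[i] @ theta[i] - y[i]) / m_local
        theta[i] -= step * grad

# Each agent holds its own predictor; in the homogeneous setting all
# agents estimate the same underlying regression function.
X_test = rng.normal(size=(500, d))
mse = np.mean((features(X_test) @ theta[0] - target(X_test)) ** 2)
print(f"agent-0 test MSE: {mse:.4f}")
```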
Main Authors: | Richards, D, Rebeschini, P, Rosasco, L
---|---
Format: | Conference item
Language: | English
Published: | Proceedings of Machine Learning Research, 2020
Similar Items
-
Robust gradient descent for phase retrieval
by: Buna-Marginean, A, et al.
Published: (2025)
-
Graph-dependent implicit regularisation for distributed stochastic subgradient descent
by: Richards, D, et al.
Published: (2020)
-
Generalization bounds for label noise stochastic gradient descent
by: Huh, JE, et al.
Published: (2023)
-
Generalization bounds for label noise stochastic gradient descent
by: Huh, JE, et al.
Published: (2024)
-
Optimal statistical rates for decentralised non-parametric regression with linear speed-up
by: Richards, D, et al.
Published: (2019)