Decentralised learning with distributed gradient descent and random features

We investigate the generalisation performance of Distributed Gradient Descent with implicit regularisation and random features in the homogeneous setting, where a network of agents is given data sampled independently from the same unknown distribution. Along with reducing the memory footprint, random...
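As a rough illustration of the setting described in the abstract, the sketch below runs decentralised gradient descent with random Fourier features on synthetic homogeneous data, where each agent mixes parameters with its neighbours and takes a local gradient step, relying on early stopping rather than an explicit penalty. The feature map, ring gossip topology, step size, and iteration count are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic homogeneous data: every agent samples from the same distribution.
n_agents, n_local, d, n_features = 5, 50, 3, 100
w_true = rng.normal(size=d)
X = [rng.normal(size=(n_local, d)) for _ in range(n_agents)]
y = [x @ w_true + 0.1 * rng.normal(size=n_local) for x in X]

# Random Fourier features approximating an RBF kernel (assumed feature map).
W = rng.normal(size=(d, n_features))
b = rng.uniform(0, 2 * np.pi, size=n_features)
phi = lambda x: np.sqrt(2.0 / n_features) * np.cos(x @ W + b)
Phi = [phi(x) for x in X]

# Doubly stochastic gossip matrix for a ring of agents (assumed topology).
P = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    P[i, i] = 0.5
    P[i, (i - 1) % n_agents] = 0.25
    P[i, (i + 1) % n_agents] = 0.25

# Distributed gradient descent: gossip averaging plus a local gradient step on
# the squared loss; stopping after a fixed number of iterations acts as the
# implicit regularisation.
theta = np.zeros((n_agents, n_features))
step, n_iters = 0.05, 200
for _ in range(n_iters):
    mixed = P @ theta  # communicate with neighbours
    for i in range(n_agents):
        grad = Phi[i].T @ (Phi[i] @ mixed[i] - y[i]) / n_local
        theta[i] = mixed[i] - step * grad

# Evaluate generalisation on fresh data from the same distribution.
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_true
print("test MSE:", np.mean((phi(X_test) @ theta[0] - y_test) ** 2))
```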


Bibliographic Details

Main Authors: Richards, D, Rebeschini, P, Rosasco, L
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2020