Decentralised learning with distributed gradient descent and random features

We investigate the generalisation performance of Distributed Gradient Descent with implicit regularisation and random features in the homogeneous setting, where a network of agents is given data sampled independently from the same unknown distribution. Along with reducing the memory footprint, random...
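As a rough illustration of the ingredients named in the abstract, the sketch below combines random Fourier features with gradient descent stopped early (implicit regularisation, no explicit penalty) and a one-shot averaging of agents' parameters. This is a hypothetical stand-in, not the paper's consensus-based Distributed Gradient Descent scheme; all names and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the paper's exact protocol): each of the
# n_agents fits a random-features model by gradient descent on its own
# data; the trained parameters are then averaged once.
n_agents, n_per_agent, d, n_features = 4, 50, 5, 100

# Shared random-feature map phi(x) = cos(Wx + b) (random Fourier features)
W = rng.normal(size=(n_features, d))
b = rng.uniform(0, 2 * np.pi, size=n_features)

def phi(X):
    return np.cos(X @ W.T + b) * np.sqrt(2.0 / n_features)

# All agents draw i.i.d. data from the same distribution (homogeneous setting)
w_true = rng.normal(size=d)

def sample(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_gd(X, y, steps=200, lr=0.5):
    # Plain gradient descent on the squared loss; stopping after a fixed
    # number of steps acts as implicit regularisation.
    Z = phi(X)
    theta = np.zeros(n_features)
    for _ in range(steps):
        grad = Z.T @ (Z @ theta - y) / len(y)
        theta -= lr * grad
    return theta

# Each agent trains locally; parameters are averaged (one-shot averaging)
thetas = [local_gd(*sample(n_per_agent)) for _ in range(n_agents)]
theta_avg = np.mean(thetas, axis=0)

# Evaluate the averaged model on fresh test data
X_test, y_test = sample(500)
mse = np.mean((phi(X_test) @ theta_avg - y_test) ** 2)
print(f"test MSE of averaged model: {mse:.3f}")
```

The memory saving mentioned in the abstract comes from working with `n_features` random features instead of the full kernel matrix over all samples.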


Bibliographic details

Main authors: Richards, D; Rebeschini, P; Rosasco, L
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2020