Model Fusion from Unauthorized Clients in Federated Learning

Bibliographic Details
Main Authors: Boyuan Li, Shengbo Chen, Keping Yu
Format: Article
Language: English
Published: MDPI AG 2022-10-01
Series: Mathematics
Online Access:https://www.mdpi.com/2227-7390/10/20/3751
Description
Summary: A key feature of federated learning (FL) is that not all clients participate in every communication epoch of each global model update. The rationale for such partial client selection is largely to reduce communication overhead. In many cases, however, the unselected clients are still able to compute their local model updates, but are not "authorized" to upload them in that round, which wastes computation capacity. In this work, we propose FedUmf (Federated Learning with Unauthorized Model Fusion), an algorithm that utilizes the model updates from the unselected clients. More specifically, a client computes its stochastic gradient descent (SGD) update even if it is not selected to upload in the current communication epoch. Then, if the client is selected in the next round, it non-trivially merges the outdated SGD update stored from the previous round with the current global model before it starts to compute its new local model. A rigorous convergence analysis is established for FedUmf, which shows a faster convergence rate than vanilla FedAvg. Comprehensive numerical experiments on several standard classification tasks demonstrate its advantages and corroborate the theoretical results.
ISSN:2227-7390
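
To make the mechanism in the summary concrete, here is a minimal client-side sketch in PyTorch. The paper's "non-trivial" merge rule is not reproduced here, so the simple weighted fusion below, together with the names FedUmfClient, local_step, stale_update, and merge_weight, are illustrative assumptions rather than the authors' implementation.

```python
import torch

# Sketch of the FedUmf client-side idea from the summary above.
# ASSUMPTION: the paper's "non-trivial" merge is replaced by a simple
# weighted fusion; all names here are hypothetical, not from the paper.

def local_step(model, data, target, loss_fn, lr=0.01):
    """One local SGD step; returns the parameter update (-lr * gradient)."""
    model.zero_grad()
    loss_fn(model(data), target).backward()
    return [-lr * p.grad.detach().clone() for p in model.parameters()]

class FedUmfClient:
    def __init__(self, model):
        self.model = model
        self.stale_update = None  # update computed while not authorized to upload

    def round(self, global_params, selected, batch, loss_fn, merge_weight=0.5):
        data, target = batch
        with torch.no_grad():
            # Load the freshly broadcast global model.
            for p, g in zip(self.model.parameters(), global_params):
                p.copy_(g)
            # If selected now and an unauthorized update was stored last
            # round, fuse it into the global model before the new local step
            # (assumed weighted rule; the paper's exact rule differs).
            if selected and self.stale_update is not None:
                for p, u in zip(self.model.parameters(), self.stale_update):
                    p.add_(merge_weight * u)
                self.stale_update = None
        update = local_step(self.model, data, target, loss_fn)
        if selected:
            return update             # uploaded to the server this round
        self.stale_update = update    # kept for fusion if selected next round
        return None
```

The point of the sketch is the bookkeeping, not the specific merge: an unselected client's computation is stored rather than discarded and is fused into the next global model it receives, which is the reuse the summary credits for the faster convergence relative to FedAvg.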