FedEem: a fairness-based asynchronous federated learning mechanism
Abstract Federated learning is a mechanism for model training in distributed systems, aiming to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model synchronously, which may result in a decrease in the overall model update frequency due to lagging participants. To solve this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism, allowing participants to update models at their own time and rate, and then aggregate each updated edge model on the cloud, thus speeding up the training process. However, under the asynchronous aggregation mechanism, federated learning faces new challenges such as convergence difficulties and unfair model accuracy. This paper proposes a fairness-based asynchronous federated learning mechanism, which reduces the adverse effects of device and data heterogeneity on the convergence process through staleness- and interference-aware weight aggregation, and promotes model personalization and fairness through an early exit mechanism. Mathematical analysis derives an upper bound on the convergence speed and necessary conditions on the hyperparameters. Experimental results demonstrate the advantages of the proposed method over baseline algorithms, indicating its effectiveness in promoting convergence speed and fairness in federated learning.
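The staleness-aware asynchronous aggregation the abstract describes can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact formulas: the polynomial decay function, the base mixing rate, and the flattened-parameter representation are all assumptions made for the example.

```python
# Sketch of staleness-weighted asynchronous aggregation: an update from a
# client that pulled the global model several rounds ago is mixed in with a
# smaller weight than a fresh update. Decay and mixing rule are illustrative.

def staleness_weight(staleness: int, a: float = 0.5) -> float:
    """Polynomial staleness decay: weight 1.0 for a fresh update,
    decreasing toward 0 as the update grows more stale."""
    return (1.0 + staleness) ** (-a)

def async_aggregate(global_model, client_update, staleness, base_lr=0.6):
    """Mix one asynchronously arriving client model into the global model.

    global_model, client_update: lists of floats (flattened parameters)
    staleness: number of global rounds since the client pulled the model
    """
    alpha = base_lr * staleness_weight(staleness)
    return [(1 - alpha) * g + alpha * c
            for g, c in zip(global_model, client_update)]

# A stale update (staleness=4) moves the global model less than a fresh one.
g = [0.0, 0.0]
fresh = async_aggregate(g, [1.0, 1.0], staleness=0)
stale = async_aggregate(g, [1.0, 1.0], staleness=4)
```

With `base_lr=0.6`, the fresh update shifts each parameter by 0.6, while the update that is four rounds stale shifts it by only about 0.27, which is how staleness weighting keeps lagging devices from dragging the global model toward outdated directions.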
Main Authors: | Wei Gu, Yifan Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2023-11-01 |
Series: | Journal of Cloud Computing: Advances, Systems and Applications |
Subjects: | Federated learning; AI; Security; Edge computing |
Online Access: | https://doi.org/10.1186/s13677-023-00535-2 |
author | Wei Gu Yifan Zhang |
collection | DOAJ |
description | Abstract Federated learning is a mechanism for model training in distributed systems, aiming to protect data privacy while achieving collective intelligence. In traditional synchronous federated learning, all participants must update the model synchronously, which may result in a decrease in the overall model update frequency due to lagging participants. In order to solve this problem, asynchronous federated learning introduces an asynchronous aggregation mechanism, allowing participants to update models at their own time and rate, and then aggregate each updated edge model on the cloud, thus speeding up the training process. However, under the asynchronous aggregation mechanism, federated learning faces new challenges such as convergence difficulties and unfair model accuracy. This paper first proposes a fairness-based asynchronous federated learning mechanism, which reduces the adverse effects of device and data heterogeneity on the convergence process by using outdatedness and interference-aware weight aggregation, and promotes model personalization and fairness through an early exit mechanism. Mathematical analysis derives the upper bound of convergence speed and the necessary conditions for hyperparameters. Experimental results demonstrate the advantages of the proposed method compared to baseline algorithms, indicating the effectiveness of the proposed method in promoting convergence speed and fairness in federated learning. |
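The early exit mechanism the abstract credits with personalization and fairness can be illustrated with a small sketch. The confidence threshold, the toy exit heads, and the two-stage layout below are illustrative assumptions, not the paper's architecture: the point is only that easy inputs leave the network at a shallow, cheap exit while hard inputs fall through to deeper stages.

```python
# Sketch of inference with early-exit heads: successive classifier heads stand
# in for progressively deeper sub-networks, and prediction stops at the first
# head whose confidence clears a threshold. All names here are illustrative.

import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_with_early_exit(sample, exit_heads, confidence_threshold=0.8):
    """Return (predicted class, exit depth), stopping at the first head
    that is confident enough; the last head always answers."""
    for depth, head in enumerate(exit_heads):
        probs = softmax(head(sample))
        if max(probs) >= confidence_threshold or depth == len(exit_heads) - 1:
            return probs.index(max(probs)), depth

# Toy heads: the shallow head is confident only on the "easy" sample.
shallow = lambda x: [2.0, 0.1] if x == "easy" else [0.1, 0.0]
deep = lambda x: [0.0, 3.0]

easy_pred = predict_with_early_exit("easy", [shallow, deep])  # exits at depth 0
hard_pred = predict_with_early_exit("hard", [shallow, deep])  # falls through
```

In a federated setting this lets weaker devices answer at shallow exits, which is one plausible reading of how early exit supports both personalization and fairness across heterogeneous clients.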
format | Article |
id | doaj.art-b94b8947bc3544d5a43dd590c65a6143 |
institution | Directory Open Access Journal |
issn | 2192-113X |
language | English |
publishDate | 2023-11-01 |
publisher | SpringerOpen |
record_format | Article |
series | Journal of Cloud Computing: Advances, Systems and Applications |
affiliations | Wei Gu: School of Computer Science, Nanjing University of Information Science and Technology; Yifan Zhang: School of Software, Nanjing University of Information Science and Technology |
title | FedEem: a fairness-based asynchronous federated learning mechanism |
topic | Federated learning; AI; Security; Edge computing |
url | https://doi.org/10.1186/s13677-023-00535-2 |