LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment
Federated learning is a promising approach for training machine learning models using distributed data from multiple mobile devices. However, privacy concerns arise when sensitive data are used for training. In this paper, we discuss the challenges of applying local differential privacy to federated learning, which are compounded by the limited resources of mobile clients and the asynchronicity of federated learning...
Main Authors: | Kijung Jung, Incheol Baek, Soohyung Kim, Yon Dohn Chung |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Direct feedback alignment; federated learning; local differential privacy; privacy-preserving deep learning |
Online Access: | https://ieeexplore.ieee.org/document/10216288/ |
_version_ | 1797739792397500416 |
author | Kijung Jung Incheol Baek Soohyung Kim Yon Dohn Chung |
author_facet | Kijung Jung Incheol Baek Soohyung Kim Yon Dohn Chung |
author_sort | Kijung Jung |
collection | DOAJ |
description | Federated learning is a promising approach for training machine learning models using distributed data from multiple mobile devices. However, privacy concerns arise when sensitive data are used for training. In this paper, we discuss the challenges of applying local differential privacy to federated learning, which are compounded by the limited resources of mobile clients and the asynchronicity of federated learning. To address these challenges, we propose a framework called LAFD, which stands for Local-differentially Private and Asynchronous Federated Learning with Direct Feedback Alignment. LAFD consists of two parts: (a) LFL-DFALS: Local-differentially private Federated Learning with Direct Feedback Alignment and Layer Sampling, and (b) AFL-LMTGR: Asynchronous Federated Learning with Local Model Training and Gradient Rebalancing. LFL-DFALS effectively reduces the computation and communication costs via direct feedback alignment and layer sampling during the training process of federated learning. AFL-LMTGR handles the problem of stragglers via local model training and gradient rebalancing. Local model training enables asynchronous federated learning for the participants, and gradient rebalancing mitigates the gap between the local model and the aggregated model. We demonstrate the performance of LFL-DFALS and AFL-LMTGR through experiments on multivariate datasets and image datasets. |
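The key building block named in the abstract, direct feedback alignment (DFA), replaces backpropagation's transposed-weight error path with a fixed random feedback matrix, so each layer can be updated without waiting for gradients from the layers above. The following is a minimal NumPy sketch of a single DFA training step, not the paper's implementation: the two-layer tanh/linear network, the layer sizes, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP trained with direct feedback alignment (DFA):
# the output error is delivered to the hidden layer through a FIXED
# random feedback matrix B1 instead of the transposed forward weights.
d_in, d_hid, d_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))
B1 = rng.normal(scale=0.1, size=(d_hid, d_out))  # fixed, never trained

def tanh_grad(a):
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y, lr=0.1):
    global W1, W2
    a1 = W1 @ x            # hidden pre-activation
    h1 = np.tanh(a1)
    y_hat = W2 @ h1        # linear output layer
    e = y_hat - y          # output error
    # DFA: project the output error through B1, not W2.T
    delta1 = (B1 @ e) * tanh_grad(a1)
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(delta1, x)
    return float(np.sum(e ** 2))

x = rng.normal(size=d_in)
y = np.array([1.0, 0.0])
losses = [dfa_step(x, y) for _ in range(50)]
```

Because each layer's update depends only on the output error and its own local activations, a client can compute all layer updates in one forward pass, which is what makes the abstract's layer-sampling idea (transmitting only a subset of layers per round) cheap to combine with DFA.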
first_indexed | 2024-03-12T14:03:15Z |
format | Article |
id | doaj.art-ea49f310cfa94e868bf3ba0f1f74ca1c |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-12T14:03:15Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-ea49f310cfa94e868bf3ba0f1f74ca1c; indexed 2023-08-21T23:00:36Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2023-01-01; vol. 11, pp. 86754-86769; DOI 10.1109/ACCESS.2023.3304704; article 10216288. LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment. Kijung Jung (https://orcid.org/0009-0007-6440-1400), Incheol Baek, Soohyung Kim, Yon Dohn Chung (https://orcid.org/0000-0003-2070-5123). Affiliations: Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea (Jung, Baek, Chung); Samsung Research, Samsung Seoul Research and Development Campus, Seoul, Republic of Korea (Kim). Online: https://ieeexplore.ieee.org/document/10216288/. Keywords: direct feedback alignment; federated learning; local differential privacy; privacy-preserving deep learning. |
spellingShingle | Kijung Jung Incheol Baek Soohyung Kim Yon Dohn Chung LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment IEEE Access Direct feedback alignment federated learning local differential privacy privacy-preserving deep learning |
title | LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment |
title_full | LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment |
title_fullStr | LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment |
title_full_unstemmed | LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment |
title_short | LAFD: Local-Differentially Private and Asynchronous Federated Learning With Direct Feedback Alignment |
title_sort | lafd local differentially private and asynchronous federated learning with direct feedback alignment |
topic | Direct feedback alignment federated learning local differential privacy privacy-preserving deep learning |
url | https://ieeexplore.ieee.org/document/10216288/ |
work_keys_str_mv | AT kijungjung lafdlocaldifferentiallyprivateandasynchronousfederatedlearningwithdirectfeedbackalignment AT incheolbaek lafdlocaldifferentiallyprivateandasynchronousfederatedlearningwithdirectfeedbackalignment AT soohyungkim lafdlocaldifferentiallyprivateandasynchronousfederatedlearningwithdirectfeedbackalignment AT yondohnchung lafdlocaldifferentiallyprivateandasynchronousfederatedlearningwithdirectfeedbackalignment |