Differential Privacy for Deep and Federated Learning: A Survey
Users’ privacy is vulnerable at all stages of the deep learning process. Sensitive information of users may be disclosed during data collection, during training, or even after releasing the trained learning model. Differential privacy (DP) is one of the main approaches proven to ensure strong privacy protection in data analysis. DP protects the users’ privacy by adding noise to the original dataset or the learning parameters. Thus, an attacker could not retrieve the sensitive information of an individual involved in the training dataset. In this survey paper, we analyze and present the main ideas based on DP to guarantee users’ privacy in deep and federated learning. In addition, we illustrate all types of probability distributions that satisfy the DP mechanism, with their properties and use cases. Furthermore, we bridge the gap in the literature by providing a comprehensive overview of the different variants of DP, highlighting their advantages and limitations. Our study reveals the gap between theory and application, accuracy, and robustness of DP. Finally, we provide several open problems and future research directions.
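The noise-addition principle the abstract describes is classically realized by the Laplace mechanism: a randomized mechanism M is ε-differentially private if Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] for any two datasets D, D′ differing in a single record. The sketch below is not code from the survey; the function name and toy data are illustrative only. It shows Laplace noise with scale sensitivity/ε applied to a count query:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Perturb a numeric query result with Laplace noise of scale
    sensitivity / epsilon, the standard calibration for epsilon-DP."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy example: privately release a count query. Adding or removing one
# person changes a count by at most 1, so the sensitivity is 1.
ages = np.array([23, 35, 45, 52, 61])
true_count = int(np.sum(ages > 40))        # 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(noisy_count)   # e.g. 3.87; varies from run to run
```

With ε = 0.5 the noise scale is 2, so the reported count is typically within a few units of the true value; smaller ε means more noise and a stronger privacy guarantee.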
Main Authors: | Ahmed El Ouadrhiri, Ahmed Abdelhadi |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2022-01-01 |
Series: | IEEE Access |
Subjects: | Deep learning; federated learning; privacy protection; differential privacy; probability distribution |
Online Access: | https://ieeexplore.ieee.org/document/9714350/ |
collection | DOAJ |
id | doaj.art-287eff74c26a4c7d994312f2a69b9683 |
issn | 2169-3536 |
doi | 10.1109/ACCESS.2022.3151670 |
volume | 10 |
pages | 22359-22380 |
author_details | Ahmed El Ouadrhiri (https://orcid.org/0000-0002-0750-1954) and Ahmed Abdelhadi, both with the Department of Engineering Technology, University of Houston, Houston, TX, USA |