Differential Privacy for Deep and Federated Learning: A Survey
Users’ privacy is vulnerable at every stage of the deep learning process. Users’ sensitive information may be disclosed during data collection, during training, or even after the trained model is released. Differential privacy (DP) is one of the main approaches proven to ensure st...
| Main Authors: | Ahmed El Ouadrhiri, Ahmed Abdelhadi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2022-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9714350/ |
Similar Items
- Kalman Filter-Based Differential Privacy Federated Learning Method
  by: Xiaohui Yang, et al.
  Published: (2022-08-01)
- PLDP-FL: Federated Learning with Personalized Local Differential Privacy
  by: Xiaoying Shen, et al.
  Published: (2023-03-01)
- FL-ODP: An Optimized Differential Privacy Enabled Privacy Preserving Federated Learning
  by: Maria Iqbal, et al.
  Published: (2023-01-01)
- Hierarchical federated learning with global differential privacy
  by: Youqun Long, et al.
  Published: (2023-04-01)
- Differential Privacy Preservation in Robust Continual Learning
  by: Ahmad Hassanpour, et al.
  Published: (2022-01-01)