Wasserstein Distance-Based Deep Leakage from Gradients

Bibliographic Details
Main Authors: Zifan Wang, Changgen Peng, Xing He, Weijie Tan
Format: Article
Language: English
Published: MDPI AG, 2023-05-01
Series: Entropy
Subjects: Wasserstein distance; gradient; inversion; image reconstruction
Online Access: https://www.mdpi.com/1099-4300/25/5/810
collection DOAJ
description Federated learning protects the private information in a dataset by sharing only the averaged gradient. However, the “Deep Leakage from Gradients” (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, causing private information leakage. The original algorithm, however, suffers from slow model convergence and poor accuracy of the reconstructed images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. WDLG uses the Wasserstein distance as its training loss function to improve both reconstructed image quality and model convergence. The otherwise hard-to-compute Wasserstein distance is made iteratively computable by way of the Lipschitz condition and the Kantorovich–Rubinstein duality. Theoretical analysis establishes the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that the WDLG algorithm outperforms DLG in training speed and reconstructed image quality. The experiments also show that differential privacy can be used for perturbation-based protection, suggesting directions for the development of a privacy-preserving deep learning framework.
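The abstract above centers on the Wasserstein distance as a training loss. As a hedged illustration (this is not the paper's implementation, which estimates the distance through the Kantorovich–Rubinstein dual under a Lipschitz constraint so it scales to high-dimensional gradients), the Wasserstein-1 distance between two equal-size one-dimensional empirical samples has a simple closed form: match the samples in sorted order and average the absolute differences.

```python
# Illustrative sketch only: Wasserstein-1 distance between two
# equal-size 1-D empirical samples. In one dimension the optimal
# transport plan matches sorted samples, so W1 is the mean absolute
# difference after sorting. (WDLG instead approximates the distance
# via the Kantorovich-Rubinstein dual with a Lipschitz constraint.)

def wasserstein1(xs, ys):
    assert len(xs) == len(ys), "equal-size samples assumed"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein1([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # prints 1.0: a shift by 1
print(wasserstein1([2.0, 0.0], [0.0, 2.0]))            # prints 0.0: same distribution
```

The sorted-matching shortcut is what makes the 1-D case cheap; in high dimensions no such closed form exists, which is why dual (critic-based) estimates are used instead.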
first_indexed 2024-03-11T03:45:49Z
format Article
id doaj.art-8009896dbd2c4ee19bdbb0189b05700b
institution Directory Open Access Journal
issn 1099-4300
language English
last_indexed 2024-03-11T03:45:49Z
publishDate 2023-05-01
publisher MDPI AG
record_format Article
series Entropy
spelling doaj.art-8009896dbd2c4ee19bdbb0189b05700b 2023-11-18T01:16:42Z
Published: Entropy (MDPI AG), Vol. 25, Iss. 5, Art. 810, 2023-05-01; ISSN 1099-4300; DOI 10.3390/e25050810; Language: English
Title: Wasserstein Distance-Based Deep Leakage from Gradients
Authors: Zifan Wang, Changgen Peng, Xing He, Weijie Tan (all: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China)
Online Access: https://www.mdpi.com/1099-4300/25/5/810
Keywords: Wasserstein distance; gradient; inversion; image reconstruction
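To see why the gradients shared in federated learning can leak training data at all, which is the premise of DLG-style attacks described in the record's abstract, consider the smallest possible case. This is a minimal sketch under stated assumptions (a scalar-output linear model with a bias and one training sample; all names are illustrative, not the paper's code): the weight gradient is the input scaled by the bias gradient, so the input is recovered exactly by one division. Real DLG attacks instead optimize dummy inputs so their gradients match the shared gradients of a deep network.

```python
# Minimal sketch of gradient leakage for a linear model with a bias:
#   loss = (w.x + b - y)^2
# gives dL/dw = 2*e*x and dL/db = 2*e, where e = w.x + b - y, so a
# single sample's input x is recovered as (dL/dw) / (dL/db).
# Names are hypothetical; this is not the WDLG paper's implementation.

def gradients(w, b, x, y):
    e = sum(wi * xi for wi, xi in zip(w, x)) + b - y  # residual
    return [2 * e * xi for xi in x], 2 * e            # dL/dw, dL/db

w, b = [0.3, -0.7, 1.1], 0.2      # model parameters the attacker also sees
x, y = [4.0, -1.5, 2.25], 0.5     # private training sample
gw, gb = gradients(w, b, x, y)    # what federated learning would share
recovered = [gi / gb for gi in gw]
print(recovered)                  # matches x up to floating-point error
```

Deep networks do not admit such a closed-form inversion, which is why DLG and WDLG cast reconstruction as an optimization problem over dummy inputs, and why the choice of matching loss (Euclidean in DLG, Wasserstein in WDLG) matters.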
title Wasserstein Distance-Based Deep Leakage from Gradients
topic Wasserstein distance
gradient
inversion
image reconstruction
url https://www.mdpi.com/1099-4300/25/5/810