A federated learning scheme meets dynamic differential privacy

Abstract Federated learning has become a widely used distributed learning approach in recent years. However, although it replaces the collection of raw data with the gathering of model parameters, privacy violations may still occur when models are published and shared. A dynamic approach is proposed to add Gaussian noise more effectively and to apply differential privacy to federated deep learning. Concretely, it abandons the traditional practice of distributing the privacy budget ϵ equally across rounds and instead adjusts the budget dynamically to suit gradient-descent federated learning, with the relevant parameters derived by computation so that manually chosen hyperparameters do not affect the algorithm. The method also incorporates adaptive threshold clipping to control the sensitivity. Finally, the moments accountant is used to track the ϵ consumed on privacy preservation, and learning stops only when the ϵ_total set by the clients is reached, allowing the privacy budget to be fully exploited for model training. Experimental results on real datasets show that the proposed method trains models nearly as well as non-private learning and significantly outperforms the differential privacy method provided by TensorFlow.

Bibliographic Details
Main Authors: Shengnan Guo, Xibin Wang, Shigong Long, Hai Liu, Liu Hai, Toong Hai Sam
Format: Article
Language:English
Published: Wiley 2023-09-01
Series:CAAI Transactions on Intelligence Technology
Subjects: data privacy, machine learning, security of data
Online Access:https://doi.org/10.1049/cit2.12187
author Shengnan Guo
Xibin Wang
Shigong Long
Hai Liu
Liu Hai
Toong Hai Sam
collection DOAJ
description Abstract Federated learning has become a widely used distributed learning approach in recent years. However, although it replaces the collection of raw data with the gathering of model parameters, privacy violations may still occur when models are published and shared. A dynamic approach is proposed to add Gaussian noise more effectively and to apply differential privacy to federated deep learning. Concretely, it abandons the traditional practice of distributing the privacy budget ϵ equally across rounds and instead adjusts the budget dynamically to suit gradient-descent federated learning, with the relevant parameters derived by computation so that manually chosen hyperparameters do not affect the algorithm. The method also incorporates adaptive threshold clipping to control the sensitivity. Finally, the moments accountant is used to track the ϵ consumed on privacy preservation, and learning stops only when the ϵ_total set by the clients is reached, allowing the privacy budget to be fully exploited for model training. Experimental results on real datasets show that the proposed method trains models nearly as well as non-private learning and significantly outperforms the differential privacy method provided by TensorFlow.
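The scheme outlined in the abstract (non-uniform per-round ϵ allocation, adaptive clipping to bound sensitivity, Gaussian noise, and stopping once the total budget is spent) could be sketched roughly as follows. This is an illustrative sketch only, not the authors' algorithm: the function names, the geometrically decaying budget schedule, the quantile-based clipping rule, and the use of plain sequential composition in place of the moments accountant are all assumptions made here for clarity.

```python
import numpy as np

def adaptive_clip_threshold(grad_norms, quantile=0.5):
    # Adaptive clipping: derive the threshold from the observed
    # gradient-norm distribution instead of a hand-set constant.
    return float(np.quantile(grad_norms, quantile))

def dp_federated_round(client_grads, epsilon_round, delta=1e-5):
    # One aggregation round: clip each client update, average,
    # then add Gaussian noise calibrated to the clipping bound C.
    norms = [np.linalg.norm(g) for g in client_grads]
    C = adaptive_clip_threshold(norms)
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in client_grads]
    # Standard analytic Gaussian-mechanism calibration.
    sigma = C * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon_round
    agg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, sigma / len(client_grads), size=agg.shape)
    return agg + noise, C

def train(eps_total, n_rounds_max, make_grads, decay=0.9):
    # Dynamic (non-uniform) budget: a geometrically decaying per-round
    # epsilon whose infinite sum equals eps_total; training stops as soon
    # as the next round would overspend the total budget. Plain sequential
    # composition stands in for the paper's moments accountant.
    spent, eps_r, rounds = 0.0, eps_total * (1.0 - decay), 0
    updates = []
    while rounds < n_rounds_max and spent + eps_r <= eps_total:
        update, _ = dp_federated_round(make_grads(), eps_r)
        updates.append(update)
        spent += eps_r
        eps_r *= decay
        rounds += 1
    return updates, spent
```

With `decay=0.9` the per-round budget starts at 0.1·ϵ_total and shrinks each round, so early rounds (where gradients are largest) receive the most budget and the cumulative spend never exceeds ϵ_total.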
first_indexed 2024-03-12T00:07:43Z
format Article
id doaj.art-feeb1f2a28144cef86de0f23547d9aa5
institution Directory Open Access Journal
issn 2468-2322
language English
last_indexed 2024-03-12T00:07:43Z
publishDate 2023-09-01
publisher Wiley
record_format Article
series CAAI Transactions on Intelligence Technology
spelling doaj.art-feeb1f2a28144cef86de0f23547d9aa5 (2023-09-16T16:19:35Z)
Wiley, CAAI Transactions on Intelligence Technology, ISSN 2468-2322, 2023-09-01, vol. 8, no. 3, pp. 1087-1100, https://doi.org/10.1049/cit2.12187
A federated learning scheme meets dynamic differential privacy
Shengnan Guo: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
Xibin Wang: School of Big Data, Key Laboratory of Electric Power Big Data of Guizhou Province, Guizhou Institute of Technology, Guiyang, China
Shigong Long: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
Hai Liu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
Liu Hai: School of Information, Guizhou University of Finance and Economics, Guiyang, China
Toong Hai Sam: Faculty of Business and Communication, INTI International University, Nilai, Malaysia
title A federated learning scheme meets dynamic differential privacy
topic data privacy
machine learning
security of data
url https://doi.org/10.1049/cit2.12187