Insuring against the perils in distributed learning: privacy-preserving empirical risk minimization

Bibliographic Details
Main Authors: Kwabena Owusu-Agyemang, Zhen Qin, Appiah Benjamin, Hu Xiong, Zhiguang Qin
Format: Article
Language: English
Published: AIMS Press 2021-04-01
Series: Mathematical Biosciences and Engineering
Subjects: internet of things; differential privacy; fully homomorphic encryption; privacy-preserving; secure multi-party computations; human activity recognition
Online Access: https://www.aimspress.com/article/doi/10.3934/mbe.2021151?viewType=HTML
author Kwabena Owusu-Agyemang
Zhen Qin
Appiah Benjamin
Hu Xiong
Zhiguang Qin
collection DOAJ
description Multiple organizations would benefit from collaborative learning models trained over aggregated datasets from various human activity recognition applications without privacy leakage. However, the two prevailing privacy-preserving protocols, secure multi-party computation and differential privacy, still suffer from serious privacy leakages: no privacy guarantee for individual data, and insufficient protection against inference attacks on the resulting models. To mitigate these shortfalls, we propose a privacy-preserving architecture that exploits the complementary strengths of secure multi-party computation and differential privacy. Our differential privacy method builds on output perturbation and gradient perturbation, and we advance both techniques in the distributed learning domain. In the output perturbation algorithm, data owners collaboratively aggregate their locally trained models inside a secure multi-party computation domain and then inject statistical noise before releasing the classifier. In the gradient perturbation algorithm, noise is injected during every iterative update while the parties collaboratively train a global model. The utility guarantee of our gradient perturbation method is determined by an expected curvature relative to the minimum curvature. Using the expected curvature, we theoretically justify the advantage of gradient perturbation in our proposed algorithm, thereby closing an existing gap between theory and practice. Validation on real-world human activity recognition datasets establishes that our protocol incurs minimal computational overhead and provides substantial utility gains under typical security and privacy guarantees.
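To make the contrast between the two mechanisms described in the abstract concrete, here is a minimal sketch in Python/NumPy. It is not the authors' protocol: it assumes logistic-loss empirical risk minimization, uses plaintext averaging in place of the paper's secure multi-party computation step, and takes uncalibrated placeholder noise scales; all function names and parameters (clip_norm, sigma) are illustrative assumptions.

```python
# Illustrative sketch only: differentially private ERM for logistic
# regression, contrasting output perturbation (noise added once, to the
# aggregated model) with gradient perturbation (noise added at every
# update). Noise scales are placeholders, not privacy-accounted values.
import numpy as np

def clip(g, c):
    """Scale gradient g down to L2 norm at most c (bounds sensitivity)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def gradient_perturbation(X, y, epochs=50, lr=0.1, clip_norm=1.0, sigma=1.0):
    """Noisy gradient descent: Gaussian noise enters every iterative
    update, mirroring the gradient-perturbation style of algorithm."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        s = 1.0 / (1.0 + np.exp(y * (X @ w)))          # sigmoid(-y * x.w)
        grad = clip(-(X * (y * s)[:, None]).mean(axis=0), clip_norm)
        w -= lr * (grad + np.random.normal(0.0, sigma * clip_norm / n, size=d))
    return w

def output_perturbation(local_models, sigma=0.1):
    """Average locally trained models (done inside MPC in the paper) and
    perturb the result once, just before the classifier is released."""
    w_avg = np.mean(local_models, axis=0)
    return w_avg + np.random.normal(0.0, sigma, size=w_avg.shape)

# Toy usage with synthetic data and labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w_grad = gradient_perturbation(X, y)
w_out = output_perturbation([gradient_perturbation(X, y, sigma=0.0)
                             for _ in range(3)])
```

The design difference the abstract highlights is visible here: output perturbation pays its privacy cost once on the released model, while gradient perturbation spreads it across every update, which is where the expected-curvature utility analysis enters.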
first_indexed 2024-12-16T13:08:38Z
format Article
id doaj.art-ffd401bfe504449b95a010360e07e316
institution Directory Open Access Journal
issn 1551-0018
language English
last_indexed 2024-12-16T13:08:38Z
publishDate 2021-04-01
publisher AIMS Press
record_format Article
series Mathematical Biosciences and Engineering
spelling Mathematical Biosciences and Engineering, Vol. 18, No. 4 (2021-04-01), pp. 3006-3033; DOI: 10.3934/mbe.2021151. All five authors (Kwabena Owusu-Agyemang, Zhen Qin, Appiah Benjamin, Hu Xiong, Zhiguang Qin) are affiliated with the School of Information and Software Engineering, University of Electronic Science and Technology of China. The remainder of this field duplicates the title, abstract, subjects, and URL recorded above.
title Insuring against the perils in distributed learning: privacy-preserving empirical risk minimization
topic internet of things
differential privacy
fully homomorphic encryption
privacy-preserving
secure multi-party computations
human activity recognition
url https://www.aimspress.com/article/doi/10.3934/mbe.2021151?viewType=HTML