Differential Privacy Preservation in Robust Continual Learning
Enhancing the privacy of machine learning (ML) algorithms has become crucial given the variety of attacks on AI applications. Continual learning (CL) is a branch of ML that aims to learn knowledge sequentially and continuously from a data stream. On the other hand,...
Main Authors: | Ahmad Hassanpour, Majid Moradikia, Bian Yang, Ahmed Abdelhadi, Christoph Busch, Julian Fierrez |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2022-01-01 |
Series: | IEEE Access |
Subjects: | Differential privacy; continual learning; deep learning; privacy |
Online Access: | https://ieeexplore.ieee.org/document/9721905/ |
_version_ | 1819121205575155712 |
---|---|
author | Ahmad Hassanpour; Majid Moradikia; Bian Yang; Ahmed Abdelhadi; Christoph Busch; Julian Fierrez |
author_facet | Ahmad Hassanpour; Majid Moradikia; Bian Yang; Ahmed Abdelhadi; Christoph Busch; Julian Fierrez |
author_sort | Ahmad Hassanpour |
collection | DOAJ |
description | Enhancing the privacy of machine learning (ML) algorithms has become crucial given the variety of attacks on AI applications. Continual learning (CL) is a branch of ML that aims to learn knowledge sequentially and continuously from a data stream. On the other hand, differential privacy (DP) has been used extensively to enhance the privacy of deep learning (DL) models. However, adding DP to CL is challenging: on one hand, DP intrinsically adds noise that reduces utility; on the other hand, the endless learning procedure of CL is a serious obstacle, resulting in catastrophic forgetting (CF) of previous samples of the ongoing stream. To add DP to CL, we propose a methodology by which we can not only strike a tradeoff between privacy and utility, but also mitigate CF. The proposed solution presents a set of key features: (1) it guarantees theoretical privacy bounds by enforcing the DP principle; (2) it incorporates a robust procedure into the proposed DP-CL scheme to hinder CF; and (3) most importantly, it achieves practical continuous training for a CL process without running out of the available privacy budget. Through extensive empirical evaluation and analyses on benchmark datasets, we validate the efficacy of the proposed solution. |
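The abstract refers to DP training of deep models, where noise is intrinsically added at the cost of utility. As a purely illustrative sketch (not the paper's actual algorithm), a common DP-SGD-style update clips each per-sample gradient and adds Gaussian noise calibrated to the clipping norm; the function name and parameter values below are assumptions for illustration:

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD-style update: clip each per-sample gradient to clip_norm,
    average, add Gaussian noise scaled to the clipping norm, then descend."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm;
        # this bounds each sample's influence (the sensitivity).
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the per-example
    # sensitivity, clip_norm / batch size.
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * noisy_grad
```

The noise magnitude controls the privacy/utility tradeoff the abstract mentions: a larger `noise_multiplier` gives stronger privacy guarantees but a noisier, less useful gradient.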
first_indexed | 2024-12-22T06:32:52Z |
format | Article |
id | doaj.art-af10a6fea2fa4e74b318c4e469dbb6c4 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-22T06:32:52Z |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-af10a6fea2fa4e74b318c4e469dbb6c4 2022-12-21T18:35:39Z eng IEEE IEEE Access 2169-3536 2022-01-01, vol. 10, pp. 24273-24287, doi:10.1109/ACCESS.2022.3154826, article 9721905. Differential Privacy Preservation in Robust Continual Learning. Ahmad Hassanpour (https://orcid.org/0000-0002-3936-2223), Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), Gjøvik, Norway; Majid Moradikia, Engineering Technology Department, University of Houston, Houston, TX, USA; Bian Yang, Department of Information Security and Communication Technology, NTNU, Gjøvik, Norway; Ahmed Abdelhadi, Engineering Technology Department, University of Houston, Houston, TX, USA; Christoph Busch (https://orcid.org/0000-0002-9159-2923), Department of Information Security and Communication Technology, NTNU, Gjøvik, Norway; Julian Fierrez (https://orcid.org/0000-0002-6343-5656), School of Engineering, Universidad Autonoma de Madrid, Madrid, Spain. Online access: https://ieeexplore.ieee.org/document/9721905/. Keywords: Differential privacy; continual learning; deep learning; privacy |
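The abstract's third key feature is continuous training without running out of the available privacy budget. Under basic sequential composition (a simplifying assumption here; the paper may use a tighter accounting method), the privacy cost epsilon of each DP training step simply adds up, so a trainer can track what remains. The helper below is hypothetical:

```python
def remaining_budget(total_epsilon, step_epsilons):
    """Basic sequential composition: the epsilon spent by each successive
    DP training step accumulates; raise once the total budget is exceeded."""
    spent = sum(step_epsilons)
    if spent > total_epsilon:
        raise ValueError("privacy budget exhausted")
    return total_epsilon - spent

# Example: a budget of 1.0 after four steps of epsilon 0.2 each leaves ~0.2,
# so a naive accountant would allow only one more such step.
```

This illustrates why endless CL training is hard to reconcile with DP: under plain composition the budget is finite, so a scheme must either stop or spend epsilon more cleverly, which is the problem the paper addresses.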
title | Differential Privacy Preservation in Robust Continual Learning |
topic | Differential privacy; continual learning; deep learning; privacy |
url | https://ieeexplore.ieee.org/document/9721905/ |