An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks

While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against the poisoning attacks prevalent in vanilla FL. From recent countermeasures built around KD, we conjecture that the way knowledge is distilled from the global model to the local models, and the type of knowledge transferred by KD, themselves offer some resilience against targeted poisoning attacks in FL. To test this hypothesis, we systematize various adversary-agnostic state-of-the-art KD-based FL algorithms and evaluate their resistance to different targeted poisoning attacks on two vision recognition tasks. Our empirical security-utility trade-off study indicates surprisingly good inherent immunity of certain KD-based FL algorithms that were not designed to mitigate these attacks. By probing into the causes of their robustness, our exploration of the KD design space provides further insights into balancing the security, privacy and efficiency triad in different FL settings.
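For readers unfamiliar with the mechanism the abstract refers to, knowledge distillation transfers knowledge by having a local (student) model match the temperature-softened output distribution of the global (teacher) model. The following is a minimal NumPy sketch of the standard distillation loss; it is illustrative background only, not code from the paper, and the function names are made up for this example:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the global (teacher) model
    q = softmax(student_logits, T)  # local (student) model predictions
    eps = 1e-12  # guard against log(0)
    return float(T * T * np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Identical logits give zero loss; disagreeing logits give a positive loss.
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

Because only these softened output distributions, rather than raw model weights, are exchanged, a poisoned local update influences the global model less directly, which is one intuition behind the resilience the study examines.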


Bibliographic Details
Main Authors: He, Weiyang; Liu, Zizhen; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference Paper
Language: English
Published: 2024
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Federated Learning; Knowledge Distillation; Backdoor Attacks
Online Access:https://hdl.handle.net/10356/173117
Description: While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against the poisoning attacks prevalent in vanilla FL. From recent countermeasures built around KD, we conjecture that the way knowledge is distilled from the global model to the local models, and the type of knowledge transferred by KD, themselves offer some resilience against targeted poisoning attacks in FL. To test this hypothesis, we systematize various adversary-agnostic state-of-the-art KD-based FL algorithms and evaluate their resistance to different targeted poisoning attacks on two vision recognition tasks. Our empirical security-utility trade-off study indicates surprisingly good inherent immunity of certain KD-based FL algorithms that were not designed to mitigate these attacks. By probing into the causes of their robustness, our exploration of the KD design space provides further insights into balancing the security, privacy and efficiency triad in different FL settings.
Institution: Nanyang Technological University
Conference: 2023 IEEE 32nd Asian Test Symposium (ATS)
Funding Agency: National Research Foundation (NRF)
Version: Submitted/Accepted version
Funding Statement: This research is supported by the National Research Foundation, Singapore, and the Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme, NRF2018NCRNCR009-0001). This work is also supported in part by the National Key Research and Development Program of China under grant No. 2020YFB1600201, the National Natural Science Foundation of China (NSFC) under grant Nos. U20A20202, 62090024 and 61876173, and the Youth Innovation Promotion Association CAS.
Deposited: 2024-01-12
Citation: He, W., Liu, Z. & Chang, C. H. (2023). An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks. 2023 IEEE 32nd Asian Test Symposium (ATS). https://dx.doi.org/10.1109/ATS59501.2023.10317993
ISBN: 9798350303100
ISSN: 2377-5386
DOI: 10.1109/ATS59501.2023.10317993
Scopus ID: 2-s2.0-85179180717
Grant: NRF2018NCRNCR009-0001
Rights: © 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/ATS59501.2023.10317993.