An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against the poisoning attacks prevalent in vanilla FL…
Main Authors: He, Weiyang; Liu, Zizhen; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference Paper
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173117
Similar Items
- BadSFL: backdoor attack in scaffold federated learning
  by: Zhang, Xuanye
  Published: (2024)
- SPFL: a self-purified federated learning method against poisoning attacks
  by: Liu, Zizhen, et al.
  Published: (2024)
- Towards efficient and certified recovery from poisoning attacks in federated learning
  by: Jiang, Yu, et al.
  Published: (2025)
- Personalized federated learning with dynamic clustering and model distillation
  by: Bao, Junyan
  Published: (2025)
- Evaluation of backdoor attacks and defenses to deep neural networks
  by: Ooi, Ying Xuan
  Published: (2024)