FedTKD: a trustworthy heterogeneous federated learning based on adaptive knowledge distillation

Bibliographic Details
Main Authors: Chen, Leiming, Zhang, Weishan, Dong, Cihao, Zhao, Dehai, Zeng, Xingjie, Qiao, Sibo, Zhu, Yichang, Tan, Chee Wei
Other Authors: School of Civil and Environmental Engineering
Format: Journal Article
Language: English
Published: 2024
Subjects: Engineering; Malicious client identification; Knowledge distillation
Online Access: https://hdl.handle.net/10356/174735
collection NTU
description Federated learning allows multiple parties to jointly train models while protecting user privacy. However, traditional federated learning requires every client to share the same model structure so that a global model can be fused. In real-world scenarios, each client may need a personalized model tailored to its own environment, which makes federated learning difficult in a heterogeneous-model setting. Some knowledge distillation methods address heterogeneous model fusion to some extent, but they assume that every client is trustworthy; in practice, some clients may produce malicious or low-quality knowledge, making it difficult to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose FedTKD, a trustworthy heterogeneous federated learning framework that achieves malicious client identification and trustworthy knowledge fusion. First, we propose a malicious client identification method based on client logit features, which excludes malicious information when fusing the global logits. Then, we propose a selective knowledge fusion method to compute high-quality global logits. Additionally, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server to the clients. Finally, we design different attack and data distribution scenarios to validate our method. The experiments show that our method outperforms the baseline methods, performing stably in all attack scenarios and improving accuracy by 2% to 3% across different data distributions.
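
The description above outlines three components: logit-based malicious client identification, selective fusion of the surviving client logits into global logits, and server-to-client knowledge distillation. As a rough illustration only (this record does not give the paper's exact scoring, fusion, or temperature-adaptation rules), the following self-contained NumPy sketch filters client logits by cosine similarity to the element-wise median logit profile, fuses the trusted ones with similarity weights, and applies a standard temperature-scaled KL distillation loss. The function names, the 0.5 similarity threshold, and the weighting scheme are illustrative assumptions, not the published algorithm.

    # Hypothetical sketch of logit-based filtering + fusion + distillation.
    # NOT the paper's algorithm; thresholds and weights are assumptions.
    import numpy as np

    def fuse_trusted_logits(client_logits, sim_threshold=0.5):
        """client_logits: shape (n_clients, n_samples, n_classes).

        Returns fused global logits of shape (n_samples, n_classes)
        and the indices of clients kept as trustworthy.
        """
        logits = np.asarray(client_logits, dtype=np.float64)
        # Robust reference profile: element-wise median across clients.
        reference = np.median(logits, axis=0).ravel()

        scores = []
        for c in range(logits.shape[0]):
            flat = logits[c].ravel()
            # Cosine similarity between a client's logits and the median.
            sim = flat @ reference / (
                np.linalg.norm(flat) * np.linalg.norm(reference) + 1e-12)
            scores.append(sim)
        scores = np.array(scores)

        trusted = np.where(scores >= sim_threshold)[0]
        if trusted.size == 0:
            # Fall back to the single closest client.
            trusted = np.array([int(scores.argmax())])
            weights = np.ones(1)
        else:
            # Similarity-weighted average over trusted clients only.
            weights = scores[trusted] / scores[trusted].sum()

        fused = np.tensordot(weights, logits[trusted], axes=1)
        return fused, trusted

    def distill_loss(student_logits, global_logits, temperature=2.0):
        """KL(teacher || student) on temperature-softened distributions,
        scaled by T^2 as is conventional for distillation."""
        def softmax(z, t):
            z = z / t
            z = z - z.max(axis=-1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        p = softmax(global_logits, temperature)   # teacher: fused logits
        q = softmax(student_logits, temperature)  # student: local model
        kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
        return (temperature ** 2) * kl.mean()

A local client model would then be trained against the fused logits with, for example, loss = distill_loss(local_logits, fused). Note that FedTKD additionally adapts the distillation per client, which this fixed-temperature sketch does not reproduce.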
id ntu-10356/174735
institution Nanyang Technological University
journal Entropy, volume 26, issue 1, article 96 (2024)
issn 1099-4300
doi 10.3390/e26010096
pmid 38275504
scopus 2-s2.0-85183101588
citation Chen, L., Zhang, W., Dong, C., Zhao, D., Zeng, X., Qiao, S., Zhu, Y. & Tan, C. W. (2024). FedTKD: a trustworthy heterogeneous federated learning based on adaptive knowledge distillation. Entropy, 26(1), 96. https://dx.doi.org/10.3390/e26010096
version Published version
funder Ministry of Education (MOE); Nanyang Technological University
grant RG91/22
funding The work is supported by the Singapore Ministry of Education (AcRF Tier 1 RG91/22 and NTU startup fund), the National Natural Science Foundation of China (No. 62072469), and the China Scholarship Council (No. 202206450035).
rights © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).