FedTKD: A Trustworthy Heterogeneous Federated Learning Based on Adaptive Knowledge Distillation

Federated learning allows multiple parties to train models while jointly protecting user privacy. However, traditional federated learning requires each client to have the same model structure in order to fuse a global model. In real-world scenarios, each client may need a personalized model suited to its environment, which makes federated learning difficult in a heterogeneous model setting. Some knowledge distillation methods address heterogeneous model fusion to an extent, but they assume every client is trustworthy; clients that produce malicious or low-quality knowledge make it difficult to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) that achieves client identification and trustworthy knowledge fusion. First, we propose a malicious client identification method based on client logit features, which excludes malicious information when fusing the global logit. Second, we propose a selective knowledge fusion method to compute a high-quality global logit. Third, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. Experiments show that our method outperforms the baselines, maintaining stable performance in all attack scenarios and achieving an accuracy improvement of 2% to 3% across different data distributions.
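The description outlines three algorithmic components: logit-based malicious client identification, selective fusion of client logits into a global logit, and adaptive distillation of that global knowledge back to each client. The record does not give the paper's exact formulations; the following minimal Python sketch illustrates one plausible reading of the first two steps, assuming clients upload logits computed on a shared public proxy dataset. The function name and the z-score filtering rule are hypothetical, not the published method.

import numpy as np

def filter_and_fuse_logits(client_logits, z_thresh=2.0):
    """Illustrative sketch: flag clients whose logit matrices are outliers
    relative to the median client, then average only the trusted ones.

    client_logits: list of (num_samples, num_classes) arrays, one per
    client, computed on a shared public proxy dataset (an assumption).
    """
    stacked = np.stack(client_logits)        # (num_clients, N, C)
    median = np.median(stacked, axis=0)      # robust reference logit
    # One scalar "logit feature" per client: distance to the median.
    dists = np.linalg.norm(stacked - median, axis=(1, 2))
    # Standardize distances; large z-scores suggest malicious or
    # low-quality knowledge (this z-score test is a stand-in heuristic).
    z = (dists - dists.mean()) / (dists.std() + 1e-8)
    trusted = z < z_thresh
    # Selective fusion: the global logit averages trusted clients only.
    global_logit = stacked[trusted].mean(axis=0)
    return global_logit, trusted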

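For the third component, adaptive knowledge distillation from the fused global logit to each client model, a common formulation combines a cross-entropy term on local labels with a temperature-scaled KL term toward the global logits. The adaptive weighting below (trusting the global knowledge in proportion to its accuracy on the client's labels) is an assumed heuristic for illustration, not the paper's published loss.

import torch
import torch.nn.functional as F

def adaptive_kd_loss(student_logits, global_logits, labels, temperature=3.0):
    """Illustrative sketch of a server-to-client distillation objective."""
    ce = F.cross_entropy(student_logits, labels)
    # Standard temperature-scaled distillation term toward the fused logits.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(global_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Assumed adaptive weight: rely on the global knowledge more when it
    # agrees with this client's labels.
    with torch.no_grad():
        teacher_acc = (global_logits.argmax(dim=1) == labels).float().mean()
    alpha = teacher_acc.item()
    return (1 - alpha) * ce + alpha * kl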

Bibliographic Details
Main Authors: Leiming Chen, Weishan Zhang, Cihao Dong, Dehai Zhao, Xingjie Zeng, Sibo Qiao, Yichang Zhu, Chee Wei Tan
Format: Article
Language: English
Published: MDPI AG, 2024-01-01
Series: Entropy, Vol. 26, Issue 1, Article 96
ISSN: 1099-4300
DOI: 10.3390/e26010096
Subjects: heterogeneous federated learning; adaptive knowledge distillation; malicious client identification; trustworthy knowledge aggregation
Online Access: https://www.mdpi.com/1099-4300/26/1/96
Author Affiliations:
Leiming Chen: School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Weishan Zhang: School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Cihao Dong: School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Dehai Zhao: CSIRO Data61, Sydney 2015, Australia
Xingjie Zeng: School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
Sibo Qiao: School of Software, Tiangong University, Tianjin 300387, China
Yichang Zhu: School of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Chee Wei Tan: School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore