Training Vision Transformers in Federated Learning with Limited Edge-Device Resources
Vision transformers (ViTs) demonstrate exceptional performance on numerous computer vision tasks owing to their self-attention modules. Despite this improved performance, transformers typically require significant computational resources. The increasing need for data privacy has encouraged the d...
| Main Authors: | Jiang Tao, Zhen Gao, Zhaohui Guo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-08-01 |
| Series: | Electronics |
| Online Access: | https://www.mdpi.com/2079-9292/11/17/2638 |
Similar Items
- Federated Distillation Methodology for Label-Based Group Structures
  by: Geonhee Yang, et al.
  Published: (2023-12-01)
- Federated Learning for Diabetic Retinopathy Detection Using Vision Transformers
  by: Mohamed Chetoui, et al.
  Published: (2023-11-01)
- A Decentralized Federated Learning Based on Node Selection and Knowledge Distillation
  by: Zhongchang Zhou, et al.
  Published: (2023-07-01)
- FedDK: Improving Cyclic Knowledge Distillation for Personalized Healthcare Federated Learning
  by: Yikai Xu, et al.
  Published: (2023-01-01)
- A Personalized Federated Learning Method Based on Clustering and Knowledge Distillation
  by: Jianfei Zhang, et al.
  Published: (2024-02-01)