Computation-efficient knowledge distillation via uncertainty-aware mixup
Knowledge distillation (KD) has emerged as an essential technique not only for model compression but also for other learning tasks such as continual learning. Given KD's broader application spectrum and potential online usage, distillation efficiency becomes a pivotal component. In this work...
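The abstract is truncated in this record, but the title conveys the core idea: make distillation cheaper by spending teacher computation only on informative, mixed-up samples. Below is a minimal PyTorch sketch of that idea, assuming student-prediction entropy as the uncertainty score, a keep-ratio that selects the most uncertain samples for mixup, and a standard temperature-scaled KL distillation loss; the function name and all hyperparameters (`keep_ratio`, `alpha`, `temperature`) are illustrative assumptions, not the authors' published method.

```python
# Sketch: uncertainty-aware mixup for computation-efficient KD.
# Assumptions (not from the record): uncertainty = entropy of the
# student's softmax; only the most uncertain samples are mixed and
# sent to the teacher, reducing teacher forward passes per batch.
import torch
import torch.nn.functional as F

def uncertainty_aware_mixup_kd(student, teacher, x, keep_ratio=0.5,
                               alpha=0.4, temperature=4.0):
    """Return a KD loss for one batch x (hard-label loss omitted)."""
    s_logits = student(x)

    # Uncertainty score: entropy of the student's predictions.
    probs = F.softmax(s_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    # Keep only the most uncertain fraction of the batch.
    k = max(1, int(keep_ratio * x.size(0)))
    idx = entropy.topk(k).indices
    x_sel = x[idx]

    # Mixup: blend each selected sample with a shuffled partner, so
    # the teacher sees k mixed inputs instead of the full batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(k, device=x.device)
    x_mix = lam * x_sel + (1.0 - lam) * x_sel[perm]

    with torch.no_grad():
        t_logits = teacher(x_mix)  # the only (reduced) teacher pass

    # Temperature-scaled KL divergence between student and teacher.
    s_mix_logits = student(x_mix)
    kd_loss = F.kl_div(F.log_softmax(s_mix_logits / temperature, dim=1),
                       F.softmax(t_logits / temperature, dim=1),
                       reduction="batchmean") * temperature ** 2
    return kd_loss
```

With `keep_ratio=0.5`, the teacher runs on half as many (mixed) inputs per step, which is where the computation saving in this style of method comes from.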
Main Authors: Xu, Guodong; Liu, Ziwei; Loy, Chen Change
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/172038
Similar Items
- MKD: Mixup-Based Knowledge Distillation for Mandarin End-to-End Speech Recognition
  by: Xing Wu, et al.
  Published: (2022-05-01)
- Edge-computing-based knowledge distillation and multitask learning for partial discharge recognition
  by: Ji, Jinsheng, et al.
  Published: (2024)
- Uncertainty-Aware Knowledge Distillation for Collision Identification of Collaborative Robots
  by: Wookyong Kwon, et al.
  Published: (2021-10-01)
- Inter-region affinity distillation for road marking segmentation
  by: Hou, Yuenan, et al.
  Published: (2022)
- Discriminator-enhanced knowledge-distillation networks
  by: Li, Zhenping, et al.
  Published: (2023)