Computation-efficient knowledge distillation via uncertainty-aware mixup

Knowledge distillation (KD) has emerged as an essential technique not only for model compression but also for other learning tasks such as continual learning. Given this richer application spectrum and the potential online usage of KD, distillation efficiency becomes a pivotal concern. In this w...
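The abstract is cut off before it describes the method, but the title names its two ingredients: knowledge distillation and uncertainty-aware mixup. The sketch below is a minimal illustration of how such pieces could be combined, not the authors' released implementation; it assumes a PyTorch setup, and all names (`distillation_loss`, `uncertainty_aware_mixup`, the Beta parameter `alpha`) are hypothetical.

```python
# Minimal, illustrative sketch of KD combined with an uncertainty-aware
# mixup step. Hypothetical names throughout; not the paper's actual code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Classic soft-label KD loss (Hinton et al., 2015): KL divergence
    # between temperature-softened teacher and student distributions.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def uncertainty_aware_mixup(x, teacher_logits, alpha=0.4):
    # Rank samples by the entropy of the teacher's prediction (a common
    # uncertainty proxy), then mix each uncertain sample with a confident
    # one. A mixed image carries signal from two originals, so the
    # networks process half as many inputs per update.
    probs = F.softmax(teacher_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    order = entropy.argsort(descending=True)   # most uncertain first
    half = x.size(0) // 2
    uncertain, confident = order[:half], order[half:2 * half]
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)                  # keep uncertain sample dominant
    x_mixed = lam * x[uncertain] + (1.0 - lam) * x[confident]
    return x_mixed, lam
```

In a training loop, one would forward `x_mixed` through both teacher and student and apply `distillation_loss` to the resulting logits; the efficiency gain in a scheme like this comes from running the networks on the mixed half-size batch rather than on every original image.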

Bibliographic Details
Main Authors: Xu, Guodong; Liu, Ziwei; Loy, Chen Change
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/172038
