Efficient and Controllable Model Compression through Sequential Knowledge Distillation and Pruning

Efficient model deployment is a key focus in deep learning. This has led to the exploration of methods such as knowledge distillation and network pruning, which compress models while preserving their performance. In this study, we investigate the potential synergy between knowledge distillation and network...


Bibliographic Details
Main Authors: Leila Malihi, Gunther Heidemann
Format: Article
Language: English
Published: MDPI AG, 2023-09-01
Series: Big Data and Cognitive Computing
Online Access: https://www.mdpi.com/2504-2289/7/3/154