Compressing BERT for Binary Text Classification via Adaptive Truncation before Fine-Tuning

Large-scale pre-trained language models such as BERT have substantially improved text classification performance. However, their large size can make fine-tuning and inference prohibitively slow. To alleviate this, various compression methods have been proposed; however, most of these m...


Bibliographic Details
Main Authors: Xin Zhang, Jing Fan, Mengzhe Hei
Format: Article
Language: English
Published: MDPI AG, 2022-11-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/12/23/12055