PET: Parameter-efficient Knowledge Distillation on Transformer.

Given a large Transformer model, how can we obtain a small and computationally efficient model that maintains the performance of the original model? Transformers have shown significant performance improvements on many NLP tasks in recent years. However, their large size, expensive computational cost...
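
The abstract is truncated in this record, but the title places the work in knowledge distillation for Transformer compression. As background only (this is not the paper's PET method), below is a minimal sketch of the standard soft-target distillation objective in PyTorch; the function name and the hyperparameters `T` and `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation loss (Hinton-style), not the PET objective.

    T: softmax temperature; alpha: weight on the soft-target term.
    """
    # Soften both distributions with temperature T; scale by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard hard-label task loss on the student's predictions.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```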

Bibliographic Details
Main Authors: Hyojin Jeon, Seungcheol Park, Jin-Gee Kim, U Kang
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2023-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0288060