Efficient Latent Space Compression for Lightning-Fast Fine-Tuning and Inference of Transformer-Based Models
This paper presents a technique to reduce the number of parameters in a transformer-based encoder–decoder architecture by incorporating autoencoders. To discover the optimal compression, we trained different autoencoders on the embedding space (encoder’s output) of several pre-trained models. The ex...
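The abstract describes training autoencoders on a pre-trained encoder's output embeddings so that downstream components operate on a smaller latent space. Below is a minimal PyTorch sketch of that idea, not the paper's implementation: the 768-dimensional hidden size, 256-dimensional latent size, and single-linear-layer encoder/decoder are illustrative assumptions.

```python
# Minimal sketch (assumed dimensions, not the paper's architecture): an
# autoencoder that compresses per-token hidden states from a pre-trained
# encoder and is trained with a reconstruction objective.
import torch
import torch.nn as nn

class EmbeddingAutoencoder(nn.Module):
    """Compress hidden states to a smaller latent dimension and reconstruct them."""
    def __init__(self, hidden_dim: int = 768, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, latent_dim)   # compress
        self.decoder = nn.Linear(latent_dim, hidden_dim)   # reconstruct

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))

# Stand-in for real encoder output, shape (batch, seq_len, hidden_dim)
hidden_states = torch.randn(8, 128, 768)
ae = EmbeddingAutoencoder()
reconstruction = ae(hidden_states)
loss = nn.functional.mse_loss(reconstruction, hidden_states)
loss.backward()
```

In a setup like this, the compressed latent (here 256-dimensional) would replace the full-width embeddings as the decoder's input, which is where the parameter reduction would come from.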
| Main Authors: | Ala Alam Falaki, Robin Gras |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-07-01 |
| Series: | Machine Learning and Knowledge Extraction |
| Online Access: | https://www.mdpi.com/2504-4990/5/3/45 |
Similar Items

- 2FAST2Q: a general-purpose sequence search and counting program for FASTQ files
  by: Afonso M. Bravo, et al. Published: (2022-10-01)
- Stacked LSTM Sequence-to-Sequence Autoencoder with Feature Selection for Daily Solar Radiation Prediction: A Review and New Modeling Results
  by: Sujan Ghimire, et al. Published: (2022-01-01)
- Abstractive text summarization of low-resourced languages using deep learning
  by: Nida Shafiq, et al. Published: (2023-01-01)
- Sequence-to-Sequence Voice Reconstruction for Silent Speech in a Tonal Language
  by: Huiyan Li, et al. Published: (2022-06-01)
- Prediction accuracy of regulatory elements from sequence varies by functional sequencing technique
  by: Ronald J. Nowling, et al. Published: (2023-08-01)