Large language model enhanced with prompt-based vanilla distillation for sentence embeddings
In this dissertation, the prompt-based method PromptEOL is used to train the OPT-2.7B model with a parameter-efficient fine-tuning method to reduce the number of trainable parameters and the GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
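As a rough illustration of the setup the abstract describes (not the author's actual code), the sketch below shows how OPT-2.7B might be wrapped with LoRA adapters via the Hugging Face PEFT library and how a PromptEOL-style template can yield a sentence embedding from the last token's hidden state; the prompt wording, LoRA rank, and target modules are illustrative assumptions.

```python
# Minimal sketch, assuming Hugging Face transformers + peft; hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# LoRA freezes the base weights and trains only small low-rank adapter matrices,
# which is how parameter-efficient fine-tuning cuts trainable parameters and GPU memory.
lora_config = LoraConfig(
    r=8,                                   # assumed rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections for OPT
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# PromptEOL-style template: the hidden state of the final token summarises the sentence.
prompt = 'This sentence : "A man is playing the guitar." means in one word:"'
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
sentence_embedding = outputs.hidden_states[-1][:, -1, :]  # last-token representation
```

Such embeddings from the fine-tuned opt-2.7b-lora model would then serve as teacher targets when distilling into a smaller student model, as outlined in the abstract.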
Main Author: Wang, Minghao
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/173839
Similar Items
- Composition distillation for semantic sentence embeddings
  by: Vaanavan, Sezhiyan
  Published: (2024)
- On-the-fly knowledge distillation model for sentence embedding
  by: Zhu, Xuchun
  Published: (2024)
- Prefix Data Augmentation for Contrastive Learning of Unsupervised Sentence Embedding
  by: Chunchun Wang, et al.
  Published: (2024-03-01)
- Extracting Sentence Embeddings from Pretrained Transformer Models
  by: Lukas Stankevičius, et al.
  Published: (2024-10-01)
- Simple Data Transformations for Mitigating the Syntactic Similarity to Improve Sentence Embeddings at Supervised Contrastive Learning
  by: Minji Kim, et al.
  Published: (2024-08-01)