Large language model enhanced with prompt-based vanilla distillation for sentence embeddings
In this dissertation, the prompt-based method PromptEOL is used to train the opt-2.7b model with a Parameter-Efficient Fine-Tuning (PEFT) method, reducing the number of trainable parameters and GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
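The teacher-student distillation described above can be sketched as follows. This is a minimal illustration of a vanilla distillation objective, where the student is trained to reproduce the teacher's sentence embeddings; the embedding values, dimensions, and the mean-squared-error loss are illustrative assumptions, not the dissertation's exact implementation.

```python
# Minimal sketch of vanilla embedding distillation: the student model is
# trained so its sentence embedding matches the teacher's embedding.
# All vectors and values below are illustrative assumptions.

def mse_distillation_loss(teacher_emb, student_emb):
    """Mean squared error between a teacher and a student sentence embedding."""
    assert len(teacher_emb) == len(student_emb)
    return sum((t - s) ** 2 for t, s in zip(teacher_emb, student_emb)) / len(teacher_emb)

# Toy example: a 4-dimensional teacher embedding and the student's current output.
teacher = [0.2, -0.1, 0.5, 0.3]
student = [0.1, 0.0, 0.4, 0.3]
loss = mse_distillation_loss(teacher, student)
```

In practice the teacher (here, the LoRA-tuned opt-2.7b) is frozen, and this loss is backpropagated only through the smaller student model.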
Format: | Thesis-Master by Coursework |
---|---|
Language: | English |
Published: | Nanyang Technological University, 2024 |
Online Access: | https://hdl.handle.net/10356/173839 |