Large language model enhanced with prompt-based vanilla distillation for sentence embeddings
In this dissertation, the prompt-based method PromptEOL is used to fine-tune the opt-2.7b model with a Parameter-Efficient Fine-Tuning (PEFT) method, reducing the number of trainable parameters and GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
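The abstract is truncated here, but the setup it describes (PromptEOL-style prompting of opt-2.7b with a LoRA adapter attached through the PEFT library) can be sketched roughly as below. This is a minimal illustrative sketch, not the thesis code: the Hugging Face checkpoint name, LoRA hyper-parameters, and prompt template are assumptions taken from the publicly documented PromptEOL and PEFT tooling.

```python
# A minimal sketch, not the thesis implementation: PromptEOL-style sentence
# embedding extraction from opt-2.7b with a LoRA adapter (PEFT).
# All hyper-parameters below are illustrative guesses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-2.7b"            # assumed checkpoint for "opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains only low-rank adapter matrices,
# which is what cuts trainable parameters and GPU memory during fine-tuning.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections, a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.eval()

def prompteol_embedding(sentence: str) -> torch.Tensor:
    """Return the hidden state of the final prompt token as the embedding."""
    # PromptEOL wraps the sentence in a one-word summary prompt and reads
    # the representation of the last token as the sentence embedding.
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][0, -1]      # last layer, last token position

embedding = prompteol_embedding("A cat sits on the mat.")
print(embedding.shape)
```

In the distillation stage the abstract mentions, embeddings produced by this LoRA-tuned teacher would supervise a smaller student model; the exact distillation objective is not recoverable from the truncated abstract and is therefore not sketched here.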
Main author: 
Other authors: 
Format: Thesis - Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: 
Online access: https://hdl.handle.net/10356/173839