Large language model enhanced with prompt-based vanilla distillation for sentence embeddings

In this dissertation, the prompt-based method PromptEOL is used to train the opt-2.7b model with a Parameter-Efficient Fine-Tuning (PEFT) method, reducing the number of trainable parameters and GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
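The record does not specify the distillation objective; a common choice for "vanilla" knowledge distillation is a temperature-scaled KL divergence between the teacher's and student's output distributions. The sketch below illustrates that loss in plain Python; the function names and the example logits are illustrative, not taken from the thesis.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as in standard knowledge distillation (Hinton et al., 2015).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# Hypothetical logits for a single training example.
teacher = [2.0, 0.5, -1.0]
student = [1.0, 0.8, -0.5]
loss = distillation_kl(teacher, student)
```

The loss is zero when the student matches the teacher exactly and grows as their softened distributions diverge, which is what drives the student toward the teacher's behavior.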


Bibliographic details
Main author: Wang, Minghao
Other authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online access: https://hdl.handle.net/10356/173839