Large language model enhanced with prompt-based vanilla distillation for sentence embeddings

In this dissertation, the prompt-based method PromptEOL is used to fine-tune the OPT-2.7B model with a Parameter-Efficient Fine-Tuning (PEFT) method, reducing the number of trainable parameters and the GPU memory usage. The resulting OPT-2.7B-LoRA model is then used as the teacher to train a student model under the DistillCSE distillation framework with vanilla distillation. The core evaluation method centers on Semantic Textual Similarity (STS) detection.
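
As a rough illustration of the first step, the Python sketch below extracts a PromptEOL-style sentence embedding from OPT-2.7B with a LoRA adapter attached through the PEFT library. The prompt template follows the PromptEOL paper; the LoRA hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the configuration used in the thesis.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "facebook/opt-2.7b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA freezes the base weights and trains small low-rank adapters,
    # which is what keeps the trainable-parameter count and memory low.
    lora_config = LoraConfig(r=8, lora_alpha=16,
                             target_modules=["q_proj", "v_proj"],
                             lora_dropout=0.05)
    model = get_peft_model(model, lora_config)

    # PromptEOL template: the hidden state at the final token (where the
    # model would emit its "one word" summary) is the sentence embedding.
    template = 'This sentence : "{}" means in one word:"'

    def encode(sentence: str) -> torch.Tensor:
        inputs = tokenizer(template.format(sentence), return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs, output_hidden_states=True)
        # Last hidden layer, last token position -> one vector per sentence.
        return outputs.hidden_states[-1][0, -1]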

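Vanilla distillation then trains the student to reproduce the teacher's behaviour. The sketch below is a common vanilla formulation, assuming a KL objective over in-batch cosine-similarity distributions; the exact loss and temperature used in DistillCSE and in the thesis may differ.

    import torch
    import torch.nn.functional as F

    def vanilla_distill_loss(student_emb: torch.Tensor,
                             teacher_emb: torch.Tensor,
                             temperature: float = 0.05) -> torch.Tensor:
        """student_emb, teacher_emb: [batch, dim] sentence embeddings."""
        s = F.normalize(student_emb, dim=-1)
        t = F.normalize(teacher_emb, dim=-1)
        # Pairwise cosine similarities within the batch, temperature-scaled.
        s_logits = s @ s.T / temperature
        t_logits = t @ t.T / temperature
        # KL divergence pulls the student's similarity structure toward
        # the teacher's; the teacher's distribution is treated as fixed.
        return F.kl_div(F.log_softmax(s_logits, dim=-1),
                        F.softmax(t_logits, dim=-1),
                        reduction="batchmean")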
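
For the STS evaluation, the standard protocol is to score each sentence pair by the cosine similarity of its two embeddings and report the Spearman correlation against human judgements. A minimal sketch, assuming an encode function like the one above:

    import torch.nn.functional as F
    from scipy.stats import spearmanr

    def sts_eval(pairs, gold_scores, encode):
        """pairs: list of (sent_a, sent_b); gold_scores: human ratings."""
        preds = [F.cosine_similarity(encode(a), encode(b), dim=0).item()
                 for a, b in pairs]
        # Spearman rank correlation is the standard STS benchmark metric.
        return spearmanr(preds, gold_scores).correlation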

Bibliographic Details
Main Author: Wang, Minghao
Other Authors: Chen, Lihui
Format: Thesis (Master by Coursework)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering; Sentence embeddings
Online Access: https://hdl.handle.net/10356/173839