Prompt sensitivity of transformer variants for text classification
This study investigates the sensitivity of three Transformer architectures, encoder-only (BERT), decoder-only (GPT-2), and encoder-decoder (T5), to different types of prompt modifications on text classification tasks. Using a fine-tuning approach, the models were evaluated...
Main Author: | Ong, Li Han
---|---
Other Authors: | Wang, Wenya
Format: | Final Year Project (FYP)
Language: | English
Published: | Nanyang Technological University, 2024
Subjects: |
Online Access: | https://hdl.handle.net/10356/181519
Similar Items
- Universal Motion Generator: Trajectory Autocompletion by Motion Prompts
  by: Wang, Yanwei, et al.
  Published: (2022)
- Deep learning techniques for hate speech detection
  by: Teng, Yen Fong
  Published: (2024)
- TrueGPT: can you privately extract algorithms from ChatGPT in tabular classification?
  by: Soegeng, Hans Farrell
  Published: (2024)
- Discriminative Gaussian Process Latent Variable Model for Classification
  by: Urtasun, Raquel, et al.
  Published: (2007)
- Generative AI and education
  by: Chieng, Shannon Shuen Ern
  Published: (2024)