Few-shot learning for text classification


Bibliographic Details
Main Author: Cao, Jianzhe
Other Authors: Mao, Kezhi
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2025
Online Access: https://hdl.handle.net/10356/182917
Description
Summary: Few-shot text classification addresses the critical challenge of performing accurate classification when labeled data is scarce, a common constraint in many real-world applications. Motivated by the need to improve model performance under such constraints, this report explores advanced approaches that leverage pre-trained models and adaptation techniques. Our primary goal is to improve the efficiency and effectiveness of few-shot learning methods for text classification tasks. We implement and evaluate the Zmap and Wmap methods using Sentence-BERT, demonstrating their ability to capture semantic relationships with minimal data. Additionally, we explore three prompt-based adaptation strategies on the Llama 3.1 large language model: Prompt Engineering, Prompt Tuning, and Fine-tuning, achieving notable performance improvements on benchmark datasets such as IMDB, AG News, and SST-2. Despite the promising results, we identify limitations such as dependence on manual prompt design and domain-specific tuning. To address these, we propose directions for future research, including automated prompt optimization and cross-domain adaptation. This work aims to advance the development of robust few-shot learning techniques, providing practical solutions for low-resource text classification tasks.
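
To make the embedding-mapping idea in the summary concrete, the sketch below follows the common Zmap formulation: a linear map is fitted by least squares from a static word-embedding space into the Sentence-BERT sentence space, and documents are then scored by cosine similarity against mapped label-name embeddings. The model checkpoints, anchor vocabulary, and variable names are illustrative assumptions, not the report's exact implementation.

    # Zmap-style sketch: least-squares map from word space to Sentence-BERT space.
    import numpy as np
    import gensim.downloader as api
    from sentence_transformers import SentenceTransformer

    sbert = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder checkpoint
    w2v = api.load("glove-wiki-gigaword-300")         # assumed word-embedding source

    def unit(M):
        # Row-normalize so dot products are cosine similarities.
        return M / np.linalg.norm(M, axis=1, keepdims=True)

    # Fit the map Z on anchor words present in both spaces: solve W @ Z ~= S.
    # (Zmap is normally fitted on thousands of frequent words; a handful is
    # used here only to keep the sketch short.)
    anchors = [w for w in ["sports", "politics", "business", "science",
                           "music", "health", "finance", "travel"] if w in w2v]
    W = np.stack([w2v[w] for w in anchors])           # (n, 300) word space
    S = sbert.encode(anchors)                         # (n, 384) sentence space
    Z, *_ = np.linalg.lstsq(W, S, rcond=None)         # least-squares solution

    # Map label names through Z and classify by cosine similarity.
    labels = ["sports", "politics"]
    L = np.stack([w2v[w] for w in labels]) @ Z        # label vectors in sentence space
    D = sbert.encode(["The team clinched the title in overtime."])
    print(labels[int((unit(D) @ unit(L).T)[0].argmax())])   # e.g. "sports"

Wmap, as usually described, refines this map further using the few labeled examples available, which is where the few-shot (rather than zero-shot) gains come from.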
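The prompt-based strategies can likewise be illustrated with a minimal few-shot Prompt Engineering sketch on an instruction-tuned Llama 3.1 checkpoint. The template, example reviews, and generation settings below are assumptions in the style of the IMDB/SST-2 sentiment tasks, not the report's actual prompts.

    # Few-shot prompting sketch with a frozen instruction-tuned Llama 3.1.
    from transformers import pipeline

    # Assumed checkpoint; this model is gated on Hugging Face, so access
    # must be granted before it can be downloaded.
    clf = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

    prompt = (
        "Classify each movie review as Positive or Negative.\n\n"
        "Review: An absolute triumph from start to finish.\nLabel: Positive\n\n"
        "Review: Two hours of my life I will never get back.\nLabel: Negative\n\n"
        "Review: The pacing drags, but the ending redeems it.\nLabel:"
    )
    out = clf(prompt, max_new_tokens=3, do_sample=False)
    print(out[0]["generated_text"][len(prompt):].strip())   # e.g. "Positive"

Prompt Tuning and Fine-tuning differ from this baseline in what gets updated: soft prompt embeddings in the former, model weights in the latter, while the few-shot prompt format stays essentially the same.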