Virtual prompt pre-training for prototype-based few-shot relation extraction
Prompt tuning with pre-trained language models (PLMs) has exhibited outstanding performance by reducing the gap between pre-training tasks and various downstream applications, although it requires additional labor in label word mapping and prompt template engineering. However, in a label-intensive...
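The manual effort the abstract mentions can be made concrete with a minimal sketch of prompt-based relation classification: a hand-written template turns the input into a cloze task, and a hand-written verbalizer maps PLM vocabulary words back to relation labels. The model name, template, and label words below are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of manual prompt tuning for relation extraction: the hand-crafted
# template and label-word mapping are exactly the engineering effort that
# virtual prompt pre-training aims to reduce. Assumes bert-base-uncased and
# hypothetical relation labels.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hand-crafted template: the [MASK] slot is filled with a label word.
template = ("Bill Gates founded Microsoft. The relation between "
            "Bill Gates and Microsoft is [MASK].")

# Hand-crafted verbalizer: maps vocabulary words to relation classes.
verbalizer = {"founder": "org:founded_by", "employee": "org:employee_of"}

# Score only the candidate label words and pick the best-scoring relation.
scores = {}
for pred in fill_mask(template, targets=list(verbalizer)):
    scores[pred["token_str"]] = pred["score"]
best_word = max(scores, key=scores.get)
print(verbalizer[best_word])  # e.g. "org:founded_by"
```

Each new relation type needs its own label word and each new domain its own template, which is why this approach becomes labor-intensive in few-shot settings.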
Main Authors: He, Kai; Huang, Yucheng; Mao, Rui; Gong, Tieliang; Li, Chen; Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170494
Similar Items
- Tackling background ambiguities in multi-class few-shot point cloud semantic segmentation
  by: Lai, Lvlong, et al.
  Published: (2022)
- CRCNet: few-shot segmentation with cross-reference and region–global conditional networks
  by: Liu, Weide, et al.
  Published: (2023)
- Development and design optimisation of digital intensity measurement system for shot peening
  by: Tarani, Dinesh Kumar
  Published: (2024)
- PromptChart: prompting InstructGPT for zero & few-shot chart question answering and summarization
  by: Do, Xuan Long
  Published: (2023)
- Zero-shot learning via category-specific visual-semantic mapping and label refinement
  by: Niu, Li, et al.
  Published: (2020)