Prompting a pretrained transformer can be a universal approximator

Despite the widespread adoption of prompting, prompt tuning, and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. F...
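For readers unfamiliar with the fine-tuning methods the abstract names, below is a minimal sketch of prompt tuning: trainable "soft prompt" embeddings are prepended to the input of a frozen pretrained transformer, and only those embeddings are optimized (prefix-tuning is similar but injects trainable vectors at each attention layer). This is an illustrative toy model, not the authors' construction; all names and dimensions are hypothetical.

    # Illustrative sketch of prompt tuning (soft prompts); not the paper's code.
    import torch
    import torch.nn as nn

    class PromptTunedTransformer(nn.Module):
        """A frozen toy transformer whose only trainable parameters are
        `prompt_len` embedding vectors prepended to every input sequence."""

        def __init__(self, vocab=100, d_model=64, n_heads=4, n_layers=2, prompt_len=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, vocab)
            for p in self.parameters():      # freeze the "pretrained" backbone
                p.requires_grad = False
            # The soft prompt is registered after freezing, so it stays trainable.
            self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len, d_model))

        def forward(self, tokens):           # tokens: (batch, seq)
            x = self.embed(tokens)                           # (batch, seq, d)
            prompt = self.prompt.expand(x.size(0), -1, -1)   # (batch, prompt, d)
            h = self.encoder(torch.cat([prompt, x], dim=1))  # prepend the prompt
            return self.head(h[:, self.prompt.size(0):])     # drop prompt positions

    model = PromptTunedTransformer()
    print([n for n, p in model.named_parameters() if p.requires_grad])  # ['prompt']

The design choice the paper interrogates is visible here: the backbone weights are never updated, so any change in behavior must come entirely from the prepended trainable vectors.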


Bibliographic Details
Main Authors: Petrov, A, Torr, PHS, Bibi, A
Format: Conference item
Language: English
Published: PMLR 2024