Prompting a pretrained transformer can be a universal approximator

Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. F...
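The abstract centers on prefix-tuning, in which trainable vectors are prepended to the attention context of a frozen pretrained model and only those vectors are optimized. Below is a minimal sketch of that mechanism in plain PyTorch; it is an illustration under assumed names (PrefixAttention, d_model, prefix_len), not the paper's actual construction or proof apparatus.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PrefixAttention(nn.Module):
    """Single-head self-attention with a trainable prefix and frozen weights."""

    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)
        # Freeze the "pretrained" projections; only the prefix is tuned.
        for p in self.parameters():
            p.requires_grad = False
        # Trainable prefix key/value pairs, shared across all inputs.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model))
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = self.q(x)
        # Prepend the prefix keys/values to the input's keys/values,
        # so every query can also attend to the learned prefix positions.
        k = torch.cat([self.prefix_k.expand(x.size(0), -1, -1), self.k(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(x.size(0), -1, -1), self.v(x)], dim=1)
        attn = F.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        return attn @ v


layer = PrefixAttention(d_model=16, prefix_len=4)
out = layer(torch.randn(2, 8, 16))
print(out.shape)  # torch.Size([2, 8, 16]); only prefix_k/prefix_v receive gradients

The key design point this illustrates is that prefix-tuning never modifies the pretrained weights: the prefix can only steer where attention goes, which is exactly why the expressive power of such steering is a nontrivial theoretical question.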

Detailed Description

Bibliographic Details
Main Authors: Petrov, A, Torr, PHS, Bibi, A
Format: Conference item
Language: English
Published: PMLR 2024