Prompting a pretrained transformer can be a universal approximator

Despite the widespread adoption of prompting, prompt tuning, and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. F...

Full description

Bibliographic Details
Main Authors: Petrov, A, Torr, PHS, Bibi, A
Format: Conference item
Language: English
Published: PMLR 2024