Prompting a pretrained transformer can be a universal approximator
Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. F...
Main Authors: Petrov, A; Torr, PHS; Bibi, A
Format: Conference item
Language: English
Published: PMLR, 2024
Similar Items
- Universal in-context approximation by prompting fully recurrent models
  by: Petrov, A, et al.
  Published: (2025)
- MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification
  by: Xu, Muhua
  Published: (2024)
- On pretraining data diversity for self-supervised learning
  by: Hammoud, HAAK, et al.
  Published: (2024)
- When do prompting and prefix-tuning work? a theory of capabilities and limitations
  by: Petrov, A, et al.
  Published: (2024)
- No "zero-shot" without exponential data: pretraining concept frequency determines multimodal model performance
  by: Udandarao, V, et al.
  Published: (2024)