Prompting a pretrained transformer can be a universal approximator
Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. F...
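As background for the question the abstract poses, here is a minimal sketch of prompt tuning in PyTorch: the pretrained transformer is frozen and only a short sequence of trainable "soft prompt" embeddings, prepended to the input, is optimized. The `SoftPromptWrapper` class, its argument names, and the toy encoder below are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Illustrative prompt tuning: freeze a pretrained transformer and
    learn only a prefix of `num_prompt_tokens` embeddings."""

    def __init__(self, transformer: nn.Module, embed_dim: int,
                 num_prompt_tokens: int = 8):
        super().__init__()
        self.transformer = transformer
        for p in self.transformer.parameters():
            p.requires_grad = False  # pretrained weights stay fixed
        # The soft prompt is the only trainable parameter tensor.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim); the wrapped transformer
        # is assumed to accept embeddings directly.
        batch = input_embeds.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.transformer(torch.cat([prompt, input_embeds], dim=1))

# Toy usage with a small stand-in encoder.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = SoftPromptWrapper(encoder, embed_dim=64, num_prompt_tokens=8)
out = model(torch.randn(2, 16, 64))  # shape: (2, 8 + 16, 64)
```

During training, only `model.prompt` receives gradients, which is exactly why the question of how much such a prefix can change the frozen model's behavior is nontrivial.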
Main authors: | |
---|---|
Format: | Conference item |
Language: | English |
Published: | PMLR, 2024 |