Universal in-context approximation by prompting fully recurrent models
Zero-shot and in-context learning make it possible to solve tasks without fine-tuning a model, which makes them essential for developing solutions based on generative models. It is therefore crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context...
Main Authors:
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation, 2024