Universal in-context approximation by prompting fully recurrent models

Zero-shot and in-context learning enable solving tasks without model fine-tuning, making them essential for developing generative model solutions. Therefore, it is crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context approximator.

Bibliographic Details
Main Authors: Petrov, A, Lamb, TA, Paren, A, Torr, PHS, Bibi, A
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation 2024