Universal in-context approximation by prompting fully recurrent models
Zero-shot and in-context learning enable solving tasks without model fine-tuning, making them essential for developing generative model solutions. Therefore, it is crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context...
Main Authors: | Petrov, A, Lamb, TA, Paren, A, Torr, PHS, Bibi, A
---|---
Format: | Conference item
Language: | English
Published: | Neural Information Processing Systems Foundation, 2024
Similar Items
- Prompting a pretrained transformer can be a universal approximator
  by: Petrov, A, et al.
  Published: (2024)
- When do prompting and prefix-tuning work? a theory of capabilities and limitations
  by: Petrov, A, et al.
  Published: (2024)
- Fully-convolutional Siamese networks for object tracking
  by: Bertinetto, L, et al.
  Published: (2016)
- Certifying ensembles: a general certification theory with s-lipschitzness
  by: Petrov, A, et al.
  Published: (2023)
- Approximate structured output learning for Constrained Local Models with application to real-time facial feature detection and tracking on low-power devices
  by: Zheng, S, et al.
  Published: (2013)