In-context learning learns label relationships but is not conventional learning

The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input–label relationship in the context. However, there is currently no consensus about how this in-context learning (ICL) ability of LLMs works. For example, while Xie et a...


Bibliographic Details
Main Authors: Kossen, J., Gal, Y., Rainforth, T.
Format: Conference item
Language: English
Published: OpenReview, 2024