In-context learning learns label relationships but is not conventional learning

The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input–label relationship in the context. However, there is currently no consensus about how this in-context learning (ICL) ability of LLMs works. For example, while Xie et a...
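
The abstract refers to prompting an LLM with examples of the input–label relationship. As a rough illustration only (not taken from the paper), the sketch below assembles such an in-context prompt by concatenating labelled demonstration pairs ahead of an unlabelled query; the sentiment task, field names, and formatting are illustrative assumptions.

```python
# Minimal sketch of in-context learning (ICL) prompt construction.
# The task (sentiment classification), the "Review:"/"Sentiment:" template,
# and the example data are assumptions for illustration only.

from typing import List, Tuple


def build_icl_prompt(demonstrations: List[Tuple[str, str]], query: str) -> str:
    """Concatenate (input, label) demonstration pairs, then the unlabelled query."""
    blocks = []
    for text, label in demonstrations:
        blocks.append(f"Review: {text}\nSentiment: {label}\n")
    # The model is expected to continue the prompt with the query's label.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n".join(blocks)


if __name__ == "__main__":
    demos = [
        ("The plot was gripping from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
    ]
    print(build_icl_prompt(demos, "A charming, well-acted film."))
```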

Bibliographic Details
Main Authors: Kossen, J, Gal, Y, Rainforth, T
Format: Conference item
Language: English
Published: OpenReview 2024