Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns

Abstract: Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human...

Bibliographic details

Main authors: Ariel Goldstein, Avigail Grinstein-Dabush, Mariano Schain, Haocheng Wang, Zhuoqiao Hong, Bobbi Aubrey, Samuel A. Nastase, Zaid Zada, Eric Ham, Amir Feder, Harshvardhan Gazula, Eliav Buchnik, Werner Doyle, Sasha Devore, Patricia Dugan, Roi Reichart, Daniel Friedman, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Uri Hasson
Format: Article
Language: English
Published in: Nature Portfolio, 2024-03-01
Series: Nature Communications
Online access: https://doi.org/10.1038/s41467-024-46631-y