Driving and suppressing the human language network using large language models

Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of brain response associated with...


Bibliographic Details
Main Authors: Tuckute, Greta, Sathe, Aalok, Srikant, Shashank, Taliaferro, Maya, Wang, Mingye, Schrimpf, Martin, Kay, Kendrick, Fedorenko, Evelina
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: en_US
Published: Springer Nature 2024
Online Access: https://hdl.handle.net/1721.1/153265