Semantic projection recovers rich human knowledge of multiple object features from word embeddings
How is knowledge about word meaning represented in the mental lexicon? Current computational models infer word meanings from lexical co-occurrence patterns. They learn to represent words as vectors in a multidimensional space, wherein words that are used in more similar linguistic contexts, that is,...
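The semantic projection named in the title can be illustrated with a small sketch: a word vector is projected onto a feature axis defined by the difference between two pole words (e.g. "small" and "large" for size), and the resulting scalar ranks objects along that feature. The four-dimensional toy vectors below are hypothetical stand-ins for real embeddings (which typically have hundreds of dimensions), and the function name is illustrative, not the authors' code.

```python
import numpy as np

# Hypothetical toy embeddings; real models (e.g. GloVe, word2vec) use 300+ dims.
vectors = {
    "small":    np.array([ 1.0, 0.0, 0.2, 0.1]),
    "large":    np.array([-1.0, 0.1, 0.3, 0.0]),
    "mouse":    np.array([ 0.9, 0.2, 0.1, 0.3]),
    "elephant": np.array([-0.8, 0.3, 0.2, 0.2]),
}

def semantic_projection(word, pole_low, pole_high, vecs):
    """Project `word` onto the axis running from pole_low to pole_high.

    Returns a scalar: larger values mean the word sits closer to the
    pole_high end of the feature axis (e.g. 'large' for size).
    """
    axis = vecs[pole_high] - vecs[pole_low]
    axis = axis / np.linalg.norm(axis)  # unit-length feature axis
    return float(np.dot(vecs[word], axis))

scores = {w: semantic_projection(w, "small", "large", vectors)
          for w in ("mouse", "elephant")}
# With these toy vectors, the elephant projects further toward the
# 'large' pole than the mouse does.
```

In practice the paper's approach averages several pole words per end of the axis to reduce noise; the single-word poles here are a simplification.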
Main Authors: Grand, Gabriel; Blank, Idan Asher; Pereira, Francisco; Fedorenko, Evelina
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: English
Published: Springer Science and Business Media LLC, 2023
Online Access: https://hdl.handle.net/1721.1/148772
Similar Items
- Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions
  by: Fedorenko, Evelina, et al. Published: (2018)
- Lack of selectivity for syntax relative to word meanings throughout the language network
  by: Fedorenko, Evelina G, et al. Published: (2021)
- No evidence for differences among language regions in their temporal receptive windows
  by: Blank, Idan Asher, et al. Published: (2022)
- Activity in the fronto-parietal multiple-demand network is robustly associated with individual differences in working memory and fluid intelligence
  by: Blank, Idan Asher, et al. Published: (2021)
- Syntactic dependencies correspond to word pairs with high mutual information
  by: Futrell, Richard, et al. Published: (2021)