An ERP investigation of visual word recognition in syllabary scripts
Main Authors: Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: English
Published: Springer-Verlag, 2016
Online Access: http://hdl.handle.net/1721.1/103857
author | Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
author2 | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences |
collection | MIT |
description | The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical–semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in “Experiment 1: Within-script priming”, in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes. |
format | Article |
id | mit-1721.1/103857 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2016 |
publisher | Springer-Verlag |
record_format | dspace |
date_issued | 2013-02
type | Article (http://purl.org/eprint/type/JournalArticle)
issn | 1530-7026; 1531-135X
journal | Cognitive, Affective, & Behavioral Neuroscience
citation | Okano, Kana, Jonathan Grainger, and Phillip J. Holcomb. "An ERP Investigation of Visual Word Recognition in Syllabary Scripts." Cognitive, Affective, & Behavioral Neuroscience 13.2 (2013): 390–404.
doi | http://dx.doi.org/10.3758/s13415-013-0149-7
other_affiliation | McGovern Institute for Brain Research at MIT
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. (Psychonomic Society, Inc.)
format_mime | application/pdf
title | An ERP investigation of visual word recognition in syllabary scripts |
url | http://hdl.handle.net/1721.1/103857 |