Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks
Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies.
Main Authors: | Rachel Schiff, Pessia Katan
Format: | Article
Language: | English
Published: | Frontiers Media S.A., 2014-09-01
Series: | Frontiers in Psychology
Subjects: | implicit learning; Complexity; artificial grammar learning; topological entropy; grammar system
Online Access: | http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01084/full
_version_ | 1819133981678895104 |
author | Rachel Schiff; Pessia Katan
author_facet | Rachel Schiff; Pessia Katan
author_sort | Rachel Schiff
collection | DOAJ |
description | Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners’ AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems’ complexity when experimenting with and evaluating the findings of AGL studies. |
first_indexed | 2024-12-22T09:55:56Z |
format | Article |
id | doaj.art-1597bc36b58a4a3889ccbe08c8ba6241 |
institution | Directory Open Access Journal |
issn | 1664-1078 |
language | English |
last_indexed | 2024-12-22T09:55:56Z |
publishDate | 2014-09-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Psychology |
spelling | Rachel Schiff (Bar Ilan University); Pessia Katan (Bar Ilan University). Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks. Frontiers in Psychology, vol. 5, 2014-09-01. doi:10.3389/fpsyg.2014.01084. ISSN 1664-1078. http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01084/full
spellingShingle | Rachel Schiff; Pessia Katan; Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks; Frontiers in Psychology; implicit learning; Complexity; artificial grammar learning; topological entropy; grammar system
title | Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks |
title_full | Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks |
title_fullStr | Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks |
title_full_unstemmed | Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks |
title_short | Does Complexity Matter? Meta-Analysis of Learner Performance in Artificial Grammar Tasks |
title_sort | does complexity matter meta analysis of learner performance in artificial grammar tasks |
topic | implicit learning; Complexity; artificial grammar learning; topological entropy; grammar system
url | http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.01084/full |
work_keys_str_mv | AT racheleschiff doescomplexitymattermetaanalysisoflearnerperformanceinartificialgrammartasks AT pessiaekatan doescomplexitymattermetaanalysisoflearnerperformanceinartificialgrammartasks |
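A brief note on the entropy measure referenced in the description above: as standardly defined in symbolic dynamics, the topological entropy of a finite-state system is the logarithm of the spectral radius (largest eigenvalue magnitude) of its transition matrix. The Python sketch below illustrates only that eigenvalue step, under that assumption, with a hypothetical transition matrix; it is not the article's automated matrix-lift-action implementation, which builds its matrices from the actual grammar charts in ways this sketch does not reproduce.

```python
# Minimal illustrative sketch (not the authors' matrix-lift-action code):
# for a finite-state grammar given by a nonnegative transition matrix,
# topological entropy = log of the matrix's spectral radius.
# The example matrix below is hypothetical.
import numpy as np


def topological_entropy(transition_matrix: np.ndarray) -> float:
    """Return the log of the spectral radius of a nonnegative transition matrix."""
    eigenvalues = np.linalg.eigvals(transition_matrix)
    return float(np.log(max(abs(eigenvalues))))


# Hypothetical 3-state grammar chart: entry (i, j) counts the labeled
# transitions from state i to state j.
A = np.array([
    [0, 2, 1],
    [1, 0, 1],
    [1, 1, 0],
], dtype=float)

print(f"TE = {topological_entropy(A):.3f} (natural-log units)")
```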