Predictive structure or paradigm size? Investigating the effects of i-complexity and e-complexity on the learnability of morphological systems
Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: in predictive structure.
Main Authors: | Tamar Johnson, Kexin Gao, Kenny Smith, Hugh Rabagliati, Jennifer Culbertson |
Format: | Article |
Language: | English |
Published: | Institute of Computer Science, Polish Academy of Sciences, 2021-10-01 |
Series: | Journal of Language Modelling |
Subjects: | morphological complexity; learning; neural networks; typology |
Online Access: | https://jlm.ipipan.waw.pl/index.php/JLM/article/view/259 |
_version_ | 1818286162477842432 |
author | Tamar Johnson; Kexin Gao; Kenny Smith; Hugh Rabagliati; Jennifer Culbertson |
author_facet | Tamar Johnson; Kexin Gao; Kenny Smith; Hugh Rabagliati; Jennifer Culbertson |
author_sort | Tamar Johnson |
collection | DOAJ |
description | Research on cross-linguistic differences in morphological paradigms reveals a wide range of variation on many dimensions, including the number of categories expressed, the number of unique forms, and the number of inflectional classes. However, in an influential paper, Ackerman & Malouf (2013) argue that there is one dimension on which languages do not differ widely: in predictive structure. Predictive structure in a paradigm describes the extent to which forms predict each other, called i-complexity. Ackerman & Malouf (2013) show that although languages differ according to a measure of surface paradigm complexity, called e-complexity, they tend to have low i-complexity. They conclude that morphological paradigms have evolved under a pressure for low i-complexity, such that even paradigms with very high e-complexity are relatively easy to learn so long as they have low i-complexity. While this would potentially explain why languages are able to maintain large paradigms, recent work by Johnson et al. (submitted) suggests that both neural networks and human learners may actually be more sensitive to e-complexity than i-complexity. Here we build on this work, reporting a series of experiments under more realistic learning conditions which confirm that, across a range of paradigms that vary in either e- or i-complexity, neural networks (LSTMs) are sensitive to both, but show a larger effect of e-complexity (and of other measures associated with the size and diversity of forms). In human learners, we fail to find any effect of i-complexity at all. Further, analysis of a large number of randomly generated paradigms shows that e- and i-complexity are negatively correlated: paradigms with high e-complexity necessarily show low i-complexity. These findings suggest that the observations made by Ackerman & Malouf (2013) for natural language paradigms may stem from the nature of these measures rather than from learning pressures specially attuned to i-complexity. |
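The i-complexity measure described in the abstract can be made concrete. Below is a minimal sketch, assuming i-complexity is computed as the average conditional entropy H(cell_j | cell_i) over ordered pairs of paradigm cells, with inflectional classes treated as equiprobable; the toy paradigm, function names, and the equiprobability assumption are illustrative choices, not details taken from the paper itself:

```python
import math
from collections import Counter

# Toy inflection-class table: rows are classes, columns are paradigm cells,
# entries are exponents (suffixes). All data here are invented for illustration.
PARADIGM = [
    ("-a", "-i",  "-u"),  # class 1
    ("-a", "-i",  "-o"),  # class 2
    ("-e", "-is", "-u"),  # class 3
    ("-e", "-is", "-o"),  # class 4
]

def entropy(counts):
    """Shannon entropy (in bits) of a Counter of outcome frequencies."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conditional_entropy(rows, i, j):
    """H(cell j | cell i) = H(cell i, cell j) - H(cell i), classes equiprobable."""
    joint = Counter((r[i], r[j]) for r in rows)
    marginal = Counter(r[i] for r in rows)
    return entropy(joint) - entropy(marginal)

def i_complexity(rows):
    """Average conditional entropy over all ordered pairs of distinct cells."""
    n_cells = len(rows[0])
    pairs = [(i, j) for i in range(n_cells) for j in range(n_cells) if i != j]
    return sum(conditional_entropy(rows, i, j) for i, j in pairs) / len(pairs)

print(conditional_entropy(PARADIGM, 0, 1))  # cell 1 fully predicts cell 2
print(i_complexity(PARADIGM))
```

In this toy paradigm the first two cells predict each other perfectly (conditional entropy 0 bits) but neither predicts the third, so the paradigm's i-complexity falls between 0 and 1 bit even though it has several distinct exponents (high e-complexity by a simple form-counting measure). This is the kind of dissociation between the two measures that the experiments in the paper manipulate.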
first_indexed | 2024-12-13T01:20:13Z |
format | Article |
id | doaj.art-e2ffbc5013da4c0dbcd97901704ea408 |
institution | Directory Open Access Journal |
issn | 2299-856X 2299-8470 |
language | English |
last_indexed | 2024-12-13T01:20:13Z |
publishDate | 2021-10-01 |
publisher | Institute of Computer Science, Polish Academy of Sciences |
record_format | Article |
series | Journal of Language Modelling |
spelling | doaj.art-e2ffbc5013da4c0dbcd97901704ea408 2022-12-22T00:04:15Z; eng; Institute of Computer Science, Polish Academy of Sciences; Journal of Language Modelling (2299-856X, 2299-8470); 2021-10-01; vol. 9, no. 1; doi:10.15398/jlm.v9i1.259; Predictive structure or paradigm size? Investigating the effects of i-complexity and e-complexity on the learnability of morphological systems; Tamar Johnson; Kexin Gao; Kenny Smith; Hugh Rabagliati; Jennifer Culbertson (University of Edinburgh); abstract as in the description field above; https://jlm.ipipan.waw.pl/index.php/JLM/article/view/259; morphological complexity; learning; neural networks; typology |
title | Predictive structure or paradigm size? Investigating the effects of i-complexity and e-complexity on the learnability of morphological systems |
topic | morphological complexity learning neural networks typology |
url | https://jlm.ipipan.waw.pl/index.php/JLM/article/view/259 |