Learning Reduplication with a Neural Network that Lacks Explicit Variables

Bibliographic Details
Main Authors: Brandon Prickett, Aaron Traylor, Joe Pater
Format: Article
Language: English
Published: Institute of Computer Science, Polish Academy of Sciences, 2022-03-01
Series: Journal of Language Modelling
Online Access: https://jlm.ipipan.waw.pl/index.php/JLM/article/view/274
Description
Summary: Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.'s (2007) partial replication of one of those experiments. We then explore the model's ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent's (2013) scopes of generalization as a metric, we claim that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
ISSN: 2299-856X, 2299-8470