Summary: A central goal of research into language acquisition is explaining how, when learners generalize to new cases, they appropriately RESTRICT their generalizations (e.g., to avoid producing ungrammatical utterances such as *<i>The clown laughed the man</i>). The past 30 years have seen an unresolved debate between STATISTICAL PREEMPTION and ENTRENCHMENT as explanations. Under preemption, the use of a verb in a particular construction (e.g., *<i>The clown laughed the man</i>) is probabilistically blocked by hearing that verb in other constructions WITH SIMILAR MEANINGS ONLY (e.g., <i>The clown made the man laugh</i>). Under entrenchment, such errors (e.g., *<i>The clown laughed the man</i>) are probabilistically blocked by hearing ANY utterance that includes the relevant verb (e.g., by <i>The clown made the man laugh</i> AND <i>The man laughed</i>). Across five artificial-language-learning studies, we designed a training regime such that learners received evidence for the (under the relevant hypothesis) ungrammaticality of a particular unattested verb/noun+particle combination (e.g., *<i>chila</i>+<i>kem</i>; *<i>squeako</i>+<i>kem</i>) via either preemption only or entrenchment only. Across all five studies, participants in the preemption condition, as per our preregistered prediction, rated unattested verb/noun+particle combinations as less acceptable for restricted verbs/nouns (which appeared during training) than for unrestricted, novel-at-test verbs/nouns (which did not); i.e., strong evidence for preemption. Participants in the entrenchment condition showed no evidence for such an effect (and, in 3/5 experiments, positive evidence for the null). We conclude that a successful model of learning linguistic restrictions must instantiate competition between different forms only where they express the same (or similar) meanings.