An entropy model for artificial grammar learning


Bibliographic Details
Main Author: Emmanuel Pothos
Format: Article
Language: English
Published: Frontiers Media S.A. 2010-06-01
Series: Frontiers in Psychology
Subjects:
Online Access:http://journal.frontiersin.org/Journal/10.3389/fpsyg.2010.00016/full
Description
Summary: A model is proposed to characterize the type of knowledge acquired in Artificial Grammar Learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning, which also postulate that cognitive processing is geared towards the reduction of entropy.
ISSN: 1664-1078
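The core idea in the summary, that a test item is judged grammatical to the extent it is predictable from the training items, can be illustrated with a minimal sketch. This is not the paper's exact formulation: the bigram statistics, the padding markers, the probability floor for unseen bigrams, and the toy training strings below are all assumptions made for illustration, using average Shannon surprisal as a simple stand-in for the entropy-based complexity measure.

```python
import math
from collections import Counter

def bigrams(s):
    # Pad with start/end markers so edge letters also contribute a bigram.
    s = "^" + s + "$"
    return [s[i:i + 2] for i in range(len(s) - 1)]

def train_bigram_probs(training_items):
    # Estimate a bigram distribution from the training items.
    counts = Counter(bg for item in training_items for bg in bigrams(item))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def avg_surprisal(test_item, probs, floor=1e-6):
    # Average Shannon surprisal (in bits) of the test item's bigrams under
    # the training distribution; lower values mean the item is more
    # predictable from training, hence more likely to be endorsed.
    bgs = bigrams(test_item)
    return sum(-math.log2(probs.get(bg, floor)) for bg in bgs) / len(bgs)

# Hypothetical training strings, in the letter style typical of AGL stimuli.
training = ["MVRX", "MVRXR", "VXM"]
probs = train_bigram_probs(training)

# A familiar item should be more predictable than a novel-looking one.
print(avg_surprisal("MVRX", probs) < avg_surprisal("XXXX", probs))  # True
```

Lower average surprisal corresponds to lower complexity relative to the training set, matching the model's claim that more predictable test items are more likely to be selected as compatible with the training items.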