Summary: | Visual category learning (VCL) involves detecting which features are most relevant for categorization. This requires attentional learning, which allows attention to be effectively redirected to the object features most relevant for categorization while filtering out irrelevant features. When the features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables increased sensitivity to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks that varied in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (mid-information tasks with moderately ambiguous feedback that increased attentional load vs. high-information tasks with non-ambiguous feedback). Participants were required to learn to categorize novel stimuli by detecting the feature dimension relevant for categorization. We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that the increased attentional load associated with processing moderately ambiguous feedback does not compromise VCL when both the task-relevant feature and the irrelevant features are salient. In low-saliency VCL tasks, performance improvement relied on slower perceptual learning, but when the feedback was highly informative participants were ultimately capable of reaching performance matching that observed in high-saliency VCL tasks. However, VCL was severely compromised when features had low saliency and the feedback was ambiguous. We suggest that this latter learning scenario is characterized by a 'cognitive loop paradox' in which two interdependent learning processes must take place simultaneously.
|