A Bayesian inference theory of attention: neuroscience and algorithms


Bibliographic Details
Main Authors: Chikkerur, Sharat; Serre, Thomas; Poggio, Tomaso
Published: 2009
Online Access: http://hdl.handle.net/1721.1/49416
Description: The past four decades of research in visual neuroscience have generated a large and disparate body of literature on the role of attention [Itti et al., 2005]. Although several models have been developed to describe specific properties of attention, a theoretical framework that explains the computational role of attention and is consistent with all known effects is still needed. Recently, several authors have suggested that visual perception can be interpreted as a Bayesian inference process [Rao et al., 2002, Knill and Richards, 1996, Lee and Mumford, 2003]. Within this framework, top-down priors delivered via cortical feedback help disambiguate noisy bottom-up sensory input signals. Building on earlier work by Rao [2005], we show that this Bayesian inference proposal can be extended to explain the role of attention and to predict its main properties: namely, to facilitate the recognition of objects in clutter. Visual recognition proceeds by estimating the posterior probabilities of objects and their locations within an image via an exchange of messages between the ventral and parietal areas of the visual cortex. Within this framework, spatial attention reduces the uncertainty in feature information, while feature-based attention reduces the uncertainty in location information; in conjunction, they enable the recognition of objects in clutter. We find that several key attentional phenomena, such as pop-out, multiplicative modulation, and changes in contrast response, emerge naturally as properties of the network. We develop the idea in three stages. First, we construct a simplified model of attention in the brain, identifying the primary areas involved and their interconnections. Second, we propose a Bayesian network in which each node has a direct neural correlate within our simplified biological model. Finally, we elucidate the properties of the resulting model, showing that its predictions are consistent with physiological and behavioral evidence.
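The abstract's central claim — spatial attention reduces uncertainty about features, while feature-based attention reduces uncertainty about location — can be illustrated with a toy generative model. The sketch below is not the report's actual cortical network (which passes messages between ventral and parietal areas); it is a minimal exact-enumeration example under assumed object classes, location counts, and detector reliabilities, all of which are illustrative.

```python
# A minimal sketch, not the authors' implementation: attention as Bayesian
# inference over object identity O and location L given noisy feature
# detections. Class names, locations, and reliabilities are assumptions.
import itertools

OBJECTS = ["vertical", "horizontal"]   # hypothetical object classes
LOCATIONS = [0, 1, 2]                  # hypothetical image locations
P_HIT, P_FALSE = 0.8, 0.2              # assumed detector reliabilities

def joint(detections, obj, loc, prior_o, prior_l):
    """Unnormalised P(O=obj, L=loc, detections); detections[f][l] is the
    binary response of the detector for feature f at location l."""
    p = prior_o[obj] * prior_l[loc]
    for f, l in itertools.product(OBJECTS, LOCATIONS):
        on = P_HIT if (f == obj and l == loc) else P_FALSE
        p *= on if detections[f][l] else (1.0 - on)
    return p

def posteriors(detections, prior_o=None, prior_l=None):
    """Posterior marginals over O and L by exact enumeration."""
    prior_o = prior_o or {o: 1 / len(OBJECTS) for o in OBJECTS}
    prior_l = prior_l or {l: 1 / len(LOCATIONS) for l in LOCATIONS}
    w = {(o, l): joint(detections, o, l, prior_o, prior_l)
         for o, l in itertools.product(OBJECTS, LOCATIONS)}
    z = sum(w.values())
    post_o = {o: sum(v for (oo, _), v in w.items() if oo == o) / z
              for o in OBJECTS}
    post_l = {l: sum(v for (_, ll), v in w.items() if ll == l) / z
              for l in LOCATIONS}
    return post_o, post_l

# A "vertical" target at location 1, plus a clutter-driven false alarm of
# the "horizontal" feature at location 2.
d = {"vertical": {0: 0, 1: 1, 2: 0}, "horizontal": {0: 0, 1: 0, 2: 1}}

uniform_o, uniform_l = posteriors(d)
# Spatial attention = a sharpened prior over L; it reduces uncertainty
# about which object is present.
att_o, _ = posteriors(d, prior_l={0: 0.05, 1: 0.9, 2: 0.05})
# Feature-based attention = a sharpened prior over O; it reduces
# uncertainty about where the target is.
_, att_l = posteriors(d, prior_o={"vertical": 0.9, "horizontal": 0.1})
```

With uniform priors the scene is ambiguous (two equally plausible target/clutter interpretations); sharpening the location prior raises the posterior on the attended object's identity, and sharpening the object prior sharpens the posterior over its location — a small-scale analogue of the uncertainty-reduction roles the abstract assigns to the two forms of attention.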
Institution: Massachusetts Institute of Technology
Affiliation: Center for Biological and Computational Learning (CBCL)
Series: CBCL-280; MIT-CSAIL-TR-2009-047
Date Issued: 2009-10-03
Physical Description: 18 p.
Format: application/pdf; application/postscript