Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons

Bibliographic Details
Main Authors: Dimitri Probst, Mihai Alexandru Petrovici, Ilja Bytschok, Johannes Bill, Dejan Pecevski, Johannes Schemmel, Karlheinz Meier
Format: Article
Language: English
Published: Frontiers Media S.A. 2015-02-01
Series: Frontiers in Computational Neuroscience
Subjects: theoretical neuroscience, Neural coding, Graphical Models, MCMC, neuromorphic hardware, Bayesian theory
Online Access: http://journal.frontiersin.org/Journal/10.3389/fncom.2015.00013/full
_version_ 1818130560667615232
author Dimitri Probst
Mihai Alexandru Petrovici
Ilja Bytschok
Johannes Bill
Dejan Pecevski
Johannes Schemmel
Karlheinz Meier
author_facet Dimitri Probst
Mihai Alexandru Petrovici
Ilja Bytschok
Johannes Bill
Dejan Pecevski
Johannes Schemmel
Karlheinz Meier
author_sort Dimitri Probst
collection DOAJ
description The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
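The description above centres on networks of LIF neurons that sample from arbitrary probability distributions over binary random variables, i.e., an MCMC-style computation. As a purely illustrative point of reference for that target computation, here is a minimal Gibbs-sampling sketch for a Boltzmann distribution p(z) ∝ exp(½ zᵀWz + bᵀz) over four binary variables. This is not the authors' LIF implementation (there are no membrane or synapse dynamics here), and the coupling matrix W, biases b, and sampling parameters are arbitrary values chosen only for the example.

```python
# Illustrative sketch: Gibbs sampling from a Boltzmann distribution over
# binary random variables, p(z) ~ exp(0.5 * z^T W z + b^T z).
# This shows the abstract computation such sampling networks target; it is
# NOT the LIF-neuron implementation described in the article. W, b, and the
# sampling parameters below are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)

# Symmetric coupling matrix (zero diagonal) and biases for 4 binary variables.
W = np.array([[ 0.0,  1.2, -0.8,  0.0],
              [ 1.2,  0.0,  0.5, -0.4],
              [-0.8,  0.5,  0.0,  0.9],
              [ 0.0, -0.4,  0.9,  0.0]])
b = np.array([-0.5, 0.2, 0.1, -0.3])

def gibbs_sample(W, b, n_steps=20000, burn_in=2000):
    """Draw samples of the binary state vector z by sequential Gibbs updates."""
    n = len(b)
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for step in range(n_steps):
        for k in range(n):
            # Conditional p(z_k = 1 | rest) is a logistic function of the
            # local input u_k = b_k + sum_j W_kj z_j.
            u_k = b[k] + W[k] @ z
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u_k)))
        if step >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample(W, b)
# Empirical marginals p(z_k = 1): the quantities a sampling-based neural
# implementation would read out as average firing activity.
print("estimated marginals p(z_k = 1):", samples.mean(axis=0))
```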
first_indexed 2024-12-11T08:06:59Z
format Article
id doaj.art-79003be675a640d2ab3a40fd6cd6b27c
institution Directory Open Access Journal
issn 1662-5188
language English
last_indexed 2024-12-11T08:06:59Z
publishDate 2015-02-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Computational Neuroscience
spelling doaj.art-79003be675a640d2ab3a40fd6cd6b27c; 2022-12-22T01:14:58Z; eng; Frontiers Media S.A.; Frontiers in Computational Neuroscience; 1662-5188; 2015-02-01; 9; 10.3389/fncom.2015.00013; 119535; Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons; Dimitri Probst (University of Heidelberg); Mihai Alexandru Petrovici (University of Heidelberg); Ilja Bytschok (University of Heidelberg); Johannes Bill (Graz University of Technology); Dejan Pecevski (Graz University of Technology); Johannes Schemmel (University of Heidelberg); Karlheinz Meier (University of Heidelberg); [abstract as given in the description field above]; http://journal.frontiersin.org/Journal/10.3389/fncom.2015.00013/full; theoretical neuroscience; Neural coding; Graphical Models; MCMC; neuromorphic hardware; Bayesian theory
spellingShingle Dimitri Probst
Mihai Alexandru Petrovici
Ilja Bytschok
Johannes Bill
Dejan Pecevski
Johannes Schemmel
Karlheinz Meier
Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
Frontiers in Computational Neuroscience
theoretical neuroscience
Neural coding
Graphical Models
MCMC
neuromorphic hardware
Bayesian theory
title Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
title_full Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
title_fullStr Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
title_full_unstemmed Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
title_short Probabilistic Inference in Discrete Spaces Can Be Implemented into Networks of LIF Neurons
title_sort probabilistic inference in discrete spaces can be implemented into networks of lif neurons
topic theoretical neuroscience
Neural coding
Graphical Models
MCMC
neuromorphic hardware
Bayesian theory
url http://journal.frontiersin.org/Journal/10.3389/fncom.2015.00013/full
work_keys_str_mv AT dimitriprobst probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT mihaialexandrupetrovici probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT iljabytschok probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT johannesbill probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT dejanpecevski probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT johannesschemmel probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons
AT karlheinzmeier probabilisticinferenceindiscretespacescanbeimplementedintonetworksoflifneurons