Reducing the computational footprint for real-time BCPNN learning
The implementation of synaptic plasticity in neural simulations or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification and associative memory to reward-based learning, probabilistic inference, and cortical attractor memory networks. In the spike-based version of this learning rule, presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions that allow an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model to halve the number of basic arithmetic operations per update, and second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and the reduced memory bandwidth requirements, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
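The abstract describes an event-driven scheme: between spikes, each low-pass-filtered trace follows a linear ODE with a closed-form solution, so the state can be jumped from one spike event to the next instead of being stepped through a fixed-step-size Euler loop, with the exponential decay factor taken from a precomputed look-up table. Below is a minimal sketch of that idea for a single first-order trace; the time constant, table resolution, and class names are illustrative assumptions, not the paper's implementation (which solves all three coupled filtering stages analytically).

```python
import numpy as np

# Minimal sketch: event-driven update of one low-pass-filtered spike trace z
# with time constant TAU_Z, using the analytic solution
#   z(t) = z(t0) * exp(-(t - t0) / TAU_Z)
# between spikes, and a look-up table for the decay factor.
# TAU_Z, DT_LUT, and LUT_SIZE are assumed values for illustration.

TAU_Z = 10.0      # trace time constant in ms (assumed)
DT_LUT = 0.1      # look-up-table resolution in ms (assumed)
LUT_SIZE = 4096   # covers ~41 time constants; beyond that the factor is ~0

# Precompute exp(-k * DT_LUT / TAU_Z) for k = 0 .. LUT_SIZE - 1.
EXP_LUT = np.exp(-np.arange(LUT_SIZE) * DT_LUT / TAU_Z)

def decay_factor(dt):
    """Look up exp(-dt / TAU_Z), saturating to 0 for very long intervals."""
    k = int(round(dt / DT_LUT))
    return EXP_LUT[k] if k < LUT_SIZE else 0.0

class Trace:
    """One spike trace, updated only at spike events (no per-step loop)."""
    def __init__(self):
        self.z = 0.0       # trace value at time t_last
        self.t_last = 0.0  # time of the last update (ms)

    def on_spike(self, t, increment=1.0):
        # Jump the state analytically from t_last to t, then add the spike.
        self.z = self.z * decay_factor(t - self.t_last) + increment
        self.t_last = t

# Usage: three presynaptic spikes, updated purely event-driven.
tr = Trace()
for t_spike in (5.0, 12.0, 40.0):
    tr.on_spike(t_spike)
print(tr.z)
```

The same event-driven jump generalizes to the cascaded eligibility and probability traces, since each filtering stage is again linear in the output of the previous one.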
Main Authors: | Bernhard Vogginger, René Schüffny, Anders Lansner, Love Cederström, Johannes Partzsch, Sebastian Höppner |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2015-01-01 |
Series: | Frontiers in Neuroscience |
Subjects: | synaptic plasticity; Hebbian learning; spiking neural networks; look-up tables; digital neuromorphic hardware; Bayesian confidence propagation neural network (BCPNN) |
Online Access: | http://journal.frontiersin.org/Journal/10.3389/fnins.2015.00002/full |
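The abstract also mentions fixed-point state variables to shrink the per-synapse memory footprint. The sketch below assumes an unsigned 16-bit fraction (the paper assesses the bit widths actually required) and shows how a trace in [0, 1) can be decayed with an integer multiply and shift instead of floating-point arithmetic.

```python
# Minimal sketch (assumed Q0.16 format, i.e. an unsigned 16-bit fraction):
# a trace in [0, 1) is stored as an integer, and the exponential decay
# factor is applied by integer multiply and shift.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # fixed-point representation of 1.0

def to_fixed(x):
    """Convert a float in [0, 1] to Q0.16, clamping at the largest value."""
    return min(int(round(x * ONE)), ONE - 1)

def fixed_mul(a, b):
    """Multiply two Q0.16 values; the product is rescaled back to Q0.16."""
    return (a * b) >> FRAC_BITS

# Decay a trace of 0.75 by exp(-dt/tau) ~= 0.9048 (dt = 1 ms, tau = 10 ms).
z = to_fixed(0.75)
decay = to_fixed(0.9048)
z = fixed_mul(z, decay)
print(z / ONE)  # ~0.6786, matching the exact float result 0.75 * 0.9048
```

The quantization error per update stays below one least-significant bit, which is why the number of fraction bits directly sets the achievable accuracy relative to the explicit Euler baseline.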
author | Bernhard Vogginger; René Schüffny; Anders Lansner; Love Cederström; Johannes Partzsch; Sebastian Höppner |
collection | DOAJ |
description | The implementation of synaptic plasticity in neural simulations or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification and associative memory to reward-based learning, probabilistic inference, and cortical attractor memory networks. In the spike-based version of this learning rule, presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions that allow an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model to halve the number of basic arithmetic operations per update, and second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and the reduced memory bandwidth requirements, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. |
format | Article |
id | doaj.art-abc84418ac5a4a679e30192727fe679e |
institution | Directory Open Access Journal |
issn | 1662-453X |
language | English |
publishDate | 2015-01-01 |
publisher | Frontiers Media S.A. |
series | Frontiers in Neuroscience |
affiliations | Bernhard Vogginger (Technische Universität Dresden); René Schüffny (Technische Universität Dresden); Anders Lansner (Royal Institute of Technology (KTH) and Stockholm University); Love Cederström (Technische Universität Dresden); Johannes Partzsch (Technische Universität Dresden); Sebastian Höppner (Technische Universität Dresden) |
title | Reducing the computational footprint for real-time BCPNN learning |
topic | synaptic plasticity Hebbian Learning spiking neural networks look-up tables digital neuromorphic hardware Bayesian confidence propagation neural network (BCPNN) |
url | http://journal.frontiersin.org/Journal/10.3389/fnins.2015.00002/full |