Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity
Spiking neural networks (SNNs) are believed to be highly computationally and energy efficient for specific neurochip hardware real-time solutions. However, there is a lack of learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency with back-propagation techniques and capable of unsupervised training.
Main Authors: | Vyacheslav Demin, Dmitry Nekhaev |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2018-11-01 |
Series: | Frontiers in Neuroinformatics |
Subjects: | spiking neural networks; unsupervised learning; supervised learning; digits recognition; classification; neuron clustering |
Online Access: | https://www.frontiersin.org/article/10.3389/fninf.2018.00079/full |
_version_ | 1818313887843352576 |
author | Vyacheslav Demin; Dmitry Nekhaev |
author_facet | Vyacheslav Demin; Dmitry Nekhaev |
author_sort | Vyacheslav Demin |
collection | DOAJ |
description | Spiking neural networks (SNNs) are believed to be highly computationally and energy efficient for specific neurochip hardware real-time solutions. However, there is a lack of learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency with back-propagation techniques and capable of unsupervised training. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and we put this principle at the basis of a new SNN learning algorithm. On this basis, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. It is demonstrated that this SNN can be trained without a teacher after a short supervised initialization of the weights by the same algorithm. It is also shown that the neurons group into families of hierarchical structures corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. A comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to the construction of much more complicated SNNs solving diverse tasks. We refer to this new approach as “Family-Engaged Execution and Learning of Induced Neuron Groups,” or FEELING. |
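The description states the FEELING principle, that each neuron maximizes its own activity while competing with the others, but gives no update rule. The sketch below is only a toy illustration of that stated principle, not the paper's actual algorithm: the hard winner-take-all step stands in for the learned intralayer inhibition, the Hebbian-style weight pull is an assumed update, and all sizes and constants are made up for the example.

```python
# Illustrative sketch of the competitive activity-maximization principle the
# description names. NOT the paper's FEELING rule: the winner-take-all step,
# the weight update, and every constant below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 8                       # toy layer sizes (assumed)
W = rng.uniform(0.0, 0.1, (n_out, n_in))  # feed-forward weights
lr = 0.05                                 # learning rate (assumed)

for _ in range(1000):
    # Random binary "spike" pattern as input; a real run would use
    # spike-encoded MNIST digits, as in the paper.
    x = (rng.random(n_in) < 0.2).astype(float)

    drive = W @ x              # each neuron's input drive
    k = int(np.argmax(drive))  # hard competition: the most active neuron
                               # wins, standing in for intralayer inhibition

    # The winner pulls its weights toward the pattern that activated it,
    # raising its future activity on similar inputs.
    W[k] += lr * (x - W[k])
    np.clip(W[k], 0.0, 1.0, out=W[k])
```

Under these assumptions, each unit gradually specializes on a distinct cluster of input patterns, a simple analog of the competitive grouping into neuron families that the description attributes to the algorithm.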
first_indexed | 2024-12-13T08:40:53Z |
format | Article |
id | doaj.art-83bdfeca07b74baf82adbfeda5e356c6 |
institution | Directory Open Access Journal |
issn | 1662-5196 |
language | English |
last_indexed | 2024-12-13T08:40:53Z |
publishDate | 2018-11-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neuroinformatics |
spelling | doaj.art-83bdfeca07b74baf82adbfeda5e356c6 (2022-12-21T23:53:32Z); English; Frontiers Media S.A.; Frontiers in Neuroinformatics, ISSN 1662-5196, Vol. 12, 2018-11-01; DOI 10.3389/fninf.2018.00079; article 378909. Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity. Vyacheslav Demin (National Research Center “Kurchatov Institute”, Moscow, Russia; Moscow Institute of Physics and Technology, Dolgoprudny, Russia); Dmitry Nekhaev (National Research Center “Kurchatov Institute”, Moscow, Russia). https://www.frontiersin.org/article/10.3389/fninf.2018.00079/full. Keywords: spiking neural networks; unsupervised learning; supervised learning; digits recognition; classification; neuron clustering. |
spellingShingle | Vyacheslav Demin; Dmitry Nekhaev. Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity. Frontiers in Neuroinformatics. spiking neural networks; unsupervised learning; supervised learning; digits recognition; classification; neuron clustering |
title | Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity |
title_full | Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity |
title_fullStr | Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity |
title_full_unstemmed | Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity |
title_short | Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity |
title_sort | recurrent spiking neural network learning based on a competitive maximization of neuronal activity |
topic | spiking neural networks; unsupervised learning; supervised learning; digits recognition; classification; neuron clustering |
url | https://www.frontiersin.org/article/10.3389/fninf.2018.00079/full |
work_keys_str_mv | AT vyacheslavdemin recurrentspikingneuralnetworklearningbasedonacompetitivemaximizationofneuronalactivity AT dmitrynekhaev recurrentspikingneuralnetworklearningbasedonacompetitivemaximizationofneuronalactivity |