Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks.

Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To address this question, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), that rely on stochastic weight updates to produce spikes within a short window of the desired times and to eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumption about the post-synaptic potential of the neurons or the structure of the network, since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but other measures of network activity, such as avalanche distributions, can be learned through synaptic homeostasis.
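
The abstract describes the learning rule only at a high level: stochastic weight updates that add spikes within a short window of the desired times, remove spikes outside it, and combine local with homeostatic changes. As a rough illustration of that idea, and not the authors' implementation, here is a minimal Python sketch; the discrete-time leaky integrate-and-fire model and every name and parameter in it (`simulate_lif`, `pspm_update`, `homeostatic_update`, `window`, `eta`, `eta_h`, `p`) are assumptions for illustration only, so consult the paper at the DOI linked below for the actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lif(W, I_ext, v_th=1.0, v_reset=0.0, leak=0.9):
    """Discrete-time leaky integrate-and-fire network (illustrative).

    W[i, j] is the synaptic weight from neuron i to neuron j; I_ext is a
    (T, N) array of external input. Returns a (T, N) boolean spike raster.
    """
    T, N = I_ext.shape
    v = np.zeros(N)
    spikes = np.zeros((T, N), dtype=bool)
    for t in range(T):
        rec = spikes[t - 1].astype(float) @ W if t > 0 else 0.0
        v = leak * v + rec + I_ext[t]          # leaky integration
        fired = v >= v_th
        v[fired] = v_reset                     # reset after a spike
        spikes[t] = fired
    return spikes

def pspm_update(W, model, target, window=5, eta=0.01, p=0.5):
    """One PSPM-style local pass (schematic, not the paper's exact rule).

    For each target spike the model missed, potentiate a random subset of
    weights from the pre-synaptic pool (neurons that fired in the preceding
    window); for each spurious model spike, depress weights from its pool.
    """
    T, N = target.shape
    for n in range(N):
        m_times = np.flatnonzero(model[:, n])
        t_times = np.flatnonzero(target[:, n])
        for t in t_times:                      # missed target spikes
            if m_times.size == 0 or np.abs(m_times - t).min() > window:
                pool = model[max(0, t - window):t].any(axis=0)
                pick = pool & (rng.random(N) < p)   # stochastic subset
                W[pick, n] += eta
        for t in m_times:                      # spurious model spikes
            if t_times.size == 0 or np.abs(t_times - t).min() > window:
                pool = model[max(0, t - window):t].any(axis=0)
                pick = pool & (rng.random(N) < p)
                W[pick, n] -= eta
    return W

def homeostatic_update(W, model, target, eta_h=0.05):
    """Crude homeostatic step: scale each neuron's incoming weights up or
    down according to its firing-rate error relative to the target."""
    rate_err = target.mean(axis=0) - model.mean(axis=0)
    return W * (1.0 + eta_h * rate_err)[None, :]
```

A short driver under the same assumptions, mirroring the setup in the abstract: a hidden ground-truth network generates the target raster, and a model network receiving the same external drive alternates local and homeostatic passes.

```python
N, T = 50, 400
W_true = rng.normal(0.0, 0.1, (N, N))   # hidden ground-truth weights
I_ext = rng.normal(0.3, 0.1, (T, N))    # shared external drive
target = simulate_lif(W_true, I_ext)

W = rng.normal(0.0, 0.1, (N, N))        # random initial model weights
for _ in range(50):
    model = simulate_lif(W, I_ext)
    W = pspm_update(W, model, target)
    W = homeostatic_update(W, model, target)
```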

Bibliographic Details
Main Authors: Bryce Allen Bagley, Blake Bordelon, Benjamin Moseley, Ralf Wessel
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2020-01-01
Series: PLoS ONE, Vol. 15, Iss. 2, e0229083
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0229083