SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training

Spiking Neural Networks (SNNs) are a pathway that could potentially empower low-power event-driven neuromorphic hardware due to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate as ANNs. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs across deep learning fields. However, because the signals transmitted in an SNN are non-differentiable discrete binary spike events, the spike-based activation function prevents gradient-based optimization algorithms from being applied directly to SNNs, leading to a performance gap (i.e., in accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges backpropagation (BP)-based learning and spike-time-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme combines the global optimization process of BP with the efficient weight update derived from STDP: it avoids the non-differentiable derivative in the BP process while exploiting the local feature extraction property of STDP. Consequently, our method lowers the possibility of vanishing spikes in BP training and reduces the number of time steps, thereby reducing network latency. In SSTDP, we employ temporal coding and use the Integrate-and-Fire (IF) neuron as the neuron model to provide considerable computational benefits. Our experiments show the effectiveness of the proposed SSTDP learning algorithm, achieving the best classification accuracy among SNNs trained with other learning methods: 99.3% on the Caltech 101 dataset, 98.1% on the MNIST dataset, and 91.3% on the CIFAR-10 dataset. It also surpasses the best inference accuracy of directly trained SNNs with 25~32× lower inference latency. Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference in the spiking domain; SSTDP achieves 1.3~37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
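
The abstract describes SSTDP only at a high level. As a rough illustration of the ingredients it names (temporal time-to-first-spike coding, Integrate-and-Fire neurons, and an STDP-style local weight update steered by a global supervised error), here is a minimal Python sketch. Everything in it (layer sizes, constants, and the exact update rule) is an assumption made for illustration, not the authors' implementation, which lives in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 16      # simulation window in time steps (assumed value)
V_TH = 1.0  # Integrate-and-Fire firing threshold (assumed value)
LR = 0.01   # learning rate (assumed value)

def if_first_spike_times(x_times, w):
    """Integrate-and-Fire layer with time-to-first-spike coding.

    x_times: (n_in,) input spike times (earlier = stronger input).
    w:       (n_in, n_out) synaptic weights.
    Returns (n_out,) first-spike times; neurons that never cross the
    threshold are assigned time T ("no spike within the window").
    """
    n_out = w.shape[1]
    v = np.zeros(n_out)                     # membrane potentials
    t_out = np.full(n_out, float(T))
    for t in range(T):
        just_fired = (x_times == t)         # inputs spiking at this step
        v += just_fired.astype(float) @ w   # integrate incoming spikes
        newly = (v >= V_TH) & (t_out == T)  # record only the first crossing
        t_out[newly] = t
    return t_out

def sstdp_like_update(w, x_times, t_out, t_target):
    """STDP-flavored update driven by a global (supervised) timing error.

    err > 0 means the neuron fired later than desired, so causal
    (pre-before-post) synapses are potentiated to pull its spike
    earlier; err < 0 depresses them. A toy stand-in for the paper's
    derivation, not the published rule.
    """
    err = t_out - t_target
    causal = x_times[:, None] <= t_out[None, :]  # pre spike preceded post spike
    return w + LR * causal * err[None, :]

# Toy usage with made-up sizes: push output neuron 0 to fire early, the others late.
n_in, n_out = 8, 3
w = rng.normal(0.3, 0.1, size=(n_in, n_out))
x_times = rng.integers(0, T, size=n_in).astype(float)
t_target = np.array([2.0, 12.0, 12.0])

for _ in range(50):
    w = sstdp_like_update(w, x_times, if_first_spike_times(x_times, w), t_target)
print("first-spike times after training:", if_first_spike_times(x_times, w))
```

The sketch captures the division of labor the abstract claims: the global error only decides whether each neuron should fire earlier or later, while the causal pre-before-post mask keeps each weight change local and spike-timing-dependent, so no derivative of the spike function is ever needed.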

Bibliographic Details
Main Authors: Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Tao Yang, Li Jiang
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-11-01
Series: Frontiers in Neuroscience
ISSN: 1662-453X
DOI: 10.3389/fnins.2021.756876
Subjects: spiking neural network; gradient descent backpropagation; neuromorphic computing; spike-time-dependent plasticity; deep learning; efficient training
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2021.756876/full

Author Affiliations
Fangxin Liu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai Qi Zhi Institute, Shanghai, China
Wenbo Zhao: Shanghai Qi Zhi Institute, Shanghai, China; School of Engineering and Applied Science, Columbia University, New York, NY, United States
Yongbiao Chen: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
Zongwu Wang: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
Tao Yang: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
Li Jiang: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai Qi Zhi Institute, Shanghai, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China