Supervised learning in multilayer spiking neural network

Bibliographic Details
Main Author: Shrestha, Sumit Bam
Other Authors: Song, Qing
Format: Thesis
Language: English
Published: 2017
Online Access: http://hdl.handle.net/10356/72144
Description
Summary: Spiking Neural Networks (SNNs) are an exciting prospect in the field of Artificial Neural Networks (ANNs). In ANNs we try to replicate the massive interconnection of neurons, the computational units evident in the brain, to perform useful tasks, albeit with highly abstracted models of neurons. Artificial neurons are mostly realized as non-linear activation functions that process numeric inputs and produce numeric outputs. SNNs are less abstract than these systems in the sense that they use mathematical models of neurons, termed spiking neurons, which receive inputs in the form of spikes and emit spikes as output. This is exactly the way in which natural neurons exchange information. Since spikes are events in time, SNNs have an extra dimension of time along with amplitude, which makes them well suited to temporal processing.

There are only a few supervised learning algorithms for SNNs. For learning in multilayer architectures, the main methods are SpikeProp with its extensions and Multi-ReSuMe. The SpikeProp methods are based on an adaptation of backpropagation to SNNs and mostly consider only the first spike of each neuron. The original SpikeProp is usually slow and faces stability issues during learning: both large and very small learning rates often make it unstable. The instability appears as sudden jumps in training error, called surges, which change the course of learning and often cause the learning process to fail. To introduce a stability criterion, we present a weight convergence analysis of SpikeProp. Based on the convergence condition, we derive an adaptive learning rate rule that selects a rate small enough to guarantee convergence of the learning process, yet large enough to keep learning fast. On several benchmark problems, the resulting method, SpikePropAd, shows fewer surges and faster learning than SpikeProp and its faster variant RProp; performance is evaluated mainly in terms of learning speed and the rate of successful learning.

We also consider internal and external disturbances to the learning process and provide a thorough error analysis in addition to the weight convergence analysis. Using conic sector stability theory, we determine the conditions under which the learning process is stable in L2 space and extend the result to L1 stability. L2 stability in theory requires the disturbance to die out after a certain period of time, whereas L1 stability implies that the system remains stable provided the disturbance stays within bounds. We explore two approaches to robust stability of SpikeProp in the presence of disturbances: an individual error approach, which leads to SpikePropR learning, and a total error approach, which leads to SpikePropRT learning. SpikePropR gives a slight improvement over SpikePropAd, while SpikePropRT gives a significant improvement, especially on real-world, non-synthetic datasets. Finally, we propose EvSpikeProp, an event-based weight update rule for learning spike trains rather than the time of the first spike. It overcomes the limitations of other multi-spike extensions of SpikeProp and is suitable for online learning, which suits SNNs well because spiking activity is a continuous process in time.
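To make the notion of a spiking neuron concrete, the following is a minimal sketch of a Spike Response Model (SRM) neuron of the kind SpikeProp-style methods typically build on. The kernel shape, time constant, and threshold value are illustrative assumptions, not the exact model used in the thesis.

```python
import numpy as np

def srm_kernel(s, tau=5.0):
    """Spike response kernel: eps(s) = (s/tau) * exp(1 - s/tau) for s > 0, else 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

def first_spike_time(weights, input_times, threshold=1.0, t_max=50.0, dt=0.01):
    """Earliest time the membrane potential u(t) = sum_i w_i * eps(t - t_i)
    crosses the threshold; None if the neuron stays silent in the window."""
    input_times = np.asarray(input_times, dtype=float)
    for t in np.arange(0.0, t_max, dt):
        u = np.sum(weights * srm_kernel(t - input_times))
        if u >= threshold:
            return t
    return None

# Example: one neuron with three input synapses spiking at t = 0, 2, 4 (ms).
w = np.array([0.7, 0.5, 0.9])
print(first_spike_time(w, [0.0, 2.0, 4.0]))
```

First-spike methods such as SpikeProp treat the output time returned here as the quantity to be learned, differentiating it with respect to the weights.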
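The convergence-based learning rate adaptation described above can be illustrated with a generic normalized-gradient form. The bound eta <= c / (mu + ||g||^2) below is a common shape for such convergence conditions and is only a stand-in for the actual SpikePropAd rule derived in the thesis; c and mu are hypothetical tuning constants.

```python
import numpy as np

def spikeprop_ad_step(w, grad, c=0.9, mu=1e-8):
    """One gradient step with an adaptively bounded learning rate.
    A large gradient norm shrinks eta (guarding against surges),
    while a small norm lets eta grow to keep learning fast."""
    eta = c / (mu + np.dot(grad, grad))  # illustrative convergence bound
    return w - eta * grad
```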
The results derived in the convergence and stability analysis of SpikeProp above are extended to this multi-spike framework to show weight convergence and robust stability in L2 and L1 space; the resulting method is named EvSpikePropR. It performs better than Multi-ReSuMe across the different learning problems considered. Apart from that, we also extend the adaptive learning rule based on weight convergence to delay learning in SNNs, yielding SpikePropAdDel. This delay-learning extension is useful because it speeds up the learning process, eliminates redundant synapses, and minimizes surges as well.
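As a rough illustration of joint weight-and-delay adaptation in the spirit of SpikePropAdDel, the sketch below applies one shared, convergence-bounded learning rate to both parameter sets. The gradient computations and the exact adaptive rule from the thesis are not reproduced here; the function and parameter names are hypothetical.

```python
import numpy as np

def spikeprop_ad_del_step(w, d, grad_w, grad_d, c=0.9, mu=1e-8):
    """One joint update of synaptic weights w and synaptic delays d,
    sharing a single adaptively bounded learning rate."""
    g2 = np.dot(grad_w, grad_w) + np.dot(grad_d, grad_d)
    eta = c / (mu + g2)                          # shared adaptive rate
    w_new = w - eta * grad_w
    d_new = np.maximum(d - eta * grad_d, 0.0)    # keep delays non-negative
    return w_new, d_new
```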