Trainable quantization for Speedy Spiking Neural Networks

Bibliographic Details
Main Authors: Andrea Castagnetti, Alain Pegatoquet, Benoît Miramond
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-03-01
Series: Frontiers in Neuroscience
Subjects: Spiking Neural Networks; quantization error; low latency; sparsity; direct training
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2023.1154241/full
author Andrea Castagnetti
Alain Pegatoquet
Benoît Miramond
collection DOAJ
description Spiking neural networks (SNNs) are considered the third generation of artificial neural networks. SNNs perform computation using neurons and synapses that communicate through binary, asynchronous signals known as spikes. They have attracted significant research interest in recent years because their computing paradigm theoretically allows sparse and low-power operation. This hypothetical gain, assumed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of mature learning frameworks, and a high data-processing latency that ultimately generates energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy drop between ANNs and SNNs. In this article we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neuron model. This model allows the threshold of neurons to be adapted during training and implements efficient quantization strategies. This novel approach better explains the global behavior of SNNs and minimizes the quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while beating state-of-the-art accuracy and preserving high sparsity on the main datasets considered in the neuromorphic community.
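
As a rough illustration of the kind of mechanism the abstract describes (a hypothetical sketch, not the authors' published model or code): a leaky integrate-and-fire layer in PyTorch whose firing threshold is a trainable parameter, using a rectangular surrogate gradient so the spiking non-linearity can be trained end-to-end by backpropagation over a small number of timesteps. The class name TrainableThresholdLIF, the soft-reset rule, and the surrogate width are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # Let gradients pass only near the threshold (width 0.5 is an assumption).
        surrogate = (v_minus_thresh.abs() < 0.5).float()
        return grad_output * surrogate


class TrainableThresholdLIF(nn.Module):
    """Leaky integrate-and-fire layer with a learnable firing threshold."""

    def __init__(self, n_in, n_out, tau=2.0, init_threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.decay = 1.0 - 1.0 / tau                  # membrane leak factor
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, x_seq):
        # x_seq: (timesteps, batch, n_in); returns a spike train of the same length.
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x in x_seq:
            v = self.decay * v + self.fc(x)           # integrate the input current
            s = SpikeFn.apply(v - self.threshold)     # fire where v exceeds the threshold
            v = v - s * self.threshold                # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    layer = TrainableThresholdLIF(n_in=10, n_out=4)
    x = torch.rand(4, 2, 10)                          # 4 timesteps, batch of 2
    out = layer(x)
    out.sum().backward()                              # the threshold receives a gradient too
    print(out.shape, layer.threshold.grad)
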
first_indexed 2024-04-10T06:03:29Z
format Article
id doaj.art-125fa6f403bb48c6951e55800db8f9bb
institution Directory Open Access Journal
issn 1662-453X
language English
last_indexed 2024-04-10T06:03:29Z
publishDate 2023-03-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Neuroscience
title Trainable quantization for Speedy Spiking Neural Networks
topic Spiking Neural Networks
quantization error
low latency
sparsity
direct training
url https://www.frontiersin.org/articles/10.3389/fnins.2023.1154241/full