Quantization Framework for Fast Spiking Neural Networks
Compared with artificial neural networks (ANNs), spiking neural networks (SNNs) offer additional temporal dynamics with the compromise of lower information transmission rates through the use of spikes. When using an ANN-to-SNN conversion technique there is a direct link between the activation bit precision of the artificial neurons and the time required by the spiking neurons to represent the same bit precision...
Main Authors: Chen Li, Lei Ma, Steve Furber
Format: Article
Language: English
Published: Frontiers Media S.A., 2022-07-01
Series: Frontiers in Neuroscience
Subjects: spiking neural networks; fast spiking neural networks; ANN-to-SNN conversion; inference latency; quantization; occasional noise
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2022.918793/full
_version_ | 1818195144309997568 |
author | Chen Li; Lei Ma; Lei Ma; Steve Furber
author_facet | Chen Li; Lei Ma; Lei Ma; Steve Furber
author_sort | Chen Li |
collection | DOAJ |
description | Compared with artificial neural networks (ANNs), spiking neural networks (SNNs) offer additional temporal dynamics with the compromise of lower information transmission rates through the use of spikes. When using an ANN-to-SNN conversion technique there is a direct link between the activation bit precision of the artificial neurons and the time required by the spiking neurons to represent the same bit precision. This implicit link suggests that techniques used to reduce the activation bit precision of ANNs, such as quantization, can help shorten the inference latency of SNNs. However, carrying ANN quantization knowledge over to SNNs is not straightforward, as there are many fundamental differences between them. Here we propose a quantization framework for fast SNNs (QFFS) to overcome these difficulties, providing a method to build SNNs with enhanced latency and reduced loss of accuracy relative to the baseline ANN model. In this framework, we promote the compatibility of ANN information quantization techniques with SNNs, and suppress “occasional noise” to minimize accuracy loss. The resulting SNNs overcome the accuracy degeneration observed previously in SNNs with a limited number of time steps and achieve an accuracy of 70.18% on ImageNet within 8 time steps. This is the first demonstration that SNNs built by ANN-to-SNN conversion can achieve a similar latency to SNNs built by direct training. |
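The "direct link" the abstract describes between activation bit precision and SNN time steps can be sketched as follows. This is an illustrative assumption based on the abstract's claim only, not the paper's QFFS method: it assumes uniform quantization of activations in [0, 1] and simple rate coding in which a b-bit activation is represented exactly by a spike count over 2^b − 1 time steps. The function names are hypothetical.

```python
def quantize_activation(a: float, bits: int) -> float:
    """Uniformly quantize an activation in [0, 1] to `bits` of precision."""
    levels = 2 ** bits - 1              # e.g. 3 bits -> 7 quantization levels
    a = min(max(a, 0.0), 1.0)           # clamp to the representable range
    return round(a * levels) / levels

def spike_count(a_q: float, bits: int) -> int:
    """Spikes needed to rate-code a quantized activation exactly
    within 2**bits - 1 time steps (illustrative assumption)."""
    return round(a_q * (2 ** bits - 1))

# A 3-bit activation is exactly representable in 7 time steps:
a_q = quantize_activation(0.73, bits=3)   # -> 5/7 ~= 0.714
spikes = spike_count(a_q, bits=3)         # -> 5 spikes across 7 time steps
```

Under this reading, reducing the ANN's activation bit precision (say from 8 bits to 3) shrinks the number of time steps an exact spike-based representation requires, which is the intuition behind using quantization to cut SNN inference latency.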
first_indexed | 2024-12-12T01:13:31Z |
format | Article |
id | doaj.art-5612a3d511af4fdbae974b15b5c7fccf |
institution | Directory Open Access Journal |
issn | 1662-453X |
language | English |
last_indexed | 2024-12-12T01:13:31Z |
publishDate | 2022-07-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neuroscience |
spelling | doaj.art-5612a3d511af4fdbae974b15b5c7fccf; 2022-12-22T00:43:25Z; eng; Frontiers Media S.A.; Frontiers in Neuroscience; 1662-453X; 2022-07-01; vol. 16; 10.3389/fnins.2022.918793; 918793; Quantization Framework for Fast Spiking Neural Networks; Chen Li (Advanced Processor Technologies (APT) Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom); Lei Ma (Beijing Academy of Artificial Intelligence, Beijing, China); Lei Ma (National Biomedical Imaging Center, Peking University, Beijing, China); Steve Furber (Advanced Processor Technologies (APT) Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom); [abstract as given in the description field above]; https://www.frontiersin.org/articles/10.3389/fnins.2022.918793/full; spiking neural networks; fast spiking neural networks; ANN-to-SNN conversion; inference latency; quantization; occasional noise
spellingShingle | Chen Li; Lei Ma; Lei Ma; Steve Furber; Quantization Framework for Fast Spiking Neural Networks; Frontiers in Neuroscience; spiking neural networks; fast spiking neural networks; ANN-to-SNN conversion; inference latency; quantization; occasional noise
title | Quantization Framework for Fast Spiking Neural Networks |
title_full | Quantization Framework for Fast Spiking Neural Networks |
title_fullStr | Quantization Framework for Fast Spiking Neural Networks |
title_full_unstemmed | Quantization Framework for Fast Spiking Neural Networks |
title_short | Quantization Framework for Fast Spiking Neural Networks |
title_sort | quantization framework for fast spiking neural networks |
topic | spiking neural networks; fast spiking neural networks; ANN-to-SNN conversion; inference latency; quantization; occasional noise
url | https://www.frontiersin.org/articles/10.3389/fnins.2022.918793/full |
work_keys_str_mv | AT chenli quantizationframeworkforfastspikingneuralnetworks AT leima quantizationframeworkforfastspikingneuralnetworks AT leima quantizationframeworkforfastspikingneuralnetworks AT stevefurber quantizationframeworkforfastspikingneuralnetworks |