A Comparative Study on the Performance and Security Evaluation of Spiking Neural Networks

Brain-inspired spiking neural networks (SNNs) are claimed to offer advantages for visual classification tasks in terms of energy efficiency and inherent robustness. In this work, we explore how neural coding schemes and the intrinsic structural parameters of Leaky Integrate-and-Fire (LIF) neurons affect inter-layer network sparsity, which can serve as a candidate metric for performance evaluation. To this end, we perform a comparative study of four critical neural coding schemes: rate coding (Poisson coding), latency coding, phase coding, and direct coding, together with six LIF-neuron intrinsic parameter options, for a total of 24 combined parameter schemes. Specifically, the models were trained with a supervised, surrogate-gradient algorithm, and two adversarial attacks, Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied on the CIFAR-10 dataset. We identify the sources of inter-layer sparsity in SNNs and quantitatively analyze the differences in sparsity caused by coding schemes, neuron leakage factors, and thresholds. Various aspects of network performance are considered, including inference accuracy, adversarial robustness, and energy efficiency. Our results show that latency coding achieves the highest adversarial robustness and energy efficiency against low-intensity attacks, while rate coding offers the best adversarial robustness against medium- and high-intensity attacks. The maximum deviations of robustness and efficiency between coding schemes are 9.35% in VGG5 and 13.59% in VGG9. Increasing the sparsity of spike activity by raising the threshold can yield a short-lived adversarial-robustness sweet spot, while excessive sparsity due to changes in threshold and leakage can instead reduce adversarial robustness. The study reveals the advantages, disadvantages, and design space of SNNs along multiple dimensions, allowing researchers to frame their neuromorphic systems in terms of coding methods, the neurons' inherent structure, and model learning capability.
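For readers unfamiliar with the encoding and neuron model the study compares, the following minimal Python sketch (not the authors' code; the names leak, v_th, and the toy shapes are illustrative assumptions) shows Poisson rate coding and a discrete-time Leaky Integrate-and-Fire update, and how the coding scheme together with the neuron's leak factor and firing threshold determines spike sparsity.

```python
import numpy as np

def poisson_rate_encode(image, timesteps, rng=np.random.default_rng(0)):
    """Rate (Poisson) coding: each pixel intensity in [0, 1] fires a
    Bernoulli spike per timestep with probability equal to that intensity."""
    return (rng.random((timesteps, *image.shape)) < image).astype(np.float32)

def lif_layer(spikes_in, weights, leak=0.9, v_th=1.0):
    """Discrete-time LIF update: v[t] = leak * v[t-1] + I[t]; a spike is
    emitted and the membrane is reset when v crosses the threshold v_th.
    A higher v_th or stronger leak (smaller leak factor) gives sparser output."""
    T = spikes_in.shape[0]
    v = np.zeros(weights.shape[1])
    spikes_out = np.zeros((T, weights.shape[1]), dtype=np.float32)
    for t in range(T):
        v = leak * v + spikes_in[t] @ weights      # leaky integration of input current
        fired = v >= v_th                          # threshold comparison
        spikes_out[t] = fired
        v = np.where(fired, 0.0, v)                # hard reset after firing
    return spikes_out

# Toy usage: a flattened 4-pixel "image" over 8 timesteps through one layer.
img = np.array([0.1, 0.5, 0.9, 0.3])
inp = poisson_rate_encode(img, timesteps=8)
out = lif_layer(inp, weights=np.full((4, 2), 0.5))
print("output spike sparsity:", 1.0 - out.mean())
```

Latency, phase, and direct coding would swap out the encoder while the LIF update stays the same, which is why the abstract treats coding scheme and neuron parameters as separate axes of the 24-scheme design space.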

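The two attacks used in the evaluation are standard white-box methods. As a reference point only, and not the authors' PyTorch setup, the sketch below expresses FGSM and PGD in NumPy; loss_grad is a hypothetical callable returning the gradient of the classification loss with respect to the input.

```python
import numpy as np

def fgsm(x, loss_grad, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps,
    clipped back to the valid input range [0, 1]."""
    return np.clip(x + eps * np.sign(loss_grad(x)), 0.0, 1.0)

def pgd(x, loss_grad, eps, alpha, steps):
    """Projected Gradient Descent: repeated signed-gradient steps of size alpha,
    projected onto the L-infinity ball of radius eps around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv
```

In the paper's terms, eps sets the attack intensity (low, medium, high) against which the coding schemes are compared.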

Bibliographic Details
Main Authors: Yanjie Li, Xiaoxin Cui, Yihao Zhou, Ying Li
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Subjects: Spiking neural network; accuracy; energy efficiency; adversarial robustness; sparsity
Online Access: https://ieeexplore.ieee.org/document/9940625/
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3220367
Published in: IEEE Access, vol. 10, pp. 117572-117581, 2022 (IEEE document 9940625)
Author affiliations:
Yanjie Li (ORCID: 0000-0003-2897-6446) - Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China
Xiaoxin Cui (ORCID: 0000-0002-0394-8839) - School of Integrated Circuits, Peking University, Beijing, China
Yihao Zhou - Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China
Ying Li (ORCID: 0000-0002-5089-0158) - Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China