DTS-SNN: Spiking Neural Networks With Dynamic Time-Surfaces

Bibliographic Details
Main Authors: Donghyung Yoo, Doo Seok Jeong
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9903429/
Description
Summary: Convolution helps spiking neural networks (SNNs) capture the spatio-temporal structure of neuromorphic (event) data, as is evident in convolution-based SNNs (C-SNNs) with state-of-the-art classification accuracies on various datasets. However, the efficacy aside, the efficiency of C-SNNs is questionable. In this regard, we propose SNNs with novel trainable dynamic time-surfaces (DTS-SNNs) as efficient alternatives to convolution. The dynamic time-surface proposed in this work features high responsiveness to moving objects owing to its use of a zero-sum temporal kernel, which is motivated by the receptive fields of simple cells in the early visual pathway. We evaluated the performance and computational complexity of our DTS-SNNs on three real-world event-based datasets (DVS128 Gesture, Spiking Heidelberg dataset, N-Cars). The results highlight high classification accuracies and significant improvements in computational efficiency, e.g., merely 1.51% behind the state-of-the-art result on DVS128 Gesture but a ×18 improvement in efficiency. The code is available online (https://github.com/dooseokjeong/DTS-SNN).
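
To make the idea of a zero-sum temporal kernel concrete, the sketch below evaluates a fixed, hand-chosen biphasic kernel as a per-pixel time-surface. This is an illustration only: the DTS-SNN kernels are trainable, and the function names, parameters (tau_fast, tau_slow), and values here are assumptions rather than the paper's formulation.

```python
import numpy as np

# Illustrative sketch, not the paper's method: a fixed zero-sum temporal kernel
# applied as a time-surface over the time since each pixel's last event.
# tau_fast / tau_slow and their values are assumed for this toy example.

def zero_sum_kernel(dt, tau_fast=20e-3, tau_slow=100e-3):
    """Biphasic difference of exponentials whose integral over dt >= 0 is zero."""
    return np.exp(-dt / tau_fast) / tau_fast - np.exp(-dt / tau_slow) / tau_slow


def time_surface(last_event_times, t_now, **kernel_kwargs):
    """Evaluate the kernel at the time elapsed since each pixel's last event."""
    dt = np.clip(t_now - last_event_times, 0.0, None)
    return zero_sum_kernel(dt, **kernel_kwargs)


# Toy usage: pixels that fired ~1 ms ago (a moving edge) respond strongly,
# while pixels whose last event is ~1 s old have decayed toward zero.
last_t = np.full((4, 4), -1.0)   # stale events
last_t[1:3, 1:3] = 0.099         # recent events
print(time_surface(last_t, t_now=0.1).round(2))
```

Because the kernel integrates to zero over dt >= 0, pixels whose last event is old contribute almost nothing, while recently active pixels dominate the surface, which is the intuition behind its high responsiveness to moving objects.
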
ISSN: 2169-3536