Summary: | As the third generation of neural networks, the spiking neural network (SNN), motivated by neurophysiology, has achieved considerable advances by integrating different kinds of information, such as time and space. The frequency domain provides a powerful capability for modeling and training convolutional neural networks (CNNs). However, an SNN with binary input and output loses much information and is slightly inferior to deep neural networks (DNNs). We consider how to make the most of the information in order to protect the input. Binary input and output differ from those of a DNN, and the essence of this difference lies in the frequency distribution. In this work, from the insight of frequency distribution, we rethink the SNN training process and propose a novel method to transfer an SNN to a high-frequency spiking neural network (HF-SNN). This approach preserves considerably more information than other optimization strategies and enables flexibility in the training process. In addition, we evaluate the HF-SNN with extensive experiments on three large datasets: CIFAR-10, CIFAR-100, and ImageNet. Finally, our model supports training a deeper SNN from scratch and achieves better performance on these datasets than existing SNN models.
|