Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction

Spiking neural networks (SNNs) are well suited to low-power and embedded systems (such as emerging neuromorphic chips) because of their event-based nature. They also offer lower computation cost than conventional artificial neural networks (ANNs) while preserving ANNs' properties. However, temporal coding in the layers of convolutional SNNs and other SNN variants has yet to be studied. In this paper, we provide insight into the spatio-temporal feature extraction of convolutional SNNs through experiments designed to exploit this property. A shallow convolutional SNN outperforms state-of-the-art spatio-temporal feature extractors such as C3D, ConvLSTM, and similar networks. Furthermore, we present a new deep spiking architecture for real-world problems (in particular, classification tasks) that achieves superior performance compared to other SNN methods on N-MNIST (99.6%), DVS-CIFAR10 (69.2%), and DVS-Gesture (96.7%), and to ANN methods on the UCF-101 (42.1%) and HMDB-51 (21.5%) datasets. The training process is implemented with a variation of spatio-temporal backpropagation described in the paper.
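Since the paper's architecture and training details are not reproduced on this page, the following is a minimal, hypothetical sketch of the general mechanism the abstract refers to: a convolutional SNN with leaky integrate-and-fire (LIF) neurons, trained by backpropagating through both the spatial (layer) and temporal (time-step) dimensions using a surrogate gradient, in the spirit of spatio-temporal backpropagation. All names, the leak factor `tau`, the rectangular surrogate window, and the tiny network shape are illustrative assumptions, not the paper's STS-ResNet.

```python
# Hypothetical sketch of a convolutional spiking layer with LIF dynamics and a
# surrogate gradient, unrolled over time so gradients flow through both layers
# and time steps. Constants and shapes are assumptions for illustration only.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Rectangular surrogate derivative around the firing threshold.
        surrogate = (membrane_potential.abs() < 0.5).float()
        return grad_output * surrogate


class ConvLIF(nn.Module):
    """Convolution followed by LIF dynamics, advanced one time step per call."""

    def __init__(self, in_channels, out_channels, tau=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.tau = tau  # membrane leak factor (assumed value)

    def forward(self, x_t, membrane):
        # Leaky integration of the convolved input, then threshold at 1.0.
        membrane = self.tau * membrane + self.conv(x_t)
        spikes = SurrogateSpike.apply(membrane - 1.0)
        membrane = membrane * (1.0 - spikes)  # hard reset after a spike
        return spikes, membrane


class TinySpikingNet(nn.Module):
    """A shallow convolutional SNN over a sequence of event frames [T, B, C, H, W]."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.layer = ConvLIF(2, 16)  # 2 input channels: ON/OFF event polarities
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.readout = nn.Linear(16, num_classes)

    def forward(self, frames):
        T, B = frames.shape[0], frames.shape[1]
        membrane = frames.new_zeros(B, 16, frames.shape[3], frames.shape[4])
        logits = 0.0
        for t in range(T):  # temporal unrolling; gradients flow through all steps
            spikes, membrane = self.layer(frames[t], membrane)
            logits = logits + self.readout(self.pool(spikes).flatten(1))
        return logits / T  # rate-averaged readout


if __name__ == "__main__":
    model = TinySpikingNet()
    frames = torch.rand(10, 4, 2, 34, 34)  # e.g. 10 time bins of N-MNIST-sized frames
    loss = nn.functional.cross_entropy(model(frames), torch.randint(0, 10, (4,)))
    loss.backward()  # backpropagation through space (layers) and time (steps)
```

In this sketch the readout simply averages per-step logits, a rate-style decoding; temporal-coding variants would weight spike timing differently, which is the kind of property the paper's experiments probe.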


Results from the Paper


Task                        Dataset          Model        Metric        Value   Global Rank
Event data classification   CIFAR10-DVS      STS-ResNet   Accuracy (%)  69.2    #7
Gesture Recognition         DVS128 Gesture   STS-ResNet   Accuracy (%)  96.7    #6
Image Classification        N-MNIST          STS-ResNet   Accuracy (%)  99.6    #1

Methods


No methods listed for this paper.