SpinalNet: Deep Neural Network with Gradual Input

Deep neural networks (DNNs) have achieved state-of-the-art performance in numerous fields. However, DNNs require high computation times, and better performance at lower computational cost is always desirable. We therefore study the human somatosensory system and design a neural network (SpinalNet) that achieves higher accuracy with less computation. Hidden layers in traditional NNs receive inputs from the previous layer, apply an activation function, and then transfer the outcomes to the next layer. In the proposed SpinalNet, each layer is divided into three splits: 1) input split, 2) intermediate split, and 3) output split. The input split of each layer receives a portion of the inputs. The intermediate split of each layer receives the outputs of the intermediate split of the previous layer and the outputs of the input split of the current layer. As a result, the number of incoming weights per layer becomes significantly lower than in traditional DNNs. SpinalNet can also be used as the fully connected or classification layer of a DNN and supports both traditional learning and transfer learning. We observe significant error reductions with lower computational costs in most of the DNNs. Traditional learning on the VGG-5 network with SpinalNet classification layers provided state-of-the-art (SOTA) performance on the QMNIST, Kuzushiji-MNIST, and EMNIST (Letters, Digits, and Balanced) datasets. Traditional learning with ImageNet pre-trained initial weights and SpinalNet classification layers provided SOTA performance on the STL-10, Fruits 360, Bird225, and Caltech-101 datasets. The scripts of the proposed SpinalNet are available at the following link: https://github.com/dipuk0506/SpinalNet
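To make the layer-splitting idea concrete, below is a minimal PyTorch sketch of a Spinal fully connected (classification) head in the spirit of the linked repository. The class name `SpinalFC`, the layer width, the number of spinal layers, and the alternation between the two halves of the flattened feature vector are illustrative assumptions, not the exact code from the repository.

```python
import torch
import torch.nn as nn

class SpinalFC(nn.Module):
    """Sketch of a Spinal fully connected (classification) head.

    The flattened feature vector is split into two halves. Each spinal
    layer receives one half of the input plus the previous layer's
    output, so every layer has far fewer incoming weights than a dense
    layer over the full feature vector. The output layer concatenates
    all spinal layer outputs.
    """
    def __init__(self, in_features=512, layer_width=128, num_classes=10, num_layers=4):
        super().__init__()
        self.half = in_features // 2
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # First layer sees only half of the input; later layers also
            # receive the previous spinal layer's output.
            in_dim = self.half if i == 0 else self.half + layer_width
            self.layers.append(nn.Sequential(nn.Linear(in_dim, layer_width), nn.ReLU()))
        self.out = nn.Linear(layer_width * num_layers, num_classes)

    def forward(self, x):
        outputs, prev = [], None
        for i, layer in enumerate(self.layers):
            # Alternate between the first and second half of the input
            # so the network receives the input gradually.
            part = x[:, :self.half] if i % 2 == 0 else x[:, self.half:]
            inp = part if prev is None else torch.cat([part, prev], dim=1)
            prev = layer(inp)
            outputs.append(prev)
        return self.out(torch.cat(outputs, dim=1))
```

For example, `SpinalFC(512, 128, 10)(torch.randn(4, 512))` returns logits of shape (4, 10); in a transfer-learning setting such a head would replace the final dense classifier of a pre-trained backbone such as VGG-19bn or Wide-ResNet-101.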

Task | Dataset | Model | Metric | Value | Global Rank
Fine-Grained Image Classification | Bird-225 | VGG-19bn (Spinal FC) | Accuracy | 99.02 | #4
Fine-Grained Image Classification | Bird-225 | VGG-19bn | Accuracy | 98.67 | #5
Fine-Grained Image Classification | Caltech-101 | VGG-19bn (Spinal FC) | Top-1 Error Rate | 6.84% | #7
Fine-Grained Image Classification | Caltech-101 | Wide-ResNet-101 | Top-1 Error Rate | 2.89% | #3
Fine-Grained Image Classification | Caltech-101 | Wide-ResNet-101 (Spinal FC) | Top-1 Error Rate | 2.68% | #2
Fine-Grained Image Classification | Caltech-101 | Wide-ResNet-101 (Spinal FC) | Accuracy | 97.32 | #6
Image Classification | EMNIST-Balanced | VGG-5 | Accuracy | 91.04 | #3
Image Classification | EMNIST-Balanced | VGG-5 (Spinal FC) | Accuracy | 91.05 | #2
Image Classification | EMNIST-Digits | VGG-5 (Spinal FC) | Accuracy (%) | 99.75 | #3
Image Classification | EMNIST-Letters | VGG-5 (Spinal FC) | Accuracy | 95.88 | #2
Image Classification | EMNIST-Letters | VGG-5 | Accuracy | 95.86 | #3
Image Classification | Flowers-102 | Wide-ResNet-101 (Spinal FC) | Accuracy | 99.30 | #11
Fine-Grained Image Classification | Fruits-360 | VGG-19bn | Accuracy (%) | 99.90 | #2
Image Classification | Kuzushiji-MNIST | VGG-5 (Spinal FC) | Accuracy | 99.15 | #1
Image Classification | Kuzushiji-MNIST | VGG-5 (Spinal FC) | Error | 0.85 | #1
Image Classification | MNIST | VGG-5 (Spinal FC) | Percentage error | 0.28 | #13
Image Classification | MNIST | VGG-5 (Spinal FC) | Accuracy | 99.72 | #8
Fine-Grained Image Classification | Oxford 102 Flowers | Wide-ResNet-101 (Spinal FC) | Accuracy | 99.30% | #5
Image Classification | STL-10 | VGG-19bn | Percentage correct | 95.44 | #18
Image Classification | STL-10 | Wide-ResNet-101 (Spinal FC) | Percentage correct | 98.66 | #4
Satellite Image Classification | STL-10, 40 Labels | WideResNet | Percentage correct | 98.58 | #1

Methods