Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods

24 Jan 2020  ·  Yahia Assiri ·

Convolutional neural networks achieve the best accuracies in many visual pattern classification problems. However, because of the model capacity required to capture such representations, they are prone to overfitting and therefore require proper regularization to generalize well. In this paper, we present a combination of regularization techniques that work together to improve performance: we build plain CNNs and then apply data augmentation, dropout, and a customized early-stopping function. We test and evaluate these techniques on five well-known datasets (MNIST, CIFAR10, CIFAR100, SVHN, STL10), achieving state-of-the-art results on three of them (MNIST, SVHN, STL10) and very high accuracy on the other two.
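The abstract names a customized early-stopping function as one of the regularization techniques, but does not describe it. Below is a minimal illustrative sketch of a generic patience-based early-stopping criterion; the `patience` and `min_delta` logic is an assumption for illustration, not the paper's actual function.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs.

    This is a generic sketch, not the paper's customized function:
    the patience/min_delta rule here is an illustrative assumption.
    """

    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum decrease that counts as improvement
        self.best = float("inf")    # best validation loss seen so far
        self.wait = 0               # epochs since the last improvement

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss    # improvement: record it and reset the counter
            self.wait = 0
            return False
        self.wait += 1              # no improvement this epoch
        return self.wait >= self.patience
```

In a training loop, `step` would be called with the validation loss after each epoch, and training would break out of the loop once it returns `True`.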


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | CIFAR-10 | Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods | Percentage correct | 94.29 | # 143 |
| Image Classification | CIFAR-10 | Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods | PARAMS | 4.3M | # 193 |
| Image Classification | CIFAR-100 | SOPCNN | Percentage correct | 72.96 | # 155 |
| Image Classification | CIFAR-100 | SOPCNN | PARAMS | 4,252,298 | # 1 |
| Image Classification | MNIST | SOPCNN (only a single model) | Percentage error | 0.17 | # 4 |
| Image Classification | MNIST | SOPCNN (only a single model) | Accuracy | 99.83 | # 4 |
| Image Classification | MNIST | SOPCNN (only a single model) | Trainable Parameters | 1,400,000 | # 3 |
| Image Classification | STL-10 | SOPCNN | Percentage correct | 88.08 | # 44 |
| Image Classification | SVHN | SOPCNN | Percentage error | 1.50 | # 11 |
