Rethinking Recurrent Neural Networks and Other Improvements for Image Classification

30 Jul 2020 · Nguyen Huu Phong, Bernardete Ribeiro

Over the decades-long history of machine learning, recurrent neural networks (RNNs) have been used mainly for sequential and time-series data, generally with 1D information. Even in the rare studies on 2D images, these networks are used merely to learn and generate data sequentially rather than for image recognition tasks. In this study, we propose integrating an RNN as an additional layer when designing image recognition models. We also develop end-to-end multi-model ensembles that produce expert predictions by combining several models. In addition, we extend the training strategy so that our model performs comparably to leading models and can even match state-of-the-art results on several challenging datasets (e.g., SVHN (0.99), CIFAR-100 (0.9027) and CIFAR-10 (0.9852)). Moreover, our model sets a new record on the Surrey dataset (0.949). The source code for the methods in this article is available at https://github.com/leonlha/e2e-3m and http://nguyenhuuphong.me.
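The two central ideas of the abstract, using an RNN as an extra layer on top of convolutional features and combining several classifiers into an end-to-end ensemble, can be illustrated with a short Keras sketch. This is a minimal illustration under assumed settings: the layer sizes, the row-wise reshape of the feature map, and the soft-voting average are placeholders for exposition, not the paper's exact E2E-3M configuration.

from tensorflow.keras import layers, models

def build_cnn_rnn_classifier(input_shape=(32, 32, 3), num_classes=10):
    # Small CNN backbone; the filter counts here are illustrative, not the paper's.
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)                      # 16 x 16 x 64
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)                      # 8 x 8 x 128
    # Treat each row of the feature map as one timestep of a sequence,
    # then apply an RNN (LSTM) as an additional layer before the classifier.
    x = layers.Reshape((8, 8 * 128))(x)               # (timesteps=8, features=1024)
    x = layers.LSTM(256)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

def build_soft_voting_ensemble(members, input_shape=(32, 32, 3)):
    # End-to-end ensemble: average the softmax outputs of several member models.
    inputs = layers.Input(shape=input_shape)
    outputs = layers.Average()([m(inputs) for m in members])
    return models.Model(inputs, outputs)

# Usage sketch:
# members = [build_cnn_rnn_classifier() for _ in range(3)]
# ensemble = build_soft_voting_ensemble(members)
# ensemble.compile(optimizer="adam",
#                  loss="sparse_categorical_crossentropy",
#                  metrics=["accuracy"])

Averaging softmax outputs (soft voting) is one common way to fuse member predictions; the paper's ensemble may combine its members differently.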


Results from the Paper


Task                  Dataset        Model   Metric              Value   Global Rank
Image Classification  CIFAR-10       E2E-3M  Percentage correct  98.52   #34
Image Classification  CIFAR-10       E2E-3M  Params              20M     #205
Image Classification  CIFAR-100      E2E-3M  Percentage correct  90.27   #25
Image Classification  Fashion-MNIST  E2E-3M  Percentage error    4.08    #5
Image Classification  iCassava'19    E2E-3M  Top-1 Accuracy      0.9368  #1
Image Classification  Surrey ASL     E2E-3M  Accuracy (%)        94.90   #1
Image Classification  SVHN           E2E-3M  Percentage error    1.0     #2

Methods