The Fine-Grained Image Classification task focuses on differentiating between hard-to-distinguish object classes, such as species of birds, flowers, or animals, or the makes and models of vehicles.
(Image credit: Looking for the Devil in the Details)
In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch.
SOTA for Image Classification on SVHN
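The random sub-policy selection described above can be sketched as follows. This is a minimal illustration, not the paper's search space: the ops (`rotate`, `shear`, `equalize`, `invert`) and the probabilities are hypothetical placeholders standing in for the learned augmentation policy.

```python
import random

# Hypothetical stand-ins for real image transforms; in practice these
# would operate on image tensors with learned magnitudes.
def rotate(img): return f"rotate({img})"
def shear(img): return f"shear({img})"
def equalize(img): return f"equalize({img})"
def invert(img): return f"invert({img})"

# A policy is a list of sub-policies; each sub-policy is a sequence of
# (op, probability) pairs applied in order.
policy = [
    [(rotate, 0.7), (equalize, 0.3)],
    [(shear, 0.5), (invert, 0.2)],
]

def augment(img, policy):
    # One sub-policy is drawn at random for each image in the mini-batch.
    sub_policy = random.choice(policy)
    for op, prob in sub_policy:
        if random.random() < prob:
            img = op(img)
    return img

batch = ["img0", "img1", "img2"]
augmented = [augment(img, policy) for img in batch]
```

Because the draw happens per image, two images in the same mini-batch can receive different sub-policies, which is the source of the method's augmentation diversity.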
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
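The alternative to ad-hoc scaling is compound scaling: depth, width, and input resolution are grown jointly by a single coefficient. A sketch using the base ratios reported in the EfficientNet paper (the base depth/width/resolution numbers passed in below are illustrative, not a specific architecture):

```python
# Compound scaling: depth, width, and resolution are scaled together by a
# single coefficient phi. The ratios alpha, beta, gamma were found by grid
# search in the EfficientNet paper, constrained so FLOPs grow ~2**phi.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi, base_depth, base_width, base_resolution):
    depth = base_depth * alpha ** phi             # number of layers
    width = base_width * beta ** phi              # number of channels
    resolution = base_resolution * gamma ** phi   # input image size
    return round(depth), round(width), round(resolution)

# FLOPs scale roughly with depth * width**2 * resolution**2, and
# alpha * beta**2 * gamma**2 ≈ 2, so each phi step ~doubles compute.
print(scale(1, 18, 32, 224))  # → (22, 35, 258)
```

The point is that no single dimension is scaled in isolation: doubling the compute budget buys a little more of all three at once.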
In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy, while retaining their GPU training and inference efficiency.
#2 best model for Image Classification on CIFAR-10 (using extra training data)
Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks.
#2 best model for Fine-Grained Image Classification on Birdsnap (using extra training data)
In consideration of the intrinsic consistency between the informativeness of regions and their probability of being the ground-truth class, we design a novel training paradigm that enables the Navigator to detect the most informative regions under the guidance of the Teacher.
#14 best model for Fine-Grained Image Classification on Stanford Cars
Towards addressing this problem, we propose an iterative matrix square root normalization method for fast end-to-end training of global covariance pooling networks.
#2 best model for Fine-Grained Image Classification on CUB-200-2011
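The iterative square-root normalization above is typically based on the coupled Newton-Schulz iteration, which needs only matrix multiplications and is therefore GPU-friendly. A numpy sketch of the idea (this is an illustration of the iteration, not the paper's exact implementation; the trace pre-normalization and compensation are the standard trick to make it converge):

```python
import numpy as np

def newton_schulz_sqrt(A, num_iters=10):
    """Approximate square root of a symmetric positive-definite matrix
    via the coupled Newton-Schulz iteration (matrix multiplies only)."""
    n = A.shape[0]
    I = np.eye(n)
    tr = np.trace(A)
    Y = A / tr        # pre-normalize so the iteration converges
    Z = I.copy()
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(tr)  # compensate for the pre-normalization

# Covariance of random features, then its approximate square root.
np.random.seed(0)
X = np.random.randn(64, 8)
cov = X.T @ X / X.shape[0] + 1e-5 * np.eye(8)
S = newton_schulz_sqrt(cov)
print(np.allclose(S @ S, cov, atol=1e-4))
```

Avoiding an explicit eigendecomposition (which is slow and poorly parallelized on GPUs) is what makes this suitable for end-to-end training of covariance pooling networks.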
It has been shown that using the first and second order statistics (e.g., mean and variance) to perform Z-score standardization on network activations or weight vectors, such as batch normalization (BN) and weight standardization (WS), can improve the training performance.
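The shared core of BN and WS is the same Z-score operation applied over different axes. A minimal numpy sketch (the array shapes and axis choices below are illustrative; real BN also keeps running statistics and learnable scale/shift parameters):

```python
import numpy as np

# Z-score standardization with first- and second-order statistics: the
# common core of batch normalization (over the batch axis of activations)
# and weight standardization (over each output filter of a weight matrix).
def standardize(x, axis, eps=1e-5):
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

np.random.seed(0)
acts = np.random.randn(32, 16) * 3 + 2       # (batch, features)
bn_out = standardize(acts, axis=0)           # BN: normalize over the batch
weights = np.random.randn(16, 8)             # (in_features, out_features)
ws_out = standardize(weights, axis=0)        # WS: normalize each filter
```

After standardization each feature (for BN) or each filter (for WS) has approximately zero mean and unit variance, which is what stabilizes the optimization.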
Conversely, when training a ResNeXt-101 32x48d pre-trained in a weakly-supervised fashion on 940 million public images at resolution 224x224 and further optimizing for test resolution 320x320, we obtain a test top-1 accuracy of 86.4% (top-5: 98.0%) (single-crop).
SOTA for Image Classification on iNaturalist (using extra training data)
We conduct detailed analysis of the main components that lead to high transfer performance.
SOTA for Image Classification on ObjectNet (Bounding Box) (using extra training data)
In this paper, we propose a novel "Destruction and Construction Learning" (DCL) method to enhance the difficulty of fine-grained recognition and exercise the classification model to acquire expert knowledge.
#9 best model for Fine-Grained Image Classification on FGVC Aircraft
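The "destruction" step in DCL shuffles local image patches so the model must rely on discriminative local detail rather than global shape. A simplified numpy sketch (the paper's Region Confusion Mechanism restricts shuffling to a local neighborhood; a global patch shuffle is used here for brevity, and the grid size is an assumed parameter):

```python
import numpy as np

def destruct(image, grid=4, seed=None):
    """Split an (H, W, C) image into a grid x grid set of patches and
    shuffle them: a simplified version of DCL's destruction step."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    patches = [image[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i*grid + j]] for j in range(grid)],
                           axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)

img = np.arange(64 * 64 * 3).reshape(64, 64, 3).astype(np.float32)
shuffled = destruct(img, grid=4, seed=0)
assert shuffled.shape == img.shape  # content is permuted, not lost
```

The "construction" half of the method then trains the network to recover the original patch layout, forcing it to model the spatial relations among parts.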