Cross-X Learning for Fine-Grained Visual Categorization

Recognizing objects from subcategories with very subtle differences remains a challenging task due to large intra-class variation and small inter-class variation. Recent work tackles this problem in a weakly-supervised manner: object parts are first detected and the corresponding part-specific features are extracted for fine-grained classification. However, these methods typically treat the part-specific features of each image in isolation, neglecting the relationships of these features across different images. In this paper, we propose Cross-X learning, a simple yet effective approach that exploits the relationships between different images and between different network layers for robust multi-scale feature learning. Our approach involves two novel components: (i) a cross-category cross-semantic regularizer that guides the extracted features to represent semantic parts, and (ii) a cross-layer regularizer that improves the robustness of multi-scale features by matching the prediction distributions across multiple layers. Our approach can be easily trained end-to-end and is scalable to large datasets like NABirds. We empirically analyze the contributions of different components of our approach and demonstrate its robustness, effectiveness, and state-of-the-art performance on five benchmark datasets. Code is available at \url{https://github.com/cswluo/CrossX}.

ICCV 2019
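The abstract's second component, the cross-layer regularizer, aligns the prediction distributions produced by classifiers attached to different network stages. Below is a minimal PyTorch-style sketch of how such an agreement term could be implemented as pairwise KL divergence between softened softmax outputs; the function name, the temperature parameter, and the choice of averaging over all ordered layer pairs are assumptions for illustration only, not the paper's exact formulation (see the official code at https://github.com/cswluo/CrossX for the authors' version).

```python
import torch
import torch.nn.functional as F

def cross_layer_agreement_loss(logits_per_layer, temperature=1.0):
    """Illustrative sketch of a cross-layer regularizer (not the paper's exact loss).

    logits_per_layer: list of tensors, each of shape (batch, num_classes),
    produced by classifiers attached to different network layers.
    Returns a scalar: the mean KL divergence between the softmax
    distributions of every ordered pair of layers.
    """
    probs = [F.softmax(l / temperature, dim=1) for l in logits_per_layer]
    log_probs = [F.log_softmax(l / temperature, dim=1) for l in logits_per_layer]

    loss = 0.0
    num_pairs = 0
    for i in range(len(probs)):
        for j in range(len(probs)):
            if i == j:
                continue
            # KL(p_i || p_j), averaged over the batch:
            # F.kl_div expects log-probabilities as input and probabilities as target.
            loss = loss + F.kl_div(log_probs[j], probs[i], reduction='batchmean')
            num_pairs += 1
    return loss / max(num_pairs, 1)
```

In training, a term of this kind would typically be added to the per-layer classification (cross-entropy) losses with a weighting coefficient, so that the multi-scale predictions are encouraged to agree while each layer still fits the labels.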

Results from the Paper


Ranked #18 on Fine-Grained Image Classification on NABirds (using extra training data)

Task                               Dataset         Model    Metric    Value   Global Rank
Fine-Grained Image Classification  CUB-200-2011    Cross-X  Accuracy  87.7%   #53
Fine-Grained Image Classification  FGVC Aircraft   Cross-X  Accuracy  92.7%   #34
Fine-Grained Image Classification  NABirds         Cross-X  Accuracy  86.4%   #18
Fine-Grained Image Classification  Stanford Cars   Cross-X  Accuracy  94.6%   #33
