Fine-Grained Visual Classification via Internal Ensemble Learning Transformer

Recently, vision transformers (ViTs) have been investigated for fine-grained visual classification (FGVC) and are now considered state of the art. However, most ViT-based works ignore the differing learning abilities of the heads in the multi-head self-attention (MHSA) mechanism and of the different layers. To address these issues, in this paper, we propose a novel internal ensemble learning transformer (IELT) for FGVC. The proposed IELT involves three main modules: the multi-head voting (MHV) module, the cross-layer refinement (CLR) module, and the dynamic selection (DS) module. To address the inconsistent performance of the multiple heads, we propose the MHV module, which treats all the heads in each layer as weak learners and votes for the tokens of discriminative regions as cross-layer features, based on the attention maps and their spatial relationships. To effectively mine the cross-layer features and suppress noise, the CLR module is proposed, in which the refined feature is extracted and an assist-logits operation is developed for the final prediction. In addition, a newly designed DS module adjusts the number of tokens selected at each layer by weighting each layer's contribution to the refined feature. In this way, the idea of ensemble learning is combined with the ViT to improve fine-grained feature representation. The experiments demonstrate that our method achieves competitive results compared with the state of the art on five popular FGVC datasets. Source code has been released and can be found at https://github.com/mobulan/IELT.
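The head-voting idea behind the MHV module can be illustrated with a small sketch. The PyTorch snippet below is a minimal, hypothetical illustration assuming the CLS-to-patch attention weights of one encoder layer are available as a (batch, heads, patches) tensor; the function name, shapes, and vote counts are assumptions made for illustration and do not reproduce the released implementation (see the linked repository for the authors' code).

```python
import torch

def multi_head_vote(attn, k_per_head=24, num_select=12):
    """Hypothetical sketch: each attention head votes for discriminative tokens.

    attn: (batch, heads, num_patches) attention weights from the CLS token
          to every patch token in one encoder layer (illustrative assumption).
    Each head is treated as a weak learner that votes for its top-k patches;
    the patches gathering the most votes across heads are selected as this
    layer's cross-layer feature tokens.
    """
    batch, heads, num_patches = attn.shape

    # Each head votes for its k most-attended patch tokens.
    topk_idx = attn.topk(k_per_head, dim=-1).indices              # (batch, heads, k)

    # Accumulate one vote per selected patch, summed over all heads.
    votes = torch.zeros(batch, num_patches, device=attn.device)
    votes.scatter_add_(
        1,
        topk_idx.reshape(batch, -1),
        torch.ones(batch, heads * k_per_head, device=attn.device),
    )

    # Keep the patches with the most votes as the selected tokens.
    selected = votes.topk(num_select, dim=-1).indices             # (batch, num_select)
    return selected
```

In the paper, the number of tokens kept per layer is not fixed as it is here; the DS module adjusts it according to each layer's contribution to the refined feature.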

Task | Dataset | Model | Metric | Value | Global Rank
Fine-Grained Image Classification | CUB-200-2011 | IELT | Accuracy | 91.8% | #8
Fine-Grained Image Classification | NABirds | IELT | Accuracy | 90.8% | #9
Fine-Grained Image Classification | Oxford 102 Flowers | IELT | Accuracy | 99.64% | #2
Fine-Grained Image Classification | Oxford-IIIT Pet Dataset | IELT | Accuracy | 95.28% | #7
Fine-Grained Image Classification | Stanford Dogs | IELT | Accuracy | 91.8% | #9
