TransNeXt: Robust Foveal Visual Perception for Vision Transformers

28 Nov 2023 · Dai Shi

Due to the depth degradation effect of residual connections, many efficient Vision Transformer models that rely on stacking layers for information exchange often fail to mix information sufficiently, leading to unnatural visual perception. To address this issue, in this paper we propose Aggregated Attention, a token mixer based on biomimetic design that simulates biological foveal vision and continuous eye movement while enabling each token on the feature map to have a global perception. Furthermore, we incorporate learnable tokens that interact with conventional queries and keys, which further diversifies the generation of affinity matrices beyond mere query-key similarity. Because our approach does not rely on stacking for information exchange, it effectively avoids depth degradation and achieves natural visual perception. Additionally, we propose Convolutional GLU, a channel mixer that bridges the gap between the GLU and SE mechanisms, empowering each token with channel attention based on its nearest-neighbor image features and enhancing local modeling capability and model robustness. We combine Aggregated Attention and Convolutional GLU to create a new visual backbone called TransNeXt. Extensive experiments demonstrate that TransNeXt achieves state-of-the-art performance across multiple model sizes. At a resolution of $224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing ConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet accuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of $384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic segmentation mIoU of 54.7.
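To make the token mixer concrete, the sketch below shows the dual-path attention idea underlying Aggregated Attention in minimal single-head PyTorch form: every pixel's query attends jointly to fine-grained keys/values from its own sliding local window and to coarse-grained keys/values from a pooled global feature map, with one softmax over both sets. This is a sketch under stated assumptions, not the paper's implementation: the full Aggregated Attention also adds the learnable query embeddings, learnable key-value tokens, and positional biases mentioned above, and projects keys and values separately, all omitted here; the function name and the window/pool_size parameters are illustrative.

    import torch
    import torch.nn.functional as F

    def pixel_focused_attention(q, feat, window=3, pool_size=7):
        # q: (B, H*W, C) per-pixel queries; feat: (B, C, H, W) feature map.
        # Illustrative simplification: keys and values share one tensor.
        B, C, H, W = feat.shape
        # Fine-grained path: each query gets keys/values from its local window,
        # mimicking high-resolution foveal perception near the point of gaze.
        local = F.unfold(feat, kernel_size=window, padding=window // 2)       # (B, C*w*w, N)
        local = local.view(B, C, window * window, H * W).permute(0, 3, 2, 1)  # (B, N, w*w, C)
        # Coarse-grained path: keys/values from a pooled global map, shared by
        # all queries, giving every token a coarse view of the whole image.
        pooled = F.adaptive_avg_pool2d(feat, pool_size).flatten(2).transpose(1, 2)  # (B, p*p, C)
        pooled = pooled.unsqueeze(1).expand(-1, H * W, -1, -1)                # (B, N, p*p, C)
        kv = torch.cat([local, pooled], dim=2)                                # (B, N, w*w+p*p, C)
        # A single softmax over both paths lets local and global keys compete.
        attn = torch.einsum('bnc,bnkc->bnk', q, kv) * C ** -0.5
        attn = attn.softmax(dim=-1)
        return torch.einsum('bnk,bnkc->bnc', attn, kv)

Convolutional GLU can similarly be read as a small change to a standard GLU channel mixer: the gating branch first passes through a 3x3 depthwise convolution, so each token's channel gate is conditioned on its nearest-neighbor features, which is what yields the SE-style per-token channel attention described above. A minimal sketch, assuming a GELU-activated gate and hypothetical layer names:

    import torch.nn as nn

    class ConvGLU(nn.Module):
        # GLU channel mixer whose gate is computed from depthwise-conv features.
        def __init__(self, dim, hidden_dim):
            super().__init__()
            self.fc1 = nn.Linear(dim, hidden_dim * 2)   # value branch + gate branch
            self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                    padding=1, groups=hidden_dim)
            self.act = nn.GELU()
            self.fc2 = nn.Linear(hidden_dim, dim)

        def forward(self, x, H, W):                     # x: (B, H*W, dim) tokens
            v, g = self.fc1(x).chunk(2, dim=-1)
            B, N, C = g.shape
            g = g.transpose(1, 2).reshape(B, C, H, W)   # tokens -> 2D map for the conv
            g = self.dwconv(g).flatten(2).transpose(1, 2)
            return self.fc2(v * self.act(g))            # gate carries local spatial context

Unlike SE, which squeezes the whole image into one global gate, the gate here is computed per token from its immediate neighborhood, which is the bridge between the GLU and SE mechanisms that the abstract describes.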


Results from the Paper


(Ranks in parentheses are global leaderboard positions for each metric.)

Semantic Segmentation on ADE20K (IN-1K pretrain, Mask2Former, 512)

Model              Validation mIoU   Params (M)
TransNeXt-Tiny     53.4 (#76)        47.5 (#50)
TransNeXt-Small    54.1 (#61)        69   (#38)
TransNeXt-Base     54.7 (#49)        109  (#28)

Object Detection on COCO minival (IN-1K pretrain, DINO 1x)

Model              box AP
TransNeXt-Tiny     55.7 (#46)
TransNeXt-Small    56.6 (#42)
TransNeXt-Base     57.1 (#39)

Image Classification on ImageNet (IN-1K supervised)

Model              Resolution   Top-1 Accuracy   Params (M)    GFLOPs
TransNeXt-Micro    224          82.5% (#483)     12.8 (#504)   2.7  (#167)
TransNeXt-Tiny     224          84.0% (#337)     28.2 (#638)   5.7  (#237)
TransNeXt-Small    224          84.7% (#282)     49.7 (#723)   10.3 (#300)
TransNeXt-Small    384          86.0% (#177)     49.7 (#723)   32.1 (#397)
TransNeXt-Base     384          86.2% (#165)     89.7 (#847)   56.3 (#431)

Domain Generalization on ImageNet-A (IN-1K supervised)

Model              Resolution   Top-1 Accuracy   Params (M)
TransNeXt-Small    224          47.1% (#21)      49.7 (#11)
TransNeXt-Base     224          50.6% (#19)      89.7 (#9)
TransNeXt-Small    384          58.3% (#16)      49.7 (#11)
TransNeXt-Base     384          61.6% (#15)      89.7 (#9)
