Self-Supervised Pre-Training for Transformer-Based Person Re-Identification

23 Nov 2021 · Hao Luo, Pichao Wang, Yi Xu, Feng Ding, Yanxin Zhou, Fan Wang, Hao Li, Rong Jin

Transformer-based supervised pre-training achieves strong performance in person re-identification (ReID). However, because of the domain gap between ImageNet and ReID datasets and the strong data-fitting ability of transformers, a larger pre-training dataset (e.g., ImageNet-21K) is usually needed to boost performance. To address this challenge, this work aims to mitigate the gap between the pre-training and ReID datasets from the perspectives of both data and model structure. We first investigate self-supervised learning (SSL) methods with Vision Transformers (ViT) pre-trained on unlabelled person images (the LUPerson dataset), and empirically find that they significantly surpass ImageNet supervised pre-training models on ReID tasks. To further reduce the domain gap and accelerate pre-training, the Catastrophic Forgetting Score (CFS) is proposed to evaluate the gap between pre-training and fine-tuning data. Based on CFS, a subset of the pre-training dataset is selected by sampling data close to the downstream ReID data and filtering out irrelevant data. For the model structure, a ReID-specific module named the IBN-based convolution stem (ICS) is proposed to bridge the domain gap by learning more invariant features. Extensive experiments fine-tune the pre-trained models under supervised learning, unsupervised domain adaptation (UDA), and unsupervised learning (USL) settings. We successfully downscale the LUPerson dataset to 50% with no performance degradation. Finally, we achieve state-of-the-art performance on Market-1501 and MSMT17. For example, our ViT-S/16 achieves 91.3%/89.9%/89.6% mAP on Market-1501 for supervised/UDA/USL ReID. Code and models will be released at https://github.com/michuanhaohao/TransReID-SSL.
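The CFS-driven data selection described above can be sketched roughly as follows. Note that the abstract does not give the score's exact definition, so the nearest-neighbour cosine-similarity scoring, the `k` parameter, and the function names below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def catastrophic_forgetting_score(pretrain_feats, downstream_feats, k=3):
    """Score each pre-training sample by closeness to the downstream domain.

    Assumed scoring (hypothetical): mean cosine similarity of each
    pre-training feature to its k nearest downstream (ReID) features.
    """
    p = pretrain_feats / np.linalg.norm(pretrain_feats, axis=1, keepdims=True)
    d = downstream_feats / np.linalg.norm(downstream_feats, axis=1, keepdims=True)
    sim = p @ d.T                        # (N_pre, N_down) cosine similarities
    topk = np.sort(sim, axis=1)[:, -k:]  # k most similar downstream samples
    return topk.mean(axis=1)             # higher = closer to the ReID domain

def select_subset(pretrain_feats, downstream_feats, ratio=0.5, k=3):
    """Keep the `ratio` fraction of pre-training samples closest to ReID data."""
    scores = catastrophic_forgetting_score(pretrain_feats, downstream_feats, k)
    n_keep = int(len(scores) * ratio)
    return np.argsort(scores)[-n_keep:]  # indices into the pre-training set
```

With `ratio=0.5` this mirrors the paper's headline result of downscaling LUPerson to 50% of its size; in practice the features would come from a pre-trained encoder rather than raw pixels.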


Results from the Paper


 Ranked #1 on Unsupervised Person Re-Identification on Market-1501 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Person Re-Identification | Market-1501 | TransReID-SSL (ViTi-S) | Rank-1 | 95.3 | #2 |
| Unsupervised Person Re-Identification | Market-1501 | TransReID-SSL (ViTi-S) | mAP | 89.6 | #1 |
| Unsupervised Person Re-Identification | Market-1501 | TransReID-SSL (ViT-S w/o RK) | Rank-1 | 95.3 | #2 |
| Unsupervised Person Re-Identification | Market-1501 | TransReID-SSL (ViT-S) | Rank-1 | 94.2 | #6 |
| Unsupervised Person Re-Identification | Market-1501 | TransReID-SSL (ViT-S) | mAP | 88.2 | #4 |
| Person Re-Identification | Market-1501 | TransReID-SSL (ViT-B w/o RK) | Rank-1 | 96.7 | #9 |
| Person Re-Identification | Market-1501 | TransReID-SSL (ViT-B w/o RK) | mAP | 93.2 | #21 |
| Person Re-Identification | MSMT17 | TransReID-SSL (w/o RK) | Rank-1 | 89.6 | #6 |
| Person Re-Identification | MSMT17 | TransReID-SSL (ViT-B w/o RK) | Rank-1 | 89.5 | #7 |
| Person Re-Identification | MSMT17 | TransReID-SSL (ViT-B w/o RK) | mAP | 75.0 | #9 |
| Unsupervised Person Re-Identification | MSMT17 | TransReID-SSL (ViTi-S) | mAP | 50.6 | #5 |
| Unsupervised Person Re-Identification | MSMT17 | TransReID-SSL (ViTi-S) | Rank-1 | 75.0 | #5 |
| Unsupervised Person Re-Identification | MSMT17 | TransReID-SSL (ViT-S) | mAP | 40.9 | #7 |
| Unsupervised Person Re-Identification | MSMT17 | TransReID-SSL (ViT-S) | Rank-1 | 66.4 | #8 |

Methods