Omni-Scale Feature Learning for Person Re-Identification

As an instance-level recognition problem, person re-identification (ReID) relies on discriminative features, which not only capture different spatial scales but also encapsulate an arbitrary combination of multiple scales. We call features of both homogeneous and heterogeneous scales omni-scale features. In this paper, a novel deep ReID CNN is designed, termed Omni-Scale Network (OSNet), for omni-scale feature learning. This is achieved by designing a residual block composed of multiple convolutional streams, each detecting features at a certain scale. Importantly, a novel unified aggregation gate is introduced to dynamically fuse multi-scale features with input-dependent channel-wise weights. To efficiently learn spatial-channel correlations and avoid overfitting, the building block uses pointwise and depthwise convolutions. By stacking such blocks layer by layer, our OSNet is extremely lightweight and can be trained from scratch on existing ReID benchmarks. Despite its small model size, OSNet achieves state-of-the-art performance on six person ReID datasets, outperforming most large-sized models, often by a clear margin. Code and models are available at: https://github.com/KaiyangZhou/deep-person-reid.
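To make the block design concrete, below is a minimal PyTorch sketch of an OSNet-style residual block: several streams of stacked depthwise-separable "Lite 3x3" layers give receptive fields of different sizes, and a unified aggregation gate produces input-dependent channel-wise weights to fuse them. This is an illustrative approximation under simplified assumptions (class names such as `LiteConv3x3`, `AggregationGate`, and `OSBlock`, the reduction ratio, and the layer widths are ours), not the authors' reference implementation, which is available in the linked repository.

```python
# Illustrative OSNet-style omni-scale residual block (simplified sketch,
# not the official implementation from deep-person-reid).
import torch
import torch.nn as nn


class LiteConv3x3(nn.Module):
    """Pointwise (1x1) conv followed by a depthwise 3x3 conv ('Lite 3x3')."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)                # pointwise
        self.dw = nn.Conv2d(out_ch, out_ch, 3, padding=1,
                            groups=out_ch, bias=False)                   # depthwise
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.dw(self.pw(x))))


class AggregationGate(nn.Module):
    """Unified gate: input-dependent channel-wise weights, shared by all streams."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # re-weight each channel of a stream's output


class OSBlock(nn.Module):
    """Residual block with T streams; stream t stacks t Lite 3x3 layers,
    so its receptive field grows with t (multi-scale streams)."""
    def __init__(self, in_ch, out_ch, T=4):
        super().__init__()
        mid = out_ch // 4
        self.reduce = nn.Conv2d(in_ch, mid, 1, bias=False)
        self.streams = nn.ModuleList([
            nn.Sequential(*[LiteConv3x3(mid, mid) for _ in range(t)])
            for t in range(1, T + 1)
        ])
        self.gate = AggregationGate(mid)  # one gate shared across streams
        self.expand = nn.Conv2d(mid, out_ch, 1, bias=False)
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        r = self.reduce(x)
        # Dynamically fuse the multi-scale stream outputs with the shared gate.
        fused = sum(self.gate(stream(r)) for stream in self.streams)
        return self.relu(self.expand(fused) + self.shortcut(x))


if __name__ == "__main__":
    block = OSBlock(64, 256)
    out = block(torch.randn(2, 64, 64, 32))
    print(out.shape)  # torch.Size([2, 256, 64, 32])
```

Because the gate weights are computed from the input itself, each image can emphasize a different mixture of scales, which is what distinguishes this fusion from a fixed (learned-but-static) weighting of the streams.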

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Person Re-Identification | CUHK03 | OSNet | mAP | 67.8 | #10 |
| Person Re-Identification | CUHK03 detected | OSNet (ICCV'19) | mAP | 67.8 | #10 |
| Person Re-Identification | CUHK03 detected | OSNet (ICCV'19) | Rank-1 | 72.3 | #10 |
| Person Re-Identification | DukeMTMC-reID | OSNet (ICCV'19) | Rank-1 | 88.6 | #43 |
| Person Re-Identification | DukeMTMC-reID | OSNet (ICCV'19) | mAP | 73.5 | #57 |
| Person Re-Identification | Market-1501 | OSNet (ICCV'19) | Rank-1 | 94.8 | #62 |
| Person Re-Identification | Market-1501 | OSNet (ICCV'19) | mAP | 84.9 | #75 |
| Person Re-Identification | Market-1501-C | OSNet | Rank-1 | 30.94 | #15 |
| Person Re-Identification | Market-1501-C | OSNet | mAP | 10.37 | #12 |
| Person Re-Identification | Market-1501-C | OSNet | mINP | 0.23 | #18 |
| Person Re-Identification | MSMT17 | OSNet | Rank-1 | 78.7 | #27 |
| Person Re-Identification | MSMT17 | OSNet | mAP | 52.9 | #27 |
| Person Re-Identification | MSMT17-C | OSNet | Rank-1 | 28.51 | #2 |
| Person Re-Identification | MSMT17-C | OSNet | mAP | 7.86 | #2 |
| Person Re-Identification | MSMT17-C | OSNet | mINP | 0.08 | #1 |