Long-Term Person Re-identification with Dramatic Appearance Change: Algorithm and Benchmark

Most previous studies of person re-identification (Re-ID) assume that pedestrians do not change their appearance, and work on cross-appearance Re-ID, in terms of both datasets and algorithms, remains scarce. This paper therefore contributes NKUP+, a cross-season appearance-change Re-ID dataset containing more than 300 identities drawn from surveillance videos spanning 10 months, to support research on cross-appearance Re-ID. In addition, we propose M2Net, a network that integrates multi-modality features from RGB images, contour images, and human parsing images. By discarding information in RGB images that is irrelevant or misleading for cross-appearance retrieval, M2Net learns features that are robust to appearance changes. We further propose a sampling strategy, RAS, that packs a variety of appearances into each batch, and we design an appearance loss and a multi-appearance loss to guide the network to learn both same-appearance and cross-appearance features. Finally, we evaluate our method on the NKUP+, PRCC, and DeepChange datasets; the results show significant improvement over the baseline and state-of-the-art performance compared with other methods. Our dataset is available at https://github.com/nkicsl/NKUP-dataset.
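To make the multi-modality idea concrete, here is a minimal PyTorch sketch of a three-branch fusion network in the spirit of M2Net. The class name M2NetSketch, the choice of ResNet-50 backbones, and fusion by simple concatenation are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class M2NetSketch(nn.Module):
    """Illustrative three-branch network: one backbone per modality
    (RGB, contour, human parsing), fused by concatenation.
    The architecture details here are assumptions for illustration."""

    def __init__(self, num_ids, feat_dim=512):
        super().__init__()

        def backbone():
            # ImageNet-pretrained ResNet-50 with the classifier removed,
            # so the branch outputs the 2048-d pooled feature.
            net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            net.fc = nn.Identity()
            return net

        self.rgb_branch = backbone()
        self.contour_branch = backbone()
        self.parsing_branch = backbone()

        # Project the concatenated modality features to a compact embedding,
        # then classify over the training identities.
        self.embed = nn.Linear(3 * 2048, feat_dim)
        self.bn = nn.BatchNorm1d(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, rgb, contour, parsing):
        f = torch.cat(
            [self.rgb_branch(rgb),
             self.contour_branch(contour),
             self.parsing_branch(parsing)],
            dim=1,
        )
        emb = self.bn(self.embed(f))   # embedding used for retrieval
        logits = self.classifier(emb)  # ID logits for a classification loss
        return emb, logits
```

In such a design, the embedding would feed metric losses (e.g., the proposed appearance and multi-appearance losses) during training while the logits feed an identity classification loss; at test time only the embedding is used for retrieval.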
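Likewise, a PK-style batch sampler that spreads each identity's images across that person's distinct appearances conveys the intent of RAS: every batch then contains both same-appearance and cross-appearance positive pairs for the two losses to act on. The round-robin rule, grouping keys, and constants below are assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

from torch.utils.data import Sampler


class AppearanceAwareSampler(Sampler):
    """Illustrative sampler in the spirit of RAS: each batch holds num_ids
    identities with num_inst images each, drawn round-robin over each
    person's appearances. Assumes every person has at least one image."""

    def __init__(self, id_labels, appearance_labels, num_ids=8, num_inst=4):
        self.num_ids, self.num_inst = num_ids, num_inst
        # index[pid][aid] -> dataset indices for that (person, appearance)
        self.index = defaultdict(lambda: defaultdict(list))
        for i, (pid, aid) in enumerate(zip(id_labels, appearance_labels)):
            self.index[pid][aid].append(i)
        self.pids = list(self.index)

    def _sample_identity(self, pid):
        # Cycle over this person's appearances so several of them land
        # in the same batch whenever the data allows it.
        apps = list(self.index[pid])
        random.shuffle(apps)
        picks = []
        while len(picks) < self.num_inst:
            for aid in apps:
                picks.append(random.choice(self.index[pid][aid]))
                if len(picks) == self.num_inst:
                    break
        return picks

    def __iter__(self):
        pids = self.pids[:]
        random.shuffle(pids)
        for start in range(0, len(pids) - self.num_ids + 1, self.num_ids):
            batch = []
            for pid in pids[start:start + self.num_ids]:
                batch.extend(self._sample_identity(pid))
            yield from batch

    def __len__(self):
        return (len(self.pids) // self.num_ids) * self.num_ids * self.num_inst
```

A DataLoader would consume this sampler with batch_size=num_ids * num_inst and no shuffling, since the sampler already randomizes identity and appearance order.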

Datasets


NKUP+ (introduced in this paper), PRCC, DeepChange

Methods


M2Net (multi-modality feature network), RAS (the proposed batch sampling strategy), appearance loss, multi-appearance loss