Take More Positives: An Empirical Study of Contrastive Learning in Unsupervised Person Re-Identification

12 Jan 2021  ·  Xuanyu He, Wei Zhang, Ran Song, Qian Zhang, Xiangyuan Lan, Lin Ma ·

Unsupervised person re-identification (re-ID) aims at closing the performance gap to supervised methods. These methods build reliable relationships between data points while learning representations. However, we empirically show that their success stems not only from their label generation mechanisms, but also from previously unexplored design choices. By studying two unsupervised person re-ID methods in a cross-method way, we point out that a hard negative problem is handled implicitly by their designs of data augmentation and PK sampling, respectively. In this paper, we find another simple solution to this problem, i.e., taking more positives during training, with which we generate pseudo-labels and update the model in an iterative manner. Based on our findings, we propose a contrastive learning method without a memory bank for unsupervised person re-ID. Our method works well on benchmark datasets and outperforms the state-of-the-art methods. Code will be made available.
