Self-supervised representation learning via adaptive hard-positive mining

1 Jan 2021  ·  Shaofeng Zhang, Junchi Yan, Xiaokang Yang ·

Despite their success in perception over the last decade, deep neural networks are known to be ravenous for labeled training data, which limits their applicability to real-world problems. Self-supervised learning has therefore attracted intensive attention, and contrastive learning has become one of the dominant approaches for effective feature extraction, achieving state-of-the-art performance. In this paper, we first show theoretically that existing contrastive methods cannot fully exploit training samples in the sense of hard-positive mining. We then propose a new contrastive method called AdpCLR$^{full}$ (adaptive self-supervised contrastive learning representations), which, as supported by our proof, explores samples more effectively in a manner closer to supervised contrastive learning. We thoroughly evaluate the quality of the learned representations on ImageNet for both the pretraining-based version (AdpCLR$^{pre}$) and the fully trained version (AdpCLR$^{full}$). In terms of accuracy, AdpCLR$^{pre}$ outperforms state-of-the-art contrastive models by 3.0\% with an extra 100 epochs, while AdpCLR$^{full}$ outperforms them by 2.5\% with an additional 600 epochs.
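To make the idea of hard-positive mining in a contrastive objective concrete, here is a minimal, self-contained sketch. It is a generic illustration, not the paper's exact AdpCLR objective: for each anchor embedding, the `k` most similar other embeddings in the batch are treated as hard positives in an InfoNCE-style loss, and the function name, `tau`, and `k` are illustrative choices.

```python
import numpy as np

def info_nce_with_hard_positives(z, tau=0.5, k=1):
    """Illustrative InfoNCE-style loss where, for each anchor, the k most
    similar other embeddings in the batch are treated as hard positives.
    A generic sketch of hard-positive mining, not the exact AdpCLR loss.
    z: (n, d) array of embeddings."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / tau                                # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # Hard positives: the top-k most similar other samples per anchor.
    pos_idx = np.argsort(sim, axis=1)[:, -k:]
    # InfoNCE denominator: log-sum-exp over all non-self pairs per anchor.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    n = len(z)
    loss = 0.0
    for i in range(n):
        # Average negative log-likelihood over this anchor's hard positives.
        loss += np.mean(logsumexp[i] - sim[i, pos_idx[i]])
    return loss / n
```

Because the log-sum-exp denominator always dominates any single positive similarity, the returned loss is non-negative; making the mined positives more similar to their anchors drives it toward zero.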
