Prototypical Contrastive Learning of Unsupervised Representations

This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning. PCL not only learns low-level features for the task of instance discrimination, but, more importantly, implicitly encodes semantic structures of the data into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimate of the network parameters in an Expectation-Maximization framework. We iteratively perform the E-step, which finds the distribution of prototypes via clustering, and the M-step, which optimizes the network via contrastive learning. We propose the ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks, with substantial improvements in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL.
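The sketch below illustrates how a ProtoNCE-style objective could be computed: the standard InfoNCE term over instance pairs, plus a prototype-contrastive term averaged over several clustering granularities, where each embedding is pulled toward its assigned prototype scaled by a per-prototype concentration. This is a minimal, hedged reconstruction and not the official implementation (see the repository linked above); names such as `proto_nce_loss`, the tensor shapes, and the concentration handling are illustrative assumptions.

```python
# Minimal ProtoNCE-style sketch (illustrative, not the official PCL code).
# Assumes L2-normalized embeddings, prototypes produced by k-means over
# momentum-encoder features (the E-step), and a per-prototype concentration phi.
import torch
import torch.nn.functional as F


def info_nce(query, positive_key, negative_keys, temperature=0.07):
    """Standard InfoNCE term: pull each query toward its positive, push from negatives."""
    pos = torch.einsum("nd,nd->n", query, positive_key).unsqueeze(1)  # (N, 1)
    neg = query @ negative_keys.t()                                   # (N, K)
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)


def proto_nce_loss(query, positive_key, negative_keys,
                   prototypes_list, assignments_list, phi_list, temperature=0.07):
    """ProtoNCE = InfoNCE + average prototype-contrastive term over M clusterings.

    prototypes_list[m]:  (K_m, D) cluster centroids from the E-step
    assignments_list[m]: (N,) prototype index assigned to each sample
    phi_list[m]:         (K_m,) concentration estimate acting as a per-prototype temperature
    """
    loss = info_nce(query, positive_key, negative_keys, temperature)
    num_clusterings = len(prototypes_list)
    for protos, assign, phi in zip(prototypes_list, assignments_list, phi_list):
        # Similarity of each embedding to every prototype, scaled by that prototype's concentration.
        logits = (query @ protos.t()) / phi.unsqueeze(0)              # (N, K_m)
        loss = loss + F.cross_entropy(logits, assign) / num_clusterings
    return loss
```

In this reading, the E-step would refresh `prototypes_list`, `assignments_list`, and `phi_list` by clustering features from a momentum encoder, while the M-step minimizes `proto_nce_loss` with SGD; the instance-level InfoNCE term is kept so the prototype term only adds semantic structure rather than replacing instance discrimination.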

ICLR 2021
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Self-Supervised Image Classification | ImageNet | PCL (ResNet-50) | Top 1 Accuracy | 65.9% | #104 |
| Contrastive Learning | imagenet-1k | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | #5 |
| Contrastive Learning | imagenet-1k | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | #8 |
| Semi-Supervised Image Classification | ImageNet - 1% labeled data | PCL (ResNet-50) | Top 5 Accuracy | 75.6% | #28 |
