Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.
MoCo maintains a dynamic dictionary as a queue of encoded samples together with a momentum-updated key encoder; this enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
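As a rough sketch of that mechanism (a minimal illustration, not the paper's implementation: encoder_q, encoder_k, K, and dim are assumed placeholder names and sizes), the key encoder trails the query encoder via an exponential moving average, and the dictionary is a fixed-size first-in-first-out queue of keys:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of a momentum-updated key encoder and a queue
# dictionary; all names and sizes here are assumptions for the example.
momentum = 0.999           # EMA coefficient for the key encoder
K, dim = 4096, 128         # queue length and embedding dimension
queue = F.normalize(torch.randn(K, dim), dim=1)  # dictionary of keys

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=momentum):
    # The key encoder is an exponential moving average of the query
    # encoder, which keeps the queued keys mutually consistent.
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

@torch.no_grad()
def enqueue_dequeue(queue, new_keys):
    # Newest keys replace the oldest, so the dictionary stays a fixed
    # size while covering many past mini-batches.
    return torch.cat([new_keys, queue], dim=0)[: queue.shape[0]]
```

Decoupling the dictionary size from the mini-batch size is what lets the pool of negatives grow large without a proportionally large batch.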
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
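A minimal sketch of NT-Xent, the normalized-temperature cross-entropy loss this family of methods trains with; the function name, batching convention, and temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Minimal NT-Xent sketch for a batch of N positive pairs.

    z1, z2: [N, d] projections of two augmented views of the same images.
    Each embedding's positive is its counterpart in the other view; the
    remaining 2N - 2 embeddings in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, d]
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.shape[0]
    # Mask self-similarities so an embedding is never its own negative.
    sim.fill_diagonal_(float('-inf'))
    # The positive for index i is i + n, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

In use, z1 and z2 would be projection-head outputs for two random augmentations of the same mini-batch, so each embedding has exactly one positive and 2N - 2 negatives.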
Analyzing the key properties that make multiview contrastive learning work shows that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.
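One hedged way to see how additional views enter such an objective is to average the pairwise contrastive loss over every pair of views; a minimal sketch reusing nt_xent from above, where multiview_nt_xent is an illustrative name rather than an established API:

```python
from itertools import combinations

def multiview_nt_xent(views, temperature=0.5):
    # views: list of [N, d] projection tensors, one per view of the
    # same N underlying samples. Average the pairwise contrastive
    # loss over all view pairs; reuses nt_xent from the sketch above.
    pairs = list(combinations(views, 2))
    return sum(nt_xent(za, zb, temperature) for za, zb in pairs) / len(pairs)
```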
Contrastive learning between multiple views of the data has recently achieved state-of-the-art performance in the field of self-supervised representation learning.
Extending this batch contrastive approach to the fully supervised setting yields a training methodology that consistently outperforms cross entropy on supervised learning tasks across different architectures and data augmentations.
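A hedged sketch of how labels can enter a batch contrastive loss: every sample sharing the anchor's label is treated as a positive, and the anchor's loss averages the log-probability over its positives (supcon_loss and the temperature are illustrative names and values, not a reference implementation):

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    # z: [N, d] embeddings; labels: [N] integer class labels.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                    # [N, N] logits
    self_mask = torch.eye(z.shape[0], dtype=torch.bool)
    # Positives: same label, excluding the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Mean log-probability of each anchor's positives.
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    has_pos = pos_mask.any(dim=1)   # skip anchors with no positive
    return -(pos_log_prob / pos_counts)[has_pos].mean()
```

Unlike cross entropy, the loss never consults a classifier head: labels only decide which embedding pairs are pulled together.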
Our experiments demonstrate that Contrastively-trained Structured World Models (C-SWMs) can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
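As one illustration of what such an augmentation might look like, here is a random-shift (replicate-pad then crop) transform of the kind commonly applied to image observations in pixel-based RL; the pad width and function name are assumptions for the example, not values taken from the paper:

```python
import torch
import torch.nn.functional as F

def random_shift(obs, pad=4):
    # obs: [B, C, H, W] float tensor of image observations.
    # Replicate-pad, then crop back to the original size at a random
    # offset, shifting each image by up to `pad` pixels per axis.
    b, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode='replicate')
    out = torch.empty_like(obs)
    for i in range(b):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```

The augmented observations simply replace the raw ones wherever the agent's networks consume pixels during updates, so the underlying RL algorithm is unchanged.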
Contrastive representation learning has been outstandingly successful in practice.