Contrastive Learning

2196 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
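Pulling similar instances together and pushing dissimilar ones apart is typically implemented with a contrastive objective such as the InfoNCE / NT-Xent loss: matching pairs in a batch serve as positives, and all other pairings serve as negatives. A minimal NumPy sketch (the function name and temperature value are illustrative, not tied to any one paper):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of `positives` is the
    positive for row i of `anchors`; every other row acts as a negative."""
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Cross-entropy with the diagonal (matching pairs) as the targets
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the representations of matched pairs are close, the diagonal dominates each softmax row and the loss is near zero; mismatched pairs drive it up.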

(Image credit: Schroff et al. 2015)

Libraries

Use these libraries to find Contrastive Learning models and implementations

Most implemented papers

Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval

microsoft/ANCE ICLR 2021

In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents in testing.
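ANCE's remedy is to mine hard negatives for each query from a (periodically refreshed) approximate-nearest-neighbor index over the corpus, rather than sampling random or in-batch negatives. The sketch below shows the idea with brute-force nearest-neighbor search; the function and variable names are illustrative, not the paper's code, and a real system would use an ANN index instead of a full dot-product scan:

```python
import numpy as np

def mine_hard_negatives(query_embs, doc_embs, positive_ids, k=5):
    """For each query, take its top-k highest-scoring documents,
    excluding the known positive, as hard negatives for training."""
    scores = query_embs @ doc_embs.T          # (num_queries, num_docs)
    negatives = []
    for i, pos in enumerate(positive_ids):
        ranked = np.argsort(-scores[i])       # docs sorted by score, descending
        negatives.append([int(d) for d in ranked if d != pos][:k])
    return negatives
```

These mined negatives are exactly the documents the current model confuses with the positive, so training against them targets the retrieval errors seen at test time.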

Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup

luyug/GradCache ACL (RepL4NLP) 2021

Contrastive learning has been applied successfully to learn vector representations of text.

Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective

xvjiarui/VFS ICCV 2021

To learn generalizable representations for correspondence at scale, a variety of self-supervised pretext tasks have been proposed that explicitly perform object-level or patch-level similarity learning.

Parametric Contrastive Learning

jiequancui/Parametric-Contrastive-Learning ICCV 2021

In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition.

Revisiting 3D ResNets for Video Recognition

tensorflow/models 3 Sep 2021

Recent work by Bello et al. shows that training and scaling strategies may be more significant than model architectures for visual recognition.

Data-Efficient Image Recognition with Contrastive Predictive Coding

philip-bachman/amdim-public ICML 2020

Human observers can learn to recognize new categories of images from a handful of examples, yet doing so with artificial ones remains an open challenge.

Large Scale Adversarial Representation Learning

lukemelas/unsupervised-image-segmentation NeurIPS 2019

We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation.

Self-labelling via simultaneous clustering and representation learning

yukimasano/self-label ICLR 2020

Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks.

Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

denisyarats/drq ICLR 2021

We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
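The augmentation in question is a random shift: pad the pixel observation at its borders, then crop back to the original size at a random offset. A minimal NumPy sketch, assuming square H×W or H×W×C image arrays (the function name and pad size are illustrative):

```python
import numpy as np

def random_shift(img, pad=4, rng=None):
    """Random-shift augmentation: replicate-pad the image by `pad`
    pixels on each side, then crop back to the original size at a
    uniformly random offset."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    # Pad only the spatial dims; leave any channel dim untouched
    pad_width = ((pad, pad), (pad, pad)) + ((0, 0),) * (img.ndim - 2)
    padded = np.pad(img, pad_width, mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]
```

Applying this to each observation before the Q-function update regularizes the value estimates without changing the RL algorithm itself.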