Contrastive Learning

2167 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
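The core idea above is typically implemented with an InfoNCE-style loss: each anchor is scored against one positive (e.g. an augmented view of the same instance) and the rest of the batch as negatives. A minimal NumPy sketch (illustrative only, not any specific paper's implementation; the function name and temperature value are assumptions):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: the positive for anchor i is row i of `positives`;
    all other rows in the batch act as negatives."""
    # L2-normalise so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the correct "class" for row i is column i (its own positive)
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))
# two noisy views of the same instances -> low loss
loss_aligned = info_nce_loss(views, views + 0.01 * rng.normal(size=(8, 16)))
# unrelated pairs -> loss near log(N)
loss_random = info_nce_loss(views, rng.normal(size=(8, 16)))
```

Minimising this loss pulls matched pairs together and pushes mismatched pairs apart in the representation space, which is exactly the stated goal.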

(Image credit: Schroff et al. 2015)


Latest papers with no code

Multimodal 3D Object Detection on Unseen Domains

no code yet • 17 Apr 2024

To this end, we propose CLIX$^\text{3D}$, a multimodal fusion and supervised contrastive learning framework for 3D object detection that performs alignment of object features from same-class samples of different domains while pushing the features from different classes apart.
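CLIX$^\text{3D}$'s code is not released; as a generic illustration of the supervised contrastive objective it builds on (same-class features pulled together, different classes pushed apart, in the style of SupCon), here is a hedged NumPy sketch — the function name and constants are assumptions, not the paper's implementation:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, every other sample
    with the same label is a positive; all remaining samples are negatives."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = len(labels)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)               # exclude self-comparisons
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # average log-probability over each anchor's same-class positives
    per_anchor = np.where(pos_mask, log_prob, 0.0).sum(1) / np.maximum(pos_mask.sum(1), 1)
    return -per_anchor.mean()

rng = np.random.default_rng(1)
# two well-separated clusters of 8-dim features
feats = np.concatenate([rng.normal(5, 0.1, (4, 8)), rng.normal(-5, 0.1, (4, 8))])
loss_clustered = supcon_loss(feats, [0, 0, 0, 0, 1, 1, 1, 1])
loss_mixed = supcon_loss(feats, [0, 1, 0, 1, 0, 1, 0, 1])
```

When labels agree with the cluster structure the loss is low; labels that cut across clusters yield a much higher loss, which is the signal the framework uses to align same-class samples across domains.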

EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence

no code yet • 16 Apr 2024

We follow the global contrastive learning loss as introduced in SogCLR, and propose EMC$^2$ which utilizes an adaptive Metropolis-Hastings subroutine to generate hardness-aware negative samples in an online fashion during the optimization.
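EMC$^2$'s exact subroutine is not public; to illustrate the general idea of Metropolis-Hastings sampling of hardness-aware negatives (the chain's stationary distribution favours candidates most similar to the anchor, without ever computing a full softmax over the pool), here is an assumed, simplified NumPy sketch:

```python
import numpy as np

def mh_negative_sampler(anchor, pool, n_samples=500, temperature=0.5, seed=0):
    """Metropolis-Hastings chain over candidate indices whose stationary
    distribution is proportional to exp(cos_sim(anchor, pool[j]) / T),
    i.e. 'harder' (more similar) negatives are sampled more often.
    Only unnormalised target densities are needed."""
    rng = np.random.default_rng(seed)
    a = anchor / np.linalg.norm(anchor)
    z = pool / np.linalg.norm(pool, axis=1, keepdims=True)

    def target(j):                         # unnormalised target density
        return np.exp(a @ z[j] / temperature)

    current = rng.integers(len(pool))
    samples = []
    for _ in range(n_samples):
        proposal = rng.integers(len(pool))  # uniform proposal distribution
        # accept with the standard Metropolis ratio
        if rng.random() < min(1.0, target(proposal) / target(current)):
            current = proposal
        samples.append(current)
    return samples

# toy pool: index 0 is identical to the anchor, the rest are orthogonal
anchor = np.array([1.0, 0.0, 0.0, 0.0])
pool = np.eye(4)
samples = mh_negative_sampler(anchor, pool)
```

In this toy run the hard negative (index 0) is drawn far more often than its uniform share, which is the behaviour a hardness-aware sampler is after.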

Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation

no code yet • 16 Apr 2024

We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.

Contextrast: Contextual Contrastive Learning for Semantic Segmentation

no code yet • 16 Apr 2024

Despite great improvements in semantic segmentation, challenges persist because of the lack of local/global contexts and the relationship between them.

Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition

no code yet • 15 Apr 2024

In this study, we propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.

RankCLIP: Ranking-Consistent Language-Image Pretraining

no code yet • 15 Apr 2024

Among the ever-evolving development of vision-language models, contrastive language-image pretraining (CLIP) has set new benchmarks in many downstream tasks such as zero-shot classifications by leveraging self-supervised contrastive learning on large amounts of text-image pairs.
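The CLIP-style objective the abstract refers to is a symmetric contrastive loss over a batch of matched text-image pairs. A minimal NumPy sketch of that pattern (an assumption for illustration, not CLIP's actual code; CLIP also learns the temperature, which is fixed here):

```python
import numpy as np

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over matched image-text pairs:
    row i of each modality forms the positive pair, every other
    row in the batch serves as a negative."""
    im = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    tx = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = im @ tx.T / temperature

    def xent_diag(l):
        # cross-entropy with the diagonal as the correct class
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average of image->text and text->image directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

rng = np.random.default_rng(2)
img = rng.normal(size=(6, 12))
txt = img + 0.05 * rng.normal(size=(6, 12))     # stand-in matched "captions"
loss_matched = clip_style_loss(img, txt)
loss_shuffled = clip_style_loss(img, txt[::-1])  # break the pairing
```

Correctly paired batches score a much lower loss than shuffled ones, which is what drives the alignment of the two modalities during pretraining.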

Real-world Instance-specific Image Goal Navigation for Service Robots: Bridging the Domain Gap with Contrastive Learning

no code yet • 15 Apr 2024

To address this, we propose a novel method called Few-shot Cross-quality Instance-aware Adaptation (CrossIA), which employs contrastive learning with an instance classifier to align features between massive low- and few high-quality images.

Fuse after Align: Improving Face-Voice Association Learning via Multimodal Encoder

no code yet • 15 Apr 2024

Many advances have already been made in learning the association between voices and faces.

Learning Tracking Representations from Single Point Annotations

no code yet • 15 Apr 2024

In this paper, we propose to learn tracking representations from single point annotations (i.e., 4.5x faster to annotate than the traditional bounding box) in a weakly supervised manner.

Contrastive Mean-Shift Learning for Generalized Category Discovery

no code yet • 15 Apr 2024

We address the problem of generalized category discovery (GCD) that aims to partition a partially labeled collection of images; only a small part of the collection is labeled and the total number of target classes is unknown.