Contrastive Learning

2,196 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
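The idea above can be made concrete with a loss function. Below is a minimal sketch of an InfoNCE-style contrastive loss using in-batch negatives, written with NumPy; the function and variable names are illustrative, not taken from any particular library, and real systems (e.g. SimCLR-style training) add augmentation pipelines and learned encoders on top of this.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: each anchors[i] should be close
    to positives[i] and far from positives[j] for j != i, which serve as
    in-batch negatives. Both inputs are assumed L2-normalized, shape (N, D)."""
    # Cosine similarities between every anchor and every positive.
    sim = anchors @ positives.T / temperature            # (N, N)
    # Softmax cross-entropy where the "correct class" for row i is column i.
    sim = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
a /= np.linalg.norm(a, axis=1, keepdims=True)
# Positives: slightly perturbed copies of the anchors (a stand-in for
# two augmented "views" of the same instance).
p = a + 0.05 * rng.normal(size=a.shape)
p /= np.linalg.norm(p, axis=1, keepdims=True)

loss_matched = info_nce_loss(a, p)        # correct pairing
loss_shuffled = info_nce_loss(a, p[::-1]) # deliberately mismatched pairing
```

Minimizing this loss pulls each anchor toward its own positive and pushes it away from the other items in the batch, which is exactly the "similar close, dissimilar far" objective described above; accordingly, the matched pairing yields a lower loss than the mismatched one.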

(Image credit: Schroff et al. 2015)

Libraries

Use these libraries to find Contrastive Learning models and implementations
See all 6 libraries.

Latest papers with no code

CORI: CJKV Benchmark with Romanization Integration -- A step towards Cross-lingual Transfer Beyond Textual Scripts

no code yet • 19 Apr 2024

Naively assuming English as a source language may hinder cross-lingual transfer for many languages by failing to consider the importance of language contact.

Improving Pediatric Pneumonia Diagnosis with Adult Chest X-ray Images Utilizing Contrastive Learning and Embedding Similarity

no code yet • 19 Apr 2024

Despite the advancement of deep learning-based computer-aided diagnosis (CAD) methods for pneumonia from adult chest x-ray (CXR) images, the performance of CAD methods applied to pediatric images remains suboptimal, mainly due to the lack of large-scale annotated pediatric imaging datasets.

Zero-Shot Medical Phrase Grounding with Off-the-shelf Diffusion Models

no code yet • 19 Apr 2024

In this work, we use a publicly available Foundation Model, namely the Latent Diffusion Model, to solve this challenging task.

Contrastive Gaussian Clustering: Weakly Supervised 3D Scene Segmentation

no code yet • 19 Apr 2024

Recent works in novel-view synthesis have shown how to model the appearance of a scene via a cloud of 3D Gaussians, and how to generate accurate images from a given viewpoint by projecting the Gaussians onto it and $\alpha$-blending their colors.

Leveraging Intra-modal and Inter-modal Interaction for Multi-Modal Entity Alignment

no code yet • 19 Apr 2024

Multi-modal entity alignment (MMEA) aims to identify equivalent entity pairs across different multi-modal knowledge graphs (MMKGs).

When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes

no code yet • 18 Apr 2024

We present FastFit, a method and Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes.

Knowledge-Aware Multi-Intent Contrastive Learning for Multi-Behavior Recommendation

no code yet • 18 Apr 2024

This model uses relationships in the knowledge graph to construct intents, aiming to mine the connections among users' multiple behaviors from the perspective of intents and thereby achieve more accurate recommendations.

FecTek: Enhancing Term Weight in Lexicon-Based Retrieval with Feature Context and Term-level Knowledge

no code yet • 18 Apr 2024

To effectively enrich the feature context representations of term weight, the Feature Context Module (FCM) is introduced, which leverages the power of BERT's representation to determine dynamic weights for each element in the embedding.

TrACT: A Training Dynamics Aware Contrastive Learning Framework for Long-tail Trajectory Prediction

no code yet • 18 Apr 2024

In this paper, we propose to incorporate richer training dynamics information into a prototypical contrastive learning framework.

Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives

no code yet • 17 Apr 2024

The Composed Image Retrieval (CIR) task aims to retrieve target images using a composed query consisting of a reference image and a modified text.