Contrastive Learning

2,205 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised (typically self-supervised) representation learning. The goal is to learn an embedding space in which similar instances, such as two augmented views of the same sample, are close together, while dissimilar instances are far apart.

It has proven effective in a variety of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
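As a concrete illustration, below is a minimal sketch of a typical contrastive objective, an NT-Xent (InfoNCE-style) loss of the kind popularized by SimCLR, written in PyTorch. The function name nt_xent_loss and the batch setup are illustrative, not taken from any specific paper listed on this page.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: [N, D] embeddings of two augmented views of the same N samples.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # [2N, D]
    sim = z @ z.t() / temperature               # temperature-scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))           # a sample is never its own positive or negative
    n = z1.size(0)
    # For row i, the positive is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with any encoder: embed two augmentations of a batch, then pull the
# matching pairs together and push all other pairs apart.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)

The temperature controls how strongly hard negatives are weighted; values roughly between 0.1 and 0.5 are common in practice.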

Libraries

Use these libraries to find Contrastive Learning models and implementations

Latest papers with no code

MM-TTS: A Unified Framework for Multimodal, Prompt-Induced Emotional Text-to-Speech Synthesis

no code yet • 29 Apr 2024

Emotional Text-to-Speech (E-TTS) synthesis has gained significant attention in recent years due to its potential to enhance human-computer interaction.

Retrieval-Oriented Knowledge for Click-Through Rate Prediction

no code yet • 28 Apr 2024

Specifically, a knowledge base, consisting of a retrieval-oriented embedding layer and a knowledge encoder, is designed to preserve and imitate the retrieved & aggregated representations in a decomposition-reconstruction paradigm.
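As a hedged sketch of the idea in this abstract (the paper's actual architecture and decomposition-reconstruction objective may differ), the knowledge base can be read as an embedding layer plus an encoder trained to imitate precomputed retrieved-and-aggregated representations, for example with a simple regression loss:

import torch
import torch.nn as nn

class KnowledgeBase(nn.Module):
    # Hypothetical sketch: retrieval-oriented embedding layer + knowledge encoder,
    # trained to imitate retrieved & aggregated representations so that explicit
    # retrieval can be skipped when serving the CTR model.
    def __init__(self, num_features, dim):
        super().__init__()
        self.embedding = nn.Embedding(num_features, dim)
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feature_ids):             # feature_ids: [batch, num_fields]
        return self.encoder(self.embedding(feature_ids).mean(dim=1))

kb = KnowledgeBase(num_features=1000, dim=64)
feature_ids = torch.randint(0, 1000, (32, 10))
retrieved_repr = torch.randn(32, 64)            # stand-in for retrieved & aggregated targets
loss = nn.functional.mse_loss(kb(feature_ids), retrieved_repr)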

Contrastive Learning Method for Sequential Recommendation based on Multi-Intention Disentanglement

no code yet • 28 Apr 2024

Sequential recommendation is an important branch of recommender systems that aims to recommend personalized items by analyzing and predicting users' ordered historical interactions.

A Hybrid Approach for Document Layout Analysis in Document Images

no code yet • 27 Apr 2024

This paper navigates the complexities of understanding various elements within document images, such as text, images, tables, and headings.

Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum

no code yet • 27 Apr 2024

Since consistency and complementarity information correspond to low-frequency and high-frequency information, respectively, this paper revisits the problem of multimodal emotion recognition in conversation from the perspective of the graph spectrum.
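To make the low-/high-frequency reading concrete, here is an illustrative decomposition (not the paper's method) of node features into smooth and non-smooth components via the graph Laplacian eigenbasis, which is the sense in which "consistency" and "complementarity" information correspond to low and high graph frequencies:

import torch

# Toy graph: 4 nodes, undirected edges.
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
L = torch.diag(A.sum(dim=1)) - A                # unnormalized graph Laplacian
eigvals, eigvecs = torch.linalg.eigh(L)         # graph frequencies (ascending) and Fourier basis

X = torch.randn(4, 8)                           # node features
X_hat = eigvecs.t() @ X                         # graph Fourier transform
k = 2
X_low = eigvecs[:, :k] @ X_hat[:k]              # low-frequency (smooth, "consistency") part
X_high = eigvecs[:, k:] @ X_hat[k:]             # high-frequency ("complementarity") part
assert torch.allclose(X_low + X_high, X, atol=1e-4)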

2M-NER: Contrastive Learning for Multilingual and Multimodal NER with Language and Modal Fusion

no code yet • 26 Apr 2024

To tackle this challenging MMNER task on the dataset, we introduce a new model called 2M-NER, which aligns the text and image representations using contrastive learning and integrates a multimodal collaboration module to effectively depict the interactions between the two modalities.
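A hedged sketch of the kind of contrastive text-image alignment this abstract describes (2M-NER's exact encoders and loss are not shown here) is a symmetric, CLIP-style InfoNCE over matched text-image pairs:

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb, image_emb, temperature=0.07):
    # Symmetric InfoNCE: the i-th text and i-th image form the positive pair;
    # every other pairing in the batch serves as a negative.
    text_emb = F.normalize(text_emb, dim=1)
    image_emb = F.normalize(image_emb, dim=1)
    logits = text_emb @ image_emb.t() / temperature
    targets = torch.arange(text_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

text_emb, image_emb = torch.randn(16, 256), torch.randn(16, 256)
loss = cross_modal_contrastive_loss(text_emb, image_emb)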

A Unified Label-Aware Contrastive Learning Framework for Few-Shot Named Entity Recognition

no code yet • 26 Apr 2024

Our approach enriches the context by utilizing label semantics as suffix prompts.
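As a purely illustrative example of label semantics used as suffix prompts (the paper's actual template is not given here), the input sentence can be extended with short natural-language label descriptions so the encoder sees label meaning in context:

# Hypothetical template; the labels and wording below are assumptions.
label_descriptions = {
    "PER": "person names",
    "ORG": "organizations such as companies and institutions",
    "LOC": "locations such as cities and countries",
}

def with_label_suffix(sentence, labels=label_descriptions):
    suffix = " ".join(f"[{tag}] means {desc}." for tag, desc in labels.items())
    return f"{sentence} {suffix}"

print(with_label_suffix("Apple opened a new office in Berlin."))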

ConKeD++ -- Improving descriptor learning for retinal image registration: A comprehensive study of contrastive losses

no code yet • 25 Apr 2024

In this work, we propose to test, extend, and improve ConKeD, a state-of-the-art framework for color fundus image registration.

Learning Discriminative Spatio-temporal Representations for Semi-supervised Action Recognition

no code yet • 25 Apr 2024

Semi-supervised action recognition aims to improve spatio-temporal reasoning ability with a few labeled data in conjunction with a large amount of unlabeled data.

FedStyle: Style-Based Federated Learning Crowdsourcing Framework for Art Commissions

no code yet • 25 Apr 2024

The unique artistic style is crucial to artists' occupational competitiveness, yet prevailing Art Commission Platforms rarely support style-based retrieval.