Contrastive Learning

2162 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
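The similar-close / dissimilar-far objective above is typically implemented with a contrastive loss such as the triplet loss of Schroff et al. 2015 or the InfoNCE/NT-Xent loss used in SimCLR-style methods. Below is a minimal NumPy sketch of an InfoNCE-style loss (the function name, batch shapes, and temperature value are illustrative assumptions, not taken from any paper listed here; real implementations differ in details such as symmetrization and in-batch negative masking):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE-style contrastive loss for a batch of positive pairs
    (z_i[k], z_j[k]); every other pairing in the batch acts as a
    negative. Sketch only, not a reference implementation."""
    # L2-normalize embeddings so dot products are cosine similarities
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    logits = z_i @ z_j.T / temperature  # (N, N) similarity matrix
    # Row k's positive sits on the diagonal: softmax cross-entropy
    # of each row against its own index
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))  # near-duplicates
loss_aligned = info_nce_loss(anchors, positives)
loss_random = info_nce_loss(anchors, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned pairs yield a lower loss
```

Minimizing this loss pulls each positive pair together while pushing it away from the other samples in the batch, which is exactly the "similar close, dissimilar far" geometry described above.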

ActNetFormer: Transformer-ResNet Hybrid Method for Semi-Supervised Action Recognition in Videos

faceonlive/ai-research 9 Apr 2024

Our framework leverages both labeled and unlabeled data to robustly learn action representations in videos, combining pseudo-labeling with contrastive learning for effective learning from both types of samples.

Anatomical Conditioning for Contrastive Unpaired Image-to-Image Translation of Optical Coherence Tomography Images

faceonlive/ai-research 8 Apr 2024

For a unified analysis of medical images from different modalities, data harmonization using image-to-image (I2I) translation is desired.

DWE+: Dual-Way Matching Enhanced Framework for Multimodal Entity Linking

faceonlive/ai-research 7 Apr 2024

Multimodal entity linking (MEL) aims to utilize multimodal information (usually textual and visual information) to link ambiguous mentions to unambiguous entities in a knowledge base.

IITK at SemEval-2024 Task 1: Contrastive Learning and Autoencoders for Semantic Textual Relatedness in Multilingual Texts

faceonlive/ai-research 6 Apr 2024

This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness.

DELTA: Decoupling Long-Tailed Online Continual Learning

viper-purdue/delta 6 Apr 2024

A significant challenge in achieving ubiquitous Artificial Intelligence is the limited ability of models to rapidly learn new information in real-world scenarios where data follows long-tailed distributions, all while avoiding forgetting previously acquired knowledge.

On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models

faceonlive/ai-research 4 Apr 2024

We observe that, when distilled on a task from a pre-trained teacher model, a small model can achieve or surpass the performance it would achieve if it was pre-trained then finetuned on that task.

A Comprehensive Survey on Self-Supervised Learning for Recommendation

hkuds/awesome-sslrec-papers 4 Apr 2024

Recommender systems play a crucial role in tackling the challenge of information overload by delivering personalized recommendations based on individual user preferences.

Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation

heshuting555/dshmp 4 Apr 2024

In fact, static cues can sometimes interfere with temporal perception by overshadowing motion cues.

Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning

andron00e/sparsecbm 4 Apr 2024

We propose a novel architecture and method of explainable classification with Concept Bottleneck Models (CBMs).

Large Language Models for Expansion of Spoken Language Understanding Systems to New Languages

samsung/mt-llm-nlu 3 Apr 2024

In the on-device scenario (tiny and not pretrained SLU), our method improved the Overall Accuracy from 5.31% to 22.06% over the baseline Global-Local Contrastive Learning Framework (GL-CLeF) method.
