Contrastive Learning
2162 papers with code • 1 benchmark • 11 datasets
Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
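As a concrete illustration, here is a minimal PyTorch sketch of an InfoNCE-style contrastive loss (the objective behind methods such as SimCLR), simplified to use only cross-view negatives rather than the full symmetric NT-Xent. The function name `info_nce_loss` and all shapes are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_i, z_j, temperature=0.1):
    """InfoNCE-style loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings of two augmented views of the same
    N instances; row k of z_i and row k of z_j form a positive pair.
    """
    z_i = F.normalize(z_i, dim=1)
    z_j = F.normalize(z_j, dim=1)
    # Cosine similarity between every view-1 and view-2 embedding.
    logits = z_i @ z_j.t() / temperature  # (N, N)
    # Diagonal entries are positive pairs; all others act as negatives.
    targets = torch.arange(z_i.size(0), device=z_i.device)
    return F.cross_entropy(logits, targets)

# Toy usage: 8 pairs of 128-dim embeddings from some encoder.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```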
(Image credit: Schroff et al. 2015)
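The credited figure comes from FaceNet (Schroff et al., 2015), which popularized the triplet loss for learning embeddings. Below is a minimal sketch of that loss, assuming L2-normalized embeddings as in the paper; `triplet_loss` is an illustrative helper, not the authors' code.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull the anchor toward the positive and push it
    away from the negative by at least `margin` in squared distance.
    """
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with L2-normalized 128-dim embeddings.
a, p, n = (F.normalize(torch.randn(4, 128), dim=1) for _ in range(3))
loss = triplet_loss(a, p, n)
```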
Libraries
Use these libraries to find Contrastive Learning models and implementations.
Latest papers
ActNetFormer: Transformer-ResNet Hybrid Method for Semi-Supervised Action Recognition in Videos
Our framework leverages both labeled and unlabeled data to robustly learn action representations in videos, combining pseudo-labeling with contrastive learning to learn effectively from both types of samples.
Anatomical Conditioning for Contrastive Unpaired Image-to-Image Translation of Optical Coherence Tomography Images
For a unified analysis of medical images from different modalities, data harmonization using image-to-image (I2I) translation is desired.
DWE+: Dual-Way Matching Enhanced Framework for Multimodal Entity Linking
Multimodal entity linking (MEL) aims to utilize multimodal information (usually textual and visual) to link ambiguous mentions to unambiguous entities in a knowledge base.
IITK at SemEval-2024 Task 1: Contrastive Learning and Autoencoders for Semantic Textual Relatedness in Multilingual Texts
This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness.
DELTA: Decoupling Long-Tailed Online Continual Learning
A significant challenge in achieving ubiquitous Artificial Intelligence is the limited ability of models to rapidly learn new information in real-world scenarios where data follows long-tailed distributions, all without forgetting previously acquired knowledge.
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
We observe that, when distilled on a task from a pre-trained teacher model, a small model can match or surpass the performance it would achieve if it were pre-trained and then fine-tuned on that task.
A Comprehensive Survey on Self-Supervised Learning for Recommendation
Recommender systems play a crucial role in tackling the challenge of information overload by delivering personalized recommendations based on individual user preferences.
Decoupling Static and Hierarchical Motion Perception for Referring Video Segmentation
In fact, static cues can sometimes interfere with temporal perception by overshadowing motion cues.
Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning
We propose a novel architecture and method of explainable classification with Concept Bottleneck Models (CBMs).
Large Language Models for Expansion of Spoken Language Understanding Systems to New Languages
In the on-device scenario (a tiny SLU model without pretraining), our method improved the Overall Accuracy from 5.31% to 22.06% over the baseline Global-Local Contrastive Learning Framework (GL-CLeF) method.