Contrastive Learning
2231 papers with code • 1 benchmark • 11 datasets
Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
It has proven effective in a variety of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval, and the learned representations can serve as features for downstream tasks such as classification and clustering.
(Image credit: Schroff et al. 2015)
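To make the objective concrete, below is a minimal NumPy sketch of the InfoNCE (NT-Xent) loss used by popular contrastive methods such as SimCLR. The batch size, embedding dimension, and temperature are illustrative assumptions, not values from any paper listed on this page.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z1, z2: (N, D) arrays of embeddings; row i of z1 and row i of z2
    come from two augmented views of the same instance (a positive pair).
    All other rows in the batch act as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = z1 @ z2.T / temperature              # (N, N)

    # For row i, the positive is column i; softmax cross-entropy
    # pulls positives together and pushes negatives apart.
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: random embeddings standing in for two augmented views.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))  # noisy views of the same instances
print(info_nce_loss(z1, z2))
```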
Libraries
Use these libraries to find Contrastive Learning models and implementations.
Latest papers
Auto-Formula: Recommend Formulas in Spreadsheets using Contrastive Learning for Table Representations
Spreadsheets are widely recognized as the most popular end-user programming tools, blending the power of formula-based computation with an intuitive table-based interface.
When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes
We present FastFit, a method and a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes.
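FastFit's own API is not reproduced here. As a generic, hedged illustration of few-shot classification over an embedding space with many similar classes, the sketch below assigns each query to the class whose support-set centroid is most cosine-similar; the function name and random stand-in embeddings are hypothetical.

```python
import numpy as np

def nearest_centroid_classify(support_emb, support_labels, query_emb):
    """Few-shot classification by cosine similarity to class centroids.

    support_emb:    (N, D) embeddings of the labeled few-shot examples
    support_labels: (N,) integer class ids
    query_emb:      (M, D) embeddings to classify
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    classes = np.unique(support_labels)
    # One centroid per class, averaged over its few support examples.
    centroids = np.stack([
        support_emb[support_labels == c].mean(axis=0) for c in classes
    ])
    sims = normalize(query_emb) @ normalize(centroids).T  # (M, C)
    return classes[np.argmax(sims, axis=1)]

# Toy usage with random stand-in embeddings (hypothetical data).
rng = np.random.default_rng(0)
emb = rng.normal(size=(12, 32))
labels = np.repeat(np.arange(4), 3)   # 4 classes, 3 shots each
print(nearest_centroid_classify(emb, labels, emb[:2]))
```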
Observation, Analysis, and Solution: Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training
In this paper, we ask whether the fine-tuning performance of extremely simple, small-scale ViTs can also benefit from this pre-training paradigm, a question that remains considerably less studied than the well-established methodology of designing lightweight architectures with sophisticated components.
Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains
By integrating CoI-M with rain-/detail-aware contrastive learning, we develop CoIC, an innovative and potent algorithm tailored for training models on mixed datasets.
Blind Localization and Clustering of Anomalies in Textures
By identifying the anomalous regions with high fidelity, we can restrict our focus to those regions of interest; then, contrastive learning is employed to increase the separability of different anomaly types and reduce the intra-class variation.
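The paper's exact objective is not shown on this page; as a hedged sketch of how a label-aware (SupCon-style) contrastive loss increases separability between anomaly types while tightening each type's cluster, assuming anomaly-type labels are available:

```python
import numpy as np

def supcon_loss(emb, labels, temperature=0.1):
    """Supervised contrastive loss in the style of Khosla et al. (2020).

    emb:    (N, D) embeddings, e.g. of detected anomalous regions
    labels: (N,) anomaly-type ids; same-type pairs are treated as
            positives, so types are pushed apart while each type's
            cluster is pulled tighter.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    logits = emb @ emb.T / temperature
    n = len(labels)
    self_mask = ~np.eye(n, dtype=bool)                       # exclude i == i
    pos_mask = (labels[:, None] == labels[None, :]) & self_mask

    logits -= logits.max(axis=1, keepdims=True)              # stability
    exp = np.exp(logits) * self_mask                         # all pairs except self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))

    # Average log-probability over each anchor's positives;
    # anchors with no positives are skipped.
    pos_counts = pos_mask.sum(axis=1)
    per_anchor = (log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return -per_anchor[pos_counts > 0].mean()
```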
InfoMatch: Entropy Neural Estimation for Semi-Supervised Image Classification
To address this, we employ information entropy neural estimation to exploit the potential of unlabeled samples.
Vision-and-Language Navigation via Causal Learning
In the pursuit of robust and generalizable environment perception and language understanding, the ubiquitous challenge of dataset bias continues to plague vision-and-language navigation (VLN) agents, hindering their performance in unseen environments.
MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion
To overcome their inherent incompleteness, multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given MMKGs, leveraging both structural information from the triples and multi-modal information of the entities.
UniSAR: Modeling User Transition Behaviors between Search and Recommendation
In this paper, we propose a framework named UniSAR that effectively models the different types of fine-grained behavior transitions to provide users with a Unified Search And Recommendation service.
WB LUTs: Contrastive Learning for White Balancing Lookup Tables
Automatic white balancing (AWB), one of the first steps in an image signal processing (ISP) pipeline, aims to correct the color cast induced by the scene illuminant.