Metric Learning
555 papers with code • 8 benchmarks • 32 datasets
The goal of Metric Learning is to learn a representation function that maps objects into an embedding space. Distances in the embedding space should preserve the objects’ similarity: similar objects are mapped close together and dissimilar objects far apart. Various loss functions have been developed for Metric Learning. For example, the contrastive loss encourages objects from the same class to be mapped to nearby points, and objects from different classes to points separated by at least a margin. The triplet loss is also popular; it requires the distance between an anchor sample and a positive sample to be smaller than the distance between the anchor and a negative sample.
Source: Road Network Metric Learning for Estimated Time of Arrival
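The two losses described above can be sketched as follows. This is a minimal NumPy illustration of the standard formulations, not any specific paper's implementation; the function names and `margin` default are chosen for clarity.

```python
import numpy as np

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """Contrastive loss for a pair of embeddings."""
    d = np.linalg.norm(x1 - x2)  # Euclidean distance in embedding space
    if same_class:
        # Pull same-class pairs together: penalize any distance
        return d ** 2
    # Push different-class pairs at least `margin` apart
    return max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: anchor should be closer to positive than to negative."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    # Loss is zero once the positive is closer than the negative by `margin`
    return max(0.0, d_ap - d_an + margin)

# Example: a well-separated triplet incurs zero loss
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([3.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0, since 0.1 - 3.0 + 1.0 < 0
```

In practice these losses are minimized over mined pairs or triplets of training samples, typically with a deep network producing the embeddings.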
Libraries
Use these libraries to find Metric Learning models and implementations.
Latest papers
Metric Learning from Limited Pairwise Preference Comparisons
We study whether the metric can still be recovered, even though learning individual ideal items is no longer possible.
Curvature Augmented Manifold Embedding and Learning
A new dimensional reduction (DR) and data visualization method, Curvature-Augmented Manifold Embedding and Learning (CAMEL), is proposed.
Unsupervised Collaborative Metric Learning with Mixed-Scale Groups for General Object Retrieval
This paper presents a novel unsupervised deep metric learning approach, termed unsupervised collaborative metric learning with mixed-scale groups (MS-UGCML), devised to learn embeddings for objects of varying scales.
A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection
Detecting temporal semantic changes of words is an important task for various NLP applications that must make time-sensitive predictions.
Polos: Multimodal Metric Learning from Human Feedback for Image Captioning
Establishing an automatic evaluation metric that closely aligns with human judgments is essential for effectively developing image captioning models.
Metric-Learning Encoding Models Identify Processing Profiles of Linguistic Features in BERT's Representations
Together, this demonstrates the utility of Metric-Learning Encoding Models (MLEMs) for studying how linguistic features are neurally encoded in language models, and the advantage of MLEMs over traditional methods.
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning
Despite the success of recent pre-trained models trained on large-scale datasets, it remains challenging to adapt them to DML tasks in a local data domain while retaining previously gained knowledge.
Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences
In our experiments, we observed that such a model is prone to mislabeling the source entities, which can often appear in the text, as the target entities.
Wasserstein Distance-based Expansion of Low-Density Latent Regions for Unknown Class Detection
We present a novel approach that effectively identifies unknown objects by distinguishing between high- and low-density regions in latent space.
Towards Improved Proxy-based Deep Metric Learning via Data-Augmented Domain Adaptation
Our experiments on benchmarks, including the popular CUB-200-2011, CARS196, Stanford Online Products, and In-Shop Clothes Retrieval, show that our learning algorithm significantly improves the existing proxy losses and achieves superior results compared to the existing methods.