Metric Learning
557 papers with code • 8 benchmarks • 32 datasets
The goal of Metric Learning is to learn a representation function that maps objects into an embedding space. Distances in the embedding space should preserve the objects’ similarity — similar objects are mapped close together and dissimilar objects far apart. Various loss functions have been developed for Metric Learning. For example, the contrastive loss pulls objects of the same class toward the same point and pushes objects of different classes apart until their distance exceeds a margin. The triplet loss is also popular: it requires the distance between an anchor sample and a positive sample to be smaller, by at least a margin, than the distance between the anchor and a negative sample.
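The two losses described above can be sketched in a few lines. This is a minimal illustration on raw embedding vectors, not any particular library's implementation; the margin value and the squared-distance form of the contrastive loss follow the common formulation (Hadsell et al.), which is an assumption here.

```python
def _dist(u, v):
    """Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """Pull same-class pairs together; push different-class pairs
    apart until their distance exceeds the margin."""
    d = _dist(x1, x2)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Anchor-positive distance must be smaller than anchor-negative
    distance by at least the margin, otherwise a penalty is incurred."""
    d_ap = _dist(anchor, positive)
    d_an = _dist(anchor, negative)
    return max(0.0, d_ap - d_an + margin)

# A triplet that already satisfies the margin contributes zero loss:
a, p = [0.0, 0.0], [0.1, 0.0]
print(triplet_loss(a, p, [2.0, 0.0]))  # → 0.0 (negative is far enough)
print(triplet_loss(a, p, [0.5, 0.0]))  # → 0.6 (negative too close)
```

In practice these losses are minimized over many sampled pairs or triplets, and the choice of which pairs/triplets to sample (mining) strongly affects results.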
Source: Road Network Metric Learning for Estimated Time of Arrival
Libraries
Use these libraries to find Metric Learning models and implementations.
Latest papers
Wasserstein Distance-based Expansion of Low-Density Latent Regions for Unknown Class Detection
We present a novel approach that effectively identifies unknown objects by distinguishing between high and low-density regions in latent space.
Towards Improved Proxy-based Deep Metric Learning via Data-Augmented Domain Adaptation
Our experiments on benchmarks, including the popular CUB-200-2011, CARS196, Stanford Online Products, and In-Shop Clothes Retrieval, show that our learning algorithm significantly improves the existing proxy losses and achieves superior results compared to the existing methods.
DUCK: Distance-based Unlearning via Centroid Kinematics
Machine Unlearning is emerging as a new field, driven by the pressing need to ensure privacy in modern artificial intelligence models.
Robust Concept Erasure via Kernelized Rate-Distortion Maximization
Distributed representations provide a vector space that captures meaningful relationships between data instances.
Deep Hashing via Householder Quantization
Hashing is at the heart of large-scale image similarity search, and recent methods have been substantially improved through deep learning techniques.
Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling
In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings.
Long-Tailed Classification Based on Coarse-Grained Leading Forest and Multi-Center Loss
Bias in a classification model is caused by both class-wise and attribute-wise imbalance.
FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators
Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration.
Dark Side Augmentation: Generating Diverse Night Examples for Metric Learning
We propose to train a GAN-based synthetic-image generator, translating available day-time image examples into night images.
Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?
By discussing the properties of each group of methods, we derive SimPool, a simple attention-based pooling mechanism that replaces the default one in both convolutional and transformer encoders.