Metric Learning

557 papers with code • 8 benchmarks • 32 datasets

The goal of Metric Learning is to learn a representation function that maps objects into an embedding space. Distances in the embedding space should preserve the objects' similarity: similar objects are mapped close together and dissimilar objects far apart. Various loss functions have been developed for Metric Learning. For example, the contrastive loss pulls objects of the same class toward the same point while pushing objects of different classes apart until their distance exceeds a margin. The triplet loss is also popular; it requires the distance between an anchor sample and a positive sample to be smaller, by a margin, than the distance between the anchor and a negative sample.

Source: Road Network Metric Learning for Estimated Time of Arrival
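The two losses described above can be sketched in a few lines. This is a minimal NumPy illustration for a single pair or triplet of embeddings; the function names, the Euclidean distance, and the margin values are illustrative choices, not a specific library's API.

```python
import numpy as np

def contrastive_loss(d, same_class, margin=1.0):
    # Contrastive loss for one pair whose embedding distance is d:
    # same-class pairs are pulled together (loss grows with d),
    # different-class pairs are pushed apart until d exceeds the margin.
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Triplet loss: require d(anchor, positive) + margin <= d(anchor, negative).
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# A triplet that already satisfies the margin incurs zero loss:
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # positive close to the anchor
n = np.array([1.0, 0.0])  # negative far from the anchor
print(triplet_loss(a, p, n))  # -> 0.0
```

In practice these losses are averaged over mined pairs or triplets within a mini-batch, which is why the sample-mining strategies in several of the papers below matter so much.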


Most implemented papers

A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets

RSL-NEU/person-reid-benchmark 31 May 2016

To ensure a fair comparison, all of the approaches were implemented using a unified code library that includes 11 feature extraction algorithms and 22 metric learning and ranking techniques.

Distance Metric Learning using Graph Convolutional Networks: Application to Functional Brain Networks

sk1712/gcn_metric_learning 7 Mar 2017

Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements.

NormFace: L2 Hypersphere Embedding for Face Verification

happynear/NormFace 21 Apr 2017

We show that both strategies, and small variants, consistently improve performance by between 0.2% and 0.4% on the LFW dataset based on two models.

Unsupervised Metric Learning in Presence of Missing Data

rsonthal/MRMissing.jl 19 Jul 2018

Here, we present a new algorithm, MR-MISSING, that extends these previous algorithms and can be used to compute low-dimensional representations of data sets with missing entries.

Task-Embedded Control Networks for Few-Shot Imitation Learning

stepjam/TecNets 8 Oct 2018

Despite this, most robot learning approaches have focused on learning a single task, from scratch, with a limited notion of generalisation, and no way of leveraging the knowledge to learn other tasks more efficiently.

Deep Metric Learning by Online Soft Mining and Class-Aware Attention

XinshaoAmosWang/OSM_CAA_WeightedContrastiveLoss 4 Nov 2018

Therefore, we propose a novel sample mining method, called Online Soft Mining (OSM), which assigns one continuous score to each sample to make use of all samples in the mini-batch.

Polarity Loss for Zero-shot Object Detection

KennithLi/Awesome-Zero-Shot-Object-Detection 22 Nov 2018

This setting gives rise to the need for correct alignment between visual and semantic concepts, so that the unseen objects can be identified using only their semantic attributes.

Improved Embeddings with Easy Positive Triplet Mining

littleredxh/EasyPositiveHardNegative 8 Apr 2019

Deep metric learning seeks to define an embedding where semantically similar images are embedded to nearby locations, and semantically dissimilar images are embedded to distant locations.

Holistic and Comprehensive Annotation of Clinically Significant Findings on Diverse CT Images: Learning from Radiology Reports and Label Ontology

rsummers11/CADLab CVPR 2019

In radiologists' routine work, one major task is to read a medical image, e.g., a CT scan, find significant lesions, and describe them in the radiology report.

Relational Knowledge Distillation

yoshitomo-matsubara/torchdistill CVPR 2019

Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller.