Proxy Anchor Loss for Deep Metric Learning

CVPR 2020 · Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

Existing metric learning losses can be categorized into two classes: pair-based and proxy-based losses. The former class can leverage fine-grained semantic relations between data points, but generally converges slowly due to its high training complexity. In contrast, the latter class enables fast and reliable convergence, but cannot consider rich data-to-data relations. This paper presents a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations. Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers. At the same time, it allows embedding vectors of data to interact with each other in its gradients to exploit data-to-data relations. Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges most quickly.
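The loss treats each proxy as an anchor: embeddings of the proxy's class are pulled toward it while all other embeddings are pushed away, and the log-sum-exp over each group lets embeddings influence one another's gradients. A minimal NumPy sketch of this formulation is below; the function name, argument names, and default hyperparameters (margin δ = 0.1, scale α = 32, the values used in the paper's experiments) are our assumptions, not an official implementation.

```python
import numpy as np

def proxy_anchor_loss(embeddings, labels, proxies, margin=0.1, alpha=32.0):
    """Sketch of the Proxy-Anchor loss.

    embeddings: (N, D) L2-normalized embedding vectors
    labels:     (N,)   integer class labels
    proxies:    (C, D) L2-normalized proxy vectors, one per class
    margin:     delta, similarity margin around each proxy (assumed 0.1)
    alpha:      scaling factor on the similarities (assumed 32)
    """
    # Cosine similarity between every embedding and every proxy: (N, C)
    sims = embeddings @ proxies.T

    n, c = sims.shape
    pos_mask = np.zeros((n, c), dtype=bool)
    pos_mask[np.arange(n), labels] = True  # each x is positive for its own class proxy
    neg_mask = ~pos_mask

    # Per proxy: log(1 + sum over positives of exp(-alpha*(s - delta)))
    # and        log(1 + sum over negatives of exp( alpha*(s + delta)))
    pos_exp = np.where(pos_mask, np.exp(-alpha * (sims - margin)), 0.0)
    neg_exp = np.where(neg_mask, np.exp(alpha * (sims + margin)), 0.0)
    pos_term = np.log1p(pos_exp.sum(axis=0))  # shape (C,)
    neg_term = np.log1p(neg_exp.sum(axis=0))

    # Positive term averaged over proxies with at least one positive sample;
    # negative term averaged over all proxies.
    has_pos = pos_mask.any(axis=0)
    return pos_term[has_pos].mean() + neg_term.mean()
```

Because the pull/push terms are summed inside a single log per proxy, the gradient on one embedding depends on the similarities of all other embeddings sharing that proxy, which is the data-to-data interaction the abstract refers to.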


Results from the Paper


Ranked #10 on Metric Learning on CUB-200-2011 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Metric Learning | CARS196 | BN-Inception + Proxy-Anchor | R@1 | 88.3 | #16 |
| Metric Learning | CUB-200-2011 | BN-Inception + Proxy-Anchor | R@1 | 71.1 | #10 |
| Metric Learning | Stanford Online Products | BN-Inception + Proxy-Anchor | R@1 | 80.3 | #24 |

Methods