The Self-Optimal-Transport Feature Transform

6 Apr 2022 · Daniel Shalam, Simon Korman

The Self-Optimal-Transport (SOT) feature transform is designed to upgrade the set of features of a data instance so as to facilitate downstream matching or grouping tasks. The transformed set encodes a rich representation of high-order relations between the instance features. Distances between transformed features capture both their direct original similarity and their third-party agreement regarding similarity to the other features in the set. A particular min-cost-max-flow fractional matching problem, whose entropy-regularized version can be approximated by an optimal transport (OT) optimization, gives rise to our transductive transform, which is efficient, differentiable, equivariant, parameterless, and probabilistically interpretable. Empirically, the transform is highly effective and flexible in its use, consistently improving the networks it is inserted into across a variety of tasks and training schemes. We demonstrate its merits on unsupervised clustering, and show its efficiency and wide applicability on few-shot classification, with state-of-the-art results, and on large-scale person re-identification.
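The following is a minimal sketch of how a transform of this flavor can be realized: compute the set's cosine self-similarity, turn it into a cost matrix, and run entropy-regularized Sinkhorn iterations to obtain a (near) doubly-stochastic matrix whose rows serve as the transformed features. The function name `sot_transform`, the uniform marginals, the diagonal handling, and the hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative self-OT-style feature transform (PyTorch), not the official SOT code.
import torch


def sot_transform(features: torch.Tensor,
                  reg: float = 0.1,
                  n_iters: int = 10) -> torch.Tensor:
    """features: (n, d) tensor of a set's features; returns an (n, n) tensor
    whose i-th row is the transformed representation of feature i."""
    # Cosine self-similarity and a derived cost matrix.
    z = torch.nn.functional.normalize(features, dim=1)
    sim = z @ z.T                        # (n, n), values in [-1, 1]
    cost = 1.0 - sim                     # smaller cost = more similar
    # Discourage trivial self-matching by assigning a large cost to the diagonal
    # (one possible choice; the paper's handling of the diagonal may differ).
    cost.fill_diagonal_(cost.max().item() + 1.0)

    # Entropy-regularized OT between two copies of the uniform distribution,
    # solved with standard Sinkhorn-Knopp iterations.
    n = cost.shape[0]
    K = torch.exp(-cost / reg)           # Gibbs kernel
    r = torch.full((n,), 1.0 / n)        # uniform row marginals
    c = torch.full((n,), 1.0 / n)        # uniform column marginals
    u = torch.full((n,), 1.0 / n)
    v = torch.full((n,), 1.0 / n)
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    plan = torch.diag(u) @ K @ torch.diag(v)

    # Each row of the transport plan is the new representation of that feature.
    return plan


if __name__ == "__main__":
    x = torch.randn(20, 64)              # a toy set of 20 features of dimension 64
    w = sot_transform(x)
    print(w.shape, w.sum(dim=1)[:3])     # rows sum to ~1/n under uniform marginals
```

Because the output depends jointly on all features in the set, distances between rows of the plan reflect not only pairwise similarity but also agreement on similarity to the remaining features, which is the property the abstract describes.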

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Few-Shot Image Classification | CIFAR-FS 5-way (1-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 89.94 | #1 |
| Few-Shot Image Classification | CIFAR-FS 5-way (5-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 92.83 | #2 |
| Few-Shot Image Classification | CUB 200 5-way (1-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 95.80 | #1 |
| Few-Shot Image Classification | CUB 200 5-way (5-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 97.12 | #2 |
| Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 85.59 | #4 |
| Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | PT+MAP+SF+SOT (transductive) | Accuracy | 91.34 | #7 |
