Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence

28 Nov 2023 · Junyi Zhang, Charles Herrmann, Junhwa Hur, Eric Chen, Varun Jampani, Deqing Sun, Ming-Hsuan Yang

While pre-trained large-scale vision models have shown significant promise for semantic correspondence, their features often struggle to grasp the geometry and orientation of instances. This paper identifies the importance of being geometry-aware for semantic correspondence and reveals a limitation of the features of current foundation models under simple post-processing. We show that incorporating this information can markedly enhance semantic correspondence performance with simple but effective solutions in both zero-shot and supervised settings. We also construct a new challenging benchmark for semantic correspondence built from an existing animal pose estimation dataset, for both pre-training and validating models. Our method achieves a PCK@0.10 score of 65.4 (zero-shot) and 85.6 (supervised) on the challenging SPair-71k dataset, outperforming the state of the art by 5.5 and 11.0 absolute points, respectively. Our code and datasets are publicly available at: https://telling-left-from-right.github.io/.
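In the zero-shot setting, semantic correspondence is typically obtained by nearest-neighbor matching of dense features from a frozen backbone; the paper's contribution is geometry-aware processing on top of such matches. Below is a minimal sketch of the plain nearest-neighbor baseline only, not the authors' method; the function name and tensor shapes are our assumptions:

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_match(src_feats, tgt_feats, src_kpts):
    """Zero-shot correspondence via cosine-similarity nearest neighbor.

    src_feats, tgt_feats: (C, H, W) dense feature maps from a frozen
        pre-trained backbone (e.g., a ViT), already at the same resolution.
    src_kpts: (N, 2) source keypoints as (x, y) in feature-map coordinates.
    Returns: (N, 2) matched (x, y) coordinates in the target feature map.
    """
    C, H, W = tgt_feats.shape
    # L2-normalize so the dot product equals cosine similarity.
    tgt = F.normalize(tgt_feats.reshape(C, -1), dim=0)        # (C, H*W)
    xs, ys = src_kpts[:, 0].long(), src_kpts[:, 1].long()
    queries = F.normalize(src_feats[:, ys, xs], dim=0)        # (C, N)
    sim = queries.t() @ tgt                                   # (N, H*W)
    idx = sim.argmax(dim=1)
    return torch.stack([idx % W, idx // W], dim=1)            # (x, y)
```

The returned coordinates live at feature-map resolution and would be scaled back to image pixels before evaluation. This plain matching is exactly where left/right ambiguities arise: mirrored parts (e.g., left vs. right paw) have near-identical features, which is the failure mode the paper targets.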


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic correspondence | PF-PASCAL | GeoAware-SC (Supervised, AP-10K P.T.) | PCK | 95.7 | #1 |
| Semantic correspondence | PF-PASCAL | GeoAware-SC (Supervised) | PCK | 95.1 | #2 |
| Semantic correspondence | PF-PASCAL | GeoAware-SC (Zero-Shot) | PCK | 82.6 | #13 |
| Semantic correspondence | SPair-71k | GeoAware-SC (Supervised, AP-10K P.T.) | PCK | 85.6 | #1 |
| Semantic correspondence | SPair-71k | GeoAware-SC (Supervised) | PCK | 82.9 | #2 |
| Semantic correspondence | SPair-71k | GeoAware-SC (Zero-Shot) | PCK | 68.5 | #4 |
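All values above are PCK (Percentage of Correct Keypoints) scores in percent: a predicted keypoint counts as correct if it lands within α times a reference size of the ground truth (for SPair-71k the reference is the larger side of the object's bounding box; PF-PASCAL conventions commonly normalize by image size). A minimal sketch of the metric, with the example data being illustrative only:

```python
import numpy as np

def pck(pred_kpts, gt_kpts, ref_size, alpha=0.10):
    """PCK@alpha: fraction of predictions within alpha * ref_size pixels
    of the ground truth. pred_kpts, gt_kpts: (N, 2) arrays of (x, y)."""
    dists = np.linalg.norm(pred_kpts - gt_kpts, axis=1)
    return float((dists <= alpha * ref_size).mean())

# Example: 3 of 4 keypoints fall within 10% of a 200-px box -> PCK 0.75
pred = np.array([[10, 10], [50, 52], [120, 100], [30, 90]])
gt   = np.array([[12, 11], [48, 50], [118, 103], [80, 40]])
print(pck(pred, gt, ref_size=200))  # 0.75
```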

Methods


No methods listed for this paper.