Feature-metric Loss for Self-supervised Learning of Depth and Egomotion

ECCV 2020 · Chang Shu, Kun Yu, Zhixiang Duan, Kuiyuan Yang

Photometric loss is widely used for self-supervised depth and egomotion estimation. However, the loss landscapes induced by photometric differences are often problematic for optimization: they exhibit plateaus for pixels in textureless regions and multiple local minima for less discriminative pixels. In this work, a feature-metric loss is proposed and defined on a learned feature representation, where the feature representation is itself learned in a self-supervised manner and regularized by both first-order and second-order derivatives so that the loss landscapes form proper convergence basins. Comprehensive experiments and detailed analysis via visualization demonstrate the effectiveness of the proposed feature-metric loss. In particular, our method improves the state of the art on KITTI from 0.885 to 0.925 measured by $\delta_1$ for depth estimation, and significantly outperforms previous methods on visual odometry.
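To make the abstract's three ingredients concrete, here is a minimal PyTorch sketch (an assumed framework; the function names, finite-difference gradients, and shapes are illustrative and not the authors' released code) of a feature-metric loss together with the first-order (discriminative) and second-order (convergent) regularizers on the feature maps:

```python
import torch

def spatial_gradients(x):
    """First-order finite differences of a (B, C, H, W) tensor
    along width and height."""
    gx = x[:, :, :, :-1] - x[:, :, :, 1:]   # horizontal differences
    gy = x[:, :, :-1, :] - x[:, :, 1:, :]   # vertical differences
    return gx, gy

def discriminative_loss(feat):
    """Reward large first-order feature gradients (-|d phi|) so that
    even textureless image regions map to discriminative features."""
    gx, gy = spatial_gradients(feat)
    return -(gx.abs().mean() + gy.abs().mean())

def convergent_loss(feat):
    """Penalize second-order feature gradients (|d^2 phi|) so the loss
    landscape around each pixel forms a smooth convergence basin."""
    gx, gy = spatial_gradients(feat)
    gxx, gxy = spatial_gradients(gx)
    gyx, gyy = spatial_gradients(gy)
    return (gxx.abs().mean() + gxy.abs().mean()
            + gyx.abs().mean() + gyy.abs().mean())

def feature_metric_loss(feat_target, feat_warped):
    """L1 difference in feature space between the target view and the
    source view warped with the predicted depth and egomotion."""
    return (feat_target - feat_warped).abs().mean()
```

In this sketch, `feat_target` and `feat_warped` would come from the self-supervised feature encoder applied to the target view and to the source view resampled at the reprojected coordinates; the relative weighting of the three terms is a hyperparameter not specified by the abstract.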


Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | absolute relative error | 0.099 | #11 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | RMSE | 4.427 | #13 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | Sq Rel | 0.697 | #11 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | RMSE log | 0.184 | #17 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | Delta < 1.25 | 0.889 | #14 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | Delta < 1.25^2 | 0.963 | #14 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-MS | Delta < 1.25^3 | 0.982 | #14 |
| Monocular Depth Estimation | KITTI Eigen split unsupervised | FeatDepth-M | absolute relative error | 0.104 | #19 |
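For reference, the Delta thresholds above follow the standard depth-evaluation convention: for predicted depth $\hat{d}_p$ and ground-truth depth $d_p$ over the set of valid pixels $P$,

$$\delta_k = \frac{1}{|P|}\left|\left\{ p \in P : \max\!\left(\frac{\hat{d}_p}{d_p}, \frac{d_p}{\hat{d}_p}\right) < 1.25^k \right\}\right|, \qquad k \in \{1, 2, 3\},$$

so higher values are better.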

Methods


No methods listed for this paper.