Unsupervised Scale-consistent Depth Learning from Video

We propose a monocular depth estimator, SC-Depth, which requires only unlabelled videos for training and enables scale-consistent prediction at inference time. Our contributions include: (i) we propose a geometry consistency loss that penalizes inconsistency between the depths predicted in adjacent views; (ii) we propose a self-discovered mask that automatically localizes moving objects, which violate the underlying static-scene assumption and produce noisy training signals; (iii) we demonstrate the efficacy of each component with a detailed ablation study and show high-quality depth estimation results on both the KITTI and NYUv2 datasets. Moreover, thanks to the scale-consistent prediction, our monocular-trained depth networks integrate readily into the ORB-SLAM2 system for more robust and accurate tracking. The proposed hybrid Pseudo-RGBD SLAM shows compelling results on KITTI and generalizes well to the KAIST dataset without additional training. Finally, we provide several demos for qualitative evaluation.
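The two training signals described above can be sketched compactly. Below is a minimal PyTorch sketch, assuming the projected depth map (depth of frame A warped into frame B's view) and the interpolated depth map (frame B's depth sampled at the projected coordinates) have already been produced by a differentiable warp; all function and variable names here are illustrative, not the authors' released code.

```python
import torch

def depth_inconsistency(d_warped, d_interp, eps=1e-7):
    """Per-pixel normalized depth inconsistency between two views.

    d_warped: depth of frame A projected into frame B's camera,
              shape (B, 1, H, W), positive values (assumed given).
    d_interp: frame B's predicted depth bilinearly sampled at the
              projected coordinates, same shape.
    Returns values in [0, 1); symmetric in the two inputs.
    """
    return (d_warped - d_interp).abs() / (d_warped + d_interp + eps)

def sc_depth_losses(d_warped, d_interp, photo_error):
    """Geometry-consistency loss and mask-weighted photometric loss."""
    diff = depth_inconsistency(d_warped, d_interp)  # inconsistency map
    geometry_loss = diff.mean()                     # geometry consistency loss
    mask = 1.0 - diff                               # self-discovered mask
    # Down-weight pixels whose depths disagree across views (likely
    # moving objects or occlusions) instead of hard-thresholding them.
    photometric_loss = (mask * photo_error).mean()
    return geometry_loss, photometric_loss

# Toy usage with random positive depths (B=1, H=W=4):
d1 = torch.rand(1, 1, 4, 4) + 0.1
d2 = torch.rand(1, 1, 4, 4) + 0.1
photo = torch.rand(1, 1, 4, 4)
l_gc, l_photo = sc_depth_losses(d1, d2, photo)
```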

Datasets

KITTI, NYU-Depth V2, KAIST

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | Absolute relative error | 0.114 | #56 |
| | | | RMSE | 4.706 | #40 |
| | | | RMSE log | 0.191 | #36 |
| | | | Delta < 1.25 | 0.873 | #38 |
| | | | Delta < 1.25^2 | 0.960 | #37 |
| | | | Delta < 1.25^3 | 0.982 | #37 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | Absolute relative error | 0.119 | #58 |
| | | | RMSE | 4.950 | #42 |
| | | | RMSE log | 0.197 | #38 |
| | | | Delta < 1.25 | 0.863 | #39 |
| | | | Delta < 1.25^2 | 0.957 | #39 |
| | | | Delta < 1.25^3 | 0.981 | #38 |
| Monocular Depth Estimation | NYU-Depth V2 self-supervised | Bian et al. | Root mean square error (RMSE) | 0.593 | #6 |
| | | | Absolute relative error (AbsRel) | 0.157 | #6 |
| | | | delta_1 | 78.0 | #6 |
| | | | delta_2 | 94.0 | #6 |
| | | | delta_3 | 98.4 | #6 |
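
For reference, the standard depth metrics in the table (AbsRel, RMSE, RMSE log, and the delta accuracy thresholds, where delta_k means the fraction of pixels with max(pred/gt, gt/pred) < 1.25^k) are typically computed as in the NumPy sketch below, assuming 1-D arrays of predicted and ground-truth depths over valid pixels; this is a generic sketch, not the benchmark's evaluation code.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid (positive) pixels.

    pred, gt: 1-D arrays of predicted / ground-truth depths in metres.
    For scale-ambiguous monocular methods, `pred` is usually median-
    scaled first: pred = pred * np.median(gt) / np.median(pred).
    """
    abs_rel = np.mean(np.abs(pred - gt) / gt)                       # AbsRel
    rmse = np.sqrt(np.mean((pred - gt) ** 2))                       # RMSE
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))   # RMSE log
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]        # delta_1..3
    return abs_rel, rmse, rmse_log, deltas
```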
