NDDepth: Normal-Distance Assisted Monocular Depth Estimation

Monocular depth estimation has drawn widespread attention from the vision community due to its broad applications. In this paper, we propose a novel physics (geometry)-driven deep learning framework for monocular depth estimation, based on the assumption that 3D scenes are composed of piece-wise planes. Specifically, we introduce a new normal-distance head that outputs a pixel-level surface normal and plane-to-origin distance, from which depth is derived at each position. The normal and distance are regularized by a plane-aware consistency constraint. We further integrate an additional depth head to improve the robustness of the framework. To fully exploit the strengths of the two heads, we develop an effective contrastive iterative refinement module that refines depth in a complementary manner according to the depth uncertainty. Extensive experiments show that the proposed method outperforms previous state-of-the-art competitors on the NYU-Depth-v2, KITTI and SUN RGB-D datasets. Notably, it ranked 1st among all submissions on the KITTI depth prediction online benchmark at submission time.
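The core geometric idea behind the normal-distance head can be sketched as follows. Under the piece-wise planar assumption, a pixel p on a plane with unit normal n and plane-to-origin distance d back-projects to the 3D point X = z * K^{-1} p_h, and the plane equation n . X = d then gives z = d / (n . K^{-1} p_h). The sketch below illustrates this conversion with NumPy; the function name and tensor layout are illustrative, not the paper's actual implementation.

```python
import numpy as np

def depth_from_normal_distance(normal, distance, K, eps=1e-6):
    """Derive per-pixel depth from a surface normal map and a
    plane-to-origin distance map (illustrative sketch, not the
    paper's code).

    normal:   (H, W, 3) unit surface normals in camera coordinates
    distance: (H, W)    plane-to-origin distances
    K:        (3, 3)    camera intrinsics
    """
    H, W = distance.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates p_h = (u, v, 1)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T          # K^{-1} p_h for every pixel
    denom = np.sum(normal * rays, axis=-1)   # n . K^{-1} p_h
    # Clamp the denominator to avoid division by zero for grazing planes
    return distance / np.clip(denom, eps, None)

# Sanity check: a fronto-parallel plane 5 m away has n = (0, 0, 1)
# and d = 5, so the recovered depth is 5 everywhere.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
normal = np.zeros((480, 640, 3))
normal[..., 2] = 1.0
distance = np.full((480, 640), 5.0)
depth = depth_from_normal_distance(normal, distance, K)  # all 5.0
```

This per-pixel derivation is what lets the plane-aware consistency constraint couple the normal and distance predictions: pixels on the same plane must agree on (n, d), and hence on the depths derived from them.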

Published at ICCV 2023.
Results (Monocular Depth Estimation)

NDDepth on KITTI Eigen split:
  Metric                    Value    Global Rank
  absolute relative error   0.050    #13
  RMSE                      2.025    #14
  Sq Rel                    0.141    #16
  RMSE log                  0.075    #12
  Delta < 1.25              0.978    #11
  Delta < 1.25^2            0.998    #1
  Delta < 1.25^3            0.999    #11

NDDepth on NYU-Depth V2:
  Metric                    Value    Global Rank
  RMSE                      0.311    #20
  absolute relative error   0.087    #22
  Delta < 1.25              0.936    #22
  Delta < 1.25^2            0.991    #21
  Delta < 1.25^3            0.998    #18
  log 10                    0.038    #23
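The metrics reported above follow the standard evaluation protocol for monocular depth estimation (absolute relative error, squared relative error, RMSE, RMSE log, log10, and the delta accuracy thresholds). A minimal NumPy implementation of these metrics, assuming dense ground truth with a simple validity mask, might look like:

```python
import numpy as np

def eval_depth(pred, gt):
    """Compute the standard monocular-depth metrics reported in the
    tables above (sketch of the common evaluation protocol)."""
    valid = gt > 0                      # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel":  np.mean(np.abs(pred - gt) / gt),
        "sq_rel":   np.mean((pred - gt) ** 2 / gt),
        "rmse":     np.sqrt(np.mean((pred - gt) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "log10":    np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "delta1":   np.mean(thresh < 1.25),
        "delta2":   np.mean(thresh < 1.25 ** 2),
        "delta3":   np.mean(thresh < 1.25 ** 3),
    }

# A perfect prediction scores 0 error and delta accuracies of 1.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
metrics = eval_depth(gt.copy(), gt)
```

Note that lower is better for the error metrics, while higher is better for the delta thresholds; leaderboard ranks are assigned accordingly.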
