Adaptive Fusion of Single-View and Multi-View Depth for Autonomous Driving

12 Mar 2024  ·  Junda Cheng, Wei Yin, Kaixuan Wang, Xiaozhi Chen, Shijie Wang, Xin Yang

Multi-view depth estimation has achieved impressive performance across various benchmarks. However, almost all current multi-view systems rely on ideal camera poses being given, which are unavailable in many real-world scenarios such as autonomous driving. In this work, we propose a new robustness benchmark to evaluate depth estimation systems under various noisy-pose settings. Surprisingly, we find that current multi-view depth estimation methods, as well as single-view and multi-view fusion methods, fail when given noisy poses. To address this challenge, we propose a single-view and multi-view fused depth estimation system that adaptively integrates high-confidence multi-view and single-view results for robust and accurate depth estimation. The adaptive fusion module performs fusion by dynamically selecting high-confidence regions between the two branches based on a warping confidence map. Thus, the system tends to choose the more reliable branch when facing textureless scenes, inaccurate calibration, dynamic objects, and other degraded or challenging conditions. Our method outperforms state-of-the-art multi-view and fusion methods under robustness testing. Furthermore, we achieve state-of-the-art performance on challenging benchmarks (KITTI and DDAD) when accurate pose estimates are given. Project website: https://github.com/Junda24/AFNet/.
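
The abstract gives no implementation details, but the core fusion idea (per-pixel selection between the single-view and multi-view branches driven by a confidence map) can be illustrated with a short sketch. The code below is not the authors' AFNet implementation: the tensor names, shapes, and the simple convex blend are illustrative assumptions, and in the actual system the confidence map is predicted by the network from warping consistency.

```python
# Minimal sketch of confidence-driven fusion between a single-view and a
# multi-view depth prediction. NOT the authors' AFNet code; all names,
# shapes, and the simple convex blend are illustrative assumptions.
import torch

def fuse_depth(depth_sv: torch.Tensor,   # (B, 1, H, W) single-view depth
               depth_mv: torch.Tensor,   # (B, 1, H, W) multi-view depth
               conf_mv: torch.Tensor     # (B, 1, H, W) warping confidence in [0, 1]
               ) -> torch.Tensor:
    """Trust the multi-view branch where its warping confidence is high
    (well-textured, static regions with accurate poses) and fall back to
    the single-view branch elsewhere (textureless areas, noisy poses,
    dynamic objects)."""
    conf_mv = conf_mv.clamp(0.0, 1.0)
    return conf_mv * depth_mv + (1.0 - conf_mv) * depth_sv
```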


Results from the Paper


Task: Monocular Depth Estimation | Dataset: DDAD | Model: AFNet
  absolute relative error: 0.088   (global rank #1)
  Sq Rel:                  0.979   (global rank #1)
  RMSE:                    4.60    (global rank #1)
  RMSE log:                0.154   (global rank #1)

Task: Monocular Depth Estimation | Dataset: KITTI Eigen split | Model: AFNet
  absolute relative error: 0.044   (global rank #6)
  RMSE:                    1.712   (global rank #2)
  Sq Rel:                  0.132   (global rank #20)
  RMSE log:                0.069   (global rank #6)
  Delta < 1.25:            0.980   (global rank #8)
  Delta < 1.25^2:          0.997   (global rank #16)
  Delta < 1.25^3:          0.999   (global rank #11)
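
The metrics in the table above (absolute relative error, Sq Rel, RMSE, RMSE log, and the Delta thresholds) follow the standard monocular depth evaluation protocol. The sketch below shows how they are conventionally computed from already-masked predicted and ground-truth depth arrays; it is a generic illustration under that assumption, not evaluation code from the paper.

```python
# Minimal sketch of the standard depth evaluation metrics, assuming
# `pred` and `gt` are 1-D NumPy arrays of valid (masked, positive)
# depths in metres. Generic protocol code, not from the paper.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    # Threshold ratio used for the Delta accuracy metrics.
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel":  float(np.mean(np.abs(pred - gt) / gt)),
        "sq_rel":   float(np.mean((pred - gt) ** 2 / gt)),
        "rmse":     float(np.sqrt(np.mean((pred - gt) ** 2))),
        "rmse_log": float(np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))),
        "delta_1":  float(np.mean(thresh < 1.25)),
        "delta_2":  float(np.mean(thresh < 1.25 ** 2)),
        "delta_3":  float(np.mean(thresh < 1.25 ** 3)),
    }
```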

Methods


No methods listed for this paper.