FutureDepth: Learning to Predict the Future Improves Video Depth Estimation

In this paper, we propose a novel video depth estimation approach, FutureDepth, which enables the model to implicitly leverage multi-frame and motion cues to improve depth estimation by making it learn to predict the future during training. More specifically, we propose a future prediction network, F-Net, which takes the features of multiple consecutive frames and is trained to predict multi-frame features one time step ahead iteratively. In this way, F-Net learns the underlying motion and correspondence information, and we incorporate its features into the depth decoding process. Additionally, to enrich the learning of multi-frame correspondence cues, we further leverage a reconstruction network, R-Net, which is trained via adaptively masked auto-encoding of multi-frame feature volumes. At inference time, both F-Net and R-Net are used to produce queries to work with the depth decoder, as well as a final refinement network. Through extensive experiments on several benchmarks, i.e., NYUDv2, KITTI, DDAD, and Sintel, which cover indoor, driving, and open-domain scenarios, we show that FutureDepth significantly improves upon baseline models, outperforms existing video depth estimation methods, and sets new state-of-the-art (SOTA) accuracy. Furthermore, FutureDepth is more efficient than existing SOTA video depth estimation models and has latency comparable to that of monocular models.
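No implementation is given on this page; the following is a minimal PyTorch-style sketch of the two training-time auxiliary objectives described in the abstract (F-Net one-step-ahead feature prediction and R-Net masked feature-volume reconstruction). All module names, layer choices, shapes, and loss weights below are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class FNet(nn.Module):
    """Future prediction network: given features of several consecutive
    frames, predict the feature map one time step ahead."""
    def __init__(self, dim=256, num_input_frames=3):
        super().__init__()
        self.fuse = nn.Conv2d(dim * num_input_frames, dim, 3, padding=1)
        self.refine = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, feats):            # feats: (B, T_in, C, H, W)
        b, t, c, h, w = feats.shape
        x = self.fuse(feats.reshape(b, t * c, h, w))
        return self.refine(x)            # predicted feature at t+1: (B, C, H, W)

class RNet(nn.Module):
    """Reconstruction network: masked auto-encoding of the multi-frame
    feature volume to learn cross-frame correspondence cues."""
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = nn.Conv3d(dim, dim, 3, padding=1)
        self.decoder = nn.Conv3d(dim, dim, 3, padding=1)

    def forward(self, volume, mask):     # volume: (B, C, T, H, W), mask: (B, 1, T, H, W)
        masked = volume * (1.0 - mask)   # masked input feature volume
        return self.decoder(torch.relu(self.encoder(masked)))

def auxiliary_losses(frame_feats, f_net, r_net, mask):
    """frame_feats: (B, T, C, H, W) backbone features of consecutive frames.
    f_net must be built with num_input_frames = T - 1."""
    # F-Net: predict the last frame's features from the preceding frames.
    pred_next = f_net(frame_feats[:, :-1])
    loss_future = nn.functional.l1_loss(pred_next, frame_feats[:, -1])
    # R-Net: reconstruct the masked entries of the full feature volume.
    volume = frame_feats.permute(0, 2, 1, 3, 4)        # (B, C, T, H, W)
    recon = r_net(volume, mask)
    loss_recon = nn.functional.l1_loss(recon * mask, volume * mask)
    return loss_future + loss_recon

# Example with illustrative shapes: a 4-frame clip of 256-channel features.
f_net, r_net = FNet(dim=256, num_input_frames=3), RNet(dim=256)
feats = torch.randn(2, 4, 256, 30, 40)
mask = (torch.rand(2, 1, 4, 30, 40) > 0.5).float()
loss = auxiliary_losses(feats, f_net, r_net, mask)

In the paper, these auxiliary networks are not discarded after training: their features are also used at inference to produce queries for the depth decoder and the final refinement network.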

Results

Task: Monocular Depth Estimation    Model: FutureDepth

KITTI Eigen split
  Metric                            Metric Value   Global Rank
  absolute relative error           0.041          # 2
  RMSE                              1.856          # 6
  Sq Rel                            0.117          # 24
  RMSE log                          0.066          # 5
  Delta < 1.25                      0.984          # 4
  Delta < 1.25^2                    0.998          # 1
  Delta < 1.25^3                    1.000          # 1
  Square relative error (SqRel)     0.117          # 1

NYU-Depth V2
  Metric                            Metric Value   Global Rank
  RMSE                              0.233          # 9
  absolute relative error           0.063          # 9
  Delta < 1.25                      0.981          # 4
  Delta < 1.25^2                    0.996          # 8
  Delta < 1.25^3                    0.999          # 4
  log 10                            0.027          # 6
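The metrics above follow the standard monocular depth evaluation protocol. As a reference, here is a short NumPy sketch of how these metrics are conventionally computed over valid ground-truth pixels; it is a generic illustration, not code from the paper, and pred / gt are assumed to be matched arrays of positive depths.

import numpy as np

def depth_metrics(pred, gt):
    """Standard depth evaluation metrics over valid ground-truth pixels."""
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "abs_rel":  np.mean(np.abs(pred - gt) / gt),          # absolute relative error
        "sq_rel":   np.mean((pred - gt) ** 2 / gt),           # squared relative error
        "rmse":     np.sqrt(np.mean((pred - gt) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "log10":    np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "delta1":   np.mean(ratio < 1.25),                    # Delta < 1.25
        "delta2":   np.mean(ratio < 1.25 ** 2),
        "delta3":   np.mean(ratio < 1.25 ** 3),
    }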
