Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera

1 Jul 2018 · Fangchang Ma, Guilherme Venturelli Cavalheiro, Sertac Karaman

Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces three main challenges: the irregularly spaced pattern of the sparse depth input, the difficulty of handling multiple sensor modalities (when color images are available), and the lack of dense, pixel-level ground-truth depth labels. In this work, we address all of these challenges. Specifically, we develop a deep regression model that learns a direct mapping from sparse depth (and color images) to dense depth. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that our network, when trained with semi-dense annotations, attains state-of-the-art accuracy and was the winning approach on the KITTI depth completion benchmark at the time of submission. Furthermore, the self-supervised framework outperforms a number of existing solutions trained with semi-dense annotations.
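
The abstract only summarizes the self-supervised framework, so the following is a minimal PyTorch sketch of how its ingredients could fit together: an L1 loss on pixels where sparse LiDAR depth is available, a photometric loss that inverse-warps a nearby frame into the current view using the predicted depth and a relative camera pose, and a smoothness prior on the prediction. The function names (warp_to_current, self_supervised_loss), the loss weights w_photo and w_smooth, and the assumption that the relative pose and intrinsics are given are illustrative choices for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def warp_to_current(src_img, depth, T_src_cur, K):
    """Inverse-warp src_img into the current view using the predicted depth
    map, a 4x4 relative pose T_src_cur (current -> source), and the 3x3
    camera intrinsics K. All names here are illustrative assumptions."""
    b, _, h, w = depth.shape
    device, dtype = depth.device, depth.dtype
    # Homogeneous pixel grid of the current frame, shape (1, 3, h*w).
    ys, xs = torch.meshgrid(torch.arange(h, device=device, dtype=dtype),
                            torch.arange(w, device=device, dtype=dtype),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    # Back-project pixels to 3D points in the current camera frame.
    cam = torch.inverse(K) @ pix * depth.reshape(b, 1, -1)
    cam = torch.cat([cam, torch.ones(b, 1, h * w, device=device, dtype=dtype)], dim=1)
    # Transform into the source frame and project with the intrinsics.
    proj = K @ (T_src_cur @ cam)[:, :3]
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    u = 2.0 * uv[:, 0] / (w - 1) - 1.0
    v = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

def self_supervised_loss(pred_depth, sparse_depth, cur_img, near_img,
                         T_src_cur, K, w_photo=0.1, w_smooth=0.01):
    """Combine sparse-depth, photometric, and smoothness terms."""
    # 1) Depth loss only where LiDAR measurements are available.
    mask = (sparse_depth > 0).float()
    depth_loss = (mask * (pred_depth - sparse_depth).abs()).sum() / mask.sum().clamp(min=1.0)
    # 2) Photometric loss between the current frame and the warped nearby frame.
    warped = warp_to_current(near_img, pred_depth, T_src_cur, K)
    photo_loss = (warped - cur_img).abs().mean()
    # 3) First-order smoothness prior on the predicted depth.
    smooth_loss = (pred_depth[..., :, 1:] - pred_depth[..., :, :-1]).abs().mean() + \
                  (pred_depth[..., 1:, :] - pred_depth[..., :-1, :]).abs().mean()
    return depth_loss + w_photo * photo_loss + w_smooth * smooth_loss
```

Since the framework requires only sequences of color and sparse depth images, the relative pose between neighboring frames would in practice have to be estimated from the data rather than supplied; it is passed in as an argument here only to keep the sketch short.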

Datasets

KITTI, VOID

Results from the Paper


Task: Depth Completion    Dataset: VOID    Model: SS-S2D

Metric   Value     Global Rank
MAE      178.85    #6
RMSE     243.84    #6
iMAE     80.12     #6
iRMSE    107.69    #5

Methods


No methods listed for this paper.