Depth Completion
77 papers with code • 9 benchmarks • 10 datasets
The Depth Completion task is a sub-problem of depth estimation. In the sparse-to-dense depth completion problem, one wants to infer the dense depth map of a 3-D scene given an RGB image and a corresponding sparse depth map, obtained either from computational methods such as SfM (Structure-from-Motion) or from active sensors such as LiDAR or structured-light sensors.
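As a concrete, if naive, illustration of the sparse-to-dense problem, the sketch below fills each missing depth value with its nearest valid measurement. Real depth-completion methods additionally use the RGB image as guidance; the function name `densify_nearest` is illustrative and not taken from any of the papers listed here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def densify_nearest(sparse_depth):
    """Fill missing (zero) depth values with the nearest valid measurement.

    A naive baseline only: it ignores the RGB image entirely and simply
    propagates the closest LiDAR/SfM return to each empty pixel.
    """
    missing = sparse_depth == 0
    # distance_transform_edt with return_indices=True gives, for every
    # pixel, the coordinates of the nearest non-missing pixel.
    _, (rows, cols) = distance_transform_edt(missing, return_indices=True)
    return sparse_depth[rows, cols]

# Toy example: a 4x4 depth map with only three sparse returns (in metres).
sparse = np.zeros((4, 4), dtype=np.float32)
sparse[0, 0] = 2.0
sparse[1, 2] = 3.0
sparse[3, 3] = 5.0
dense = densify_nearest(sparse)
```

Nearest-neighbour filling is a common sanity-check baseline for this task; learned methods are judged by how much they improve on it.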
Source: LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery; Unsupervised Depth Completion from Visual Inertial Odometry
Most implemented papers
Sparse and noisy LiDAR completion with RGB guidance and uncertainty
For autonomous vehicles and robotics the use of LiDAR is indispensable in order to achieve precise depth predictions.
Veritatem Dies Aperit - Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach
Robust geometric and semantic scene understanding is ever more important in many real-world applications such as autonomous driving and robotic navigation.
3D LiDAR and Stereo Fusion using Stereo Matching Network with Conditional Cost Volume Normalization
The complementary characteristics of active and passive depth sensing techniques motivate the fusion of the LiDAR sensor and stereo camera for improved depth perception.
Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
We therefore accept this task and propose a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning.
Generating and Exploiting Probabilistic Monocular Depth Estimates
Beyond depth estimation from a single image, the monocular cue is useful in a broader range of depth inference applications and settings---such as when one can leverage other available depth cues for improved accuracy.
Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion
Such a transformation enables CFCNet to predict features and reconstruct data of missing depth measurements according to their corresponding, transformed RGB features.
Conf-Net: Toward High-Confidence Dense 3D Point-Cloud with Error-Map Prediction
Using our predicted error-map, we demonstrate that by up-filling a LiDAR point cloud from 18,000 points to 285,000 points, versus 300,000 points for full depth, we can reduce the RMSE from 1004 to 399.
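RMSE figures like those quoted above are conventionally computed only over pixels that actually have ground truth, since LiDAR ground truth is itself sparse. A minimal sketch of that metric follows; the function name `depth_rmse` and the toy values are illustrative, and the units simply match whatever the depth maps use.

```python
import numpy as np

def depth_rmse(pred, gt):
    """RMSE over pixels with ground truth (gt > 0), the usual
    convention for LiDAR depth-completion benchmarks such as KITTI."""
    valid = gt > 0
    return float(np.sqrt(np.mean((pred[valid] - gt[valid]) ** 2)))

# Toy example: depths in millimetres; 0 marks missing ground truth.
gt = np.array([[1000.0, 0.0], [2000.0, 3000.0]])
pred = np.array([[1100.0, 9999.0], [1900.0, 3000.0]])
error = depth_rmse(pred, gt)
```

Note that the wildly wrong prediction at the pixel without ground truth does not affect the score, which is exactly why evaluation is restricted to the valid mask.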
ClearGrasp: 3D Shape Estimation of Transparent Objects for Manipulation
To address these challenges, we present ClearGrasp -- a deep learning approach for estimating accurate 3D geometry of transparent objects from a single RGB-D image for robotic manipulation.
Scene Completeness-Aware Lidar Depth Completion for Driving Scenario
Recent sparse depth completion methods for LiDAR focus only on the lower part of the scene and produce irregular estimates in the upper part, because existing datasets, such as KITTI, do not provide ground truth for upper areas.