Monocular Depth Estimation
338 papers with code • 18 benchmarks • 26 datasets
Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging, ill-posed task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a complex network powerful enough to directly regress the depth map, or splitting the prediction into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error (AbsRel).
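The two evaluation metrics mentioned above are simple to compute from a predicted and a ground-truth depth map. A minimal NumPy sketch (function names and the toy 2x2 depth maps are illustrative, not from any specific benchmark toolkit):

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared error over all pixels (same units as depth)."""
    return np.sqrt(np.mean((pred - gt) ** 2))

def abs_rel(pred, gt):
    """Absolute relative error: mean of |pred - gt| / gt (dimensionless)."""
    return np.mean(np.abs(pred - gt) / gt)

# Toy example: 2x2 predicted and ground-truth depth maps (metres).
pred = np.array([[2.0, 4.0], [6.0, 8.0]])
gt   = np.array([[2.0, 5.0], [6.0, 10.0]])

print(rmse(pred, gt))      # sqrt(mean([0, 1, 0, 4])) ≈ 1.118
print(abs_rel(pred, gt))   # mean([0, 0.2, 0, 0.2]) = 0.1
```

Benchmark implementations additionally mask out invalid pixels (missing LiDAR returns in KITTI, for instance) before averaging.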
Libraries
Use these libraries to find Monocular Depth Estimation models and implementations.
Latest papers with no code
Into the Fog: Evaluating Multiple Object Tracking Robustness
To address these limitations, we propose a pipeline for physics-based volumetric fog simulation on arbitrary real-world MOT datasets, utilizing frame-by-frame monocular depth estimation and an optical model of fog formation.
Self-supervised Monocular Depth Estimation on Water Scenes via Specular Reflection Prior
Monocular depth estimation from a single image is an ill-posed problem in computer vision, due to insufficient reliable cues serving as prior knowledge.
Adaptive Discrete Disparity Volume for Self-supervised Monocular Depth Estimation
In self-supervised monocular depth estimation tasks, discrete disparity prediction has been proven to attain higher quality depth maps than common continuous methods.
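Discrete disparity prediction of this kind typically has the network output per-pixel scores over a fixed set of disparity candidates, which are then collapsed into a continuous map via a probability-weighted average (a soft argmax). A minimal sketch of that conversion, with hypothetical candidate levels and array shapes (not taken from the paper's implementation):

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expected_disparity(logits, disp_levels):
    """Soft argmax: probability-weighted mean over discrete disparity levels.

    logits:      (D, H, W) per-pixel scores for D disparity candidates.
    disp_levels: (D,) candidate disparity values.
    Returns a continuous (H, W) disparity map.
    """
    probs = softmax(logits, axis=0)                  # (D, H, W)
    return np.tensordot(disp_levels, probs, axes=1)  # (H, W)

# Toy example: 4 candidate disparities on a 1x2 image.
levels = np.array([0.25, 0.5, 1.0, 2.0])
logits = np.zeros((4, 1, 2))
logits[2, 0, 0] = 10.0  # pixel (0,0) strongly prefers disparity 1.0
disp = expected_disparity(logits, levels)
# disp[0,0] ≈ 1.0; disp[0,1] is the uniform mean of the levels, 0.9375
```

Because the weighted average is differentiable, the discretization still trains end-to-end with a standard photometric self-supervision loss.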
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Our attack prototype, named BadPart, is evaluated on both MDE and OFE tasks, utilizing a total of 7 models.
FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation
To address these issues, existing approaches use an additional black-box network as a semantic prior to separate moving objects, and improve the model only at the loss level.
$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis
To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.
Track Everything Everywhere Fast and Robustly
We propose a novel test-time optimization approach for efficiently and robustly tracking any pixel at any time in a video.
Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos
Monocular depth estimation in endoscopy videos can enable assistive and robotic surgery to obtain better coverage of the organ and detection of various health issues.
Language-Based Depth Hints for Monocular Depth Estimation
In this work, we demonstrate the use of natural language as a source of an explicit prior about the structure of the world.
DepthFM: Fast Monocular Depth Estimation with Flow Matching
Due to the generative nature of our approach, our model reliably predicts the confidence of its depth estimates.