Monocular Depth Estimation

338 papers with code • 18 benchmarks • 26 datasets

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a complex network powerful enough to directly regress the depth map, or splitting the input into bins or windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error.
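The two evaluation metrics mentioned above can be computed directly from a predicted and a ground-truth depth map. Below is a minimal sketch (the function name `depth_metrics` and the zero-depth-means-invalid masking convention are illustrative assumptions, though the latter is common for KITTI-style sparse ground truth):

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Compute RMSE and absolute relative error between two depth maps.

    pred, gt: depth maps in the same units (e.g. metres).
    mask: optional boolean array of valid ground-truth pixels; if omitted,
          pixels with zero ground-truth depth are treated as invalid
          (an assumed convention, common with sparse LiDAR ground truth).
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if mask is None:
        mask = gt > 0
    p, g = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((p - g) ** 2))       # root mean squared error
    abs_rel = np.mean(np.abs(p - g) / g)        # absolute relative error
    return rmse, abs_rel
```

In benchmark practice, predictions are usually clamped to the dataset's depth range (e.g. 0–80 m on KITTI) and, for scale-ambiguous self-supervised models, median-scaled to the ground truth before these metrics are computed.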

Source: Defocus Deblurring Using Dual-Pixel Data

Latest papers with no code

Into the Fog: Evaluating Multiple Object Tracking Robustness

no code yet • 12 Apr 2024

To address these limitations, we propose a pipeline for physics-based volumetric fog simulation in arbitrary real-world MOT datasets, utilizing frame-by-frame monocular depth estimation and a fog formation optical model.

Self-supervised Monocular Depth Estimation on Water Scenes via Specular Reflection Prior

no code yet • 10 Apr 2024

Monocular depth estimation from a single image is an ill-posed problem in computer vision due to insufficient reliable cues serving as prior knowledge.

Adaptive Discrete Disparity Volume for Self-supervised Monocular Depth Estimation

no code yet • 4 Apr 2024

In self-supervised monocular depth estimation tasks, discrete disparity prediction has been proven to attain higher quality depth maps than common continuous methods.

BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks

no code yet • 1 Apr 2024

Our attack prototype, named BadPart, is evaluated on both monocular depth estimation (MDE) and optical flow estimation (OFE) tasks, utilizing a total of 7 models.

FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation

no code yet • 28 Mar 2024

To address these issues, existing approaches use additional semantic-prior black-box networks to separate moving objects and improve the model only at the loss level.

$\mathrm{F^2Depth}$: Self-supervised Indoor Monocular Depth Estimation via Optical Flow Consistency and Feature Map Synthesis

no code yet • 27 Mar 2024

To evaluate the generalization ability of our $\mathrm{F^2Depth}$, we collect a Campus Indoor depth dataset composed of approximately 1500 points selected from 99 images in 18 scenes.

Track Everything Everywhere Fast and Robustly

no code yet • 26 Mar 2024

We propose a novel test-time optimization approach for efficiently and robustly tracking any pixel at any time in a video.

Leveraging Near-Field Lighting for Monocular Depth Estimation from Endoscopy Videos

no code yet • 26 Mar 2024

Monocular depth estimation in endoscopy videos can enable assistive and robotic surgery to obtain better coverage of the organ and detection of various health issues.

Language-Based Depth Hints for Monocular Depth Estimation

no code yet • 22 Mar 2024

In this work, we demonstrate the use of natural language as a source of an explicit prior about the structure of the world.

DepthFM: Fast Monocular Depth Estimation with Flow Matching

no code yet • 20 Mar 2024

Due to the generative nature of our approach, our model reliably predicts the confidence of its depth estimates.