3D Reconstruction
552 papers with code • 8 benchmarks • 54 datasets
3D Reconstruction is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The goal is to produce a virtual representation that can be used for purposes such as visualization, animation, simulation, and analysis, with applications in computer vision, robotics, and virtual reality.
(Image credit: Gwak et al.)
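At its core, image-based reconstruction recovers 3D structure by combining observations of the same scene points from multiple views. As a minimal, self-contained sketch of that idea (not any particular method listed below), the example triangulates two matched pixel pairs from two synthetic camera views with OpenCV; the intrinsics, poses, and correspondences are illustrative placeholders.

```python
# Minimal sketch: triangulate 3D points from two calibrated views with OpenCV.
# All camera parameters and pixel correspondences below are synthetic placeholders.
import numpy as np
import cv2

# Assumed pinhole intrinsics: 800 px focal length, 640x480 image.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Two camera projection matrices: identity pose, and a 0.5 m baseline along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Matched pixel coordinates of the same scene points in each image (2 x N).
pts1 = np.array([[320.0, 400.0],   # x coordinates in image 1
                 [240.0, 200.0]])  # y coordinates in image 1
pts2 = np.array([[250.0, 330.0],   # x coordinates in image 2
                 [240.0, 200.0]])  # y coordinates in image 2

# Linear triangulation returns homogeneous 3D points (4 x N).
pts_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts_3d = (pts_h[:3] / pts_h[3]).T  # convert to Euclidean (N x 3)
print(pts_3d)  # both points sit roughly 5.7 m in front of camera 1
```

Real pipelines layer feature matching, camera pose estimation, bundle adjustment, and dense surface or neural-field reconstruction on top of this basic triangulation step.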
Benchmarks
These leaderboards are used to track progress in 3D Reconstruction
Libraries
Use these libraries to find 3D Reconstruction models and implementations
Subtasks
Latest papers
6Img-to-3D: Few-Image Large-Scale Outdoor Driving Scene Reconstruction
Current 3D reconstruction techniques struggle to infer unbounded scenes from a few images faithfully.
EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams
In response to the existing limitations, this paper 1) introduces a new problem, i.e., 3D human motion capture from an egocentric monocular event camera with a fisheye lens, and 2) proposes the first approach to it, called EventEgo3D (EE3D).
Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer
As a result, our CONTHO achieves state-of-the-art performance in both human-object contact estimation and joint reconstruction of 3D human and object.
3D Building Reconstruction from Monocular Remote Sensing Images with Multi-level Supervisions
3D building reconstruction from monocular remote sensing images is an important and challenging research problem that has received increasing attention in recent years, owing to its low cost of data acquisition and availability for large-scale applications.
OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera Fusion for Colorizing Point Clouds
A colored point cloud, as a simple and efficient 3D representation, has many advantages in various fields, including robotic navigation and scene reconstruction.
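For readers unfamiliar with the representation mentioned above, the sketch below builds a synthetic colored point cloud with Open3D; Open3D is just one common open-source library and is not part of the OmniColor method, and the points and colors are placeholders.

```python
# Minimal sketch of a colored point cloud: 3D coordinates plus per-point RGB.
# The geometry and colors here are synthetic placeholders.
import numpy as np
import open3d as o3d

xyz = np.random.rand(1000, 3)        # 1000 random points in a unit cube
rgb = np.zeros_like(xyz)             # per-point colors in [0, 1]
rgb[:, 0] = xyz[:, 2]                # red channel encodes height (z)
rgb[:, 2] = 1.0 - xyz[:, 2]          # blue channel is the complement

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(rgb)
o3d.io.write_point_cloud("colored_cloud.ply", pcd)  # save for inspection
```

In a LiDAR-camera setup, the colors instead come from projecting camera images onto the scanned points, which is where OmniColor's global camera pose optimization comes in.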
WorDepth: Variational Language Prior for Monocular Depth Estimation
To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene.
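As context for the underlying task (plain monocular depth estimation, not the caption-conditioned WorDepth method), the sketch below runs an off-the-shelf model, MiDaS, via torch.hub to predict a dense relative depth map from a single image; "scene.jpg" is a placeholder path.

```python
# Minimal sketch: dense depth from a single RGB image with an off-the-shelf
# MiDaS model loaded via torch.hub (not the WorDepth method).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform  # preprocessing matching MiDaS_small

img = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
batch = transform(img)

with torch.no_grad():
    pred = midas(batch)
    # Resize the prediction back to the input resolution.
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

depth = pred.cpu().numpy()  # H x W map of relative (inverse) depth
```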
LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis
In light of this, we propose LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis.
NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
Second, the occupancy scene representation is replaced with a hierarchical Signed Distance Field (SDF) scene representation for high-quality reconstruction and view synthesis.
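To make that representation concrete: a Signed Distance Field stores, at every point in space, the signed distance to the nearest surface (negative inside, positive outside), and the surface is recovered as the zero level set. The sketch below samples an analytic sphere SDF on a voxel grid and extracts a mesh with marching cubes; it illustrates the plain representation only, not NeSLAM's hierarchical, learned variant.

```python
# Minimal sketch of a Signed Distance Field (SDF) and surface extraction.
import numpy as np
from skimage import measure

# Sample the SDF of a sphere (radius 0.3) on a 64^3 grid over [-0.5, 0.5]^3.
n = 64
c = np.linspace(-0.5, 0.5, n)
x, y, z = np.meshgrid(c, c, c, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.3   # negative inside, positive outside

# The surface is the zero level set of the SDF.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # mesh vertices and triangle indices
```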
Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction
Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics.
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
We introduce GRM, a large-scale reconstructor capable of recovering a 3D asset from sparse-view images in around 0.1s.