3D Reconstruction
563 papers with code • 8 benchmarks • 55 datasets
3D Reconstruction is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The resulting virtual representation can serve a variety of purposes, such as visualization, animation, simulation, and analysis, in fields including computer vision, robotics, and virtual reality.
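The geometric core of image-based 3D reconstruction is triangulation: recovering a 3D point from its 2D projections in two or more calibrated views. The sketch below shows the standard DLT (Direct Linear Transform) triangulation with numpy; the toy camera matrices and point are illustrative assumptions, not drawn from any paper listed here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two views via DLT.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel coordinates (u, v) of the same point in each view.
    """
    # Each observed coordinate yields one linear constraint on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[row] @ X) = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value (the approximate null vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy setup: identity camera and a second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point to 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # recovers the original 3D point
```

With noise-free projections the DLT solution is exact; real pipelines feed such estimates into bundle adjustment to handle noisy correspondences across many views.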
(Image: Gwak et al.)
Benchmarks
These leaderboards are used to track progress in 3D Reconstruction.
Libraries
Use these libraries to find 3D Reconstruction models and implementations.
Subtasks
Latest papers
LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis
In light of this, we propose LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis.
NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
Second, the occupancy scene representation is replaced with Signed Distance Field (SDF) hierarchical scene representation for high-quality reconstruction and view synthesis.
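A Signed Distance Field stores, for every query point, its signed distance to the nearest surface (negative inside, positive outside, zero on the surface). The toy sphere below illustrates the idea; it is a hand-written analytic SDF, not NeSLAM's learned hierarchical field.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance from each point to a sphere's surface.

    Negative values lie inside the sphere, positive outside,
    and the surface is the zero level set.
    """
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center: inside, distance -1
                [1.0, 0.0, 0.0],   # on the surface: distance 0
                [2.0, 0.0, 0.0]])  # outside: distance +1
values = sphere_sdf(pts)
print(values)  # [-1.  0.  1.]
```

Reconstruction methods extract the final mesh as this zero level set (e.g. with marching cubes), which is why SDFs tend to give cleaner surfaces than occupancy grids.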
Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction
Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics.
GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
We introduce GRM, a large-scale reconstructor capable of recovering a 3D asset from sparse-view images in around 0.1s.
MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
We propose MVSplat, an efficient feed-forward 3D Gaussian Splatting model learned from sparse multi-view images.
Fed3DGS: Scalable 3D Gaussian Splatting with Federated Learning
In pursuit of a more scalable 3D reconstruction, we propose a federated learning framework with 3DGS, which is a decentralized framework and can potentially use distributed computational resources across millions of clients.
MicroDiffusion: Implicit Representation-Guided Diffusion for 3D Reconstruction from Limited 2D Microscopy Projections
This strategy enriches the diffusion process with structured 3D information, enhancing detail and reducing noise in localized 2D images.
Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting
Through extensive analysis of SfM initialization in the frequency domain and analysis of a 1D regression task with multiple 1D Gaussians, we propose a novel optimization strategy dubbed RAIN-GS (Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting), that successfully trains 3D Gaussians from random point clouds.
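The 1D regression task the abstract mentions can be pictured as fitting a signal with a sum of 1D Gaussians. The sketch below is a simplified stand-in: the means and widths are fixed on a grid and only the amplitudes are solved for by linear least squares, whereas RAIN-GS optimizes all Gaussian parameters; the signal and grid are arbitrary choices for illustration.

```python
import numpy as np

# Sample an arbitrary 1D signal to reconstruct.
x = np.linspace(-3, 3, 200)
target = np.sin(x)

# Place 1D Gaussian basis functions on a fixed grid of means.
means = np.linspace(-3, 3, 15)
sigma = 0.5
# Design matrix: one Gaussian per column, evaluated at every x.
G = np.exp(-0.5 * ((x[:, None] - means[None, :]) / sigma) ** 2)

# Solve for the amplitudes that best reproduce the signal.
amps, *_ = np.linalg.lstsq(G, target, rcond=None)
recon = G @ amps

max_err = np.abs(recon - target).max()
print(f"max reconstruction error: {max_err:.4f}")
```

Even this linear version shows why initialization matters less in 1D: with enough well-spread Gaussians the fit is easy, while in 3D Gaussian Splatting poor initial point clouds can trap the full non-linear optimization, which is the failure mode RAIN-GS targets.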
MARVIS: Motion & Geometry Aware Real and Virtual Image Segmentation
By creating realistic synthetic images that mimic the complexities of the water surface, we provide fine-grained training data for our network (MARVIS) to discern between real and virtual images effectively.
Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed
Furthermore, we find spatial variance exists in LoFTR's fine correlation module, which is adverse to matching accuracy.