3D Reconstruction
560 papers with code • 8 benchmarks • 55 datasets
3D Reconstruction is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The resulting virtual representation can serve a variety of purposes, such as visualization, animation, simulation, and analysis, and is widely used in fields such as computer vision, robotics, and virtual reality.
Image credit: Gwak et al.
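A classic building block of image-based 3D reconstruction is triangulation: given the same point observed in two calibrated views, its 3D position can be recovered by intersecting the viewing rays. The sketch below shows the standard linear (DLT) triangulation method using only NumPy; the camera matrices and the test point are made-up values for illustration, not from any paper listed here.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image coordinates (u, v) of the point in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # True
```

Full multi-view pipelines (structure-from-motion, multi-view stereo, or the neural methods listed below) build on this same projective geometry, but estimate the camera matrices and dense correspondences rather than assuming them.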
Benchmarks
These leaderboards are used to track progress in 3D Reconstruction
Libraries
Use these libraries to find 3D Reconstruction models and implementations
Subtasks
Latest papers with no code
CryoMAE: Few-Shot Cryo-EM Particle Picking with Masked Autoencoders
Cryo-electron microscopy (cryo-EM) emerges as a pivotal technology for determining the architecture of cells, viruses, and protein assemblies at near-atomic resolution.
EGGS: Edge Guided Gaussian Splatting for Radiance Fields
Therefore, in this paper, we propose an Edge Guided Gaussian Splatting (EGGS) method that leverages the edges in the input images.
Probabilistic Directed Distance Fields for Ray-Based Shape Representations
One fundamental operation applied to such representations is differentiable rendering, as it enables inverse graphics approaches in learning frameworks.
MonoSelfRecon: Purely Self-Supervised Explicit Generalizable 3D Reconstruction of Indoor Scenes from Monocular RGB Views
MonoSelfRecon is not restricted to a specific model design and can be applied to any model with a voxel-SDF representation in a purely self-supervised manner.
Binomial Self-compensation for Motion Error in Dynamic 3D Scanning
Phase shifting profilometry (PSP) is favored in high-precision 3D scanning due to its high accuracy, robustness, and pixel-wise property.
3D-COCO: extension of MS-COCO dataset for image detection and 3D reconstruction modules
We introduce 3D-COCO, an extension of the original MS-COCO dataset providing 3D models and 2D-3D alignment annotations.
Learning Topology Uniformed Face Mesh by Volume Rendering for Multi-view Reconstruction
Our goal is to bring the strengths of neural volume rendering to multi-view reconstruction of face meshes with consistent topology.
RaFE: Generative Radiance Fields Restoration
Instead of reconstructing a blurred NeRF by averaging inconsistencies, we introduce a novel approach using Generative Adversarial Networks (GANs) for NeRF generation to better accommodate the geometric and appearance inconsistencies present in the multi-view images.
The More You See in 2D, the More You Perceive in 3D
Inspired by this behavior, we introduce SAP3D, a system for 3D reconstruction and novel view synthesis from an arbitrary number of unposed images.
Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View
We therefore propose a hybrid method following a divide-and-conquer strategy.