3D Object Reconstruction
61 papers with code • 4 benchmarks • 7 datasets
(Image: Choy et al.)
Libraries
Use these libraries to find 3D Object Reconstruction models and implementations.
Most implemented papers
Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images
A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
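The fusion step described above can be sketched as a per-voxel softmax-weighted blend of the coarse volumes. This is a minimal NumPy illustration, not the paper's implementation: in Pix2Vox++ the scores come from a learned context-aware scoring branch, whereas here they are supplied directly, and the function name is illustrative.

```python
import numpy as np

def fuse_coarse_volumes(volumes, scores):
    """Fuse per-view coarse voxel volumes with per-voxel softmax weights.

    volumes: (n_views, D, H, W) coarse occupancy predictions
    scores:  (n_views, D, H, W) context scores (assumed given; the paper
             predicts them with a learned scoring branch)
    """
    # Softmax over the view axis assigns each view a per-voxel weight.
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)
    return (weights * volumes).sum(axis=0)  # (D, H, W) fused volume

# Two toy 2x2x2 coarse volumes; the higher-scored view dominates.
vols = np.stack([np.zeros((2, 2, 2)), np.ones((2, 2, 2))])
scs = np.stack([np.full((2, 2, 2), -5.0), np.full((2, 2, 2), 5.0)])
fused = fuse_coarse_volumes(vols, scs)
```

Because the weights are a softmax rather than a hard argmax, the selection remains differentiable and can be trained end to end.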
Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.
3D Object Reconstruction from a Single Depth View with Adversarial Learning
In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks.
Dense 3D Object Reconstruction from a Single Depth View
Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 256^3 by recovering the occluded/missing regions.
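The voxel-grid input mentioned above is produced by back-projecting the depth view into a 3D occupancy grid. The sketch below shows that preprocessing step only, with an idealized pinhole camera and hypothetical function name; it is not the 3D-RecGAN++ pipeline itself (which feeds such grids to an adversarially trained network to complete the occluded regions).

```python
import numpy as np

def depth_to_voxel_grid(depth, fx, fy, cx, cy, grid_size=64, bounds=1.0):
    """Back-project a depth map into a binary voxel occupancy grid.

    depth: (H, W) metric depths; zeros mark missing pixels.
    fx, fy, cx, cy: pinhole intrinsics; bounds: half-extent of the cube.
    """
    v, u = np.nonzero(depth)              # valid pixel coordinates
    z = depth[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z - z.mean()], axis=1)  # recenter in depth
    # Map metric coordinates in [-bounds, bounds] to voxel indices.
    idx = ((pts + bounds) / (2 * bounds) * grid_size).astype(int)
    idx = idx[((idx >= 0) & (idx < grid_size)).all(axis=1)]
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

depth = np.zeros((8, 8))
depth[2:6, 2:6] = 0.5                     # a small frontal patch
grid = depth_to_voxel_grid(depth, fx=8.0, fy=8.0, cx=4.0, cy=4.0,
                           grid_size=16)
```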
3D Point Capsule Networks
In this paper, we propose 3D point-capsule networks, an auto-encoder designed to process sparse 3D point clouds while preserving spatial arrangements of the input data.
Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning
Rendering bridges the gap between 2D vision and 3D scenes by simulating the physical process of image formation.
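The key idea of Soft Rasterizer is to make that image-formation step differentiable by replacing the hard inside/outside test of rasterization with a smooth one. Below is a minimal single-triangle sketch of the coverage term, sigmoid(sign * d^2 / sigma), where d is the distance to the triangle boundary; the helper names are illustrative and this omits the paper's depth-based aggregation across triangles.

```python
import numpy as np

def soft_coverage(px, tri, sigma=1e-2):
    """Soft (differentiable) coverage of a 2D pixel by a triangle."""
    def edge_dist(p, a, b):
        # Distance from point p to segment ab.
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(ap - t * ab)

    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])

    a, b, c = tri
    # Inside test: the three cross products share a sign.
    s1, s2, s3 = cross(a, b, px), cross(b, c, px), cross(c, a, px)
    inside = (s1 >= 0) == (s2 >= 0) and (s2 >= 0) == (s3 >= 0)
    sign = 1.0 if inside else -1.0
    d = min(edge_dist(px, a, b), edge_dist(px, b, c), edge_dist(px, c, a))
    logit = sign * d**2 / sigma
    return float(1.0 / (1.0 + np.exp(-np.clip(logit, -60.0, 60.0))))

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
inside_cov = soft_coverage(np.array([0.25, 0.25]), tri)   # near 1
outside_cov = soft_coverage(np.array([2.0, 2.0]), tri)    # near 0
```

Pixels near an edge get intermediate coverage, so gradients flow from the rendered image back to the mesh vertices, which is what the hard rasterizer cannot provide.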
Deep Predictive Motion Tracking in Magnetic Resonance Imaging: Application to Fetal Imaging
Nevertheless, visual monitoring of fetal motion based on displayed slices and navigation at the level of stacks of slices are inefficient.
UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
At the same time, neural radiance fields have revolutionized novel view synthesis.
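The unification UNISURF proposes can be illustrated by its rendering rule: instead of NeRF's density-derived alphas, a predicted occupancy o_i in [0, 1] directly sets the compositing weight w_i = o_i * prod_{j<i} (1 - o_j) along each ray. This is a minimal NumPy sketch of that compositing step with hand-picked sample values, not the full method (which also adapts the sampling interval toward the surface during training).

```python
import numpy as np

def render_ray(occupancy, colors):
    """Alpha-composite colors along a ray from per-sample occupancies.

    occupancy: (N,) predicted occupancies at ray samples, near to far.
    colors:    (N, 3) radiance at those samples.
    """
    # Transmittance before sample i: product of (1 - o_j) for j < i.
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - occupancy[:-1]]))
    weights = occupancy * transmittance       # w_i = o_i * T_i
    return weights @ colors, weights

occ = np.array([0.0, 0.2, 1.0, 0.5])          # opaque surface at sample 2
cols = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
rgb, w = render_ray(occ, cols)
```

Note that a fully opaque sample (o = 1) zeroes the transmittance behind it, so samples past the surface get no weight; that is what lets the same formulation recover a well-defined implicit surface.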
Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer
As a result, our CONTHO achieves state-of-the-art performance in both human-object contact estimation and joint reconstruction of 3D human and object.
Hierarchical Surface Prediction for 3D Object Reconstruction
A major limitation of such approaches is that they only predict a coarse resolution voxel grid, which does not capture the surface of the objects well.