Novel View Synthesis
329 papers with code • 17 benchmarks • 34 datasets
Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
See the Wiki for a fuller introduction.
Synthesis methods include NeRF, MPI, and others.
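NeRF-style methods render a pixel by alpha-compositing color samples along a camera ray, weighted by volume density. A minimal sketch of that compositing step, assuming NumPy (the function and variable names here are illustrative, not from any particular codebase):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Volume-render one ray by alpha-compositing its samples.

    densities: (N,) non-negative volume densities sigma_i along the ray
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distances between adjacent samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes weight T_i * alpha_i to the final color
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A single fully opaque red sample yields a red pixel
pixel = composite_ray(np.array([1e9]), np.array([[1.0, 0.0, 0.0]]), np.array([1.0]))
```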
( Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence )
Libraries
Use these libraries to find Novel View Synthesis models and implementations.
Most implemented papers
UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
At the same time, neural radiance fields have revolutionized novel view synthesis.
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space.
KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D
For the last few decades, several major subfields of artificial intelligence including computer vision, graphics, and robotics have progressed largely independently from each other.
ADOP: Approximate Differentiable One-Pixel Point Rendering
Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.
Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction
Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train from scratch for a new scene.
GeoNeRF: Generalizing NeRF with Geometry Priors
To render a novel view, the geometry reasoner first constructs cascaded cost volumes for each nearby source view.
CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields
Furthermore, we propose an inverse optimization method that accurately projects an input image to the latent codes for manipulation to enable editing on real images.
InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
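The regularizer described above penalizes the entropy of each ray's sample-weight distribution, encouraging density to concentrate near a single surface. A hedged sketch of that quantity, assuming NumPy (names are illustrative; this is not the paper's implementation):

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of the normalized compositing weights along one ray.

    weights: (N,) non-negative per-sample contributions (e.g. T_i * alpha_i).
    Low entropy means the ray's mass sits at one sharp surface crossing;
    high entropy means the density is smeared along the ray.
    """
    p = weights / (weights.sum() + eps)          # normalize to a distribution
    return float(-(p * np.log(p + eps)).sum())   # H(p) = -sum p log p
```

Minimizing this entropy over sampled rays acts as a few-shot regularizer: it discourages the diffuse, floater-prone density fields that otherwise arise when only a handful of training views constrain the scene.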
Stereo Magnification with Multi-Layer Images
The second stage infers the color and the transparency values for these layers producing the final representation for novel view synthesis.
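Once per-layer color and transparency are predicted, a multi-layer (MPI-style) representation is rendered by compositing the layers back to front with the standard "over" operator. A minimal sketch, assuming NumPy (layer ordering and names are illustrative):

```python
import numpy as np

def composite_layers(rgba_layers):
    """Back-to-front 'over' compositing of RGBA image layers.

    rgba_layers: (D, H, W, 4) array, ordered from back (index 0) to
    front (index D-1), with RGB and alpha values in [0, 1].
    """
    out = np.zeros(rgba_layers.shape[1:3] + (3,))
    for layer in rgba_layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # Each nearer layer occludes what is behind it by its alpha
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

For novel-view synthesis, each layer is first warped into the target view (e.g. via a per-plane homography) before this compositing step; the sketch covers only the final blend.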
Pix2NeRF: Unsupervised Conditional $π$-GAN for Single Image to Neural Radiance Fields Translation
We propose a pipeline to generate Neural Radiance Fields~(NeRF) of an object or a scene of a specific class, conditioned on a single input image.