Novel View Synthesis
329 papers with code • 17 benchmarks • 34 datasets
Synthesize a target image with an arbitrary target camera pose from given source images and their camera poses.
See the Wiki for a more detailed introduction.
Synthesis methods include NeRF, multi-plane images (MPI), and others.
(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)
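As a minimal illustration of the NeRF family of methods, the sketch below (a hypothetical, NumPy-only example, not any paper's actual implementation) volume-renders a single camera ray: per-sample densities are converted to opacities, transmittance is accumulated along the ray, and sample colors are alpha-composited into one pixel color.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Volume-render one ray with NeRF-style compositing weights.

    densities: (N,) non-negative volume density at each sample
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)      # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)        # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])     # light reaching sample i
    weights = alphas * trans                        # compositing weights
    return (weights[:, None] * colors).sum(axis=0)  # final pixel color

# A ray passing through empty space, then hitting a dense red surface:
densities = np.array([0.0, 0.0, 50.0])
colors = np.array([[0.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
deltas = np.array([0.5, 0.5, 0.5])
print(composite_ray(densities, colors, deltas))  # close to [1, 0, 0]: the opaque red sample dominates
```

In a full NeRF, the densities and colors come from a neural network queried at 3D points along each ray, and the same compositing is run for every pixel of the target view.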
Libraries
Use these libraries to find Novel View Synthesis models and implementations.
Latest papers
Mitigating Motion Blur in Neural Radiance Fields with Events and Frames
Neural Radiance Fields (NeRFs) have shown great potential in novel view synthesis.
DN-Splatter: Depth and Normal Priors for Gaussian Splatting and Meshing
3D Gaussian splatting, a novel differentiable rendering technique, has achieved state-of-the-art novel view synthesis results with high rendering speeds and relatively low training times.
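To give a rough feel for the rasterization idea behind Gaussian splatting, here is a hypothetical NumPy-only sketch (not the actual 3DGS renderer) that alpha-blends a single already-projected 2D Gaussian onto an image buffer; the real method projects many anisotropic 3D Gaussians to the screen and composites them in depth order.

```python
import numpy as np

def splat_gaussian(image, mean, cov, color, opacity):
    """Alpha-blend one 2D Gaussian onto an (H, W, 3) image buffer.

    mean:    (x, y) center of the splat in pixel coordinates
    cov:     (2, 2) covariance of the projected Gaussian
    color:   (3,) RGB color of the splat
    opacity: peak alpha at the Gaussian center
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([xs - mean[0], ys - mean[1]], axis=-1)  # pixel offsets from center
    inv = np.linalg.inv(cov)
    # Gaussian falloff exp(-0.5 * d^T Sigma^-1 d) at every pixel
    e = np.einsum('...i,ij,...j->...', d, inv, d)
    alpha = opacity * np.exp(-0.5 * e)                   # per-pixel alpha
    return image * (1 - alpha[..., None]) + alpha[..., None] * color

img = np.zeros((8, 8, 3))
img = splat_gaussian(img, mean=(4, 4), cov=np.eye(2) * 2.0,
                     color=np.array([1.0, 0.0, 0.0]), opacity=0.8)
```

Because every step is differentiable in the Gaussian parameters, gradients from an image loss can update the means, covariances, colors, and opacities, which is what makes the representation trainable from posed photographs.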
CG-SLAM: Efficient Dense RGB-D SLAM in a Consistent Uncertainty-aware 3D Gaussian Field
Recently, neural radiance fields (NeRFs) have been widely exploited as 3D representations for dense simultaneous localization and mapping (SLAM).
PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling
To facilitate the development of these fields, in this paper, we present PKU-DyMVHumans, a versatile human-centric dataset for high-fidelity reconstruction and rendering of dynamic human scenarios from dense multi-view videos.
MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
We propose MVSplat, an efficient feed-forward 3D Gaussian Splatting model learned from sparse multi-view images.
HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity.
CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
When dealing with few-shot settings, i.e., with a small set of input views, training can overfit those views, leading to artifacts and geometric and chromatic inconsistencies in the resulting renderings.
Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion
High-quality scene reconstruction and novel view synthesis based on 3D Gaussian Splatting (3DGS) typically require steady, high-quality photographs, which are often impractical to capture with handheld cameras.
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
In this paper, we introduce a novel approach, named BAD-Gaussians (Bundle Adjusted Deblur Gaussian Splatting), which leverages explicit Gaussian representation and handles severe motion-blurred images with inaccurate camera poses to achieve high-quality scene reconstruction.
Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery
We then introduce a novel cross-view instance label grouping strategy based on the 3D scene representation to mitigate the multi-view inconsistency problem in the 2D instance labels.