Neural Rendering
141 papers with code • 0 benchmarks • 7 datasets
Given some representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to devise an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
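Many of the methods listed below (NeRF and its variants) render a viewpoint by sampling density and color along camera rays and compositing them with the discrete volume-rendering quadrature. A minimal NumPy sketch of that compositing step, with illustrative sample values (the function name and inputs are assumptions for illustration, not any particular paper's API):

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite per-sample density and color along one ray (NeRF-style quadrature).

    densities: (N,) non-negative density sigma_i at each sample along the ray
    colors:    (N, 3) RGB color c_i at each sample
    deltas:    (N,) spacing between adjacent samples
    Returns the composited RGB color seen along the ray.
    """
    alpha = 1.0 - np.exp(-densities * deltas)      # opacity contributed by each segment
    trans = np.cumprod(1.0 - alpha + 1e-10)        # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])    # shift so T_i depends only on earlier samples
    weights = trans * alpha                        # final per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)

# A single near-opaque red sample should dominate the composited color,
# occluding the blue sample behind it.
rgb = volume_render(
    densities=np.array([0.0, 50.0, 0.0]),
    colors=np.array([[0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0]]),
    deltas=np.array([0.1, 0.1, 0.1]),
)
```

Because every operation above is differentiable, the scene representation can be optimized end-to-end from posed RGB images alone, which is the common thread across most papers on this page.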
Benchmarks
These leaderboards are used to track progress in Neural Rendering
Libraries
Use these libraries to find Neural Rendering models and implementations
Most implemented papers
Neural Scene Graphs for Dynamic Scenes
Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color supervised solely by a set of RGB images.
MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
ADOP: Approximate Differentiable One-Pixel Point Rendering
Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.
GeoNeRF: Generalizing NeRF with Geometry Priors
To render a novel view, the geometry reasoner first constructs cascaded cost volumes for each nearby source view.
Efficient Geometry-aware 3D Generative Adversarial Networks
Unsupervised generation of high-quality multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge.
Neural Rendering for Stereo 3D Reconstruction of Deformable Tissues in Robotic Surgery
Reconstruction of the soft tissues in robotic surgery from endoscopic stereo videos is important for many applications such as intra-operative navigation and image-guided robotic surgery automation.
ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization
A novel disentangled shape and appearance database of priors is first learned to embed objects in their respective shape and appearance space.
FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization
One is to regularize the frequency range of NeRF's inputs, while the other is to penalize the near-camera density fields.
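The frequency regularization in FreeNeRF can be pictured as a mask over the positional-encoding frequencies that widens linearly over training, so early iterations see only low frequencies (coarse geometry) and the full spectrum is revealed later. A rough sketch of such a schedule, assuming a simplified linear ramp (the exact FreeNeRF schedule may differ in details):

```python
import numpy as np

def freq_mask(num_freqs, step, total_steps):
    """Per-frequency mask in [0, 1] for positional encodings.

    At step 0 all frequencies are masked out; the visible band grows
    linearly until every frequency is available at total_steps.
    """
    ptr = num_freqs * step / total_steps           # how far the frequency window has opened
    return np.clip(ptr - np.arange(num_freqs), 0.0, 1.0)
```

Multiplying this mask into the encoded inputs biases few-shot training toward smooth solutions first, which is the paper's remedy for the high-frequency overfitting that plagues NeRF with sparse views.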
Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings
To address this, we propose geometry decomposition and adopt a two-stage, coarse-to-fine training strategy, allowing for progressively capturing high-frequency geometric details.
NeRF-Supervised Deep Stereo
We introduce a novel framework for training deep stereo networks effortlessly and without any ground-truth.