Neural Rendering
144 papers with code • 0 benchmarks • 7 datasets
Given some representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to design an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes paired with image or scene appearance manipulation.
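Most methods on this page follow the volumetric family (NeRF and its descendants): a network predicts a density and a color at sample points along each camera ray, and the pixel is formed by alpha-compositing those samples. As a minimal NumPy sketch of that compositing step (names here are illustrative, not taken from any specific paper below):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite per-sample densities and colors along one ray.

    sigmas: (N,) non-negative volume densities at N ray samples
    colors: (N, 3) RGB values predicted at those samples
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)         # per-segment opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)        # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])     # shift: transmittance before each sample
    weights = alphas * trans                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # composited pixel color

# toy usage: 64 uniformly spaced samples along one ray
n = 64
pixel = composite_ray(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.05))
```

The cumulative product is the ray's transmittance: once the ray passes through a nearly opaque sample, everything behind it contributes almost nothing, which is what makes the rendering occlusion-aware while staying differentiable.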
Libraries
Use these libraries to find Neural Rendering models and implementations.
Latest papers
MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video
Thanks to the expressiveness of neural representations, prior works can accurately capture the motion and achieve high-fidelity reconstruction of the target object.
Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications.
HUGS: Human Gaussian Splats
We achieve state-of-the-art rendering quality at 60 FPS while training ~100x faster than previous work.
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS
Recent advancements in real-time neural rendering using point-based techniques have paved the way for the widespread adoption of 3D representations.
Photo-SLAM: Real-time Simultaneous Localization and Photorealistic Mapping for Monocular, Stereo, and RGB-D Cameras
In addition to actively densifying hyper primitives based on geometric features, we further introduce a Gaussian-Pyramid-based training method to progressively learn multi-level features, enhancing photorealistic mapping performance.
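The snippet above doesn't spell out the schedule, but a Gaussian-pyramid loss generally means supervising renders at coarse resolution first and moving to finer levels as training proceeds. A minimal PyTorch sketch of that idea, assuming a hypothetical render_fn hook into the renderer and a fixed steps-per-level schedule:

```python
import torch.nn.functional as F

def image_pyramid(img, levels):
    """Build a coarse-to-fine pyramid by repeated 2x downsampling.
    (Bilinear resizing stands in for a proper Gaussian blur + decimate.)
    img: (1, 3, H, W) ground-truth frame."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(F.interpolate(pyr[-1], scale_factor=0.5,
                                 mode="bilinear", align_corners=False))
    return pyr[::-1]  # coarsest level first

def progressive_loss(render_fn, gt_image, step, steps_per_level=500, levels=3):
    """Supervise at coarser resolution early in training, full resolution later.
    render_fn(h, w) -> (1, 3, h, w) rendered image (hypothetical hook)."""
    pyr = image_pyramid(gt_image, levels)
    level = min(step // steps_per_level, levels - 1)
    target = pyr[level]
    pred = render_fn(target.shape[-2], target.shape[-1])
    return F.l1_loss(pred, target)
```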
LiveNVS: Neural View Synthesis on Live RGB-D Streams
Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image-space to a target feature map.
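LiveNVS's actual warping code isn't shown here; the following is a minimal PyTorch sketch of the projection step that sentence describes, assuming a single source view and shared intrinsics K (function and argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def warp_features(src_feat, tgt_depth, K, T_tgt_to_src):
    """Gather source-view features for each target pixel via the fused target depth.

    src_feat:     (1, C, H, W) neural feature map from a source frame
    tgt_depth:    (H, W) fused depth map in the target view
    K:            (3, 3) camera intrinsics (assumed shared by both views)
    T_tgt_to_src: (4, 4) rigid transform from target to source camera
    """
    H, W = tgt_depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()  # (H, W, 3)
    # back-project target pixels to 3D using the fused depth
    cam = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T).T * tgt_depth.reshape(-1, 1)
    cam_h = torch.cat([cam, torch.ones(cam.shape[0], 1)], dim=1)
    # move the points into the source camera and project them
    src = (T_tgt_to_src @ cam_h.T).T[:, :3]
    proj = (K @ src.T).T
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # normalize to [-1, 1] and sample the source feature map
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_feat, grid, align_corners=True)  # (1, C, H, W)
```

Per the abstract, this gather runs over the fused input stream and the projected features are then aggregated in image space into the target feature map.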
Neural Texture Puppeteer: A Framework for Neural Geometry and Texture Rendering of Articulated Shapes, Enabling Re-Identification at Interactive Speed
Realistic-looking novel view and pose synthesis for different synthetic cow textures further demonstrates the quality of our method.
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering
CaesarNeRF explicitly models pose differences of reference views to combine scene-level semantic representations, providing a calibrated holistic understanding.
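The abstract doesn't give the exact formulation, so the sketch below is only one plausible reading of pose-aware feature combination: reference views whose cameras differ most from the target are down-weighted before the scene-level features are merged. Everything here is illustrative, not CaesarNeRF's actual method:

```python
import torch

def combine_by_pose(feats, ref_poses, tgt_pose, tau=1.0):
    """Weighted combination of per-view scene-level features.

    feats:     (V, D) one semantic feature vector per reference view
    ref_poses: (V, 4, 4) camera-to-world poses of the reference views
    tgt_pose:  (4, 4) target camera pose
    """
    # crude pose difference: angle between the cameras' viewing directions (z-axes)
    ref_dirs = ref_poses[:, :3, 2]
    tgt_dir = tgt_pose[:3, 2]
    cos = torch.nn.functional.cosine_similarity(ref_dirs, tgt_dir[None], dim=-1)
    w = torch.softmax(cos / tau, dim=0)      # nearer viewpoints get larger weight
    return (w[:, None] * feats).sum(dim=0)   # (D,) combined representation
```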
NeuRAD: Neural Rendering for Autonomous Driving
Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community.
CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer
Reconstructing personalized animatable head avatars has significant implications in the fields of AR/VR.