Neural Rendering
144 papers with code • 0 benchmarks • 7 datasets
Given a 3D scene representation of some kind (point cloud, mesh, voxels, etc.), the task is to design an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
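For concreteness, below is a minimal sketch of one common formulation (NeRF-style volume rendering, where a learned field maps 3D points to color and density and a pixel is formed by compositing samples along its camera ray). The toy_field scene, sample count, and near/far bounds are illustrative placeholders rather than the method of any particular paper.

```python
import numpy as np

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite colors along one camera ray using the volume rendering quadrature
    common to NeRF-style methods: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)                  # sample depths along the ray
    points = origin + t[:, None] * direction               # (n_samples, 3) world positions
    rgb, sigma = field(points, direction)                  # query the scene representation
    delta = np.append(np.diff(t), 1e10)                    # spacing between adjacent samples
    alpha = 1.0 - np.exp(-sigma * delta)                   # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))  # accumulated transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)            # (3,) rendered pixel color

# Toy "scene" standing in for a trained network: a soft red sphere of radius 1 at the origin.
def toy_field(points, view_dir):
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 1.0, 10.0, 0.0)                # density inside the sphere
    rgb = np.tile([1.0, 0.0, 0.0], (points.shape[0], 1))
    return rgb, sigma

pixel = render_ray(toy_field, origin=np.array([0.0, 0.0, -4.0]),
                   direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # approximately [1, 0, 0] where the ray hits the sphere
```

In practice the field is a neural network trained so that rays rendered this way reproduce the input photographs, and novel views are obtained by casting rays from unseen camera poses.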
Benchmarks
These leaderboards are used to track progress in Neural Rendering
Libraries
Use these libraries to find Neural Rendering models and implementations.
Most implemented papers
TIFace: Improving Facial Reconstruction through Tensorial Radiance Fields and Implicit Surfaces
This report describes the solution that secured first place in the "View Synthesis Challenge for Human Heads (VSCHH)" at the ICCV 2023 workshop.
Geometry-Aware Neural Rendering
Understanding the 3-dimensional structure of the world is a core challenge in computer vision and robotics.
Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool
Finally, we show how our neural rendering framework can capture and faithfully render objects from real images and from a diverse set of classes.
Neural Voice Puppetry: Audio-driven Facial Reenactment
Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head.
Self6D: Self-Supervised Monocular 6D Object Pose Estimation
6D object pose estimation is a fundamental problem in computer vision.
Equivariant Neural Rendering
We propose a framework for learning neural scene representations directly from images, without 3D supervision.
Pose2RGBD. Generating Depth and RGB images from absolute positions
We propose a method at the intersection of Computer Vision and Computer Graphics that automatically generates RGBD images using neural networks, based on previously seen and synchronized video, depth, and pose signals.
Learning Adaptive Sampling and Reconstruction for Volume Visualization
A central challenge in data visualization is to understand which data samples are required to generate an image of a data set in which the relevant information is encoded.
End-to-End Optimization of Scene Layout
Experiments suggest that our model achieves higher accuracy and diversity in conditional scene synthesis and allows exemplar-based scene generation from various input forms.
Crowdsampling the Plenoptic Function
These photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene.
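For reference, the plenoptic function mentioned here is, in its full 7D form, the radiance observed from a 3D position, in a viewing direction, at a given wavelength and time; view synthesis can be framed as recovering a scene-specific restriction of it from a sparse set of images:

```latex
% Full 7D plenoptic function (Adelson & Bergen):
% radiance seen from position (x, y, z), in direction (\theta, \phi),
% at wavelength \lambda and time t.
L = P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)
```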