Neural Rendering
141 papers with code • 0 benchmarks • 7 datasets
Given a representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to create an algorithm that produces photorealistic renderings of the scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
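As a concrete illustration of the rendering step, here is a minimal sketch of NeRF-style volume rendering along a single camera ray. It assumes densities and colors at sampled points have already been predicted by some scene representation; all names and values are illustrative, not drawn from any specific paper or library.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample (density, color) pairs into a pixel color.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB colors c_i at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    """
    # Opacity of each ray segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                        # (N,)
    return (weights[:, None] * colors).sum(axis=0)  # (3,) pixel RGB

# Toy usage: one highly opaque red sample dominates the ray
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],   # blue (zero density, no effect)
                   [1.0, 0.0, 0.0],   # red  (nearly opaque)
                   [0.0, 1.0, 0.0]])  # green (occluded)
deltas = np.array([0.1, 0.1, 0.1])
pixel = composite_ray(densities, colors, deltas)
```

The same front-to-back compositing weights also underlie rasterization-based approaches such as 3D Gaussian splatting, where sorted splats play the role of the ray samples.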
Benchmarks
These leaderboards are used to track progress in Neural Rendering.
Libraries
Use these libraries to find Neural Rendering models and implementations.
Latest papers
Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
The recent 3D Gaussian splatting (3D-GS) has shown remarkable rendering fidelity and efficiency compared to NeRF-based neural scene representations.
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
In this paper, we introduce a novel approach, named BAD-Gaussians (Bundle Adjusted Deblur Gaussian Splatting), which leverages explicit Gaussian representation and handles severe motion-blurred images with inaccurate camera poses to achieve high-quality scene reconstruction.
GaussianObject: Just Taking Four Images to Get A High-Quality 3D Object with Gaussian Splatting
Then we construct a Gaussian repair model based on diffusion models to supplement the omitted object information, where Gaussians are further refined.
OASim: an Open and Adaptive Simulator based on Neural Rendering for Autonomous Driving
With the development of implicit rendering technology and in-depth research on using generative models to produce data at scale, we propose OASim, an open and adaptive simulator and autonomous driving data generator based on implicit neural rendering.
GPAvatar: Generalizable and Precise Head Avatar from Image(s)
Head avatar reconstruction, crucial for applications in virtual reality, online meetings, gaming, and film industries, has garnered substantial attention within the computer vision community.
Sharp-NeRF: Grid-based Fast Deblurring Neural Radiance Fields Using Sharpness Prior
In particular, defocus blur is common in images captured with standard cameras.
TIFace: Improving Facial Reconstruction through Tensorial Radiance Fields and Implicit Surfaces
This report describes the solution that secured the first place in the "View Synthesis Challenge for Human Heads (VSCHH)" at the ICCV 2023 workshop.
OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments
As a fundamental task of vision-based perception, 3D occupancy prediction reconstructs 3D structures of surrounding environments.
Towards Knowledge-driven Autonomous Driving
This paper explores the emerging knowledge-driven autonomous driving technologies.
MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video
Thanks to the expressiveness of neural representations, prior works can accurately capture the motion and achieve high-fidelity reconstruction of the target object.