Surface Normals Estimation
32 papers with code • 7 benchmarks • 11 datasets
Surface normal estimation is the task of predicting the 3D surface orientation of the objects in a scene. Refer to Designing Deep Networks for Surface Normal Estimation (Wang et al.) for a good overview of the design choices that led to CNN-based surface normal estimators.
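Before the CNN-based methods listed below, a classic baseline for point clouds was to fit a local plane to each point's neighborhood via PCA and take the plane normal. The sketch below illustrates that baseline (it is not the method of any paper on this page); the function name and the brute-force neighbor search are illustrative choices.

```python
import numpy as np

def estimate_normals_pca(points, k=16):
    """Estimate per-point normals by fitting a plane (via PCA) to each
    point's k nearest neighbors. `points` is an (N, 3) array."""
    n = points.shape[0]
    normals = np.empty((n, 3))
    # Brute-force pairwise squared distances; use a KD-tree for large clouds.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]
    for i in range(n):
        nbrs = points[knn[i]]
        cov = np.cov(nbrs.T)
        # Eigenvector of the smallest eigenvalue = direction of least
        # variance = the fitted plane's normal (sign is ambiguous).
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

# Points sampled on the z = 0 plane: every estimated normal is +/-(0, 0, 1).
pts = np.random.rand(200, 3)
pts[:, 2] = 0.0
nrm = estimate_normals_pca(pts)
```

Noise and non-uniform sampling density break this baseline quickly, which is exactly the failure mode that learned estimators such as GraphFit and HSurf-Net (below) target.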
Latest papers
PolyMaX: General Dense Prediction with Mask Transformer
Despite this shift, methods based on the per-pixel prediction paradigm still dominate the benchmarks on the other dense prediction tasks that require continuous outputs, such as depth estimation and surface normal prediction.
Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark
We introduce Stanford-ORB, a new real-world 3D object inverse rendering benchmark.
MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning
MSECNet consists of a backbone network and a multi-scale edge conditioning (MSEC) stream.
MIMIC: Masked Image Modeling with Image Correspondences
We train multiple models with different masked image modeling objectives to showcase the following findings: representations trained on our automatically generated MIMIC-3M outperform those learned from expensive crowdsourced datasets (ImageNet-1K) and those learned from synthetic environments (MULTIVIEW-HABITAT) on two dense geometric tasks: depth estimation on NYUv2 (1.7%) and surface normal estimation on Taskonomy (2.05%).
iDisc: Internal Discretization for Monocular Depth Estimation
Our method sets the new state of the art with significant improvements on NYU-Depth v2 and KITTI, outperforming all published methods on the official KITTI benchmark.
NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
NeAF: Learning Neural Angle Fields for Point Normal Estimation
To resolve these issues, we propose an implicit function that learns an angle field around the normal of each point in the spherical coordinate system, dubbed Neural Angle Fields (NeAF).
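As a rough illustration of the spherical parameterization that an angle-based formulation builds on (this is a generic sketch, not NeAF's code), a unit normal can be encoded as a polar/azimuth angle pair and decoded back:

```python
import numpy as np

def normal_to_angles(n):
    """Unit normal (x, y, z) -> spherical angles (theta, phi)."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))  # polar angle from +z
    phi = np.arctan2(n[1], n[0])                 # azimuth in the xy-plane
    return theta, phi

def angles_to_normal(theta, phi):
    """Inverse mapping: spherical angles -> unit vector."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

n = np.array([0.0, 0.6, 0.8])
t, p = normal_to_angles(n)
recovered = angles_to_normal(t, p)  # round-trip recovers the normal
```

Predicting angles (or, as in NeAF, an angular offset field) keeps the output on the unit sphere by construction, instead of regressing three components and renormalizing.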
HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces
To address these issues, we introduce hyper surface fitting to implicitly learn hyper surfaces, which are represented by multi-layer perceptron (MLP) layers that take point features as input and output surface patterns in a high dimensional feature space.
GraphFit: Learning Multi-scale Graph-Convolutional Representation for Point Cloud Normal Estimation
We propose a precise and efficient normal estimation method that can deal with noise and nonuniform density for unstructured 3D point clouds.
Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising
Unfortunately, Monte Carlo integration provides estimates with significant noise, even at large sample counts, which makes gradient-based inverse rendering very challenging.