Surface Normal Estimation
39 papers with code • 2 benchmarks • 4 datasets
Most implemented papers
Generic 3D Representation via Pose Estimation and Matching
Though a large body of computer vision research has investigated developing generic semantic representations, efforts towards developing a similar representation for 3D have been limited.
Pixel-wise Attentional Gating for Parsimonious Pixel Labeling
To achieve parsimonious inference in per-pixel labeling tasks with a limited computational budget, we propose a Pixel-wise Attentional Gating unit (PAG) that learns to selectively process a subset of spatial locations at each layer of a deep convolutional network.
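The core idea — a per-pixel gate routing each spatial location to either an expensive or a cheap computation — can be sketched as below. This is an assumed, simplified interface, not the paper's exact PAG unit (which learns the gates and realizes actual savings via sparse computation; here both branches are evaluated densely for clarity):

```python
import numpy as np

def pixelwise_gated_layer(x, gate_logits, heavy, light, threshold=0.0):
    """Sketch of pixel-wise gating: a per-pixel gate decides which spatial
    locations go through the expensive branch and which through the cheap one.
    x: (H, W, C) feature map; gate_logits: (H, W) per-pixel scores.
    Note: a real implementation would only run `heavy` at the gated pixels."""
    mask = gate_logits > threshold                       # pixels selected for heavy processing
    out = np.where(mask[..., None], heavy(x), light(x))  # route per location
    return out, mask
```

A usage example: with `heavy` doubling features and `light` passing them through, only pixels with positive gate scores are doubled.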
GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation
In this paper, we propose Geometric Neural Network (GeoNet) to jointly predict depth and surface normal maps from a single image.
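GeoNet's joint prediction exploits the geometric coupling between depth and normals. A minimal, non-learned sketch of that coupling — recovering normals from a depth map by finite differences under an orthographic approximation — looks like this (this is the generic depth-to-normal relation, not GeoNet's learned refinement modules):

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map via finite
    differences: under an orthographic approximation, n ∝ (-dz/dx, -dz/dy, 1).
    depth: (H, W) array; returns (H, W, 3) unit normals."""
    dz_dy, dz_dx = np.gradient(depth)                  # image-space depth gradients
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)      # normalize to unit length
    return n
```

For a planar depth ramp z = x, every interior pixel gets the constant normal (-1, 0, 1)/√2, as expected for a 45° tilted plane.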
Revisiting Multi-Task Learning with ROCK: a Deep Residual Auxiliary Block for Visual Detection
Multi-Task Learning (MTL) is appealing for deep learning regularization.
FrameNet: Learning Local Canonical Frames of 3D Surfaces from a Single RGB Image
In this work, we introduce the novel problem of identifying dense canonical 3D coordinate frames from a single RGB image.
Deep Surface Normal Estimation with Hierarchical RGB-D Fusion
The growing availability of commodity RGB-D cameras has boosted the applications in the field of scene understanding.
IRS: A Large Naturalistic Indoor Robotics Stereo Dataset to Train Deep Models for Disparity and Surface Normal Estimation
In addition, we present DTN-Net, a two-stage deep model for surface normal estimation.
SharinGAN: Combining Synthetic and Real Data for Unsupervised Geometry Estimation
Ideally, this results in images from two domains that present shared information to the primary network.
Surface Normal Estimation of Tilted Images via Spatial Rectifier
Our two main hypotheses are: (1) visual scene layout is indicative of the gravity direction; and (2) not all surfaces are equally represented by a learned estimator due to the structured distribution of the training data, thus, there exists a transformation for each tilted image that is more responsive to the learned estimator than others.
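The rectification idea in hypothesis (2) — warping a tilted input into a pose the estimator handles well, then mapping predictions back — has a simple geometric core: predicted normals transform by the rotation that aligns the estimated gravity direction with a canonical axis. A hedged sketch of that step (the rotation application only; the paper's spatial rectifier and gravity estimation are learned and not shown here):

```python
import numpy as np

def rectify_normals(normals, R):
    """Rotate a (H, W, 3) unit-normal map by a 3x3 rotation matrix R,
    e.g. one aligning an estimated gravity direction with the camera frame.
    Applies n' = R @ n at every pixel."""
    h, w, _ = normals.shape
    out = normals.reshape(-1, 3) @ R.T   # row vectors: v @ R.T == R @ v
    return out.reshape(h, w, 3)
```

For example, a 90° rotation about the z-axis maps the normal (1, 0, 0) to (0, 1, 0).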
HoliCity: A City-Scale Data Platform for Learning Holistic 3D Structures
We present HoliCity, a city-scale 3D dataset with rich structural information.