Intrinsic Image Decomposition

21 papers with code • 0 benchmarks • 6 datasets

Intrinsic Image Decomposition is the process of separating an image into its formation components, such as reflectance (albedo) and shading (illumination). Reflectance is the intrinsic color of the object, invariant to camera viewpoint and illumination conditions, whereas shading depends on camera viewpoint and object geometry and comprises illumination effects such as cast shadows, shading gradients and inter-reflections. Using intrinsic images instead of the original images can benefit many computer vision algorithms. For instance, shading images contain important visual cues that shape-from-shading algorithms use to recover geometry, while reflectance images can benefit segmentation and detection algorithms because they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing and stylization.
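The decomposition is commonly modeled multiplicatively: each pixel of the image is the product of its reflectance and its shading, I = R · S. The sketch below illustrates this formation model on a toy 1-D "image" and the trivial inversion of albedo when shading is known; the function and variable names are illustrative, not taken from any of the papers listed here.

```python
# Multiplicative intrinsic image model: image = reflectance * shading,
# applied per pixel. A toy 1-D sketch; names are illustrative only.

def compose(reflectance, shading):
    """Render an image from per-pixel reflectance (albedo) and shading."""
    return [r * s for r, s in zip(reflectance, shading)]

def implied_reflectance(image, shading, eps=1e-6):
    """Recover the albedo implied by an image and a shading estimate."""
    return [i / max(s, eps) for i, s in zip(image, shading)]

# Constant albedo under illumination that falls off across the image:
albedo = [0.8, 0.8, 0.8, 0.8]
shading = [1.0, 0.5, 0.25, 0.125]
image = compose(albedo, shading)
recovered = implied_reflectance(image, shading)  # close to the albedo above
```

Real methods must estimate R and S from I alone, which is ill-posed (any scalar can be moved between the two factors), hence the priors and learned models surveyed below.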

Source: CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

Most implemented papers

Unsupervised Learning for Intrinsic Image Decomposition from a Single Image

DreamtaleCore/USI3D CVPR 2020

Intrinsic image decomposition, which is an essential task in computer vision, aims to infer the reflectance and shading of the scene.

Intrinsic Image Decomposition via Ordinal Shading

compphoto/Intrinsic ACM Transactions on Graphics 2023

We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.

Exploiting Diffusion Prior for Generalizable Dense Prediction

shinying/dmp 30 Nov 2023

Content generated by recent advanced Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf dense predictors to estimate, owing to an irreducible domain gap.

Unified Depth Prediction and Intrinsic Image Decomposition from a Single Image via Joint Convolutional Neural Fields

seungryong/JCNF 21 Mar 2016

We present a method for jointly predicting a depth map and intrinsic images from single-image input.

Learning Intrinsic Image Decomposition from Watching the World

lixx2938/unsupervised-learning-intrinsic-images CVPR 2018

However, it is difficult to collect ground truth training data at scale for intrinsic images.

Joint Learning of Intrinsic Images and Semantic Segmentation

Morpheus3000/intrinseg ECCV 2018

To that end, we propose a supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation.

Learning Blind Video Temporal Consistency

phoenix104104/fast_blind_video_consistency ECCV 2018

Our method takes the original unprocessed and per-frame processed videos as inputs to produce a temporally consistent video.

Intrinsic Decomposition of Document Images In-the-Wild

cvlab-stonybrook/DocIIW 29 Nov 2020

However, document shadow and shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real-world scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models.

Outdoor inverse rendering from a single image using multiview self-supervision

YeeU/InverseRenderNet 12 Feb 2021

In this paper we show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network.

Physically Inspired Dense Fusion Networks for Relighting

yazdaniamir38/Depth-guided-Image-Relighting 5 May 2021

While our proposed method applies to both one-to-one and any-to-any relighting problems, for each case we introduce problem-specific components that improve the model's performance: 1) For one-to-one relighting we incorporate normal vectors of the surfaces in the scene to adjust gloss and shadows accordingly in the image.