Intrinsic Image Decomposition

21 papers with code • 0 benchmarks • 6 datasets

Intrinsic Image Decomposition is the process of separating an image into its formation components, such as reflectance (albedo) and shading (illumination). Reflectance is the intrinsic color of the object, invariant to camera viewpoint and illumination conditions, whereas shading depends on camera viewpoint and object geometry and consists of different illumination effects, such as direct shading, cast shadows, and inter-reflections. Using intrinsic images instead of the original images can be beneficial for many computer vision algorithms. For instance, for shape-from-shading algorithms, shading images contain important visual cues for recovering geometry, while for segmentation and detection algorithms, reflectance images can be beneficial because they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing, and stylization.
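The classical Lambertian formation model behind most of these methods assumes the observed image is the per-pixel product of reflectance and shading. A minimal sketch of this assumption (toy arrays, not any specific paper's method) in numpy:

```python
import numpy as np

# Lambertian image formation assumption: I = R * S, per pixel and channel.
# Toy 2x2 RGB example with random reflectance and grayscale shading.
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 1.0, size=(2, 2, 3))   # reflectance R (RGB)
shading = rng.uniform(0.1, 1.0, size=(2, 2, 1))  # shading S (grayscale, broadcast over channels)
image = albedo * shading                         # observed image I

# A perfect decomposition reconstructs the input exactly; conversely,
# given the image and one component, the other follows by division.
recovered_shading = image / albedo               # S = I / R
assert np.allclose(recovered_shading, shading)
```

In practice the decomposition is ill-posed (infinitely many R, S pairs multiply to the same I), which is why the methods below rely on learned priors, photometric invariants, or self-supervision to pick a plausible pair.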

Source: CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

Latest papers with no code

A Survey on Intrinsic Images: Delving Deep Into Lambert and Beyond

no code yet • 7 Dec 2021

Intrinsic imaging or intrinsic image decomposition has traditionally been described as the problem of decomposing an image into two layers: a reflectance, the albedo invariant color of the material; and a shading, produced by the interaction between light and geometry.

Learning Intrinsic Images for Clothing

no code yet • 16 Nov 2021

A more interpretable edge-aware metric and an annotation scheme are designed for the testing set, which allows diagnostic evaluation of intrinsic models.

Self-Supervised Intrinsic Image Decomposition Network Considering Reflectance Consistency

no code yet • 5 Nov 2021

Intrinsic image decomposition aims to decompose an image into illumination-invariant and illumination-variant components, referred to as "reflectance" and "shading," respectively.

An Optical physics inspired CNN approach for intrinsic image decomposition

no code yet • 21 May 2021

There is a lack of unsupervised learning approaches for decomposing an image into reflectance and shading using a single image.

DeRenderNet: Intrinsic Image Decomposition of Urban Scenes with Shape-(In)dependent Shading Rendering

no code yet • 28 Apr 2021

We propose DeRenderNet, a deep neural network to decompose the albedo and latent lighting, and render shape-(in)dependent shadings, given a single image of an outdoor urban scene, trained in a self-supervised manner.

Intrinsic Image Decomposition using Paradigms

no code yet • 20 Nov 2020

The best modern intrinsic image methods learn a map from image to albedo using rendered models and human judgements.

A deep learning based interactive sketching system for fashion images design

no code yet • 9 Oct 2020

In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and the texture information.

Physics-based Shading Reconstruction for Intrinsic Image Decomposition

no code yet • 3 Sep 2020

We investigate the use of photometric invariance and deep learning to compute intrinsic images (albedo and shading).

Towards Geometry Guided Neural Relighting with Flash Photography

no code yet • 12 Aug 2020

By incorporating the depth map, our approach is able to extrapolate realistic high-frequency effects under novel lighting via geometry guided image decomposition from the flashlight image, and predict the cast shadow map from the shadow-encoding transformed depth map.

Learning to Factorize and Relight a City

no code yet • ECCV 2020

We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.