Background Matting: The World is Your Green Screen

We propose a method for creating a matte -- the per-pixel foreground color and alpha -- of a person by taking photos or videos in an everyday setting with a handheld camera. Most existing matting methods require a green screen background or a manually created trimap to produce a good matte. Automatic, trimap-free methods are appearing, but are not of comparable quality. In our trimap-free approach, we ask the user to take an additional photo of the background without the subject at the time of capture. This step requires a small amount of foresight but is far less time-consuming than creating a trimap. We train a deep network with an adversarial loss to predict the matte. We first train a matting network with a supervised loss on ground truth data with synthetic composites. To bridge the domain gap to real imagery with no labeling, we train another matting network guided by the first network and by a discriminator that judges the quality of composites. We demonstrate results on a wide variety of photos and videos and show significant improvement over the state of the art.
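A matte (per-pixel foreground color F and alpha) lets you place the subject onto any new background via the standard alpha compositing equation I = alpha * F + (1 - alpha) * B. Below is a minimal sketch of that equation using numpy; the array shapes and the toy values are illustrative assumptions, not part of the paper's pipeline.

```python
import numpy as np

def composite(foreground, alpha, background):
    """Alpha-composite a predicted foreground onto a new background.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    alpha: float array of shape (H, W, 1) in [0, 1]
    Implements the standard compositing equation:
        I = alpha * F + (1 - alpha) * B
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy example: a 2x2 image with a half-transparent gray foreground
# composited over a black background.
F = np.full((2, 2, 3), 0.8)    # foreground color
B = np.zeros((2, 2, 3))        # new background (black)
a = np.full((2, 2, 1), 0.5)    # alpha matte
out = composite(F, a, B)       # every channel becomes 0.5 * 0.8 = 0.4
```

The same equation, with the user-captured background photo as B, is what makes the extra background shot so informative: given I and B, the network only has to disentangle F and alpha.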

PDF Abstract CVPR 2020


Results from the Paper


Task            Dataset          Model          Metric   Value   Global Rank
Image Matting   Adobe Matting    Adobe LS-GAN   SAD      1.72    #4
Image Matting   Adobe Matting    Adobe LS-GAN   MSE      0.97    #1
Image Matting   Adobe Matting    IM             SAD      1.92    #3
Image Matting   Adobe Matting    IM             MSE      1.16    #2
Image Matting   Adobe Matting    CAM            SAD      3.67    #1
Image Matting   Adobe Matting    CAM            MSE      4.5     #4
Image Matting   Adobe Matting    BM             SAD      2.53    #2
Image Matting   Adobe Matting    BM             MSE      1.33    #3
