Image Matting
96 papers with code • 8 benchmarks • 8 datasets
Image Matting is the process of accurately estimating the foreground object in images and videos. It is an important technique in image and video editing applications, particularly in film production for creating visual effects. Image segmentation divides an image into foreground and background by labeling each pixel, producing a binary map in which every pixel belongs either to the foreground or to the background. Image Matting differs from image segmentation in that some pixels, called partial or mixed pixels, may belong to both the foreground and the background. Fully separating the foreground from the background therefore requires accurate estimation of the alpha values of these partial or mixed pixels.
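The role of the alpha values above can be illustrated with the standard compositing equation, where each observed pixel I is a blend of foreground F and background B weighted by the alpha matte: I = αF + (1 − α)B. A minimal NumPy sketch (the array shapes and example values here are illustrative assumptions, not from any particular matting method):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background using a per-pixel alpha matte.

    foreground, background: (H, W, 3) float arrays in [0, 1]
    alpha: (H, W) float array in [0, 1]; fractional values mark
           partial (mixed) pixels such as hair or motion blur.
    """
    a = alpha[..., None]  # broadcast alpha over the color channels
    return a * foreground + (1.0 - a) * background

# alpha = 1 keeps the foreground color, alpha = 0 keeps the
# background color, and alpha = 0.5 gives an even mix.
fg = np.full((2, 2, 3), 0.8)    # hypothetical gray foreground
bg = np.zeros((2, 2, 3))        # black background
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
out = composite(fg, bg, alpha)
```

Matting methods estimate the alpha matte (and often F) from I alone, which is ill-posed precisely at the mixed pixels where 0 < α < 1.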
Source: Automatic Trimap Generation for Image Matting
Image Source: Real-Time High-Resolution Background Matting
Libraries
Use these libraries to find Image Matting models and implementations.
Latest papers
dugMatting: Decomposed-Uncertainty-Guided Matting
Cutting out an object and estimating its opacity mask, known as image matting, is a key task in image and video editing.
ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers
Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining.
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars
It is a large-scale digital library for head avatars with three key attributes: 1) High Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K cameras in 360 degrees.
Adversarially-Guided Portrait Matting
We present a method for generating alpha mattes using a limited data source.
Adaptive Human Matting for Dynamic Videos
The most recent efforts in video matting have focused on eliminating trimap dependency since trimap annotations are expensive and trimap-based methods are less adaptable for real-time applications.
Deep Image Matting: A Comprehensive Survey
Image matting refers to extracting precise alpha matte from natural images, and it plays a critical role in various downstream applications, such as image editing.
Rethinking Context Aggregation in Natural Image Matting
For natural image matting, context information plays a crucial role in estimating alpha mattes especially when it is challenging to distinguish foreground from its background.
Disentangled Pre-training for Image Matting
The pre-training task is designed in a similar manner as image matting, where random trimap and alpha matte are generated to achieve an image disentanglement objective.
CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer
Content affinity loss, including feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer.
TransMatting: Tri-token Equipped Transformer Model for Image Matting
However, existing methods perform poorly when faced with highly transparent foreground objects due to the large area of uncertainty to predict and the small receptive field of convolutional networks.