Image Matting
96 papers with code • 8 benchmarks • 8 datasets
Image Matting is the process of accurately estimating the foreground object in images and videos. It is an important technique in image and video editing, particularly in film production for creating visual effects. In image segmentation, an image is partitioned into foreground and background by labeling each pixel, producing a binary mask in which every pixel belongs to either the foreground or the background. Image matting differs in that some pixels, called partial or mixed pixels, can belong to both the foreground and the background. Fully separating the foreground from the background therefore requires accurately estimating the alpha (opacity) values of these partial or mixed pixels.
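The alpha values above come from the standard compositing model: each observed pixel I is a blend of a foreground colour F and a background colour B, weighted by the alpha matte a in [0, 1], i.e. I = a·F + (1 − a)·B. Matting is the inverse problem of recovering a (and F) from I alone. The sketch below illustrates the forward direction; the `composite` function and the toy arrays are illustrative, not from any particular library or dataset.

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background with a per-pixel alpha matte,
    following I = a * F + (1 - a) * B."""
    alpha = alpha[..., None]  # broadcast the matte over colour channels
    return alpha * foreground + (1.0 - alpha) * background

# A 2x2 toy image: one fully opaque pixel, one fully transparent pixel,
# and two mixed pixels (the case binary segmentation cannot represent).
F = np.full((2, 2, 3), 255.0)          # white foreground
B = np.zeros((2, 2, 3))                # black background
a = np.array([[1.0, 0.0],
              [0.5, 0.25]])            # alpha matte; 0 < a < 1 are mixed

I = composite(F, B, a)
print(I[:, :, 0])
```

The mixed pixels come out as intermediate intensities (127.5 and 63.75 here), which is exactly the information a binary foreground/background mask discards.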
Source: Automatic Trimap Generation for Image Matting
Image Source: Real-Time High-Resolution Background Matting
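Several of the papers below (trimap propagation, one-trimap matting, automatic trimap generation) rely on a trimap: a three-level map that marks definite foreground, definite background, and an unknown band around the boundary where alpha must be estimated. A minimal sketch of deriving a trimap from a binary segmentation mask, assuming simple 4-neighbour morphology implemented with array shifts; the `make_trimap`, `dilate`, and `erode` names are illustrative, not from any cited paper.

```python
import numpy as np

def dilate(m):
    """Grow a boolean mask by one pixel (4-neighbour dilation via shifts)."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]
    out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]
    out[:, :-1] |= m[:, 1:]
    return out

def erode(m):
    """Shrink a boolean mask by one pixel (dual of dilation)."""
    return ~dilate(~m)

def make_trimap(mask, band=1):
    """Mark a `band`-pixel zone around the fg/bg boundary as unknown (128);
    the rest stays definite foreground (255) or background (0)."""
    sure_fg = maybe_fg = mask.astype(bool)
    for _ in range(band):
        sure_fg = erode(sure_fg)
        maybe_fg = dilate(maybe_fg)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[sure_fg] = 255      # definitely foreground
    trimap[~maybe_fg] = 0      # definitely background
    return trimap

# Toy 5x5 mask with a 3x3 foreground square in the middle.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
trimap = make_trimap(mask)
print(trimap)
```

In a typical trimap-based pipeline, the matting network then estimates alpha only inside the unknown (128) band, keeping the definite regions fixed.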
Libraries
Use these libraries to find Image Matting models and implementations.
Latest papers
Ultrahigh Resolution Image/Video Matting With Spatio-Temporal Sparsity
Instead, our method resorts to spatial and temporal sparsity for solving general UHR matting.
End-to-End Video Matting With Trimap Propagation
Although recent studies exploit video object segmentation methods to propagate the given trimaps, they suffer from inconsistent results.
Infusing Definiteness into Randomness: Rethinking Composition Styles for Deep Image Matting
Inspired by this, we introduce a novel composition style that binds the source and combined foregrounds in a definite triplet.
Lightweight Alpha Matting Network Using Distillation-Based Channel Pruning
Therefore, there has been a demand for a lightweight alpha matting model due to the limited computational resources of commercial portable devices.
Robust Human Matting via Semantic Guidance
Unlike previous works, our framework is data efficient, which requires a small amount of matting ground-truth to learn to estimate high quality object mattes.
SAPA: Similarity-Aware Point Affiliation for Feature Upsampling
We introduce point affiliation into feature upsampling, a notion that describes the affiliation of each upsampled point to a semantic cluster formed by local decoder feature points with semantic similarity.
Self-supervised Matting-specific Portrait Enhancement and Generation
Particularly, we invert an input portrait into the latent code of StyleGAN, and our aim is to discover whether there is an enhanced version in the latent space which is more compatible with a reference matting model.
TransMatting: Enhancing Transparent Objects Matting with Transformers
Image matting refers to predicting the alpha values of unknown foreground areas from natural images.
One-Trimap Video Matting
A key of OTVM is the joint modeling of trimap propagation and alpha prediction.
Referring Image Matting
Different from conventional image matting, which either requires user-defined scribbles or a trimap to extract a specific foreground object or extracts all foreground objects indiscriminately, we introduce a new task named Referring Image Matting (RIM), which aims to extract the meticulous alpha matte of the specific object that best matches a given natural language description, enabling a more natural and simpler way to instruct image matting.