Referring Image Matting (Expression-based)

3 papers with code • 1 benchmark • 1 dataset

Expression-based referring image matting takes an image and a flowery natural-language expression as input, and predicts the alpha matte of the object the expression refers to.

Most implemented papers

Image Segmentation Using Text and Image Prompts

timojl/clipseg CVPR 2022

After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query.
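The output described above is a binary segmentation map derived from the model's per-pixel predictions. As a minimal, hypothetical sketch (not CLIPSeg's actual code), turning raw per-pixel logits into such a binary mask amounts to a sigmoid followed by a threshold:

```python
import numpy as np

def logits_to_binary_mask(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert per-pixel segmentation logits to a binary mask.

    `logits` is an (H, W) array of raw model outputs; the sigmoid maps
    them to probabilities in [0, 1], which are then thresholded.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > threshold).astype(np.uint8)

# Toy 2x2 logit map: positive logits -> foreground, negative -> background.
mask = logits_to_binary_mask(np.array([[2.0, -2.0], [0.5, -0.5]]))
# mask == [[1, 0], [1, 0]]
```

Note the hard 0/1 output here: this is exactly what separates segmentation-based approaches from matting, which must produce fractional alpha values.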

MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding

ashkamath/mdetr 26 Apr 2021

We also investigate the utility of our model as an object detector on a given label set when fine-tuned in a few-shot setting.

Referring Image Matting

jizhizili/rim CVPR 2023

Conventional image matting either requires user-defined scribbles or a trimap to extract a specific foreground object, or extracts all foreground objects in the image indiscriminately. In this paper we introduce a new task, Referring Image Matting (RIM), which aims to extract the meticulous alpha matte of the specific object that best matches a given natural language description, enabling a more natural and simpler way to instruct image matting.
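What distinguishes an alpha matte from a binary mask is the standard compositing equation, I = αF + (1 − α)B, in which fractional α values blend foreground and background at each pixel. A minimal numpy sketch (with made-up foreground/background values, not code from the RIM paper) illustrates this:

```python
import numpy as np

def composite(alpha: np.ndarray, fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Composite foreground over background: I = alpha * F + (1 - alpha) * B.

    `alpha` has shape (H, W) with fractional values in [0, 1];
    `fg` and `bg` have shape (H, W, 3).
    """
    a = alpha[..., None]  # add a channel axis so alpha broadcasts over RGB
    return a * fg + (1.0 - a) * bg

# Three pixels: fully opaque, half-transparent, fully transparent.
alpha = np.array([[1.0, 0.5, 0.0]])
fg = np.ones((1, 3, 3))   # white foreground
bg = np.zeros((1, 3, 3))  # black background
img = composite(alpha, fg, bg)  # pixel values: 1.0, 0.5, 0.0
```

The half-transparent pixel blends to gray, which a binary segmentation mask cannot express; this is why matting around hair, fur, and semi-transparent edges needs fractional alpha.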