Search Results for author: Jingfeng Yao

Found 3 papers, 3 papers with code

ViTGaze: Gaze Following with Interaction Features in Vision Transformers

1 code implementation • 19 Mar 2024 • Yuehao Song, Xinggang Wang, Jingfeng Yao, Wenyu Liu, Jinglin Zhang, Xiangmin Xu

Our method achieves state-of-the-art (SOTA) performance among all single-modality methods (3.4% improvement on AUC, 5.1% improvement on AP) and comparable performance to multi-modality methods with 59% fewer parameters.

Matte Anything: Interactive Natural Image Matting with Segment Anything Models

1 code implementation • 7 Jun 2023 • Jingfeng Yao, Xinggang Wang, Lang Ye, Wenyu Liu

In our work, we leverage vision foundation models to enhance the performance of natural image matting.

Image Matting
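
To illustrate how a segmentation foundation model can assist matting, below is a minimal sketch of the common mask-to-trimap step: a binary mask (e.g. from Segment Anything) is eroded and dilated to mark confident foreground, confident background, and an unknown band for a matting model to resolve. The function name and band widths are illustrative assumptions, not the paper's implementation.

    import cv2
    import numpy as np

    def mask_to_trimap(mask: np.ndarray, erode_px: int = 10, dilate_px: int = 10) -> np.ndarray:
        """Convert a binary segmentation mask into a trimap:
        255 = definite foreground, 0 = definite background, 128 = unknown."""
        mask = (mask > 0).astype(np.uint8) * 255
        fg = cv2.erode(mask, np.ones((erode_px, erode_px), np.uint8))        # shrink: confident foreground
        possible = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8))  # grow: outside is background
        trimap = np.full(mask.shape, 128, np.uint8)  # start with everything unknown
        trimap[possible == 0] = 0                    # definite background
        trimap[fg == 255] = 255                      # definite foreground
        return trimap

Only the narrow unknown band then needs to be estimated by the matting network, which is what makes a coarse foundation-model mask a useful starting point.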

ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers

4 code implementations • 24 May 2023 • Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang

Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining.

Image Matting
