1 code implementation • 19 Mar 2024 • Yuehao Song, Xinggang Wang, Jingfeng Yao, Wenyu Liu, Jinglin Zhang, Xiangmin Xu
Our method achieves state-of-the-art (SOTA) performance among all single-modality methods (3.4% improvement on AUC, 5.1% improvement on AP) and highly comparable performance to multi-modality methods with 59% fewer parameters.
1 code implementation • 7 Jun 2023 • Jingfeng Yao, Xinggang Wang, Lang Ye, Wenyu Liu
In our work, we leverage vision foundation models to enhance the performance of natural image matting.
4 code implementations • 24 May 2023 • Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang
Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining.
Ranked #2 on Image Matting on Distinctions-646