1 code implementation • 6 Feb 2024 • Zhengbo Wang, Jian Liang, Lijun Sheng, Ran He, Zilei Wang, Tieniu Tan
Extensive results on 17 datasets validate that our method surpasses or matches state-of-the-art methods on few-shot classification, imbalanced learning, and out-of-distribution generalization.
no code implementations • 6 Feb 2024 • Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan
This paper proposes a Collaborative Fine-Tuning (CraFT) approach for fine-tuning black-box VLMs to downstream tasks, where one only has access to the input prompts and the output predictions of the model.
1 code implementation • 28 Nov 2023 • Lijun Sheng, Zhengbo Wang, Jian Liang
Our solution adopts a two-stage source-free domain adaptation framework with a Swin Transformer backbone to achieve knowledge transfer from the USA (source) domain to the Asia (target) domain.
no code implementations • 24 Aug 2023 • Jian Liang, Lijun Sheng, Zhengbo Wang, Ran He, Tieniu Tan
The emergence of vision-language models (VLMs), such as CLIP, has spurred a significant research effort towards their application for downstream supervised learning tasks.
1 code implementation • ICCV 2023 • Zhengbo Wang, Jian Liang, Ran He, Nan Xu, Zilei Wang, Tieniu Tan
Thereafter, we fine-tune CLIP with off-the-shelf methods by combining labeled and synthesized features.
1 code implementation • 17 Mar 2023 • Zhengbo Wang, Jian Liang, Zilei Wang, Tieniu Tan
To address this issue, we present a novel transductive ZSL method that produces semantic attributes of the unseen data and imposes them on the generative process.