Search Results for author: Jie-Jing Shao

Found 2 papers, 1 paper with code

Investigating the Limitation of CLIP Models: The Worst-Performing Categories

No code implementations · 5 Oct 2023 · Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li

Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks.

Prompt Engineering · Zero-Shot Learning
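The zero-shot recognition that the abstract attributes to CLIP works by embedding class-name prompts and an image into a shared space and picking the closest prompt. A minimal sketch of that mechanism with toy NumPy embeddings (the `encode`-style stand-ins, the 8-dimensional vectors, and the temperature value are illustrative assumptions, not the paper's or CLIP's actual components):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

# Toy stand-ins for CLIP text embeddings of prompts like "a photo of a {class}";
# real CLIP produces these with a pre-trained text encoder.
class_names = ["cat", "dog", "car"]
text_emb = {c: normalize(rng.normal(size=8)) for c in class_names}

# A toy "image embedding" constructed near the 'dog' prompt embedding,
# mimicking how CLIP's contrastive pre-training aligns matching pairs.
image_emb = normalize(text_emb["dog"] + 0.1 * rng.normal(size=8))

# Zero-shot classification: cosine similarity against every class prompt,
# softmax over scaled similarities, argmax as the prediction -- no
# task-specific training is involved.
sims = np.array([image_emb @ text_emb[c] for c in class_names])
logits = 100.0 * sims  # 100 plays the role of CLIP's learned logit scale
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = class_names[int(sims.argmax())]
print(pred)  # "dog", since the toy image embedding was built near that prompt
```

The paper's point is that this averaged accuracy hides very weak worst-performing categories, which is exactly the per-class breakdown such a similarity-based prediction makes easy to inspect.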

Parameter-Efficient Long-Tailed Recognition

1 code implementation · 18 Sep 2023 · Jiang-Xin Shi, Tong Wei, Zhi Zhou, Xin-Yan Han, Jie-Jing Shao, Yu-Feng Li

In this paper, we propose PEL, a fine-tuning method that can effectively adapt pre-trained models to long-tailed recognition tasks in fewer than 20 epochs without the need for extra data.

Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=10) (using extra training data)

Fine-Grained Image Classification · Long-tail learning with class descriptors
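The core idea of parameter-efficient fine-tuning, as the PEL abstract describes it, is to freeze the pre-trained model and update only a small set of added parameters, which keeps adaptation cheap enough to finish in a few epochs. A minimal PyTorch sketch of that pattern (the tiny MLP backbone, the single linear head, and the dimensions are illustrative assumptions standing in for PEL's actual pre-trained model and modules):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a large pre-trained backbone; PEL fine-tunes real
# pre-trained vision models, not this miniature MLP.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad = False  # freeze all pre-trained weights

# Small trainable module: only these parameters are updated, which is what
# makes the fine-tuning parameter-efficient.
head = nn.Linear(64, 10)  # e.g. 10 long-tailed classes
model = nn.Sequential(backbone, head)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable {trainable} / total {total}")  # only the head is trainable

# One illustrative optimization step on random data: gradients flow only
# into the head, the frozen backbone is untouched.
opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.1)
x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```

Counting trainable versus total parameters, as above, is the usual way such methods report their efficiency.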
