no code implementations • 5 Oct 2023 • Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by grounding visual concepts in natural language, enabling zero-shot recognition on downstream tasks.
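The zero-shot recognition mechanism mentioned here can be sketched in a few lines: CLIP scores an image against a text prompt for each class name and predicts the class with the highest cosine similarity. The sketch below uses random vectors as stand-ins for the real encoder outputs (an assumption; in practice `text_emb` and `image_emb` would come from CLIP's text and image encoders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real CLIP encoder outputs (assumption: these would come
# from CLIP's text encoder, applied to prompts like "a photo of a cat",
# and from its image encoder applied to the query image).
class_names = ["cat", "dog", "car"]
text_emb = rng.normal(size=(len(class_names), 512))  # one embedding per class prompt
image_emb = rng.normal(size=(512,))                  # embedding of the query image

# Zero-shot classification: L2-normalize both sides, then rank classes by
# cosine similarity between the image and each class-name prompt.
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
image_emb /= np.linalg.norm(image_emb)
logits = text_emb @ image_emb          # cosine similarities, shape (num_classes,)
pred = class_names[int(np.argmax(logits))]
print(pred)
```

Because no class-specific training is involved, adding a new category only requires adding its name to `class_names` and encoding one more prompt.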
1 code implementation • 18 Sep 2023 • Jiang-Xin Shi, Tong Wei, Zhi Zhou, Xin-Yan Han, Jie-Jing Shao, Yu-Feng Li
In this paper, we propose PEL, a fine-tuning method that can effectively adapt pre-trained models to long-tailed recognition tasks in fewer than 20 epochs without the need for extra data.
Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=10) (using extra training data)
Fine-Grained Image Classification • Long-tail Learning with Class Descriptors