Search Results for author: Xiao-Wen Yang

Found 1 paper, 0 papers with code

Investigating the Limitation of CLIP Models: The Worst-Performing Categories

no code implementations · 5 Oct 2023 · Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li

Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks.

Prompt Engineering · Zero-Shot Learning
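For context on the zero-shot recognition the abstract describes: CLIP classifies an image by scoring it against natural-language prompts for each candidate category, with no task-specific training. Below is a minimal sketch, assuming the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the image file and class prompts are illustrative and not taken from the paper.

```python
# Minimal CLIP zero-shot recognition sketch (assumptions: transformers
# installed, openai/clip-vit-base-patch32 checkpoint, illustrative inputs).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate categories are expressed as natural-language prompts.
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
image = Image.open("example.jpg")  # hypothetical input image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; a softmax over them
# yields zero-shot class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```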
