Search Results for author: Zenghui Ding

Found 1 paper, 0 papers with code

RankCLIP: Ranking-Consistent Language-Image Pretraining

no code implementations • 15 Apr 2024 • Yiming Zhang, Zhuokai Zhao, Zhaorun Chen, Zhili Feng, Zenghui Ding, Yining Sun

Amid the ever-evolving landscape of vision-language models, contrastive language-image pretraining (CLIP) has set new benchmarks in many downstream tasks, such as zero-shot classification, by leveraging self-supervised contrastive learning on large amounts of text-image pairs.

Contrastive Learning
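The abstract above refers to CLIP-style self-supervised contrastive learning over text-image pairs. As context for the listing, here is a minimal NumPy sketch of the standard symmetric contrastive (InfoNCE) objective that CLIP popularized; this illustrates the generic technique only, not RankCLIP's ranking-consistent variant, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (N, D) arrays; row i of each forms a matching pair.
    This is a generic sketch of the CLIP-style objective, not the
    RankCLIP method from the paper above.
    """
    # L2-normalize so dot products are cosine similarities.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(logits))      # matching pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image cross-entropies.
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
```

Minimizing this loss pulls each image embedding toward its paired text embedding while pushing it away from the other captions in the batch, which is what enables zero-shot classification by comparing an image against text prompts for each class.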
