2 code implementations • NeurIPS 2023 • Cristina Menghini, Andrew Delworth, Stephen H. Bach
We find that (1) previously unexplored prompt tuning strategies that iteratively refine pseudolabels consistently improve CLIP accuracy, by 19.5 points in semi-supervised learning, by 28.4 points in transductive zero-shot learning, and by 15.2 points in unsupervised learning, and (2) unlike conventional semi-supervised pseudolabeling, which exacerbates model biases toward classes with higher-quality pseudolabels, prompt tuning leads to a more equitable distribution of per-class accuracy.
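To make the "iteratively refine pseudolabels" idea concrete, here is a minimal sketch of one such loop: CoOp-style soft-prompt tuning on CLIP, where the current prompt pseudolabels an unlabeled pool, the most confident labels per class are kept, the prompt is tuned on them, and the refined prompt relabels the pool on the next round. It assumes the open-source `clip` package (https://github.com/openai/CLIP); `class_names`, `n_ctx`, `top_k`, the round/step counts, and the random stand-in image features are all illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.float()                                    # keep all weights in fp32 for simplicity
for p in model.parameters():
    p.requires_grad_(False)                      # only the prompt context is tuned

class_names = ["cat", "dog"]                     # hypothetical label set
n_ctx = 4                                        # number of learnable context tokens

# Tokenize "X X X X <classname>" and embed once; the "X" placeholders are
# then replaced by learnable context vectors (CoOp-style soft prompts).
prompts = [" ".join(["X"] * n_ctx) + " " + name for name in class_names]
tokens = clip.tokenize(prompts).to(device)
with torch.no_grad():
    embeddings = model.token_embedding(tokens)   # [n_cls, 77, d]
ctx = torch.nn.Parameter(embeddings[0, 1:1 + n_ctx].clone())  # shared context
optimizer = torch.optim.SGD([ctx], lr=2e-3)

def encode_text_with_ctx():
    # Splice the learnable context into each embedded prompt, then run the
    # text transformer (mirrors CLIP.encode_text, but starting from embeddings).
    x = embeddings.clone()
    x[:, 1:1 + n_ctx] = ctx
    x = x + model.positional_embedding
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
    x = model.ln_final(x)
    x = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1)] @ model.text_projection
    return F.normalize(x, dim=-1)

def iterative_rounds(image_features, num_rounds=5, top_k=16, steps=50):
    # image_features: [N, d] normalized CLIP embeddings of the unlabeled pool.
    for _ in range(num_rounds):
        with torch.no_grad():
            probs = (100.0 * image_features @ encode_text_with_ctx().t()).softmax(-1)
            conf, pseudo = probs.max(-1)
        # Keep only the top-k most confident pseudolabels per class ...
        keep = torch.zeros_like(pseudo, dtype=torch.bool)
        for c in range(len(class_names)):
            idx = (pseudo == c).nonzero(as_tuple=True)[0]
            keep[idx[conf[idx].argsort(descending=True)[:top_k]]] = True
        # ... tune the prompt on them; the refined prompt relabels next round.
        for _ in range(steps):
            logits = 100.0 * image_features[keep] @ encode_text_with_ctx().t()
            loss = F.cross_entropy(logits, pseudo[keep])
            optimizer.zero_grad(); loss.backward(); optimizer.step()

# Random stand-in features for illustration only; in practice use normalized
# model.encode_image outputs over the real unlabeled dataset.
feats = F.normalize(torch.randn(100, 512, device=device), dim=-1)
iterative_rounds(feats)
```

The same relabel-then-retune loop covers all three settings in the paper: with a few labeled examples added to the tuning set it is semi-supervised, with a fixed target label space over unlabeled data it is transductive zero-shot, and with pseudolabels alone it is unsupervised.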