Search Results for author: Beier Zhu

Found 7 papers, 4 papers with code

Classes Are Not Equal: An Empirical Study on Image Recognition Fairness

1 code implementation • 28 Feb 2024 • Jiequan Cui, Beier Zhu, Xin Wen, Xiaojuan Qi, Bei Yu, Hanwang Zhang

Second, with the proposed concept of Model Prediction Bias, we investigate the origins of problematic representation during optimization.

Contrastive Learning • Data Augmentation +3

Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models

1 code implementation • NeurIPS 2023 • Beier Zhu, Kaihua Tang, Qianru Sun, Hanwang Zhang

In this study, we systematically examine the biases in foundation models and demonstrate the efficacy of our proposed Generalized Logit Adjustment (GLA) method.

Debiased Fine-Tuning for Vision-language Models by Prompt Regularization

no code implementations • 29 Jan 2023 • Beier Zhu, Yulei Niu, Saeil Lee, Minhoe Hur, Hanwang Zhang

We present a new paradigm for fine-tuning large-scale vision-language pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg).

Prompt-aligned Gradient for Prompt Tuning

1 code implementation • ICCV 2023 • Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, Hanwang Zhang

Thanks to large pre-trained vision-language models (VLMs) like CLIP, we can craft a zero-shot classifier by "prompt": e.g., the confidence score of an image being "[CLASS]" can be obtained from the VLM-provided similarity between the image and the prompt sentence "a photo of a [CLASS]".

Domain Adaptation • Few-Shot Learning +2
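The zero-shot prompting idea described in the excerpt above can be sketched in a few lines: embed each prompt "a photo of a [CLASS]", embed the image, and turn the cosine similarities into confidence scores with a softmax. This is a minimal self-contained sketch with randomly generated stand-in embeddings; in practice CLIP's image and text encoders would produce them, and the temperature value here is only illustrative.

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    # Normalize embeddings so the dot product equals cosine similarity,
    # matching how CLIP computes logits from unit-norm features.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)
    # Softmax over classes turns similarities into confidence scores.
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical example: 3 classes, 512-d embeddings (the CLIP ViT-B/32 width).
rng = np.random.default_rng(0)
classes = ["cat", "dog", "car"]
prompts = [f"a photo of a {c}" for c in classes]   # the "[CLASS]" template
text_embs = rng.normal(size=(len(classes), 512))    # stand-ins for text-encoder outputs
image_emb = rng.normal(size=512)                    # stand-in for an image-encoder output
probs = zero_shot_scores(image_emb, text_embs)
print(dict(zip(classes, probs.round(3))))
```

The predicted class is simply the prompt with the highest score; prompt-tuning methods such as ProGrad replace the hand-written template with learned prompt vectors while keeping this scoring rule.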

Cross-Domain Empirical Risk Minimization for Unbiased Long-tailed Classification

1 code implementation • 29 Dec 2021 • Beier Zhu, Yulei Niu, Xian-Sheng Hua, Hanwang Zhang

We address the unbiasedness overlooked by existing long-tailed classification methods: their overall improvement is mostly attributed to a biased preference for tail over head classes, since the test distribution is assumed to be balanced. However, when the test set is as imbalanced as the long-tailed training data -- that is, when the test respects Zipf's law of nature -- the tail bias is no longer beneficial overall, because it hurts the head majorities.

Classification
