Improving Evaluation of Debiasing in Image Classification

8 Jun 2022  ·  Jungsoo Lee, Juyoung Lee, Sanghun Jung, Jaegul Choo

Image classifiers often rely heavily on peripheral attributes that are strongly correlated with the target class (i.e., dataset bias) when making predictions. Due to this dataset bias, a model correctly classifies samples that contain the bias attributes (i.e., bias-aligned samples) while failing on those that do not (i.e., bias-conflicting samples). Recently, a myriad of studies have focused on mitigating such dataset bias, a task referred to as debiasing. However, our comprehensive study indicates several issues that need to be addressed when evaluating debiasing in image classification. First, most previous studies do not specify how they select their hyper-parameters and model checkpoints (i.e., the tuning criterion). Second, debiasing studies to date have evaluated their proposed methods only on datasets with excessively high bias severity, and these methods show degraded performance on datasets with low bias severity. Third, debiasing studies do not share consistent experimental settings (e.g., datasets and neural networks), which need to be standardized for fair comparison. Based on these issues, this paper 1) proposes an evaluation metric, the `Align-Conflict (AC) score', as the tuning criterion, 2) includes experimental settings with low bias severity and shows that they are yet to be explored, and 3) unifies standardized experimental settings to promote fair comparison between debiasing methods. We believe that our findings and lessons will inspire future researchers in debiasing to further push state-of-the-art performance with fair comparisons.
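To make the proposed tuning criterion concrete, below is a minimal sketch of how an Align-Conflict-style score could be computed for checkpoint selection. It assumes the score aggregates validation accuracy on bias-aligned and bias-conflicting samples (here via their harmonic mean, so a checkpoint must perform reasonably on both groups to score highly); the function and variable names are illustrative and the exact formulation should be taken from the paper.

```python
import numpy as np

def ac_score(aligned_correct, conflicting_correct):
    """Illustrative Align-Conflict (AC) style tuning criterion.

    Assumption: the score combines validation accuracy on bias-aligned
    and bias-conflicting samples via their harmonic mean, penalizing
    checkpoints that sacrifice one group for the other.
    """
    acc_aligned = float(np.mean(aligned_correct))          # accuracy on bias-aligned samples
    acc_conflicting = float(np.mean(conflicting_correct))  # accuracy on bias-conflicting samples
    if acc_aligned + acc_conflicting == 0:
        return 0.0
    return 2 * acc_aligned * acc_conflicting / (acc_aligned + acc_conflicting)

# Usage: pick the checkpoint (or hyper-parameter setting) with the highest score.
# aligned_correct / conflicting_correct are 0/1 arrays of per-sample correctness
# on the validation split, grouped by whether each sample contains the bias attribute.
aligned_correct = np.array([1, 1, 1, 0, 1])
conflicting_correct = np.array([1, 0, 0, 1, 0])
print(ac_score(aligned_correct, conflicting_correct))
```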

