BiaSwap: Removing dataset bias with bias-tailored swapping augmentation

ICCV 2021  ·  Eungyeup Kim, Jihyeon Lee, Jaegul Choo

Deep neural networks often make decisions based on spurious correlations inherent in the dataset, failing to generalize to an unbiased data distribution. Although previous approaches pre-define the type of dataset bias to prevent the network from learning it, recognizing the bias type in real-world datasets is often prohibitive. This paper proposes a novel bias-tailored augmentation-based approach, BiaSwap, for learning a debiased representation without requiring supervision on the bias type. Assuming that the bias corresponds to easy-to-learn attributes, we sort the training images by how much a biased classifier can exploit them as a shortcut, and divide them into bias-guiding and bias-contrary samples in an unsupervised manner. Afterwards, we integrate the style-transferring module of an image translation model with the class activation maps of this biased classifier, which enables it to transfer primarily the bias attributes the classifier has learned. Therefore, given a pair of bias-guiding and bias-contrary images, BiaSwap generates a bias-swapped image that contains the bias attributes of the bias-contrary image while preserving the bias-irrelevant attributes of the bias-guiding image. Trained on such augmented images, BiaSwap outperforms existing debiasing baselines on both synthetic and real-world datasets. Even without careful supervision on the bias, BiaSwap achieves remarkable performance on both unbiased and bias-guiding samples, implying the improved generalization capability of the model.
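The unsupervised split described above can be sketched with a simple proxy: since bias-guiding samples are the ones a biased classifier finds easy, per-sample loss under that classifier can serve as a ranking signal. The sketch below is a minimal illustration under that assumption; the `split_by_bias` function, the `ratio` parameter, and the use of raw loss values (rather than the paper's exact ranking criterion) are all hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def split_by_bias(losses, ratio=0.5):
    """Partition sample indices into bias-guiding and bias-contrary sets.

    losses : per-sample losses from a (hypothetical) biased classifier;
             low loss = easy = the classifier exploited the bias shortcut.
    ratio  : hypothetical fraction of samples treated as bias-guiding.
    """
    order = np.argsort(losses)          # ascending: easiest samples first
    k = int(len(losses) * ratio)
    bias_guiding = order[:k]            # easy: bias attribute matches the label
    bias_contrary = order[k:]           # hard: bias attribute conflicts with the label
    return bias_guiding, bias_contrary

# Toy example: samples 0 and 2 are easy (low loss), 1 and 3 are hard.
guiding, contrary = split_by_bias(np.array([0.1, 2.0, 0.05, 1.5]), ratio=0.5)
print(sorted(guiding), sorted(contrary))  # -> [0, 2] [1, 3]
```

Pairs drawn across the two sets would then feed the CAM-guided style-swapping module to produce the bias-swapped augmentations.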


Datasets


Introduced in the Paper:

bFFHQ

Used in the Paper:

CIFAR-10, FFHQ, BAR
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | BAR | BiaSwap | Accuracy | 52.44 | #4 |
| Facial Attribute Classification | bFFHQ | BiaSwap | Bias-Conflicting Accuracy | 58.87 | #3 |
