Multi-fairness under class-imbalance

27 Apr 2021 · Arjun Roy, Vasileios Iosifidis, Eirini Ntoutsi

Recent studies have shown that datasets used in fairness-aware machine learning with multiple protected attributes (referred to hereafter as multi-discrimination) are often imbalanced. The class-imbalance problem is more severe for the often underrepresented protected groups (e.g., female, non-white) in the critical minority class. Still, existing methods focus only on the overall error-discrimination trade-off and ignore the imbalance problem, thus amplifying the prevalent bias in the minority classes. Solutions are therefore needed that tackle the combined problem of multi-discrimination and class-imbalance. To this end, we introduce a new fairness measure, Multi-Max Mistreatment (MMM), which considers both the (multi-attribute) protected-group membership and the class membership of an instance when measuring discrimination. To solve the combined problem, we propose a boosting approach that incorporates MMM-costs in the distribution update and, post-training, selects the optimal trade-off among accurate, balanced, and fair solutions. Experimental results show the superiority of our approach over state-of-the-art methods: it produces the most balanced performance across groups and classes and the best accuracy for the protected groups in the minority class.
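The abstract names two components, the MMM measure and an MMM-cost-aware boosting update, without spelling out either. The Python sketch below is therefore only an illustrative reading under stated assumptions: MMM is approximated as the worst error rate over all (protected group, class) cells, and the distribution update is a plain AdaBoost-style reweighting with a hypothetical fixed extra cost factor on misclassified members of the worst-treated cell. All names (per_cell_error, mmm, cost_weighted_update) and the constant cost are assumptions for illustration, not the paper's actual formulation.

import numpy as np

def per_cell_error(y_true, y_pred, groups):
    # Error rate within every (protected group, class) cell.
    errors = {}
    for g in np.unique(groups):
        for c in np.unique(y_true):
            mask = (groups == g) & (y_true == c)
            if mask.any():
                errors[(g, c)] = float(np.mean(y_pred[mask] != y_true[mask]))
    return errors

def mmm(y_true, y_pred, groups):
    # Assumed reading of Multi-Max Mistreatment: the worst error
    # rate across all (group, class) cells.
    return max(per_cell_error(y_true, y_pred, groups).values())

def cost_weighted_update(w, y_true, y_pred, groups, alpha, cost=1.5):
    # AdaBoost-style reweighting plus a hypothetical fixed extra
    # cost on misclassified members of the worst-treated cell; the
    # paper derives its costs from MMM rather than using a constant.
    errors = per_cell_error(y_true, y_pred, groups)
    g_star, c_star = max(errors, key=errors.get)
    miss = y_pred != y_true
    w_new = w * np.exp(alpha * miss)                 # standard AdaBoost step
    in_worst = (groups == g_star) & (y_true == c_star)
    w_new[miss & in_worst] *= cost                   # extra fairness cost
    return w_new / w_new.sum()                       # renormalise to a distribution

# Toy usage: 200 instances, 2 classes, 4 intersectional protected groups.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
g = rng.integers(0, 4, n)
pred = np.where(rng.random(n) < 0.8, y, 1 - y)   # 80%-accurate dummy learner
w = np.full(n, 1.0 / n)
print("MMM:", mmm(y, pred, g))
w = cost_weighted_update(w, y, pred, g, alpha=0.5)

Under this sketch, driving MMM down amounts to repeatedly upweighting whichever intersectional (group, class) cell the current ensemble treats worst, which is one plausible way to read "incorporating MMM-costs in the distribution update".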
