Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model

In spite of the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many studies show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges practitioners to develop fair systems with uniform/comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, $\mathrm{BFAR}$ and $\mathrm{BFRR}$, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology which transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Indeed, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias. The code used for the experiments can be found at https://github.com/JRConti/EthicalModule_vMF.
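The loss described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation (which lives in the linked repository): it assumes the Fair von Mises-Fisher loss takes the form of a softmax cross-entropy over per-identity vMF log-likelihoods, with a concentration hyperparameter kappa shared by all identities of the same gender. All names here (FairVMFLoss, class_gender, kappa_per_gender) are illustrative, not taken from the repository.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.special import ive  # exponentially scaled modified Bessel function I_nu


def log_vmf_normalizer(kappa: float, dim: int) -> float:
    """log C_d(kappa), with C_d(kappa) = kappa^(d/2-1) / ((2*pi)^(d/2) * I_(d/2-1)(kappa))."""
    nu = dim / 2.0 - 1.0
    log_bessel = float(np.log(ive(nu, kappa)) + kappa)  # log I_nu(kappa), computed stably
    return nu * np.log(kappa) - (dim / 2.0) * np.log(2.0 * np.pi) - log_bessel


class FairVMFLoss(nn.Module):
    """Cross-entropy over per-identity vMF log-likelihoods, where the concentration
    kappa of each identity is set by the gender of that identity (a hypothetical
    reading of the loss described in the abstract)."""

    def __init__(self, num_classes: int, dim: int, class_gender, kappa_per_gender):
        super().__init__()
        # Learnable class centroids, projected onto the unit sphere in forward().
        self.centroids = nn.Parameter(torch.randn(num_classes, dim))
        kappas = torch.tensor([kappa_per_gender[g] for g in class_gender], dtype=torch.float32)
        log_consts = torch.tensor(
            [log_vmf_normalizer(float(k), dim) for k in kappas], dtype=torch.float32
        )
        self.register_buffer("kappas", kappas)           # (num_classes,)
        self.register_buffer("log_consts", log_consts)   # (num_classes,)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(embeddings, dim=1)       # unit-norm transformed embeddings
        mu = F.normalize(self.centroids, dim=1)  # unit-norm class centroids
        # log-likelihood of z under the vMF of class k: log C_d(kappa_k) + kappa_k * <mu_k, z>
        logits = self.log_consts + self.kappas * (z @ mu.t())
        return F.cross_entropy(logits, labels)
```

In this sketch, embeddings would be the output of the shallow post-processing network applied to the frozen pre-trained features, class_gender maps each identity to its gender label, and kappa_per_gender holds the per-gender concentration hyperparameters whose careful selection the abstract reports as reducing gender bias.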


Results from the Paper


Task                 Dataset   Model                  Metric Name   Metric Value   Global Rank
Face Verification    LFW       ArcFaceR50 + EM-FRR    FRR@FAR(%)    0.100          #3
Face Verification    LFW       ArcFaceR50 + EM-FRR    BFRR          5.89           #3
Face Verification    LFW       ArcFaceR50 + EM-FRR    BFAR          33.65          #1
Face Verification    LFW       ArcFaceR50 + EM-C      FRR@FAR(%)    0.164          #1
Face Verification    LFW       ArcFaceR50 + EM-C      BFRR          9.18           #2
Face Verification    LFW       ArcFaceR50 + EM-C      BFAR          2.44           #2
Face Verification    LFW       ArcFaceR50 + EM-FAR    FRR@FAR(%)    0.151          #2
Face Verification    LFW       ArcFaceR50 + EM-FAR    BFRR          11.22          #1
Face Verification    LFW       ArcFaceR50 + EM-FAR    BFAR          2.11           #3
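
The BFRR and BFAR columns above quantify how far the per-gender error rates are from parity. The exact definitions are given in the paper; one plausible reading, sketched below, is a max-to-min ratio of the per-gender false rejection / false acceptance rates at a common operating threshold, so that a value of 1 means perfect balance. The numeric inputs in the example are hypothetical.

```python
def max_min_ratio(per_group_rates: dict) -> float:
    """Ratio of the worst to the best per-group error rate (1.0 = balanced)."""
    rates = list(per_group_rates.values())
    return max(rates) / min(rates)

# Hypothetical per-gender false acceptance rates at a shared threshold:
bfar_like = max_min_ratio({"female": 3.0e-4, "male": 1.0e-4})  # -> 3.0
```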

Methods


No methods listed for this paper.