Mitigating Bias in Facial Analysis Systems by Incorporating Label Diversity

13 Apr 2022  ·  Camila Kolling, Victor Araujo, Adriano Veloso, Soraia Raupp Musse

Facial analysis models are increasingly applied in real-world applications that have a significant impact on people's lives. However, as the literature has shown, models that automatically classify facial attributes may exhibit discriminatory behavior with respect to protected groups, with potentially negative consequences for individuals and society. It is therefore critical to develop techniques that mitigate unintended biases in facial classifiers. In this work, we introduce a novel learning method that combines subjective human-based labels with objective annotations derived from mathematical definitions of facial traits. Specifically, we generate new objective annotations from two large-scale human-annotated datasets, each capturing a different perspective of the analyzed facial trait. We then propose an ensemble learning method that combines individual models trained on the different types of annotations. We provide an in-depth analysis of the annotation procedure as well as of the distributions of the datasets. Moreover, we empirically demonstrate that, by incorporating label diversity, our method successfully mitigates unintended biases while maintaining high accuracy on the downstream tasks.
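
The abstract does not specify how the individual models are combined, so the following is only a minimal sketch of one plausible reading of the ensemble step: two hypothetical classifiers, one trained on subjective human-based labels and one on objective rule-derived annotations, whose predicted class probabilities are merged by weighted averaging. The function name `ensemble_predict`, the `weight` hyperparameter, and the averaging rule are all illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch only: the combination rule is an assumption,
# not taken from the paper.
import numpy as np

def ensemble_predict(prob_subjective: np.ndarray,
                     prob_objective: np.ndarray,
                     weight: float = 0.5) -> np.ndarray:
    """Combine per-class probabilities from two label-diverse models.

    prob_subjective, prob_objective: arrays of shape (n_samples, n_classes),
        predictions from the model trained on human labels and the model
        trained on objective annotations, respectively.
    weight: contribution of the subjective-label model (assumed hyperparameter).
    Returns the index of the highest combined probability per sample.
    """
    combined = weight * prob_subjective + (1.0 - weight) * prob_objective
    return combined.argmax(axis=1)

# Hypothetical usage with random stand-in predictions for a binary facial trait:
rng = np.random.default_rng(0)
p_subj = rng.dirichlet(np.ones(2), size=4)
p_obj = rng.dirichlet(np.ones(2), size=4)
print(ensemble_predict(p_subj, p_obj))
```

In practice one would tune `weight` on a validation set, balancing the fairness metrics against downstream accuracy; equal weighting is used here only as a neutral default.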
