no code implementations • 21 Jul 2023 • Faisal Hamman, Sanghamitra Dutta
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL) with respect to sensitive attributes such as gender and race.
1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
There is an emerging interest in generating robust counterfactual explanations that remain valid even if the model is updated or changed slightly.
no code implementations • 3 Nov 2022 • Faisal Hamman, Jiahao Chen, Sanghamitra Dutta
In this paper, we first demonstrate that simply querying for fairness metrics, such as statistical parity and equalized odds, can leak the protected attributes of individuals to the model developers.
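For reference, the two metrics named in the abstract are standard group-fairness quantities. A minimal pure-Python sketch of how they are typically computed over binary predictions and a binary protected attribute (function names are illustrative, not from the paper):

```python
def statistical_parity_gap(y_pred, group):
    # Gap in positive-prediction rate between the two groups:
    # |P(pred=1 | group=1) - P(pred=1 | group=0)|
    def rate(g):
        sel = [p for p, s in zip(y_pred, group) if s == g]
        return sum(sel) / len(sel)
    return abs(rate(1) - rate(0))

def equalized_odds_gap(y_true, y_pred, group):
    # Max over true label y in {1, 0} of the gap in
    # P(pred=1 | true=y, group), i.e. the larger of the TPR and FPR gaps.
    def rate(y, g):
        sel = [p for t, p, s in zip(y_true, y_pred, group) if t == y and s == g]
        return sum(sel) / len(sel)
    return max(abs(rate(y, 1) - rate(y, 0)) for y in (1, 0))
```

Both gaps are zero when predictions are independent of the protected attribute (in the respective conditional sense); the paper's point is that reporting even these aggregate values to a model developer can leak individuals' protected attributes.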