Unified Group Fairness on Federated Learning

9 Nov 2021  ·  Fengda Zhang, Kun Kuang, Yuxuan Liu, Long Chen, Chao Wu, Fei Wu, Jiaxun Lu, Yunfeng Shao, Jun Xiao

Federated learning (FL) has emerged as an important machine learning paradigm in which a global model is trained on private data from distributed clients. However, most existing FL algorithms cannot guarantee fair performance across different groups because of data distribution shift across groups. In this paper, we formulate the problem of unified group fairness on FL, where groups can be formed by clients (both existing clients and newly added clients) and sensitive attribute(s). To solve this problem, we first propose a general fair federated framework. We then construct a unified group fairness risk from the perspective of a federated uncertainty set, with theoretical analyses to guarantee unified group fairness on FL. We also develop an efficient federated optimization algorithm named Federated Mirror Descent Ascent with Momentum Acceleration (FMDA-M) with a convergence guarantee. We validate the advantages of FMDA-M in experiments covering various kinds of distribution shift, and the results show that FMDA-M outperforms existing fair FL algorithms on unified group fairness.
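The abstract describes a min-max scheme: mirror (exponentiated-gradient) ascent on a weight vector over groups inside an uncertainty set, paired with momentum-accelerated descent on the shared model under the reweighted risk. Since the paper's exact formulation is not reproduced on this page, the following is only a minimal centralized sketch of that mechanic; the toy linear-regression groups, the `group_loss_and_grad` helper, and all step sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a mirror-descent-ascent loop with momentum:
# keep a weight vector over groups on the probability simplex, push it toward the
# groups with the highest current risk via an exponentiated-gradient (mirror) step,
# and update the shared model by momentum SGD on the reweighted risk.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K groups (e.g., clients), each with shifted linear-regression data.
K, d, n = 4, 5, 200
true_w = rng.normal(size=d)
groups = []
for k in range(K):
    X = rng.normal(size=(n, d)) + 0.5 * k          # feature shift per group
    y = X @ true_w + 0.1 * rng.normal(size=n)
    groups.append((X, y))

def group_loss_and_grad(w, X, y):
    """Mean squared error of a linear model and its gradient for one group."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

w = np.zeros(d)                 # global model parameters
m = np.zeros(d)                 # momentum buffer for the descent step
lam = np.full(K, 1.0 / K)       # group weights on the probability simplex

eta_w, eta_lam, beta = 0.05, 0.5, 0.9   # assumed step sizes / momentum factor

for t in range(300):
    losses = np.zeros(K)
    grads = np.zeros((K, d))
    for k, (X, y) in enumerate(groups):
        losses[k], grads[k] = group_loss_and_grad(w, X, y)

    # Mirror ascent on lam: exponentiated-gradient update, renormalized to the
    # simplex, which shifts weight toward the groups with the largest risk.
    lam *= np.exp(eta_lam * losses)
    lam /= lam.sum()

    # Momentum-accelerated descent on the lam-weighted risk.
    g = grads.T @ lam
    m = beta * m + g
    w = w - eta_w * m

print("final per-group losses:", np.round(losses, 4))
print("worst-group loss:", round(losses.max(), 4))
```

This sketch omits the federated aspects (local client updates, server-side aggregation, newly added clients) and the paper's federated uncertainty-set construction; it only illustrates how the worst-performing groups come to dominate the training objective.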
