Robust Federated Recommendation System

15 Jun 2020  ·  Chen Chen, Jingfeng Zhang, Anthony K. H. Tung, Mohan Kankanhalli, Gang Chen

Federated recommendation systems are attractive because they can deliver good performance without collecting users' private data. However, they are susceptible to low-cost poisoning attacks that can degrade their performance. In this paper, we develop a novel federated recommendation technique that is robust against poisoning attacks in which Byzantine clients prevail. We argue that the key to Byzantine detection is monitoring the gradients of clients' model parameters. We then propose a robust learning strategy in which the central server, instead of relying on the model parameters themselves, computes and uses these gradients to filter out Byzantine clients. Theoretically, we justify our robust learning strategy through our proposed definition of Byzantine resilience. Empirically, we confirm the efficacy of our robust learning strategy on four datasets in a federated recommendation system.
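
As a rough illustration of the filtering idea described in the abstract, the sketch below recovers an effective gradient from each client's submitted parameters and discards clients whose gradient looks like an outlier before aggregating. The assumed local learning rate `lr`, the median-distance score, and the `z_thresh` cutoff are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def filter_and_aggregate(global_params, client_params, lr=0.01, z_thresh=2.0):
    """Hypothetical sketch of gradient-based Byzantine filtering.

    global_params : np.ndarray        -- current server model (flattened)
    client_params : list[np.ndarray]  -- parameters returned by each client
    lr            : assumed local learning rate used by the clients
    z_thresh      : clients whose gradient distance exceeds the mean by more
                    than z_thresh standard deviations are treated as Byzantine
    """
    # Recover an effective gradient for each client from its parameter update.
    grads = [(global_params - p) / lr for p in client_params]

    # Score each client by the distance of its gradient to the coordinate-wise median.
    median_grad = np.median(np.stack(grads), axis=0)
    dists = np.array([np.linalg.norm(g - median_grad) for g in grads])

    # Keep clients whose distance stays within the z_thresh cutoff.
    keep = dists <= dists.mean() + z_thresh * dists.std()

    # Aggregate only the retained (presumed benign) gradients.
    benign = [g for g, k in zip(grads, keep) if k]
    agg_grad = np.mean(benign, axis=0) if benign else np.zeros_like(global_params)

    # Apply the aggregated gradient to the global model.
    return global_params - lr * agg_grad, keep
```

In a server round, one would call `filter_and_aggregate(global_params, client_params)` and retain the returned mask to record which clients were excluded in that round.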
