Distributed Learning with Strategic Users: A Repeated Game Approach

NeurIPS 2021 · Abdullah Basar Akbay, Junshan Zhang

We consider a distributed learning setting where strategic users are incentivized, by a cost-sensitive fusion center, to train a learning model based on local data. The users are not obliged to provide their true gradient updates, and the fusion center is not capable of validating the authenticity of reported updates. Thus motivated, we formulate the interactions between the fusion center and the users as repeated games, manifesting an under-explored interplay between machine learning and game theory. We then develop an incentive mechanism for the fusion center based on a joint gradient estimation and user action classification scheme, and study its impact on the convergence performance of distributed learning. Further, we devise an adaptive zero-determinant (ZD) strategy, thereby generalizing the celebrated ZD strategy to repeated games with time-varying stochastic errors. Theoretical and empirical analyses show that the fusion center can incentivize the strategic users to cooperate and report informative gradient updates, thus ensuring convergence.
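For background, the classical zero-determinant strategy that the paper generalizes can be illustrated with a short sketch. This is not the paper's own mechanism; it is the standard Press–Dyson "equalizer" example in the iterated prisoner's dilemma, where a memory-one player can pin the opponent's long-run payoff at a fixed value regardless of the opponent's strategy:

```python
import numpy as np

# Classical zero-determinant (ZD) equalizer strategy in the iterated
# prisoner's dilemma (Press & Dyson, 2012) -- the concept the paper
# generalizes to repeated games with time-varying stochastic errors.
# Payoffs (R, S, T, P) = (3, 0, 5, 1); states ordered (CC, CD, DC, DD)
# from player X's perspective.
R, S, T, P = 3, 0, 5, 1

# X's memory-one cooperation probabilities after each outcome. This choice
# satisfies the ZD equalizer condition (p1-1, p2-1, p3, p4) = b*S_Y + g*1
# with b = -1/4, g = 1/2, pinning Y's score at -g/b = 2.
p = np.array([3/4, 1/4, 1/2, 1/4])

def stationary_payoff_Y(p, q):
    """Y's long-run average payoff when X plays p and Y plays q
    (q given from Y's own perspective)."""
    # Y's cooperation probability in each X-view state: swap CD and DC.
    qx = np.array([q[0], q[2], q[1], q[3]])
    M = np.empty((4, 4))
    for s in range(4):
        M[s] = [p[s] * qx[s],        p[s] * (1 - qx[s]),
                (1 - p[s]) * qx[s],  (1 - p[s]) * (1 - qx[s])]
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    i = np.argmin(np.abs(w - 1))
    pi = np.real(v[:, i])
    pi /= pi.sum()
    # Y's payoffs in X-view states (CC, CD, DC, DD) are (R, T, S, P).
    return pi @ np.array([R, T, S, P])

rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.uniform(0.05, 0.95, size=4)      # arbitrary opponent strategy
    print(round(stationary_payoff_Y(p, q), 6))  # pinned at 2.0 for every q
```

The paper's contribution lies in adapting this kind of enforcement when the "moves" are noisy gradient reports and the fusion center only observes stochastic, time-varying estimates of user actions, rather than the clean binary outcomes assumed above.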
