Learning Mean-Field Games

NeurIPS 2019  ·  Xin Guo, Anran Hu, Renyuan Xu, Junzi Zhang

This paper presents a general mean-field game (GMFG) framework for simultaneous learning and decision-making in stochastic games with a large population. It first establishes the existence of a unique Nash equilibrium for this GMFG and shows that naively combining Q-learning with the fixed-point approach of classical MFGs yields unstable algorithms. It then proposes a Q-learning algorithm with a Boltzmann policy (GMF-Q), together with an analysis of its convergence properties and computational complexity. Experiments on repeated Ad-auction problems demonstrate that GMF-Q is efficient and robust in terms of convergence and learning accuracy, and that it outperforms existing multi-agent reinforcement learning algorithms in convergence, stability, and learning ability.
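
Below is a minimal sketch of the fixed-point loop the abstract describes, on a toy finite-state GMFG. It is illustrative only: the dynamics `P`, the reward function, the temperature `tau`, and all function names are assumed placeholders, not the paper's actual interface. Each outer iteration freezes the population distribution, runs tabular Q-learning on the induced single-agent MDP, smooths the greedy policy with a Boltzmann (softmax) step, and propagates the mean field under that policy until the pair stabilizes.

```python
# Illustrative sketch of a GMF-Q-style fixed-point iteration (not the authors' code).
# Assumes a toy GMFG: finite states/actions, transition kernel P[s, a] over next
# states, and a reward that depends on the population distribution L (all made up).
import numpy as np

n_states, n_actions = 5, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # toy dynamics

def reward(s, a, L):
    # Toy mean-field reward: crowded states pay less, higher actions cost more.
    return 1.0 - L[s] + 0.1 * a

def q_learning(L, episodes=2000, gamma=0.9, lr=0.1, eps=0.1):
    """Tabular Q-learning on the single-agent MDP induced by freezing the
    population distribution L (step 1 of each outer iteration)."""
    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(episodes):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = rng.choice(n_states, p=P[s, a])
        Q[s, a] += lr * (reward(s, a, L) + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q

def softmax_policy(Q, tau=0.5):
    """Boltzmann smoothing of the greedy policy; tau is the temperature."""
    logits = Q / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

def update_mean_field(L, pi, steps=50):
    """Propagate the population distribution under policy pi (step 2)."""
    for _ in range(steps):
        L_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                L_next += L[s] * pi[s, a] * P[s, a]
        L = L_next
    return L

L = np.full(n_states, 1.0 / n_states)   # initial population distribution
for it in range(20):                    # outer fixed-point loop
    Q = q_learning(L)
    pi = softmax_policy(Q)
    L_new = update_mean_field(L, pi)
    if np.abs(L_new - L).sum() < 1e-4:  # stop when the mean field converges
        break
    L = L_new
print("approximate equilibrium mean field:", np.round(L, 3))
```

The Boltzmann temperature `tau` controls the smoothing that, per the paper, keeps the fixed-point iteration from becoming unstable; letting `tau -> 0` recovers the naive greedy update the abstract warns against.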

Categories

Optimization and Control