Rethinking the Implementation Matters in Cooperative Multi-Agent Reinforcement Learning

6 Feb 2021 · Jian Hu, Siyang Jiang, Seth Austin Harding, Haibin Wu, Shih-wei Liao

Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs with its successful application to multi-agent cooperative tasks such as computer games and robot swarms. QMIX, a widely popular MARL algorithm, has been used to solve cooperative tasks such as the StarCraft Multi-Agent Challenge (SMAC) and Difficulty-Enhanced Predator-Prey (DEPP). Recent variants of QMIX aim to relax its monotonicity constraint in pursuit of better performance in SMAC. In this paper, however, we investigate the code-level optimizations of these variants as well as the monotonicity constraint itself. We find that (1) the reported improvements of these variants are significantly affected by various code-level optimizations; (2) QMIX with normalized code-level optimizations outperforms these prior variants in SMAC; (3) the monotonicity constraint can improve sample efficiency in SMAC and DEPP. Finally, we present a theoretical analysis of why QMIX works well in SMAC. We open-source the code at \url{https://github.com/hijkzzz/pymarl2}.
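The monotonicity constraint discussed above requires the joint action-value Q_tot to be non-decreasing in every per-agent Q-value, i.e. ∂Q_tot/∂Q_i ≥ 0, so that the argmax of Q_tot decomposes into per-agent argmaxes. Below is a minimal PyTorch sketch of how the published QMIX architecture enforces this: state-conditioned hypernetworks generate the mixing weights, and torch.abs keeps them non-negative. The class name MonotonicMixer and the embedding size are illustrative choices, not taken verbatim from the pymarl2 codebase.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    """Minimal QMIX-style mixer: Q_tot(s, Q_1..Q_n) is monotonic in every
    per-agent Q-value because all mixing weights pass through torch.abs."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks produce state-conditioned mixing weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        w1 = torch.abs(self.hyper_w1(state)).view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs.view(-1, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(-1, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        # Non-negative w1 and w2 guarantee dQ_tot/dQ_i >= 0 for each agent i.
        return (torch.bmm(hidden, w2) + b2).view(-1, 1)

# Usage: 8 transitions, 3 agents, 48-dim global state -> (8, 1) joint Q-value.
mixer = MonotonicMixer(n_agents=3, state_dim=48)
q_tot = mixer(torch.randn(8, 3), torch.randn(8, 48))

Because the constraint is built into the mixer's weights rather than added as a loss penalty, it holds exactly at every training step; the variants examined in the paper relax precisely this structural restriction.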
