StarCraft II
81 papers with code • 3 benchmarks • 4 datasets
StarCraft II is a real-time strategy (RTS) game; the task is to train an agent to play it.
(Image credit: The StarCraft Multi-Agent Challenge)
Libraries
Use these libraries to find StarCraft II models and implementations.

Latest papers
Semantic HELM: A Human-Readable Memory for Reinforcement Learning
Then we feed these tokens to a pretrained language model that serves the agent as memory and provides it with a coherent and human-readable representation of the past.
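The idea of a human-readable memory can be sketched in a few lines: observations are mapped to short text captions, and the recent captions are rendered into a prompt that a pretrained language model consumes as the agent's memory. This is a minimal illustration only; the class and method names below are invented for the sketch, and the captioning step (done with CLIP-style retrieval in the paper) is assumed to have already happened.

```python
# Minimal sketch of a human-readable semantic memory. Captions are assumed
# to be produced elsewhere (e.g. by vision-language retrieval); here we only
# show how they accumulate into a prompt for a pretrained language model.

class SemanticMemory:
    def __init__(self, max_steps=8):
        self.max_steps = max_steps   # keep only the most recent captions
        self.captions = []

    def observe(self, caption):
        """Store the text caption describing the current observation."""
        self.captions.append(caption)
        self.captions = self.captions[-self.max_steps:]

    def as_prompt(self):
        """Render the past as a prompt; feeding this to a pretrained LM
        yields a coherent, human-readable representation of history."""
        lines = [f"t-{len(self.captions) - i}: {c}"
                 for i, c in enumerate(self.captions)]
        return "History of observations:\n" + "\n".join(lines)

mem = SemanticMemory(max_steps=3)
for cap in ["a marine near a ramp", "enemy zerglings approach", "marine retreats"]:
    mem.observe(cap)
print(mem.as_prompt())
```

Because the memory is plain text, a human can inspect exactly what the agent "remembers" at any timestep.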
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement Learning
Recently, Multi-Agent Reinforcement Learning (MARL) has been applied to a large number of scenarios and has shown promising performance.
Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL?
Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which limits agents to adopt global cooperative information from each other during centralized training.
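The CTDE structure the snippet critiques can be sketched concretely: during training a centralized value over all agents is available, but at execution each agent acts on its local observation alone, independently of the others. The VDN-style sum and all names below are illustrative assumptions for the sketch, not this paper's method.

```python
# Illustrative CTDE sketch: centralized training signal, decentralized
# execution. The "independence assumption" is visible in execute(), where
# each agent argmaxes its own local utility with no access to the others.

N_AGENTS, N_ACTIONS = 2, 3

# Per-agent utility tables Q_i(obs, action), keyed by local observation.
q_tables = [{} for _ in range(N_AGENTS)]

def local_q(i, obs):
    return q_tables[i].setdefault(obs, [0.0] * N_ACTIONS)

def execute(observations):
    """Decentralized execution: each agent uses only its own Q-table."""
    return [max(range(N_ACTIONS), key=lambda a: local_q(i, o)[a])
            for i, o in enumerate(observations)]

def central_q(observations, actions):
    """Centralized training value: joint Q as a sum of local utilities
    (VDN-style factorization, used here purely for illustration)."""
    return sum(local_q(i, o)[a]
               for i, (o, a) in enumerate(zip(observations, actions)))

# Tiny demo with hand-set utilities.
local_q(0, "obs_a")[2] = 1.0
local_q(1, "obs_b")[1] = 0.5
acts = execute(["obs_a", "obs_b"])
print(acts, central_q(["obs_a", "obs_b"], acts))  # -> [2, 1] 1.5
```

Only `central_q` ever sees the joint action; once training ends, agents cannot exchange the "global cooperative information" the snippet mentions.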
SMAClite: A Lightweight Environment for Multi-Agent Reinforcement Learning
The Starcraft Multi-Agent Challenge (SMAC) has been widely used in MARL research, but is built on top of a heavy, closed-source computer game, StarCraft II.
Effective and Stable Role-Based Multi-Agent Collaboration by Structural Information Principles
Role-based learning is a promising approach to improving the performance of Multi-Agent Reinforcement Learning (MARL).
Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence
To achieve maximum deviation in victim policies under complex agent-wise interactions, our unilateral attack aims to characterize and maximize the impact of the adversary on the victims.
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems
Coordination is one of the most difficult aspects of multi-agent reinforcement learning (MARL).
Self-Motivated Multi-Agent Exploration
In cooperative multi-agent reinforcement learning (CMARL), it is critical for agents to achieve a balance between self-exploration and team collaboration.
Learning Explicit Credit Assignment for Cooperative Multi-Agent Reinforcement Learning via Polarization Policy Gradient
Empirically, we evaluate MAPPG on the well-known matrix game and differential game, and verify that MAPPG can converge to the global optimum for both discrete and continuous action spaces.
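For readers unfamiliar with the benchmark, a cooperative matrix game is just a shared payoff table indexed by the joint action; the payoffs below are an illustrative climbing-game-style matrix, not the exact one from the paper, and finding the global optimum by enumeration is only meant to show what "converging to the global optimum" refers to.

```python
# A cooperative matrix game: both agents pick an index and receive the
# same shared payoff. Miscoordination (the -12 cells) makes the global
# optimum hard to reach for naive independent learners.

payoff = [
    [8, -12, -12],
    [-12, 0, 0],
    [-12, 0, 6],
]

# Global optimum = the joint action maximizing the shared payoff.
best = max(((i, j) for i in range(3) for j in range(3)),
           key=lambda ij: payoff[ij[0]][ij[1]])
print(best, payoff[best[0]][best[1]])  # -> (0, 0) 8
```

A method with correct credit assignment should settle on the joint action (0, 0) despite the heavy penalties surrounding it.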
On Efficient Reinforcement Learning for Full-length Game of StarCraft II
In this work, we investigate a set of RL techniques for the full-length game of StarCraft II.