StarCraft II
81 papers with code • 3 benchmarks • 4 datasets
StarCraft II is a real-time strategy (RTS) game; the task is to train an agent to play it.
(Image credit: The StarCraft Multi-Agent Challenge)
Libraries
Use these libraries to find StarCraft II models and implementations
Latest papers
Group-Aware Coordination Graph for Multi-Agent Reinforcement Learning
This paper presents an approach that infers a Group-Aware Coordination Graph (GACG), designed to capture both cooperation between agent pairs based on current observations and group-level dependencies from behaviour patterns observed across trajectories.
Inferring Latent Temporal Sparse Coordination Graph for Multi-Agent Reinforcement Learning
LTS-CG uses agents' historical observations to compute an agent-pair probability matrix, from which a sparse graph is sampled and used for knowledge exchange between agents, simultaneously capturing agent dependencies and relational uncertainty.
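A minimal PyTorch sketch of this general idea, assuming a GRU encoder over observation histories and Gumbel-Softmax edge sampling; the module structure and names are illustrative, not the authors' implementation:

```python
# Illustrative sketch: sample a sparse coordination graph from an
# agent-pair probability matrix and use it for one round of message passing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseCoordinationGraph(nn.Module):
    def __init__(self, obs_dim, hidden_dim, n_agents):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.edge_scorer = nn.Linear(2 * hidden_dim, 2)   # logits: edge on / off
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.n_agents = n_agents

    def forward(self, obs_history):
        # obs_history: (n_agents, T, obs_dim) past observations of each agent
        _, h = self.encoder(obs_history)           # (1, n_agents, hidden_dim)
        h = h.squeeze(0)                           # (n_agents, hidden_dim)
        # Pairwise edge logits from concatenated agent embeddings
        hi = h.unsqueeze(1).expand(-1, self.n_agents, -1)
        hj = h.unsqueeze(0).expand(self.n_agents, -1, -1)
        logits = self.edge_scorer(torch.cat([hi, hj], dim=-1))
        # Differentiable sampling of a sparse adjacency matrix
        adj = F.gumbel_softmax(logits, tau=1.0, hard=True)[..., 0]
        adj = adj * (1 - torch.eye(self.n_agents))         # no self-loops
        # Exchange messages only along sampled edges
        messages = adj @ self.message(h)
        return h + messages, adj
```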
Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning
To address this, we introduce Efficient episodic Memory Utilization (EMU) for MARL, with two primary objectives: (a) accelerating reinforcement learning by leveraging semantically coherent memory from an episodic buffer and (b) selectively promoting desirable transitions to prevent local convergence.
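As a rough illustration of the episodic-buffer side of this idea (not the EMU authors' code), a buffer can store the best return observed for semantically similar states and be queried by nearest-neighbour lookup; all names below are hypothetical:

```python
# Illustrative episodic buffer: store state embeddings with the best return
# seen from them, and retrieve a value estimate from the k nearest memories.
import numpy as np

class EpisodicBuffer:
    def __init__(self, capacity=10000):
        self.keys = []       # state embeddings
        self.returns = []    # best episodic return seen from that state
        self.capacity = capacity

    def add(self, embedding, episodic_return):
        self.keys.append(np.asarray(embedding, dtype=np.float32))
        self.returns.append(float(episodic_return))
        if len(self.keys) > self.capacity:   # drop oldest entries when full
            self.keys.pop(0)
            self.returns.pop(0)

    def lookup(self, embedding, k=5):
        # Mean of the best returns of the k most similar stored memories.
        if not self.keys:
            return None
        dists = np.linalg.norm(np.stack(self.keys) - embedding, axis=1)
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.returns[i] for i in nearest]))
```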
SwarmBrain: Embodied agent for real-time strategy game StarCraft II via large language models
In this paper, we introduce SwarmBrain, an embodied agent leveraging large language models (LLMs) for real-time strategy implementation in the StarCraft II game environment.
Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach
StarCraft II is a challenging benchmark for AI agents due to the necessity of both precise micro-level operations and strategic macro awareness.
CODEX: A Cluster-Based Method for Explainable Reinforcement Learning
Despite the impressive feats demonstrated by Reinforcement Learning (RL), these algorithms have seen little adoption in high-risk, real-world applications due to current difficulties in explaining RL agent actions and building user trust.
JaxMARL: Multi-Agent RL Environments in JAX
Implementing MARL environments end-to-end in JAX not only enables GPU acceleration, but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL.
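The benefit comes from environments being pure JAX functions, so rollouts can be vectorised and compiled; the following is a generic JAX sketch with a toy stand-in environment, not JaxMARL's actual API:

```python
# Generic sketch: when an environment step is a pure function of
# (state, actions, rng), jax.vmap runs many environment copies in parallel
# on an accelerator and jax.jit compiles the batched step.
import jax
import jax.numpy as jnp

def env_step(state, actions, key):
    # Toy stand-in for a MARL transition: next state and per-agent rewards.
    noise = jax.random.normal(key, state.shape)
    next_state = state + 0.1 * actions.sum() + 0.01 * noise
    rewards = -jnp.abs(next_state).mean() * jnp.ones(actions.shape[0])
    return next_state, rewards

@jax.jit
def parallel_step(states, actions, keys):
    # Vectorise one step over a batch of independent environments.
    return jax.vmap(env_step)(states, actions, keys)

key = jax.random.PRNGKey(0)
n_envs, n_agents, state_dim = 1024, 3, 8
states = jnp.zeros((n_envs, state_dim))
actions = jax.random.uniform(key, (n_envs, n_agents))
keys = jax.random.split(key, n_envs)
next_states, rewards = parallel_step(states, actions, keys)
```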
FoX: Formation-aware exploration in multi-agent reinforcement learning
Recently, deep multi-agent reinforcement learning (MARL) has gained significant popularity due to its success in various cooperative multi-agent tasks.
AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning
StarCraft II is one of the most challenging simulated reinforcement learning environments: it is partially observable, stochastic, and multi-agent, and mastering it requires strategic planning over long time horizons with real-time low-level execution.
Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization
Offline reinforcement learning (RL) has received considerable attention in recent years due to its attractive capability of learning policies from offline datasets without environmental interactions.