StarCraft II

81 papers with code • 3 benchmarks • 4 datasets

StarCraft II is a real-time strategy (RTS) game; the task is to train an agent to play the game.

(Image credit: The StarCraft Multi-Agent Challenge)
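
For reference, below is a minimal sketch of stepping the StarCraft II Learning Environment through DeepMind's PySC2 with a do-nothing placeholder policy. The mini-game map, feature resolutions, and step multiplier are illustrative choices, and a local StarCraft II installation with the mini-game maps is assumed.

```python
# Minimal PySC2 loop: reset the environment and step it with no-op actions.
from absl import app
from pysc2.env import sc2_env
from pysc2.lib import actions, features

def main(unused_argv):
    with sc2_env.SC2Env(
        map_name="MoveToBeacon",                      # simple single-agent mini-game
        players=[sc2_env.Agent(sc2_env.Race.terran)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,                                   # game frames per agent step
    ) as env:
        timesteps = env.reset()
        while not timesteps[0].last():
            # A trained agent would pick among the available actions exposed in
            # timesteps[0].observation here; this sketch just issues no-ops.
            timesteps = env.step([actions.FUNCTIONS.no_op()])

if __name__ == "__main__":
    app.run(main)
```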

Libraries

Use these libraries to find StarCraft II models and implementations
See all 6 libraries.

Latest papers with no code

Forecasting Evolution of Clusters in Game Agents with Hebbian Learning

no code yet • 19 Aug 2022

In this light, clustering game agents has been used for various purposes, such as efficient agent control in multi-agent reinforcement learning and game-analytics tools for players.

Unsupervised Hebbian Learning on Point Sets in StarCraft II

no code yet • 13 Jul 2022

Learning the evolution of a real-time strategy (RTS) game is a challenging problem for artificial intelligence (AI) systems.

Evolutionary Game-Theoretical Analysis for General Multiplayer Asymmetric Games

no code yet • 22 Jun 2022

First, analysing the simplified payoff table introduces inaccuracy.

S2RL: Do We Really Need to Perceive All States in Deep Multi-Agent Reinforcement Learning?

no code yet • 20 Jun 2022

To this end, we propose a sparse-state-based MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations.
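
The snippet above describes sparsifying attention over an agent's local observation. A generic sketch of one way to do this (not the authors' S2RL implementation): compute dot-product attention over per-entity features, then keep only the top-k entities and renormalise; the shapes, dimensions, and the hard top-k rule are illustrative assumptions, and sparsemax-style mechanisms would achieve a similar effect differentiably.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseEntityAttention(nn.Module):
    """Attend to the entities in a local observation, keeping only the top-k."""

    def __init__(self, entity_dim: int, attn_dim: int = 64, top_k: int = 4):
        super().__init__()
        self.query = nn.Linear(entity_dim, attn_dim)
        self.key = nn.Linear(entity_dim, attn_dim)
        self.value = nn.Linear(entity_dim, attn_dim)
        self.top_k = top_k

    def forward(self, self_feat, entity_feats):
        # self_feat: (batch, entity_dim); entity_feats: (batch, n_entities, entity_dim)
        q = self.query(self_feat).unsqueeze(1)                   # (batch, 1, attn_dim)
        k = self.key(entity_feats)                               # (batch, n, attn_dim)
        v = self.value(entity_feats)
        scores = (q @ k.transpose(1, 2)) / k.shape[-1] ** 0.5    # (batch, 1, n)
        # Sparsify: mask everything outside the top-k scores before the softmax,
        # so irrelevant entities receive exactly zero attention weight.
        k_eff = min(self.top_k, scores.shape[-1])
        topk = scores.topk(k_eff, dim=-1).indices
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, topk, 0.0)
        weights = F.softmax(scores + mask, dim=-1)
        return (weights @ v).squeeze(1)                          # (batch, attn_dim)
```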

Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis

no code yet • 17 Jun 2022

Each year, expert-level performance is attained in increasingly complex multiagent domains, where notable examples include Go, Poker, and StarCraft II.

Off-Beat Multi-Agent Reinforcement Learning

no code yet • 27 May 2022

While actions are being executed, changes in the environment are influenced by, but not synchronised with, the actions' execution.

Learning to Guide Multiple Heterogeneous Actors from a Single Human Demonstration via Automatic Curriculum Learning in StarCraft II

no code yet • 11 May 2022

Traditionally, learning from human demonstrations via direct behavior cloning can yield high-performing policies, provided the algorithm has access to large amounts of high-quality data covering the scenarios the agent is most likely to encounter during operation.
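
As context for the snippet above, here is a minimal, generic behavior-cloning sketch (supervised imitation on observation-action pairs), not the paper's single-demonstration curriculum method; the dataset shapes, network sizes, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def behavior_clone(obs, acts, obs_dim, n_actions, epochs=10):
    """Fit a policy to demonstration data by supervised learning.

    obs:  FloatTensor of shape (n_samples, obs_dim)
    acts: LongTensor of discrete action indices, shape (n_samples,)
    """
    policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                           nn.Linear(128, n_actions))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loader = DataLoader(TensorDataset(obs, acts), batch_size=64, shuffle=True)
    for _ in range(epochs):
        for o, a in loader:
            loss = nn.functional.cross_entropy(policy(o), a)  # imitate demo actions
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```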

LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning

no code yet • 5 May 2022

In this way, agents assigned to the same subtask share the learning of its specific ability, and different subtasks correspond to different abilities.
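
A hedged sketch of the idea above (not the LDSA implementation): each agent's representation is matched against learnable subtask embeddings, and the selected subtask indexes a shared policy head, so agents on the same subtask share the parameters for that ability. The dimensions, the Gumbel-softmax selection, and the per-subtask head structure are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtaskAssigner(nn.Module):
    def __init__(self, agent_dim, n_subtasks, obs_dim, n_actions):
        super().__init__()
        self.subtask_emb = nn.Parameter(torch.randn(n_subtasks, agent_dim))
        # One policy head per subtask; agents sharing a subtask share its head.
        self.heads = nn.ModuleList(
            nn.Linear(obs_dim, n_actions) for _ in range(n_subtasks))

    def forward(self, agent_repr, obs):
        # agent_repr: (n_agents, agent_dim); obs: (n_agents, obs_dim)
        logits = agent_repr @ self.subtask_emb.t()               # (n_agents, n_subtasks)
        assign = F.gumbel_softmax(logits, hard=True)             # one-hot subtask per agent
        q_all = torch.stack([head(obs) for head in self.heads], dim=1)  # (n_agents, n_subtasks, n_actions)
        return (assign.unsqueeze(-1) * q_all).sum(dim=1)         # per-agent action values
```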

Learning to Transfer Role Assignment Across Team Sizes

no code yet • 17 Apr 2022

In particular, we train a role assignment network for small teams by demonstration and transfer the network to larger teams, which continue to learn through interaction with the environment.
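A sketch, under assumptions rather than the paper's code, of why such a transfer is possible: a role-assignment network parameterised per agent has weights that are independent of team size, so a network trained by demonstration on a small team can be reused unchanged on a larger team and then fine-tuned through interaction.

```python
import torch
import torch.nn as nn

class RoleNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_roles: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_roles))

    def forward(self, team_obs: torch.Tensor) -> torch.Tensor:
        # team_obs: (n_agents, obs_dim) -- n_agents can differ between training
        # (small team) and transfer (large team) because the network is applied
        # independently to each agent.
        return self.net(team_obs).softmax(dim=-1)   # per-agent role probabilities

# Usage sketch:
# role_net = RoleNetwork(obs_dim=32, n_roles=4)
# roles_small = role_net(torch.randn(3, 32))   # 3-agent team during demonstration
# roles_large = role_net(torch.randn(10, 32))  # 10-agent team after transfer
```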

Depthwise Convolution for Multi-Agent Communication with Enhanced Mean-Field Approximation

no code yet • 6 Mar 2022

In this paper, we propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge when a large number of agents coexist.
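
A hedged sketch of local communication with a depthwise convolution (not the paper's implementation): each agent stacks messages from its k nearest neighbours, and a depthwise Conv1d (groups equal to the message dimension) aggregates them channel by channel, keeping the per-agent cost low when many agents coexist. The message dimension, neighbourhood size, and aggregation layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseCommBlock(nn.Module):
    def __init__(self, msg_dim: int, n_neighbors: int):
        super().__init__()
        # groups=msg_dim -> one filter per channel (depthwise convolution).
        self.dw = nn.Conv1d(msg_dim, msg_dim, kernel_size=n_neighbors,
                            groups=msg_dim)

    def forward(self, neighbor_msgs: torch.Tensor) -> torch.Tensor:
        # neighbor_msgs: (n_agents, n_neighbors, msg_dim) -- messages gathered
        # from each agent's k nearest neighbours.
        x = neighbor_msgs.transpose(1, 2)       # (n_agents, msg_dim, n_neighbors)
        return self.dw(x).squeeze(-1)           # (n_agents, msg_dim) aggregated message
```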