SMAC

37 papers with code • 11 benchmarks • 1 dataset

The StarCraft Multi-Agent Challenge (SMAC) is a benchmark that combines partial observability, challenging dynamics, and high-dimensional observation spaces. SMAC is built on the StarCraft II game engine and serves as a testbed for research in cooperative multi-agent reinforcement learning (MARL), where each game unit is controlled by an independent RL agent.
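
As a quick illustration of how the benchmark is typically driven, the sketch below follows the random-agent example from the oxwhirl/smac Python package (map name and random policy are placeholders, not a recommended baseline):

```python
import numpy as np
from smac.env import StarCraft2Env

# Each SMAC map defines a fixed team of allied units, one RL agent per unit.
env = StarCraft2Env(map_name="3m")  # "3m": 3 Marines vs 3 Marines
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

env.reset()
terminated = False
episode_reward = 0

while not terminated:
    # Decentralised execution: each agent only sees its local observation.
    obs = env.get_obs()
    # The global state is available for centralised training only.
    state = env.get_state()

    actions = []
    for agent_id in range(n_agents):
        # Unavailable actions (e.g. attacking an out-of-range enemy) are masked out.
        avail_actions = env.get_avail_agent_actions(agent_id)
        avail_ids = np.nonzero(avail_actions)[0]
        actions.append(np.random.choice(avail_ids))

    # All agents share a single team reward.
    reward, terminated, info = env.step(actions)
    episode_reward += reward

env.close()
```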

Libraries

Use these libraries to find SMAC models and implementations

Most implemented papers

The StarCraft Multi-Agent Challenge

oxwhirl/pymarl 11 Feb 2019

In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap.

Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?

cyanrain7/trpo-in-marl 18 Nov 2020

Most recently developed approaches to cooperative multi-agent reinforcement learning in the centralized training with decentralized execution setting involve estimating a centralized, joint value function.
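
For context, the simplest instance of such a centralized, joint value function is an additive factorisation of per-agent utilities (the VDN idea). The PyTorch sketch below is illustrative only; network sizes and names are not taken from the paper:

```python
import torch
import torch.nn as nn

class VDNAgents(nn.Module):
    """Per-agent utilities summed into a joint Q-value, VDN-style (illustrative)."""

    def __init__(self, obs_dim: int, n_actions: int, n_agents: int, hidden: int = 64):
        super().__init__()
        # One Q-network per agent, conditioned only on that agent's own observation.
        self.agent_qs = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
            for _ in range(n_agents)
        )

    def forward(self, obs, actions):
        # obs: (batch, n_agents, obs_dim); actions: (batch, n_agents) long tensor
        per_agent_q = []
        for i, net in enumerate(self.agent_qs):
            q_i = net(obs[:, i])                                    # (batch, n_actions)
            per_agent_q.append(q_i.gather(1, actions[:, i:i + 1]))  # chosen-action value
        # Centralised training: the joint value is the sum of per-agent utilities.
        return torch.cat(per_agent_q, dim=1).sum(dim=1, keepdim=True)
```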

mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions

mlr-org/mlrMBO 9 Mar 2017

We present mlrMBO, a flexible and comprehensive R toolbox for model-based optimization (MBO), also known as Bayesian optimization, which addresses the problem of expensive black-box optimization by approximating the given objective function through a surrogate regression model.
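
mlrMBO itself is an R toolbox; as a language-agnostic sketch of the same surrogate idea, the Python snippet below fits a Gaussian-process surrogate to a handful of evaluations of an expensive black-box function and proposes the next point via expected improvement (the objective and candidate grid are toy placeholders):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_blackbox(x):
    # Stand-in for an expensive objective (e.g. a full training run).
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(6, 1))          # initial design points
y = expensive_blackbox(X).ravel()

# Surrogate regression model approximating the true objective.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Expected-improvement acquisition over a dense grid of candidates.
cand = np.linspace(-3, 3, 500).reshape(-1, 1)
mu, sigma = gp.predict(cand, return_std=True)
best = y.min()
z = (best - mu) / np.maximum(sigma, 1e-9)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

next_x = cand[np.argmax(ei)]                 # point to evaluate next
print("next evaluation at x =", float(next_x))
```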

MAVEN: Multi-Agent Variational Exploration

AnujMahajanOxf/MAVEN NeurIPS 2019

We specifically focus on QMIX [40], the current state-of-the-art in this domain.
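
As a rough sketch of the QMIX idea referenced here: per-agent utilities are combined by a state-conditioned mixing network whose weights are kept non-negative, so the joint value is monotonic in each agent's utility. Dimensions and layer choices below are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class QMixer(nn.Module):
    """Monotonic mixing of per-agent Q-values, in the spirit of QMIX (illustrative)."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks generate the mixing weights from the global state.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        bs = agent_qs.size(0)
        # abs() keeps the mixing weights non-negative, enforcing monotonicity.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(bs, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)  # joint Q_tot
```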

FACMAC: Factored Multi-Agent Centralised Policy Gradients

schroederdewitt/multiagent_mujoco NeurIPS 2021

We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces.
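
A minimal sketch of the centralised-policy-gradient setup this builds on: decentralised actors produce per-agent actions from local observations, while a centralised critic scores the global state together with the joint action and is differentiated through all agents' actions at once. This omits FACMAC's actual factored critic and is only an assumption-laden illustration:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralised actor: local observation -> continuous action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralised critic: scores the global state plus the joint action."""
    def __init__(self, state_dim, act_dim, n_agents, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + act_dim * n_agents, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, state, joint_action):
        return self.net(torch.cat([state, joint_action], dim=-1))

def centralised_actor_loss(actors, critic, obs, state):
    # obs: (batch, n_agents, obs_dim); state: (batch, state_dim)
    # Re-sample every agent's action from the current policies so the critic
    # is differentiated through the whole joint action, not one agent at a time.
    joint_action = torch.cat([a(obs[:, i]) for i, a in enumerate(actors)], dim=-1)
    return -critic(state, joint_action).mean()
```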

Rethinking the Implementation Matters in Cooperative Multi-Agent Reinforcement Learning

hijkzzz/pymarl2 6 Feb 2021

Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs with its successful application to multi-agent cooperative tasks such as computer games and robot swarms.

Efficient Hyperparameter Optimization of Deep Learning Algorithms Using Deterministic RBF Surrogates

jekyllstein/HORDOpt.jl 28 Jul 2016

Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values.
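
In contrast, a deterministic RBF surrogate can be fit directly to the observed (configuration, validation error) pairs. The bare-bones sketch below uses SciPy's RBFInterpolator and simply takes the surrogate minimum over random candidates; the actual HORD candidate selection also weighs distance to already-evaluated points, and the configurations here are made up:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Observed (hyperparameter, validation-error) pairs, e.g. (log10 lr, log2 batch size).
X_obs = np.array([[-4.0, 5.0], [-3.0, 6.0], [-2.0, 7.0], [-1.0, 5.5], [-2.5, 6.5]])
y_obs = np.array([0.42, 0.31, 0.28, 0.39, 0.27])

# Deterministic RBF surrogate of the validation-error surface.
surrogate = RBFInterpolator(X_obs, y_obs, kernel="thin_plate_spline")

# Score random candidate configurations with the cheap surrogate and
# pick the most promising one for the next (expensive) training run.
rng = np.random.default_rng(1)
candidates = np.column_stack([rng.uniform(-4.5, -0.5, 200), rng.uniform(4.5, 7.5, 200)])
next_config = candidates[np.argmin(surrogate(candidates))]
print("next configuration to train:", next_config)
```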

Efficient Evolutionary Methods for Game Agent Optimisation: Model-Based is Best

SimonLucas/ntbea 3 Jan 2019

This paper introduces a simple and fast variant of Planet Wars as a test-bed for statistical planning based Game AI agents, and for noisy hyper-parameter optimisation.

On the Performance of Differential Evolution for Hyperparameter Tuning

MLStruckmann/mutation-misery 15 Apr 2019

This empirical study involves a range of different machine learning algorithms and datasets with various characteristics to compare the performance of Differential Evolution with Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach.
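
A small self-contained example of Differential Evolution applied to hyperparameter tuning, using SciPy's optimizer to search SVM hyperparameters in log space on a toy dataset (the dataset, model, and budget are illustrative, not those of the study):

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def objective(params):
    # Hyperparameters are searched in log10 space for C and gamma.
    log_c, log_gamma = params
    model = SVC(C=10 ** log_c, gamma=10 ** log_gamma)
    # Minimise the cross-validated error rate.
    return 1.0 - cross_val_score(model, X, y, cv=3).mean()

result = differential_evolution(
    objective,
    bounds=[(-2, 3), (-5, 1)],   # log10(C), log10(gamma)
    maxiter=10, popsize=8, seed=0, tol=1e-3,
)
print("best log10(C), log10(gamma):", result.x, "error:", result.fun)
```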

SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning

chaovven/SMIX 11 Nov 2019

Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multi-agent reinforcement learning (MARL), as it has to deal with the issue that the joint action space increases exponentially with the number of agents in such scenarios.
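
SMIX(λ) builds its centralized value-function target from λ-returns. A generic backward-recursive λ-return computation (not the paper's exact target) looks like this:

```python
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.8):
    """Backward recursion: G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})."""
    T = len(rewards)
    returns = np.zeros(T)
    next_return = values[-1]            # bootstrap from the value of the final state
    for t in reversed(range(T)):
        next_value = values[t + 1]
        next_return = rewards[t] + gamma * ((1 - lam) * next_value + lam * next_return)
        returns[t] = next_return
    return returns

# values has length T + 1 (V(s_0), ..., V(s_T)); rewards has length T.
print(lambda_returns(rewards=np.ones(5), values=np.zeros(6)))
```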