Board Games
42 papers with code • 0 benchmarks • 2 datasets
Most implemented papers
Solving Royal Game of Ur Using Reinforcement Learning
Reinforcement learning has recently emerged as a powerful tool for solving complex problems in the domain of board games, where an agent is typically required to learn complex strategies and moves from its own experience and the rewards it receives.
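As a toy illustration of that experience-driven learning loop, here is a short Python sketch (not taken from the paper) that learns state values for a simplified dice-race game with TD(0) updates; the track length, dice, and reward scheme are placeholder assumptions.

import random

def td_value_learning(track_length=10, episodes=5000, alpha=0.1):
    """Learn state values for a toy dice-race game by TD(0) from self-generated episodes.
    A stand-in for the kind of experience-driven learning described above, not the paper's method."""
    value = [0.0] * (track_length + 1)   # value[i] ~ expected future reward from square i
    for _ in range(episodes):
        pos = 0
        while pos < track_length:
            roll = random.randint(1, 4)                      # four-sided dice roll
            nxt = min(pos + roll, track_length)
            reward = 1.0 if nxt == track_length else 0.0     # reward only for reaching the end
            target = reward + (0.0 if nxt == track_length else value[nxt])
            value[pos] += alpha * (target - value[pos])      # TD(0) update from the experienced transition
            pos = nxt
    return value

print(td_value_learning()[:5])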
Move Evaluation in Go Using Deep Convolutional Neural Networks
The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function.
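To make the idea of a learned move-evaluation function concrete, the following is a minimal, hypothetical PyTorch sketch that maps a stack of board feature planes to a probability over the 361 points of a 19x19 board; the layer sizes and feature planes are illustrative and not the network described in the paper.

import torch
import torch.nn as nn

class TinyGoPolicy(nn.Module):
    """Illustrative convolutional move-evaluation network: feature planes in, move probabilities out."""
    def __init__(self, planes=8, board=19):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),            # one score per board point
        )

    def forward(self, x):
        scores = self.net(x).flatten(1)                 # (batch, board*board)
        return torch.softmax(scores, dim=1)             # probability of playing each point

policy = TinyGoPolicy()
probs = policy(torch.zeros(1, 8, 19, 19))               # dummy position with 8 feature planes
print(probs.shape)                                      # torch.Size([1, 361])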
Beating the World's Best at Super Smash Bros. with Deep Reinforcement Learning
There has been a recent explosion in the capabilities of game-playing artificial intelligence.
The Text-Based Adventure AI Competition
In 2016, 2017, and 2018 at the IEEE Conference on Computational Intelligence in Games, the authors of this paper ran a competition for agents that can play classic text-based adventure games.
Assessing the Potential of Classical Q-learning in General Game Playing
For small games, simple classical table-based Q-learning might still be the algorithm of choice.
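For reference, here is a minimal sketch of the classical table-based Q-learning update being referred to, assuming a hypothetical step_fn that plays one move of some game and reports the result; this is the textbook rule, not the paper's GGP-specific setup.

import random
from collections import defaultdict

Q = defaultdict(float)   # table of state-action values

def q_learning_step(Q, state, actions, step_fn, alpha=0.1, gamma=0.99, eps=0.1):
    """One epsilon-greedy tabular Q-learning step. step_fn(state, action) is a stand-in
    for the game engine and must return (next_state, reward, done, next_actions)."""
    if random.random() < eps:
        action = random.choice(actions)                         # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])      # exploit current table
    next_state, reward, done, next_actions = step_fn(state, action)
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in next_actions)
    # Classical Q-learning target: r + gamma * max_a' Q(s', a')
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return next_state, done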
Biasing MCTS with Features for General Games
This paper proposes using a linear function approximator, rather than a deep neural network (DNN), to bias a Monte Carlo tree search (MCTS) player for general games.
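A rough sketch of how a linear function over binary move features might be folded into the MCTS selection rule is shown below; the child statistics, feature encoding, and the exact form of the bias term are assumptions made here for illustration and differ from the formula used in the paper.

import math

def biased_selection_score(child, parent_visits, weights, features, c=1.4, bias_weight=1.0):
    """UCT score plus a bias from a linear function approximator over binary move features.
    child is a dict of search statistics; weights maps feature index -> learned weight;
    features is the set of feature indices active for this move."""
    exploitation = child["value_sum"] / child["visits"] if child["visits"] else 0.0
    exploration = c * math.sqrt(math.log(parent_visits + 1) / (child["visits"] + 1))
    linear = sum(weights.get(f, 0.0) for f in features)          # linear feature evaluation
    squashed = 1.0 / (1.0 + math.exp(-linear))                   # squash to (0, 1)
    bias = bias_weight * squashed / (child["visits"] + 1)        # influence decays with visits
    return exploitation + exploration + bias

print(biased_selection_score({"visits": 3, "value_sum": 2.0}, 50, {7: 0.8, 12: -0.3}, {7, 12}))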
State Representation and Polyomino Placement for the Game Patchwork
Modern board games are a rich source of entertainment for many people, but they also contain interesting and challenging structures for game-playing research and for implementing game-playing agents.
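As an illustration of one such structure, here is a small, hypothetical Python helper for a grid-based state representation that checks whether a polyomino patch fits at a given position; it is only a sketch and not the constraint model developed in the paper.

def can_place(board, piece, row, col):
    """Check whether piece (a set of (dr, dc) cell offsets) fits on board
    (a list of lists, True = occupied) when its origin is placed at (row, col)."""
    rows, cols = len(board), len(board[0])
    for dr, dc in piece:
        r, c = row + dr, col + dc
        if not (0 <= r < rows and 0 <= c < cols) or board[r][c]:
            return False
    return True

# A 9x9 Patchwork-style quilt board and an L-shaped patch.
board = [[False] * 9 for _ in range(9)]
l_piece = {(0, 0), (1, 0), (2, 0), (2, 1)}
print(can_place(board, l_piece, 0, 0))    # True on an empty board
print(can_place(board, l_piece, 8, 8))    # False: the piece would run off the board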
Nmbr9 as a Constraint Programming Challenge
Modern board games are a rich source of new and interesting combinatorial challenges.
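To hint at what a constraint formulation can look like, below is a tiny sketch using OR-Tools CP-SAT (a solver chosen here for illustration, not necessarily the one used in the paper) that selects non-overlapping tile placements of maximum score; the candidate placements and scores are made up.

from ortools.sat.python import cp_model

# Hypothetical candidate placements: each covers some cells and scores some points.
placements = [
    {"cells": {(0, 0), (0, 1)}, "score": 2},
    {"cells": {(0, 1), (1, 1)}, "score": 3},
    {"cells": {(1, 0), (1, 1)}, "score": 1},
]

model = cp_model.CpModel()
use = [model.NewBoolVar(f"use_{i}") for i in range(len(placements))]

# No two chosen placements may overlap on any cell.
cells = {c for p in placements for c in p["cells"]}
for cell in cells:
    model.Add(sum(use[i] for i, p in enumerate(placements) if cell in p["cells"]) <= 1)

model.Maximize(sum(p["score"] * use[i] for i, p in enumerate(placements)))

solver = cp_model.CpSolver()
solver.Solve(model)
print([i for i in range(len(placements)) if solver.Value(use[i])])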
Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency
We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.
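The following is a generic perturbation-saliency sketch in the spirit of that approach: perturb each square of the state and record how much the agent's value for its chosen move drops. The q_fn and perturb callables are placeholders, and the scoring is a simplification rather than the paper's focused saliency measure.

def perturbation_saliency(state, chosen_move, q_fn, squares, perturb):
    """Generic perturbation saliency: for each square, perturb the state there and record
    how much Q(state, chosen_move) decreases. q_fn, perturb, and the state encoding are
    placeholders for whatever agent and game representation are in use."""
    base = q_fn(state, chosen_move)
    saliency = {}
    for sq in squares:
        perturbed = perturb(state, sq)                 # e.g. remove or blur the piece on this square
        saliency[sq] = max(0.0, base - q_fn(perturbed, chosen_move))
    return saliency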
Manipulating the Distributions of Experience used for Self-Play Learning in Expert Iteration
Expert Iteration (ExIt) involves training a policy to mimic the search behaviour of a tree search algorithm, such as Monte-Carlo tree search, and then using the trained policy to guide that search.
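Below is a minimal sketch of the imitation step at the heart of ExIt, assuming the policy's move probabilities and the search's visit counts for a state are already available: the visit counts are normalised into a target distribution and the policy is trained to match it via cross-entropy. The numbers are made up for illustration.

import numpy as np

def exit_policy_loss(policy_probs, visit_counts):
    """Cross-entropy between the policy's move probabilities and the normalised
    MCTS visit counts for the same state: the policy learns to mimic the search."""
    target = np.asarray(visit_counts, dtype=float)
    target /= target.sum()
    return -float(np.sum(target * np.log(np.asarray(policy_probs) + 1e-12)))

# One hypothetical training example: the search visited move 2 most often,
# so minimising this loss pushes the policy towards that move.
print(exit_policy_loss([0.25, 0.25, 0.25, 0.25], [10, 5, 80, 5]))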