Heuristic Search Algorithms

Monte-Carlo Tree Search

Monte-Carlo Tree Search (MCTS) is a planning algorithm that accumulates value estimates obtained from Monte Carlo simulations in order to successively direct simulations towards more highly-rewarded trajectories. MCTS is executed after encountering each new state to select the agent's action for that state; it is executed again to select the action for the next state. Each execution is an iterative process that simulates many trajectories starting from the current state and running to a terminal state. The core idea is to successively focus multiple simulations starting at the current state by extending the initial portions of trajectories that have received high evaluations from earlier simulations.

Source: Sutton and Barto, Reinforcement Learning (2nd Edition)
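The iterative process described above can be sketched with the four standard MCTS phases (selection via UCT, expansion, rollout, backup). The environment below is a toy stand-in invented for illustration: a depth-3 binary tree where only the all-"R" trajectory is rewarded; the node structure, exploration constant, and iteration count are all assumptions, not part of any particular published implementation.

```python
import math
import random

DEPTH = 3                 # toy problem: trajectories are 3 moves long
ACTIONS = ["L", "R"]      # only the "RRR" trajectory earns reward 1

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # moves taken so far, e.g. "LR"
        self.parent = parent
        self.children = {}        # action -> Node
        self.visits = 0
        self.value = 0.0          # sum of rollout returns

def is_terminal(state):
    return len(state) == DEPTH

def reward(state):
    return 1.0 if state == "R" * DEPTH else 0.0

def uct_select(node, c=1.4):
    # pick the child maximizing mean value + exploration bonus
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def expand(node):
    # attach one untried action as a new leaf
    for a in ACTIONS:
        if a not in node.children:
            node.children[a] = Node(node.state + a, node)
            return node.children[a]
    return node

def rollout(state):
    # random playout from the leaf to a terminal state
    while not is_terminal(state):
        state += random.choice(ACTIONS)
    return reward(state)

def backup(node, ret):
    # propagate the return up the initial portion of the trajectory
    while node is not None:
        node.visits += 1
        node.value += ret
        node = node.parent

def mcts(root, iterations=500):
    for _ in range(iterations):
        node = root
        # 1. selection: descend while fully expanded and non-terminal
        while not is_terminal(node.state) and len(node.children) == len(ACTIONS):
            node = uct_select(node)
        # 2. expansion
        if not is_terminal(node.state):
            node = expand(node)
        # 3. simulation, 4. backup
        backup(node, rollout(node.state))
    # act greedily with respect to visit counts at the root
    return max(root.children, key=lambda a: root.children[a].visits)

random.seed(0)
best_action = mcts(Node(""), 500)
print(best_action)
```

Because earlier simulations that pass through "R" occasionally reach the rewarded trajectory, the UCT rule concentrates later simulations on that subtree, and the root's most-visited action converges to "R" — exactly the "focus on highly-evaluated initial portions" behaviour the paragraph describes.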

Image Credit: Chaslot et al.

Tasks


Task Papers Share
Reinforcement Learning (RL) 43 23.63%
Model-based Reinforcement Learning 19 10.44%
Decision Making 17 9.34%
Board Games 13 7.14%
Atari Games 9 4.95%
Continuous Control 6 3.30%
Game of Go 5 2.75%
Thompson Sampling 3 1.65%
Offline RL 3 1.65%
