Control with Parameterised Actions

2 papers with code • 3 benchmarks • 0 datasets

Most reinforcement learning research papers focus on environments where the agent's actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter actions with both discrete and continuous components: a set of high-level discrete actions (e.g. move, jump, fire), each associated with continuous parameters (e.g. target coordinates for the move action, direction for the jump action, aiming angle for the fire action). Tasks of this kind fall under Control with Parameterised Actions.
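The structure described above can be sketched as a small data structure: each discrete action owns its own set of bounded continuous parameters. The action names and parameter ranges below are illustrative, not taken from any specific environment.

```python
import random

# Hypothetical parameterised action space: each discrete action maps to its
# continuous parameters, given as (low, high) bounds. Names and ranges are
# made up for illustration.
ACTION_SPACE = {
    "move": {"target_x": (0.0, 100.0), "target_y": (0.0, 100.0)},
    "jump": {"direction": (-180.0, 180.0)},
    "fire": {"aim_angle": (-90.0, 90.0)},
}

def sample_action(space, rng=random):
    """Sample a hybrid action: a discrete choice plus its own parameters."""
    discrete = rng.choice(list(space))
    params = {name: rng.uniform(lo, hi)
              for name, (lo, hi) in space[discrete].items()}
    return discrete, params

action, params = sample_action(ACTION_SPACE)
```

A policy for such a space must therefore output both a distribution over the discrete actions and, for the chosen action, values for its continuous parameters.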

Most implemented papers

Multi-Pass Q-Networks for Deep Reinforcement Learning with Parameterised Action Spaces

cycraig/MP-DQN 10 May 2019

Parameterised actions in reinforcement learning are composed of discrete actions with continuous action-parameters.
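The multi-pass idea from this paper can be sketched in a few lines: the Q-network is evaluated once per discrete action, with the parameters of all other actions zeroed out, so that each Q-value depends only on its own action-parameters. The `q_net` below is a stand-in for a trained network, and the shapes are illustrative.

```python
import numpy as np

def multipass_q_values(q_net, state, action_params):
    """Evaluate Q(s, k, x_k) with one forward pass per discrete action k.

    On pass k, the parameters of every action other than k are masked to
    zero, and the k-th output head is read off. `q_net` is assumed to map a
    concatenated (state, all-parameters) vector to one Q-value per action.
    """
    n_actions = len(action_params)
    q_values = np.empty(n_actions)
    for k, x_k in enumerate(action_params):
        masked = [np.zeros_like(x) for x in action_params]
        masked[k] = x_k  # only action k's parameters are visible on pass k
        joint = np.concatenate([state] + masked)
        q_values[k] = q_net(joint)[k]
    return q_values
```

This costs one forward pass per discrete action, but avoids the "false gradients" that arise when a single pass conditions every Q-value on all action-parameters at once.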

Discrete and Continuous Action Representation for Practical RL in Video Games

nisheeth-golakiya/hybrid-sac 23 Dec 2019

While most current research in Reinforcement Learning (RL) focuses on improving the performance of the algorithms in controlled environments, the use of RL under constraints like those met in the video game industry is rarely studied.