Continuous Control
407 papers with code • 73 benchmarks • 9 datasets
Continuous control, in the context of games and simulations within artificial intelligence (AI) and machine learning (ML), refers to the ability to make a series of smooth, ongoing adjustments or actions to control an agent or system. This contrasts with discrete control, where actions are limited to a set of specific, distinct choices. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, controlling a character in a simulation, or managing the flight of an aircraft in a flight simulator.
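To make the distinction concrete, here is a minimal sketch of the two action-space styles. All names (`DISCRETE_ACTIONS`, `BOUNDS`, the sampling helpers) are hypothetical illustrations, not taken from any particular library; real environments (e.g., Gym-style APIs) expose analogous discrete and bounded-box action spaces.

```python
import random

# Discrete control: the agent picks one of a finite set of actions.
DISCRETE_ACTIONS = ["left", "right", "accelerate", "brake"]

def sample_discrete():
    """Return one of the finitely many allowed actions."""
    return random.choice(DISCRETE_ACTIONS)

# Continuous control: each action dimension takes any real value
# within bounds, so both direction and magnitude matter.
BOUNDS = {"steering": (-1.0, 1.0), "throttle": (0.0, 1.0)}

def clip(value, low, high):
    """Keep a proposed action inside its legal bounds."""
    return max(low, min(high, value))

def sample_continuous():
    """Return a real-valued action vector, one value per dimension."""
    return {name: random.uniform(low, high)
            for name, (low, high) in BOUNDS.items()}

action = sample_continuous()          # e.g. {"steering": 0.31, "throttle": 0.72}
safe = clip(action["steering"] * 2.0, *BOUNDS["steering"])
```

A discrete policy only has to rank a handful of choices, whereas a continuous policy must output (and often bound or clip) real-valued magnitudes, which is why algorithms for the two settings differ.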
Latest papers with no code
Demystifying Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making
With the advent of universal function approximators in the domain of reinforcement learning, the number of practical applications leveraging deep reinforcement learning (DRL) has exploded.
Quality-Diversity Actor-Critic: Learning High-Performing and Diverse Behaviors via Value and Successor Features Critics
A key aspect of intelligence is the ability to demonstrate a broad spectrum of behaviors for adapting to unexpected situations.
Online Policy Learning from Offline Preferences
To address this problem, the present study introduces a framework for preference-based reinforcement learning (PbRL) that consolidates offline preferences with "virtual preferences", i.e., comparisons between the agent's behaviors and the offline data.
Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning
In deep reinforcement learning, estimating the value function to evaluate the quality of states and actions is essential.
Sample-Optimal Zero-Violation Safety For Continuous Control
In this paper, we study the problem of ensuring safety with a few shots of samples for partially unknown systems.
Noisy Spiking Actor Network for Exploration
As a general method for exploration in deep reinforcement learning (RL), NoisyNet can produce problem-specific exploration strategies.
SplAgger: Split Aggregation for Meta-Reinforcement Learning
However, it remains unclear whether task inference sequence models are beneficial even when task inference objectives are not.
Iterated Q-Network: Beyond the One-Step Bellman Operator
Value-based Reinforcement Learning (RL) methods rely on the application of the Bellman operator, which needs to be approximated from samples.
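For readers unfamiliar with the operator this paper generalizes, here is a toy, tabular sketch of one sampled application of the optimal Bellman operator, (T Q)(s, a) = r + γ max_a' Q(s', a'). The tiny two-state MDP, the discount value, and all names are illustrative assumptions, not from the paper.

```python
GAMMA = 0.9  # discount factor (illustrative value)

# Tabular Q-function over a hypothetical MDP with 2 states and 2 actions,
# initialized to zero.
Q = {(s, a): 0.0 for s in range(2) for a in range(2)}

def bellman_backup(Q, s, a, reward, s_next):
    """One sample-based application of the optimal Bellman operator:
    (T Q)(s, a) = reward + gamma * max over a' of Q(s_next, a')."""
    return reward + GAMMA * max(Q[(s_next, a2)] for a2 in range(2))

# Sampled transition: from state 0, action 1 yields reward 1.0, next state 1.
Q[(0, 1)] = bellman_backup(Q, 0, 1, 1.0, 1)
```

Value-based methods such as Q-learning repeat this backup from samples until Q stops changing; the paper's "iterated" variant goes beyond applying a single such step per update.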
EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data
We have expanded the performance of EfficientZero to multiple domains, encompassing both continuous and discrete actions, as well as visual and low-dimensional inputs.
A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations
This paper investigates how to incorporate expert observations (without explicit information on expert actions) into a deep reinforcement learning setting to improve sample efficiency.