Continuous Control

407 papers with code • 73 benchmarks • 9 datasets

Continuous control, in the context of games and simulations studied in artificial intelligence (AI) and machine learning (ML), refers to the ability to make smooth, ongoing adjustments to the actions that control an agent. This contrasts with discrete control, where actions are limited to a finite set of distinct choices. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, controlling a character in a physics simulation, or managing the flight of an aircraft in a flight simulator.
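
The distinction is easy to see in code. Below is a minimal sketch using the Gymnasium API (the action meanings and bounds are illustrative): a discrete action space offers a finite set of choices, while a continuous one accepts real-valued vectors whose magnitudes matter.

```python
from gymnasium import spaces  # pip install gymnasium

# Discrete control: the agent picks one of a fixed set of actions.
discrete_actions = spaces.Discrete(3)  # e.g. {steer left, straight, steer right}

# Continuous control: the agent outputs real-valued actions within bounds,
# so both the direction and the magnitude of each action matter.
continuous_actions = spaces.Box(low=-1.0, high=1.0, shape=(2,))  # e.g. (steering, throttle)

print(discrete_actions.sample())    # an integer in {0, 1, 2}
print(continuous_actions.sample())  # e.g. array([ 0.13, -0.87], dtype=float32)
```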

Latest papers with no code

Demystifying Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making

no code yet • 18 Mar 2024

With the advent of universal function approximators in the domain of reinforcement learning, the number of practical applications leveraging deep reinforcement learning (DRL) has exploded.

Quality-Diversity Actor-Critic: Learning High-Performing and Diverse Behaviors via Value and Successor Features Critics

no code yet • 15 Mar 2024

A key aspect of intelligence is the ability to demonstrate a broad spectrum of behaviors for adapting to unexpected situations.

Online Policy Learning from Offline Preferences

no code yet • 15 Mar 2024

To address this problem, the present study introduces a framework that consolidates offline preferences and virtual preferences for preference-based reinforcement learning (PbRL), where virtual preferences are comparisons between the agent's behaviors and the offline data.
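
As background, standard PbRL learns a reward model from pairwise comparisons of trajectory segments. The sketch below shows only that generic Bradley-Terry step, not the paper's implementation; under this framing, a virtual preference would simply pair an agent rollout with an offline segment.

```python
import torch
import torch.nn as nn

obs_act_dim = 10  # illustrative: concatenated observation + action size

# Reward model scoring individual (observation, action) pairs.
reward_model = nn.Sequential(nn.Linear(obs_act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(segment_a, segment_b, prefer_a):
    """Bradley-Terry loss over two trajectory segments of shape (T, obs_act_dim).

    For a 'virtual preference', segment_a could be an offline segment and
    segment_b the agent's own behavior (or vice versa).
    """
    r_a = reward_model(segment_a).sum()  # total predicted reward of segment a
    r_b = reward_model(segment_b).sum()
    target = torch.tensor(1.0 if prefer_a else 0.0)
    # P(a preferred over b) = sigmoid(r_a - r_b)
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, target)
```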

Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning

no code yet • 12 Mar 2024

In deep reinforcement learning, estimating the value function to evaluate the quality of states and actions is essential.
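
For reference, the Bellman error whose skewed distribution the title refers to is the standard one-step temporal-difference error,

$$ \delta_t = r_t + \gamma \max_{a'} Q_\theta(s_{t+1}, a') - Q_\theta(s_t, a_t), $$

and the max over noisy value estimates tends to pull this distribution away from the Gaussian shape that least-squares value regression implicitly assumes.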

Sample-Optimal Zero-Violation Safety For Continuous Control

no code yet • 9 Mar 2024

In this paper, we study the problem of ensuring safety from only a few samples for partially unknown systems.

Noisy Spiking Actor Network for Exploration

no code yet • 7 Mar 2024

As a general method for exploration in deep reinforcement learning (RL), NoisyNet can produce problem-specific exploration strategies.
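
For background, NoisyNet replaces standard linear layers in the value or policy network with layers whose weights are perturbed by learnable Gaussian noise, so the degree of exploration is learned rather than hand-tuned. A minimal PyTorch sketch of such a layer, using factorized noise after Fortunato et al. (2018):

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with learnable, factorized Gaussian weight noise."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        bound = 1 / math.sqrt(in_features)
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    @staticmethod
    def _f(x):
        # Factorized-noise transform: f(x) = sign(x) * sqrt(|x|).
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        # Fresh noise is sampled on every forward pass.
        eps_in = self._f(torch.randn(self.in_features))
        eps_out = self._f(torch.randn(self.out_features))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return nn.functional.linear(x, w, b)
```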

SplAgger: Split Aggregation for Meta-Reinforcement Learning

no code yet • 5 Mar 2024

However, it remains unclear whether task inference sequence models are beneficial even when task inference objectives are not.

Iterated Q-Network: Beyond the One-Step Bellman Operator

no code yet • 4 Mar 2024

Value-based Reinforcement Learning (RL) methods rely on the application of the Bellman operator, which needs to be approximated from samples.
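
For context, the (optimal) Bellman operator in question maps a value function $Q$ to

$$ (\mathcal{T} Q)(s, a) = \mathbb{E}_{s' \sim P(\cdot \mid s, a)} \big[ r(s, a) + \gamma \max_{a'} Q(s', a') \big], $$

whose fixed point is the optimal value function $Q^*$. The title suggests learning with iterated compositions $\mathcal{T}^K Q$ rather than a single application, though the specifics are the paper's contribution.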

EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data

no code yet • 1 Mar 2024

We extend the strong performance of EfficientZero to multiple domains, encompassing both continuous and discrete actions, as well as visual and low-dimensional inputs.

A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations

no code yet • 29 Feb 2024

This paper investigates how to incorporate expert observations (without explicit information on expert actions) into a deep reinforcement learning setting to improve sample efficiency.
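
One generic way to exploit action-free expert data, shown below purely as an illustration (in the spirit of observation-only adversarial imitation such as GAIfO, and not necessarily this paper's method), is to train a discriminator between expert and agent observations and use its output as a shaped reward:

```python
import torch
import torch.nn as nn

obs_dim = 8  # illustrative observation size

# Discriminator that tries to tell expert observations from agent observations.
disc = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def discriminator_loss(expert_obs, agent_obs):
    # Expert observations are labeled 1, agent observations 0.
    logits = torch.cat([disc(expert_obs), disc(agent_obs)])
    labels = torch.cat([torch.ones(len(expert_obs), 1), torch.zeros(len(agent_obs), 1)])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)

def shaped_reward(obs):
    # Higher reward when the agent's states look like the expert's.
    with torch.no_grad():
        return nn.functional.logsigmoid(disc(obs)).squeeze(-1)
```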