Continuous Control

422 papers with code • 73 benchmarks • 10 datasets

Continuous control, in the context of games and simulations within artificial intelligence (AI) and machine learning (ML), refers to the ability to make a series of smooth, ongoing adjustments to steer a game or simulation. This contrasts with discrete control, where actions are limited to a fixed set of distinct choices. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, controlling a character in a simulation, or managing the flight of an aircraft in a flight simulator.
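To make the distinction concrete, the sketch below contrasts a discrete action space with a continuous one using the Gymnasium API; the environment IDs (CartPole-v1, Pendulum-v1) are standard examples chosen only for illustration.

```python
import gymnasium as gym

# Discrete control: the agent picks one action from a finite set.
discrete_env = gym.make("CartPole-v1")
print(discrete_env.action_space)    # Discrete(2): push the cart left or right

# Continuous control: the agent emits real-valued action vectors,
# so the magnitude of each action matters at every step.
continuous_env = gym.make("Pendulum-v1")
print(continuous_env.action_space)  # Box(-2.0, 2.0, (1,), float32): applied torque

obs, info = continuous_env.reset(seed=0)
action = continuous_env.action_space.sample()  # e.g. array([1.27], dtype=float32)
obs, reward, terminated, truncated, info = continuous_env.step(action)
```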

Latest papers with no code

Iterated $Q$-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning

no code yet • 4 Mar 2024

It has been observed that this one-step update scheme can potentially be generalized to carry out multiple iterations of the Bellman operator at once, benefiting the underlying learning algorithm.
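As background for the idea of multi-step operator application (not the paper's i-QN method itself), the sketch below shows what applying the Bellman optimality operator several times in a row looks like in a small tabular MDP; the toy transition and reward tensors are made up for illustration.

```python
import numpy as np

def bellman_operator(Q, P, R, gamma):
    """One application of (T Q)(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) * max_a' Q(s', a')."""
    # P: (S, A, S) transition probabilities, R: (S, A) rewards, Q: (S, A) action values.
    return R + gamma * P @ Q.max(axis=1)

def iterated_bellman(Q, P, R, gamma, k):
    """Apply the Bellman operator k times in a row, i.e. compute T^k Q."""
    for _ in range(k):
        Q = bellman_operator(Q, P, R, gamma)
    return Q

# Toy MDP with 3 states and 2 actions.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))   # shape (3, 2, 3), rows sum to 1
R = rng.uniform(size=(3, 2))
print(iterated_bellman(np.zeros((3, 2)), P, R, gamma=0.99, k=3))
```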

EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data

no code yet • 1 Mar 2024

We extend EfficientZero to multiple domains, encompassing both continuous and discrete actions, as well as visual and low-dimensional inputs.

A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations

no code yet • 29 Feb 2024

This paper investigates how to incorporate expert observations (without explicit information on expert actions) into a deep reinforcement learning setting to improve sample efficiency.

DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning

no code yet • 25 Feb 2024

We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates.

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization

no code yet • 22 Feb 2024

The varying significance of distinct primitive behaviors during the policy learning process has been overlooked by prior model-free RL algorithms.

Exploiting Estimation Bias in Deep Double Q-Learning for Actor-Critic Methods

no code yet • 14 Feb 2024

This paper introduces innovative methods in Reinforcement Learning (RL), focusing on addressing and exploiting estimation biases in Actor-Critic methods for continuous control tasks, using Deep Double Q-Learning.
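For context on the bias in question: a common remedy in continuous-control actor-critic methods (popularized by TD3) is to keep two critics and use the minimum of their target estimates, which counteracts overestimation at the cost of some underestimation. The sketch below shows that standard clipped double-Q target in PyTorch; it illustrates the baseline technique, not the bias-exploiting method this paper proposes, and the network arguments are placeholders.

```python
import torch

def clipped_double_q_target(q1_target, q2_target, actor_target, next_obs, reward, done,
                            gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """TD3-style target: r + gamma * (1 - done) * min(Q1', Q2')(s', pi'(s') + clipped noise)."""
    with torch.no_grad():
        next_action = actor_target(next_obs)
        # Target policy smoothing: add clipped Gaussian noise to the target action.
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-1.0, 1.0)  # assumes actions scaled to [-1, 1]
        # Clipped double Q: take the smaller of the two critics' estimates.
        target_q = torch.min(q1_target(next_obs, next_action),
                             q2_target(next_obs, next_action))
        return reward + gamma * (1.0 - done) * target_q
```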

Offline Actor-Critic Reinforcement Learning Scales to Large Models

no code yet • 8 Feb 2024

We show that offline actor-critic reinforcement learning can scale to large models, such as transformers, and follows scaling laws similar to those of supervised learning.

Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions

no code yet • 8 Feb 2024

Recent advancements in federated learning (FL) have produced models that retain user privacy by training across multiple decentralized devices or systems holding local data samples.

Learning Diverse Policies with Soft Self-Generated Guidance

no code yet • 7 Feb 2024

However, existing methods often require these guiding experiences to be successful and may overly exploit them, which can cause the agent to adopt suboptimal behaviors.

Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence

no code yet • 5 Feb 2024

Recently, there have been many efforts to learn useful policies for continuous control in visual reinforcement learning (RL).