Continuous Control

417 papers with code • 73 benchmarks • 9 datasets

Continuous control, in the context of playing games and especially within artificial intelligence (AI) and machine learning (ML), refers to the ability to make smooth, ongoing adjustments to the actions that control a game or simulation. This contrasts with discrete control, where actions are limited to a set of specific, distinct choices. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, controlling a character in a simulation, or managing the flight of an aircraft in a flight simulator.
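To make the distinction concrete, here is a minimal sketch using the Gymnasium library (chosen here purely for illustration; any RL toolkit with typed action spaces would do). A continuous-control environment such as Pendulum-v1 exposes a real-valued Box action space, while a discrete environment such as CartPole-v1 exposes a finite Discrete space:

```python
import gymnasium as gym

# Continuous control: actions are real-valued vectors (a Box space).
env = gym.make("Pendulum-v1")
print(env.action_space)           # Box(-2.0, 2.0, (1,), float32)

# Discrete control, for contrast: actions are a finite set of choices.
discrete_env = gym.make("CartPole-v1")
print(discrete_env.action_space)  # Discrete(2)

# A continuous-control policy outputs a magnitude, e.g. a torque to apply.
obs, info = env.reset(seed=0)
action = env.action_space.sample()  # e.g. array([0.37], dtype=float32)
obs, reward, terminated, truncated, info = env.step(action)
```

The key difference is that the agent must learn both *which* action to take and *how much* of it, rather than picking from an enumerable menu.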

Latest papers with no code

DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning

no code yet • 25 Feb 2024

We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates.

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization

no code yet • 22 Feb 2024

Prior model-free RL algorithms have overlooked the varying significance of distinct primitive behaviors during the policy learning process.

Exploiting Estimation Bias in Deep Double Q-Learning for Actor-Critic Methods

no code yet • 14 Feb 2024

This paper introduces innovative methods in Reinforcement Learning (RL), focusing on addressing and exploiting estimation biases in Actor-Critic methods for continuous control tasks, using Deep Double Q-Learning.
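The paper's code is not available; as generic background (not the authors' method), the clipped double-Q target used by TD3-style actor-critic algorithms, which this line of work on estimation bias builds upon, can be sketched as follows:

```python
import torch

def clipped_double_q_target(q1_next, q2_next, reward, done, gamma=0.99):
    """Generic clipped double-Q target (as in TD3): taking the minimum of
    two independent critics counteracts the overestimation bias that a
    single bootstrapped critic tends to accumulate."""
    min_q = torch.min(q1_next, q2_next)           # pessimistic value estimate
    return reward + gamma * (1.0 - done) * min_q  # bootstrap only if not done
```

Taking the minimum deliberately introduces *under*estimation bias; work in this area studies how to tune or exploit that bias rather than merely suppress it.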

Offline Actor-Critic Reinforcement Learning Scales to Large Models

no code yet • 8 Feb 2024

We show that offline actor-critic reinforcement learning can scale to large models, such as transformers, and follows scaling laws similar to those of supervised learning.

Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions

no code yet • 8 Feb 2024

Recent advancements in federated learning (FL) have produced models that retain user privacy by training across multiple decentralized devices or systems holding local data samples.

Learning Diverse Policies with Soft Self-Generated Guidance

no code yet • 7 Feb 2024

Existing methods often require these guiding experiences to be successful and may overly exploit them, which can cause the agent to adopt suboptimal behaviors.

Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence

no code yet • 5 Feb 2024

Recently, many efforts have attempted to learn useful policies for continuous control in visual reinforcement learning (RL).

Probabilistic Actor-Critic: Learning to Explore with PAC-Bayes Uncertainty

no code yet • 5 Feb 2024

We introduce Probabilistic Actor-Critic (PAC), a novel reinforcement learning algorithm with improved continuous control performance thanks to its ability to balance the exploration-exploitation trade-off.

Frugal Actor-Critic: Sample Efficient Off-Policy Deep Reinforcement Learning Using Unique Experiences

no code yet • 5 Feb 2024

Efficient utilization of the replay buffer plays a significant role in off-policy actor-critic reinforcement learning (RL) algorithms used to synthesize model-free control policies for complex dynamical systems.
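For context, a minimal uniform-sampling replay buffer is sketched below. This is an illustration of the standard baseline only; per its title, the paper's contribution concerns retaining unique experiences rather than sampling uniformly from everything stored:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform-sampling replay buffer for off-policy RL.
    (Illustrative baseline; not the paper's selective scheme.)"""

    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)  # oldest transitions evicted

    def add(self, obs, action, reward, next_obs, done):
        self.storage.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size):
        # Uniform sampling: every stored transition is equally likely,
        # regardless of how redundant or informative it is.
        return random.sample(self.storage, batch_size)
```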

A Strategy for Preparing Quantum Squeezed States Using Reinforcement Learning

no code yet • 29 Jan 2024

The strategy is exemplified by applying it to the preparation of spin-squeezed states for an open collective spin model, where a linear control field is designed to govern the dynamics.