Continuous Control

413 papers with code • 73 benchmarks • 9 datasets

Continuous control, in the context of games, artificial intelligence (AI), and machine learning (ML), refers to making smooth, ongoing adjustments to an agent's actions in a game or simulation. This contrasts with discrete control, where actions are limited to a fixed set of distinct choices. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, steering a character in a simulation, or managing the flight of an aircraft in a flight simulator.
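The distinction can be made concrete with a toy 1-D steering task. The sketch below is illustrative only (all names are invented, no RL library is assumed): a discrete policy can only pick from three fixed actions, while a continuous policy scales its action smoothly with the error, like a proportional controller.

```python
def discrete_policy(error):
    # Discrete control: only three distinct actions are available.
    if error > 0.1:
        return -1.0
    elif error < -0.1:
        return 1.0
    return 0.0

def continuous_policy(error, gain=0.8, max_action=1.0):
    # Continuous control: action magnitude varies smoothly with the error
    # (a simple proportional controller, clipped to the action bounds).
    action = -gain * error
    return max(-max_action, min(max_action, action))

def rollout(policy, position=1.0, steps=40, dt=0.1):
    # Integrate a trivial point mass toward the target position 0.
    for _ in range(steps):
        position += policy(position) * dt
    return position
```

Both policies drive the toy system toward the target, but the continuous policy can output any value in [-1, 1] (e.g. `continuous_policy(0.5)` returns `-0.4`), whereas the discrete policy must commit to one of its three fixed choices.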

World Models via Policy-Guided Trajectory Diffusion

marc-rigter/polygrad-world-models 13 Dec 2023

Our results demonstrate that PolyGRAD outperforms state-of-the-art baselines in terms of trajectory prediction error for short trajectories, with the exception of autoregressive diffusion.


Decoupling Meta-Reinforcement Learning with Gaussian Task Contexts and Skills

hehongc/DCMRL 11 Dec 2023

We propose a framework called decoupled meta-reinforcement learning (DCMRL), which (1) contrastively restricts the learning of task contexts through pulling in similar task contexts within the same task and pushing away different task contexts of different tasks, and (2) utilizes a Gaussian quantization variational autoencoder (GQ-VAE) for clustering the Gaussian distributions of the task contexts and skills respectively, and decoupling the exploration and learning processes of their spaces.
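The contrastive mechanism described here, pulling together context embeddings from the same task and pushing apart those from different tasks, is commonly implemented with an InfoNCE-style loss. Below is a minimal, generic NumPy sketch of that idea; it is not DCMRL's actual code, and the function name and temperature value are illustrative.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style contrastive loss: low when the anchor is most similar
    # to the positive (same task), high when a negative (different task)
    # is more similar.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # minimized when positive dominates
```

Minimizing this loss over context embeddings realizes the pull-in/push-away behavior: gradients increase similarity to the positive and decrease similarity to each negative.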


DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization

XuGW-Kevin/DrM 30 Oct 2023

To quantify this inactivity, we adopt the dormant ratio as a metric for the RL agent's network.
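As a rough illustration, the dormant ratio is commonly defined as the fraction of a layer's neurons whose mean absolute activation, normalized by the layer average, falls below a threshold. The NumPy sketch below follows that common definition; the threshold value and normalization details are assumptions, not necessarily DrM's exact implementation.

```python
import numpy as np

def dormant_ratio(activations, tau=0.1):
    # activations: (batch, neurons) array of post-activation values for a layer.
    score = np.abs(activations).mean(axis=0)   # per-neuron mean activity
    score = score / (score.mean() + 1e-8)      # normalize by the layer average
    return float((score <= tau).mean())        # fraction of dormant neurons
```

For example, a layer where one of two neurons never fires has a dormant ratio of 0.5, while a layer with uniformly active neurons has a ratio of 0.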


TD-MPC2: Scalable, Robust World Models for Continuous Control

nicklashansen/tdmpc2 25 Oct 2023

TD-MPC is a model-based reinforcement learning (RL) algorithm that performs local trajectory optimization in the latent space of a learned implicit (decoder-free) world model.
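The idea of local trajectory optimization in a learned latent space can be sketched with a simple random-shooting planner. Everything below is an invented stand-in (toy scalar dynamics, toy reward, plain random shooting), not TD-MPC2's learned models or its actual planner.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_dynamics(z, a):
    # Stand-in for a learned latent transition model.
    return 0.9 * z + 0.1 * a

def reward(z, a):
    # Stand-in for a learned reward head: stay near the origin, small actions.
    return -(z ** 2) - 0.01 * (a ** 2)

def plan(z0, horizon=10, n_samples=256):
    # Random-shooting trajectory optimization: sample action sequences,
    # roll each out in latent space, return the first action of the best one.
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    returns = np.zeros(n_samples)
    for i in range(n_samples):
        z = z0
        for t in range(horizon):
            returns[i] += reward(z, actions[i, t])
            z = latent_dynamics(z, actions[i, t])
    return float(actions[np.argmax(returns), 0])
```

Because rollouts happen entirely in the latent space, no decoder back to observations is needed, which is the "implicit (decoder-free)" aspect described above.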


Absolute Policy Optimization

intelligent-control-lab/absolute-policy-optimization 20 Oct 2023

In recent years, trust region on-policy reinforcement learning has achieved impressive results in addressing complex control tasks and gaming scenarios.


Reduced Policy Optimization for Continuous Control with Hard Constraints

wadx2019/rpo NeurIPS 2023

To the best of our knowledge, RPO is the first attempt to introduce the generalized reduced gradient (GRG) method to RL as a way of efficiently handling both equality and inequality hard constraints.

14 Oct 2023

Boosting Continuous Control with Consistency Policy

cccedric/cpql 10 Oct 2023

By establishing a mapping from the reverse diffusion trajectories to the desired policy, we simultaneously address the issues of time efficiency and inaccurate guidance when updating diffusion model-based policy with the learned Q-function.


Policy Optimization in a Noisy Neighborhood: On Return Landscapes in Continuous Control

nathanrahn/return-landscapes NeurIPS 2023

To conclude, we develop a distribution-aware procedure which finds such paths, navigating away from noisy neighborhoods in order to improve the robustness of a policy.

26 Sep 2023

Learning Shared Safety Constraints from Multi-task Demonstrations

konwook/mticl NeurIPS 2023

Regardless of the particular task we want them to perform in an environment, there are often shared safety constraints we want our agents to respect.

01 Sep 2023

Stabilizing Unsupervised Environment Design with a Learned Adversary

facebookresearch/dcd 21 Aug 2023

As a result, we make it possible for PAIRED to match or exceed state-of-the-art methods, producing robust agents in several established challenging procedurally-generated environments, including a partially-observed maze navigation task and a continuous-control car racing environment.
