About

Safe exploration is an approach to collecting ground-truth data by interacting safely with the environment.

Source: Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

AI Safety Gridworlds

27 Nov 2017 · deepmind/ai-safety-gridworlds

We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents.

SAFE EXPLORATION

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

27 Jun 2019 · befelix/safe-exploration

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

SAFE EXPLORATION

Learning-based Model Predictive Control for Safe Exploration

22 Mar 2018 · befelix/safe-exploration

However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.

SAFE EXPLORATION

Verifiably Safe Exploration for End-to-End Reinforcement Learning

2 Jul 2020 · IBM/vsrl-framework

We also prove that our method of enforcing the safety constraints preserves all safe policies from the original environment.

OBJECT DETECTION · SAFE EXPLORATION

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes

NeurIPS 2016 · befelix/SafeMDP

We define safety in terms of an a priori unknown safety constraint that depends on states and actions.

GAUSSIAN PROCESSES · SAFE EXPLORATION
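The idea of an a priori unknown safety constraint can be sketched as follows: fit a Gaussian process to the safety values observed so far, then certify a state as safe only if the GP's lower confidence bound stays above the safety threshold. This is a minimal illustration, not the paper's algorithm; the 1-D state space, the `true_safety` function, the threshold `h_min`, and the confidence width `beta` are all hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 1-D state space; the unknown safety value h(s) must stay >= h_min.
h_min = 0.0
beta = 2.0  # width of the confidence interval used for certification

def true_safety(s):
    # Unknown to the agent; used here only to generate noisy-free samples.
    return np.sin(3 * s) + 0.5

# A few observations gathered at states already known to be safe.
S_obs = np.array([[0.1], [0.2], [0.3]])
h_obs = true_safety(S_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(S_obs, h_obs)

# Certify candidate states: safe only if mean - beta * std >= h_min.
candidates = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
safe_states = candidates[(mean - beta * std) >= h_min].ravel()
```

States near the observed data end up certified safe, while distant states revert to the GP prior, where the wide confidence interval keeps them out of the safe set until more data is collected.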

Safe Exploration in Continuous Action Spaces

26 Jan 2018 · AgrawalAmey/safe-explorer

We address the problem of deploying a reinforcement learning (RL) agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated.

SAFE EXPLORATION
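One common way to guarantee that critical constraints are never violated is a safety layer that projects the policy's proposed action onto the safe set before execution. The sketch below assumes a linearized constraint of the form c_now + g(s)·a <= C, which admits a closed-form projection; the specific constraint values and the function name `project_action` are illustrative, not taken from the paper's code.

```python
import numpy as np

def project_action(a, c_now, g, C):
    """Closed-form projection of a proposed action onto the half-space
    where the linearized constraint c_now + g . a <= C holds."""
    violation = c_now + g @ a - C
    if violation <= 0:
        return a  # already safe: leave the policy's action unchanged
    # Subtract the minimal correction along g that restores feasibility.
    return a - (violation / (g @ g)) * g

# Example: the proposed action would push the constraint to 3.8 > C = 1.0.
a = np.array([1.0, 0.5])
g = np.array([2.0, 0.0])
safe_a = project_action(a, c_now=0.8, g=g, C=1.0)
```

After projection the corrected action lies exactly on the constraint boundary, so the executed behavior stays as close as possible to the policy's intent while never crossing the limit.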

SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure

27 May 2020 · ISorokos/SafeML

Ensuring the safety and explainability of machine learning (ML) is increasingly important as data-driven applications enter safety-critical domains, which traditionally demand high safety standards that testing alone cannot satisfy for otherwise inaccessible black-box systems.

DOMAIN ADAPTATION · IMAGE CLASSIFICATION · INTRUSION DETECTION · SAFE EXPLORATION

Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

29 Sep 2020 · ml-jku/align-rudder

Align-RUDDER outperforms competitors on complex artificial tasks with delayed reward and few demonstrations.

GENERAL REINFORCEMENT LEARNING · MINECRAFT · MULTIPLE SEQUENCE ALIGNMENT · SAFE EXPLORATION

Provably Safe PAC-MDP Exploration Using Analogies

7 Jul 2020 · locuslab/ase

A key challenge in applying reinforcement learning to safety-critical domains is understanding how to balance exploration (needed to attain good performance on the task) with safety (needed to avoid catastrophic failure).

SAFE EXPLORATION

Neurosymbolic Reinforcement Learning with Formally Verified Exploration

NeurIPS 2020 · gavlegoat/safe-learning

We present Revel, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces.

SAFE EXPLORATION
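Provably safe exploration frameworks of this kind typically pair a learned policy with a verified monitor, or shield, that vetoes any action whose predicted outcome leaves the safe region. The sketch below is a toy illustration of that shielding pattern, not Revel itself: the dynamics model, the safe interval [-1, 1], and the `backup` fallback action are all assumptions for the example.

```python
# Toy shield: a verified monitor vetoes any action whose predicted next
# state leaves the safe region [-1, 1] and substitutes a safe fallback.

def step_model(state, action):
    # Assumed-known, verified one-step dynamics model (hypothetical).
    return state + action

def shielded_action(state, proposed, backup=0.0):
    """Return the proposed action if its predicted outcome is safe,
    otherwise fall back to a certified-safe backup action."""
    next_state = step_model(state, proposed)
    if -1.0 <= next_state <= 1.0:
        return proposed
    return backup

# Near the boundary the shield overrides the policy; elsewhere it is inert.
overridden = shielded_action(0.9, 0.5)   # would land at 1.4, unsafe
passed = shielded_action(0.0, 0.5)       # lands at 0.5, safe
```

Because the monitor is verified against the dynamics model rather than learned, every executed action carries a safety certificate regardless of what the neural policy proposes.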