
# Efficient Exploration

38 papers with code · Methodology

Efficient Exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current estimates against gaining information about poorly understood states and actions.
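The simplest baseline for this trade-off is epsilon-greedy action selection, a minimal sketch of which (function name and epsilon value are illustrative):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon, pick a uniformly random action (explore);
    otherwise pick the action with the highest current estimate (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Most of the methods below aim to explore more efficiently than this undirected baseline.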

No evaluation results yet. Help compare methods by submitting evaluation metrics.

# Deep Exploration via Bootstrapped DQN

Efficient exploration in complex environments remains a major challenge for reinforcement learning.

65,521

# Noisy Networks for Exploration

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.
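The core building block is a linear layer whose weights are perturbed by noise with learned scales. A minimal NumPy sketch of the idea (initialization scheme and names are illustrative, not the paper's exact factorized-noise scheme):

```python
import numpy as np

class NoisyLinear:
    """Linear layer with weights mu + sigma * eps, where eps is resampled
    each forward pass, so the induced policy itself is stochastic."""
    def __init__(self, n_in, n_out, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w_mu = self.rng.normal(0, 1 / np.sqrt(n_in), (n_out, n_in))
        self.w_sigma = np.full((n_out, n_in), sigma0 / np.sqrt(n_in))
        self.b_mu = np.zeros(n_out)
        self.b_sigma = np.full(n_out, sigma0 / np.sqrt(n_in))

    def forward(self, x):
        # Fresh noise per call perturbs the effective weights and biases.
        w = self.w_mu + self.w_sigma * self.rng.normal(size=self.w_mu.shape)
        b = self.b_mu + self.b_sigma * self.rng.normal(size=self.b_mu.shape)
        return w @ x + b
```

In the full method the sigma parameters are trained by gradient descent along with the means, so the agent learns how much noise (and hence exploration) it needs.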

874

# Stochastic Gradient Hamiltonian Monte Carlo

17 Feb 2014 · JavierAntoran/Bayesian-Neural-Networks

Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals.
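One HMC transition can be sketched as follows (step size and trajectory length are arbitrary illustrative values; this is plain HMC, not the stochastic-gradient variant the paper develops):

```python
import numpy as np

def hmc_step(x, log_p, grad_log_p, step=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: leapfrog-integrate Hamiltonian dynamics to get
    a distant proposal, then Metropolis-accept or reject it."""
    rng = rng or np.random.default_rng()
    x0 = np.asarray(x, dtype=float)
    p0 = rng.normal(size=x0.shape)         # resample momentum
    x1, p1 = x0.copy(), p0.copy()
    p1 += 0.5 * step * grad_log_p(x1)      # initial half momentum step
    for _ in range(n_leapfrog):
        x1 += step * p1                    # full position step
        p1 += step * grad_log_p(x1)        # full momentum step
    p1 -= 0.5 * step * grad_log_p(x1)      # undo the extra half step
    # Accept with prob min(1, exp(H_old - H_new)), H = -log p(x) + |p|^2 / 2
    log_alpha = (log_p(x1) - 0.5 * p1 @ p1) - (log_p(x0) - 0.5 * p0 @ p0)
    return x1 if np.log(rng.random()) < log_alpha else x0
```

Because the dynamics approximately conserve the Hamiltonian, even long trajectories are accepted with high probability, which is what makes the proposals both distant and efficient.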

663

# Batch Bayesian Optimization via Local Penalization

29 May 2015 · SheffieldML/GPyOpt

The approach assumes that the function of interest, $f$, is a Lipschitz continuous function.
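For reference, Lipschitz continuity with constant $L$ means

```latex
|f(x) - f(x')| \le L \,\lVert x - x' \rVert \quad \text{for all } x, x'
```

so the function cannot change faster than $L$ per unit distance; the local penalization scheme uses this bound to exclude a neighborhood around each pending batch point from further selection.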

641

# Automatic chemical design using a data-driven continuous representation of molecules

7 Oct 2016 · maxhodak/keras-molecules

We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.

457

# Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

19 Mar 2019 · katerakelly/oyster

In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.

252

# NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm

8 Oct 2018 · ianwhale/nsga-net

This paper introduces NSGA-Net -- an evolutionary approach for neural architecture search (NAS).

134

# Model-Based Active Exploration

29 Oct 2018 · ramanans1/plan2explore

Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations.
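The reactive scheme being contrasted with is typically a count-based novelty bonus, sketched here (function name and the `beta` coefficient are illustrative):

```python
from collections import Counter

visit_counts = Counter()

def novelty_bonus(state, beta=0.1):
    """Reactive intrinsic reward: the bonus for a state decays as it is
    revisited, so the agent is rewarded only after stumbling on novelty."""
    visit_counts[state] += 1
    return beta / visit_counts[state] ** 0.5
```

Model-based active exploration instead plans ahead toward states the model expects to be informative, rather than rewarding novelty after the fact.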

104

# Information-Directed Exploration for Deep Reinforcement Learning

Efficient exploration remains a major challenge for reinforcement learning.

75

# Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning

Text-based adventure games provide a platform on which to explore reinforcement learning in the context of a combinatorial action space, such as natural language.

58