Efficient Exploration
144 papers with code • 0 benchmarks • 2 datasets
Efficient Exploration is one of the main obstacles in scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.
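The simplest illustration of this trade-off is epsilon-greedy action selection: with probability epsilon the agent explores a random action, otherwise it exploits its current value estimates. This is a minimal sketch for intuition only, not a method from any of the papers listed below; the function name and signature are illustrative.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Return an action index: random with probability epsilon, else greedy.

    q_values: list of current value estimates, one per action.
    """
    if rng.random() < epsilon:
        # Explore: try an action uniformly at random.
        return rng.randrange(len(q_values))
    # Exploit: pick the action with the highest current estimate.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With `epsilon=0.0` the choice is always greedy; raising epsilon trades estimated short-term return for information about under-sampled actions.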
Source: Randomized Value Functions via Multiplicative Normalizing Flows
Benchmarks
These leaderboards are used to track progress in Efficient Exploration
Libraries
Use these libraries to find Efficient Exploration models and implementations
Most implemented papers
Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards.
Online Limited Memory Neural-Linear Bandits with Likelihood Matching
To alleviate this, we propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
Noisy Natural Gradient as Variational Inference
Variational Bayesian neural nets combine the flexibility of deep learning with Bayesian uncertainty estimation.
Variance Networks: When Expectation Does Not Meet Your Expectations
Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging.
Count-Based Exploration with the Successor Representation
In this paper we introduce a simple approach for exploration in reinforcement learning (RL) that allows us to develop theoretically justified algorithms in the tabular case but that is also extendable to settings where function approximation is required.
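In the tabular setting, count-based exploration typically adds an intrinsic bonus that shrinks with the visitation count of a state, e.g. proportional to 1/sqrt(N(s)). The sketch below shows that generic tabular bonus, under the assumption of a scaling coefficient `beta`; it is not the paper's specific successor-representation-based construction, which generalizes such counts to the function-approximation setting.

```python
import math
from collections import defaultdict

class CountBonus:
    """Tabular count-based exploration bonus: beta / sqrt(N(s)).

    Each call to bonus() records one visit to the state and returns
    the intrinsic reward to add to the environment reward.
    """
    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s), visitation count per state

    def bonus(self, state):
        self.counts[state] += 1
        # Bonus decays as the state becomes well visited.
        return self.beta / math.sqrt(self.counts[state])
```

The agent then maximizes `r + bonus(s)`, so rarely visited states look temporarily more rewarding.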
NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm
This paper introduces NSGA-Net -- an evolutionary approach for neural architecture search (NAS).
Model-Based Active Exploration
Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations.
Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
Text-based adventure games provide a platform on which to explore reinforcement learning in the context of a combinatorial action space, such as natural language.
Learning Exploration Policies for Navigation
Numerous past works have tackled the problem of task-driven navigation.
Estimating Risk and Uncertainty in Deep Reinforcement Learning
Reinforcement learning agents are faced with two types of uncertainty.