Efficient Exploration

144 papers with code • 0 benchmarks • 2 datasets

Efficient Exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing the exploitation of current value estimates against gaining information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
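
To make the tradeoff concrete, here is a minimal sketch of UCB-style action selection on a toy Gaussian bandit, where the exploration bonus plays the "information about poorly understood actions" role. The setup is purely illustrative and not taken from any paper listed below:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.normal(size=10)   # hidden arm values (toy problem)
counts = np.zeros(10)              # pulls per arm
estimates = np.zeros(10)           # running mean reward per arm

def pull(arm):
    return true_means[arm] + rng.normal()

for t in range(1, 1001):
    # Exploit the current estimate, plus a bonus for poorly understood
    # arms; the bonus shrinks as an arm is pulled more often.
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(estimates + bonus))
    r = pull(arm)
    counts[arm] += 1
    estimates[arm] += (r - estimates[arm]) / counts[arm]

print("best arm:", true_means.argmax(), "most pulled:", counts.argmax())
```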

Latest papers with no code

Vlearn: Off-Policy Learning with Efficient State-Value Function Estimation

no code yet • 7 Mar 2024

Existing off-policy reinforcement learning algorithms typically necessitate an explicit state-action-value function representation, which becomes problematic in high-dimensional action spaces.
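
The abstract suggests replacing the Q-function with a state-value critic. As a hedged sketch of one generic way that can work (not necessarily Vlearn's estimator), an importance-weighted TD loss on V(s) never represents Q(s, a) over the action space:

```python
import numpy as np

def v_target(reward, next_v, done, gamma=0.99):
    """One-step TD target for a state-value critic V(s)."""
    return reward + gamma * next_v * (1.0 - done)

def weighted_v_loss(v_pred, target, logp_pi, logp_behavior):
    # The importance weight pi(a|s) / b(a|s) corrects for actions drawn
    # from the behaviour policy; clipping keeps the weights numerically
    # stable. No Q(s, a) over a high-dimensional action space appears.
    w = np.exp(np.clip(logp_pi - logp_behavior, -5.0, 5.0))
    return np.mean(w * (v_pred - target) ** 2)
```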

Noisy Spiking Actor Network for Exploration

no code yet • 7 Mar 2024

As a general method for exploration in deep reinforcement learning (RL), NoisyNet can produce problem-specific exploration strategies.
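
NoisyNet itself is well documented (Fortunato et al., 2018): exploration comes from learned, factorised Gaussian perturbations of a linear layer's weights. A compact PyTorch rendering of such a layer:

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Factorised-Gaussian noisy layer in the style of NoisyNet."""
    def __init__(self, in_f, out_f, sigma0=0.5):
        super().__init__()
        bound = 1 / math.sqrt(in_f)
        self.mu_w = nn.Parameter(torch.empty(out_f, in_f).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_f, in_f), sigma0 / math.sqrt(in_f)))
        self.mu_b = nn.Parameter(torch.zeros(out_f))
        self.sigma_b = nn.Parameter(torch.full((out_f,), sigma0 / math.sqrt(in_f)))
        self.in_f, self.out_f = in_f, out_f

    @staticmethod
    def _f(x):                     # noise scaling: sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.in_f, device=x.device))
        eps_out = self._f(torch.randn(self.out_f, device=x.device))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        # Perturbed weights drive state-dependent exploration.
        return x @ w.T + b
```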

A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage

no code yet • 7 Mar 2024

Hybrid Reinforcement Learning (RL), leveraging both online and offline data, has garnered recent interest, yet research on its provable benefits remains sparse.
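
A minimal sketch of the hybrid setting, assuming the simplest possible mixing rule; the `offline_frac` knob is a hypothetical stand-in, not the paper's scheme:

```python
import random

def sample_hybrid_batch(offline_data, online_buffer, batch_size, offline_frac=0.5):
    """Mix offline transitions with freshly collected online ones.
    Assumes the online buffer holds at least batch_size transitions."""
    n_off = int(batch_size * offline_frac)
    batch = random.sample(offline_data, min(n_off, len(offline_data)))
    batch += random.sample(online_buffer, batch_size - len(batch))
    return batch
```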

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization

no code yet • 22 Feb 2024

Prior model-free RL algorithms have overlooked the varying significance of distinct primitive behaviors during policy learning.
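
One way to read "causality-aware entropy regularization" is entropy weighted per action dimension by some measure of causal influence. The sketch below assumes a diagonal Gaussian policy and a hypothetical `causal_weights` vector; it is not the paper's exact objective:

```python
import numpy as np

def causality_weighted_entropy(log_std, causal_weights):
    """Entropy of a diagonal Gaussian policy, weighted per action
    dimension. `causal_weights` is a hypothetical stand-in for each
    primitive behaviour's influence on the reward."""
    per_dim = 0.5 * np.log(2 * np.pi * np.e) + log_std  # Gaussian entropy per dim
    w = causal_weights / causal_weights.sum()           # normalise the weights
    return float(np.sum(w * per_dim))
```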

Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits

no code yet • 17 Feb 2024

Assuming access to the distribution of the covariates, we propose a novel low-rank matrix estimation method called LowPopArt and provide a recovery guarantee that depends on a novel quantity $B(Q)$ characterizing the hardness of the problem, where $Q$ is the covariance matrix of the measurement distribution.
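
The quantity $B(Q)$ and the LowPopArt estimator are specific to the paper. As generic background only, low-rank matrix estimation is often done by soft-thresholding singular values, as in this sketch:

```python
import numpy as np

def soft_threshold_svd(Y, tau):
    """Generic low-rank denoising by singular-value soft-thresholding.
    Illustrates low-rank matrix estimation in general, not the
    LowPopArt estimator or its B(Q) guarantee."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```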

Diffusion Models Meet Contextual Bandits with Large Action Spaces

no code yet • 15 Feb 2024

Efficient exploration is a key challenge in contextual bandits with large action spaces, where uninformed exploration can result in computational and statistical inefficiencies.

Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization

no code yet • 12 Feb 2024

First, we propose a novel confidence set that is "semi-adaptive" to the unknown sub-Gaussian parameter $\sigma_*^2$ in the sense that the (normalized) confidence width scales with $\sqrt{d\sigma_*^2 + \sigma_0^2}$ where $d$ is the dimension and $\sigma_0^2$ is the specified sub-Gaussian parameter (known) that can be much larger than $\sigma_*^2$.
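
The stated width is concrete enough to compute. A small numeric check, comparing it against the non-adaptive $\sqrt{d\sigma_0^2}$ scaling one would otherwise pay (the baseline comparison is our assumption, not the abstract's):

```python
import numpy as np

def confidence_width(d, sigma_star_sq, sigma0_sq):
    """Semi-adaptive width from the abstract: sqrt(d * sigma_*^2 + sigma_0^2),
    so a loose known bound sigma_0^2 enters additively rather than
    multiplying every dimension."""
    return np.sqrt(d * sigma_star_sq + sigma0_sq)

# e.g. d = 20, true noise 0.1, specified loose bound 4.0:
print(confidence_width(20, 0.1, 4.0))   # ~2.45, vs sqrt(20 * 4.0) ~ 8.94
```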

Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following

no code yet • 9 Feb 2024

Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function.
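
The abstract describes a sample-score-select loop. Below is a hedged Python sketch of such a loop, with the diffusion sampler and reward left as user-supplied stubs; the interface names are hypothetical:

```python
import numpy as np

def diffusion_es(sample_trajectories, reward_fn, n_iters=10, pop=64, elite_frac=0.25):
    """Sketch of an evolutionary-search loop over diffusion samples.
    `sample_trajectories(seeds)` stands in for drawing trajectories from
    a trained diffusion model (fresh samples where a seed is None);
    `reward_fn` is the black-box scorer."""
    seeds = [None] * pop                   # unconditional samples at first
    best, best_r = None, -np.inf
    for _ in range(n_iters):
        trajs = sample_trajectories(seeds)
        scores = np.array([reward_fn(t) for t in trajs])
        elite_idx = np.argsort(scores)[-int(pop * elite_frac):]
        if scores[elite_idx[-1]] > best_r:
            best, best_r = trajs[elite_idx[-1]], scores[elite_idx[-1]]
        # Resample the next population around the elites (e.g. by
        # partially noising and denoising them with the diffusion model).
        seeds = [trajs[i] for i in np.random.choice(elite_idx, size=pop)]
    return best
```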

TopoNav: Topological Navigation for Efficient Exploration in Sparse Reward Environments

no code yet • 6 Feb 2024

Additionally, TopoNav incorporates intrinsic motivation to guide exploration toward relevant regions and frontier nodes in the topological map, addressing the challenges of sparse extrinsic rewards.
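
As a hedged sketch of what an intrinsic bonus over topological-map nodes could look like: a count-based novelty term plus a fixed bonus for frontier nodes. Both the form and the names are assumptions, not TopoNav's actual reward:

```python
def intrinsic_bonus(node, visit_counts, frontier, beta=0.1, frontier_bonus=1.0):
    """Hypothetical shaping term over graph nodes: count-based novelty
    decays with visits, while frontier nodes get a fixed extra bonus."""
    novelty = beta / (1 + visit_counts.get(node, 0)) ** 0.5
    return novelty + (frontier_bonus if node in frontier else 0.0)
```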

Efficient Exploration for LLMs

no code yet • 1 Feb 2024

We present evidence of substantial benefit from efficient exploration in gathering human feedback to improve large language models.
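
The abstract does not say which exploration scheme is used. As one generic possibility, an ensemble-disagreement rule picks the response pair on which a human label would be most informative; the `reward_ensemble` interface is hypothetical:

```python
import numpy as np

def pick_pair_for_feedback(candidates, reward_ensemble):
    """Hedged sketch: choose the response pair whose preference the
    reward-model ensemble disagrees about most, so each human label is
    maximally informative. `reward_ensemble(c)` returns a vector of
    per-member scores for response c (assumed interface)."""
    scores = np.stack([reward_ensemble(c) for c in candidates])  # (n, members)
    best, best_var = None, -1.0
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            pref_var = np.var(scores[i] - scores[j])  # disagreement on the pair
            if pref_var > best_var:
                best, best_var = (i, j), pref_var
    return best
```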