Efficient Exploration

145 papers with code • 0 benchmarks • 2 datasets

Efficient Exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current value estimates against gaining information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
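To make the tradeoff concrete, here is a minimal ε-greedy bandit sketch; this is a generic illustration, not code from the cited paper. With probability ε the agent explores a random action, otherwise it exploits its current value estimates.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon, explore a random arm; otherwise exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=q_values.__getitem__)    # exploit

def update(q_values, counts, arm, reward):
    """Incremental mean update of the chosen arm's value estimate."""
    counts[arm] += 1
    q_values[arm] += (reward - q_values[arm]) / counts[arm]

# Toy 3-armed Bernoulli bandit.
probs, q, n = [0.2, 0.5, 0.8], [0.0] * 3, [0] * 3
for _ in range(1000):
    arm = epsilon_greedy(q)
    update(q, n, arm, float(random.random() < probs[arm]))
print(q)  # estimates should approach [0.2, 0.5, 0.8]
```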

Latest papers with no code

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization

no code yet • 22 Feb 2024

Prior model-free RL algorithms have overlooked the varying significance of distinct primitive behaviors during policy learning.

Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits

no code yet • 17 Feb 2024

Assuming access to the distribution of the covariates, we propose a novel low-rank matrix estimation method called LowPopArt and provide a recovery guarantee that depends on a novel quantity $B(Q)$, where $Q$ is the covariance matrix of the measurement distribution and $B(Q)$ characterizes the hardness of the problem.

Diffusion Models Meet Contextual Bandits with Large Action Spaces

no code yet • 15 Feb 2024

Efficient exploration is a key challenge in contextual bandits due to the large size of their action space, where uninformed exploration can result in computational and statistical inefficiencies.
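For contrast with uninformed exploration, below is a minimal Thompson-sampling sketch for a linear contextual bandit. This is a standard baseline, not the paper's diffusion-model approach, and all names are illustrative.

```python
import numpy as np

class LinearThompsonSampling:
    """Bayesian linear bandit: sample theta from the posterior, act greedily."""
    def __init__(self, dim, noise_var=1.0, prior_var=1.0):
        self.noise_var = noise_var
        self.precision = np.eye(dim) / prior_var   # posterior precision matrix
        self.b = np.zeros(dim)                     # accumulated reward-weighted features

    def select(self, action_features):
        """action_features: (n_actions, dim) array; returns chosen action index."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b / self.noise_var
        theta = np.random.multivariate_normal(mean, cov)   # posterior sample
        return int(np.argmax(action_features @ theta))

    def update(self, x, reward):
        self.precision += np.outer(x, x) / self.noise_var
        self.b += reward * x
```

Sampling from the posterior makes exploration directed: actions are tried in proportion to how plausible it is that they are optimal, rather than uniformly at random.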

Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization

no code yet • 12 Feb 2024

First, we propose a novel confidence set that is "semi-adaptive" to the unknown sub-Gaussian parameter $\sigma_*^2$ in the sense that the (normalized) confidence width scales with $\sqrt{d\sigma_*^2 + \sigma_0^2}$ where $d$ is the dimension and $\sigma_0^2$ is the specified sub-Gaussian parameter (known) that can be much larger than $\sigma_*^2$.
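The following toy computation (omitting constants and log factors) illustrates why the quoted semi-adaptive scaling helps: when $\sigma_*^2 \ll \sigma_0^2$, the conservative known parameter enters only additively rather than being multiplied by the dimension $d$.

```python
import math

# Illustration of the scaling sqrt(d * sigma_star^2 + sigma_0^2) quoted above.
# sigma_0_sq is the conservative known parameter; sigma_star_sq the true one.
def semi_adaptive_width(d, sigma_star_sq, sigma_0_sq):
    return math.sqrt(d * sigma_star_sq + sigma_0_sq)

print(semi_adaptive_width(50, 0.01, 4.0))  # ~2.12: sigma_0^2 enters only additively
print(math.sqrt(50 * 4.0))                 # ~14.14: non-adaptive width, d * sigma_0^2
```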

Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following

no code yet • 9 Feb 2024

Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function.
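A hedged sketch of the sample-and-score loop just described; the diffusion sampler and black-box reward below are hypothetical stand-ins, and the actual mutation and selection details of Diffusion-ES are in the paper.

```python
import numpy as np

def diffusion_es(sample_trajectories, reward_fn, n_iters=10, pop=64, n_elite=8):
    """sample_trajectories(conditioning, n) -> (n, horizon, state_dim) array;
    reward_fn(trajectory) -> float, treated as a black box (no gradients)."""
    population = sample_trajectories(None, pop)             # unconditional proposals
    for _ in range(n_iters):
        scores = np.array([reward_fn(t) for t in population])
        elites = population[np.argsort(scores)[-n_elite:]]  # keep best trajectories
        # Mutate elites with the diffusion model, abstracted here as
        # conditional resampling around each elite trajectory.
        population = np.concatenate(
            [sample_trajectories(e, pop // n_elite) for e in elites])
    scores = np.array([reward_fn(t) for t in population])
    return population[int(np.argmax(scores))]
```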

TopoNav: Topological Navigation for Efficient Exploration in Sparse Reward Environments

no code yet • 6 Feb 2024

Additionally, TopoNav incorporates intrinsic motivation to guide exploration toward relevant regions and frontier nodes in the topological map, addressing the challenges of sparse extrinsic rewards.
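As a generic illustration of intrinsic motivation under sparse extrinsic reward, the sketch below adds a count-based novelty bonus over graph nodes; the bonus TopoNav actually uses over its topological map may differ.

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)

def intrinsic_bonus(node, beta=0.5):
    """Novelty bonus decaying with the square root of the visitation count."""
    visit_counts[node] += 1
    return beta / math.sqrt(visit_counts[node])

# Per step: total_reward = sparse_extrinsic_reward + intrinsic_bonus(current_node)
```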

Efficient Exploration for LLMs

no code yet • 1 Feb 2024

We present evidence of substantial benefit from efficient exploration in gathering human feedback to improve large language models.

Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning

no code yet • 31 Jan 2024

Therefore, we propose Scheduled Curiosity-Deep Dyna-Q (SC-DDQ), a curiosity-driven curriculum learning framework based on a state-of-the-art model-based reinforcement learning dialog model, Deep Dyna-Q (DDQ).
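For background, here is a minimal tabular Dyna-Q sketch, the classic loop that DDQ deepens: learn from each real transition, then run extra planning updates from a learned model. The curiosity and curriculum-scheduling components of SC-DDQ are omitted.

```python
import random
from collections import defaultdict

Q = defaultdict(float)     # Q[(state, action)] -> value estimate
model = {}                 # model[(state, action)] -> (reward, next_state)

def dyna_q_step(s, a, r, s2, actions, alpha=0.1, gamma=0.95, n_planning=5):
    """One Dyna-Q step: direct RL update, model update, then planning."""
    # Direct reinforcement learning on the real transition.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    model[(s, a)] = (r, s2)
    # Planning: replay randomly chosen transitions from the learned model.
    for _ in range(n_planning):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in actions)
                                - Q[(ps, pa)])
```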

FIT-SLAM -- Fisher Information and Traversability estimation-based Active SLAM for exploration in 3D environments

no code yet • 17 Jan 2024

Through this work, we propose FIT-SLAM (Fisher Information and Traversability estimation-based Active SLAM), a new exploration method tailored for unmanned ground vehicles (UGVs) to explore 3D environments.

Go-Explore for Residential Energy Management

no code yet • 15 Jan 2024

We use the Go-Explore algorithm to solve the cost-saving task in residential energy management problems and achieve an improvement of up to 19.84% compared to well-known reinforcement learning algorithms.
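A minimal sketch of Go-Explore's archive loop (Ecoffet et al.): maintain an archive of visited "cells", return to a promising cell by replaying its action sequence, then explore from there. It assumes a deterministic, gym-style environment, and the cell abstraction is a hypothetical stand-in, not the paper's energy-management setup.

```python
import random

def go_explore(env, cell_fn, n_iters=1000, explore_steps=50):
    """archive maps cell -> (best return so far, action sequence reaching it)."""
    obs = env.reset()
    archive = {cell_fn(obs): (0.0, [])}
    for _ in range(n_iters):
        ret, actions = archive[random.choice(list(archive))]  # select a cell
        obs = env.reset()
        for a in actions:              # "go": replay actions (deterministic env)
            obs, _, _, _ = env.step(a)
        actions = list(actions)
        for _ in range(explore_steps):  # "explore": act randomly from the cell
            a = env.action_space.sample()
            obs, r, done, _ = env.step(a)
            ret += r
            actions.append(a)
            cell = cell_fn(obs)
            if cell not in archive or ret > archive[cell][0]:
                archive[cell] = (ret, list(actions))  # remember best way to reach cell
            if done:
                break
    return archive
```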