Search Results for author: Alain Andres

Found 5 papers, 3 papers with code

Enhanced Generalization through Prioritization and Diversity in Self-Imitation Reinforcement Learning over Procedural Environments with Sparse Rewards

no code implementations · 1 Nov 2023 · Alain Andres, Daochen Zha, Javier Del Ser

Exploration poses a fundamental challenge in Reinforcement Learning (RL) with sparse rewards, limiting an agent's ability to learn optimal decision-making due to a lack of informative feedback signals; a toy sketch of the self-imitation idea is given below.

Imitation Learning · Reinforcement Learning (RL)
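As a rough illustration of the "prioritization" ingredient in self-imitation learning named in the title above, the sketch below keeps only the highest-return trajectories in a small buffer for later imitation. The class name, capacity, and return-based priority rule are illustrative assumptions, not the paper's implementation.

# Illustrative sketch only: a return-prioritized trajectory buffer of the kind
# self-imitation methods typically use. Names and the priority rule are assumptions.
import heapq
import random


class SelfImitationBuffer:
    """Keeps the top-K trajectories by episode return for later imitation."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self._heap = []      # min-heap of (return, counter, trajectory)
        self._counter = 0    # tie-breaker so heapq never compares trajectories

    def add(self, trajectory, episode_return):
        item = (episode_return, self._counter, trajectory)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        else:
            # Only keep the trajectory if it beats the current worst one.
            heapq.heappushpop(self._heap, item)

    def sample(self, batch_size):
        # Sample stored (state, action) pairs uniformly across kept trajectories.
        transitions = [t for _, _, traj in self._heap for t in traj]
        return random.sample(transitions, min(batch_size, len(transitions)))


if __name__ == "__main__":
    buf = SelfImitationBuffer(capacity=2)
    buf.add([("s0", "a0"), ("s1", "a1")], episode_return=1.0)
    buf.add([("s0", "a2")], episode_return=0.0)
    buf.add([("s0", "a3")], episode_return=2.0)  # evicts the return-0.0 episode
    print(buf.sample(2))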

Using Offline Data to Speed-up Reinforcement Learning in Procedurally Generated Environments

no code implementations · 18 Apr 2023 · Alain Andres, Lukas Schäfer, Esther Villar-Rodriguez, Stefano V. Albrecht, Javier Del Ser

Motivated by the recent success of Offline RL and Imitation Learning (IL), we conduct a study to investigate whether agents can leverage offline data in the form of trajectories to improve sample efficiency in procedurally generated environments; a hedged sketch of this idea appears below.

Imitation Learning · Offline RL · +2
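One common way to leverage offline trajectories before online training is to behaviour-clone a policy on the logged state-action pairs and then fine-tune it with RL. The sketch below illustrates only that idea; the network, data shapes, and hyperparameters are assumptions for illustration, not the study's actual setup.

# Hedged sketch: supervised (behavioral cloning) pretraining on offline
# trajectories, to be followed by online RL fine-tuning. Shapes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


def pretrain_with_behavioral_cloning(policy, offline_states, offline_actions,
                                     epochs=10, lr=3e-4):
    """Supervised pretraining: fit the policy to the logged actions."""
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = policy(offline_states)           # (N, num_actions)
        loss = loss_fn(logits, offline_actions)   # (N,) discrete action labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return policy


if __name__ == "__main__":
    obs_dim, num_actions, n = 8, 4, 256
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                           nn.Linear(64, num_actions))
    states = torch.randn(n, obs_dim)               # stand-in for logged observations
    actions = torch.randint(0, num_actions, (n,))  # stand-in for logged actions
    policy = pretrain_with_behavioral_cloning(policy, states, actions)
    # ...then hand `policy` to an on-policy RL loop (e.g. PPO) for fine-tuning.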

Towards Improving Exploration in Self-Imitation Learning using Intrinsic Motivation

1 code implementation · 30 Nov 2022 · Alain Andres, Esther Villar-Rodriguez, Javier Del Ser

Unfortunately, in a broad range of problems the design of a good reward function is not trivial, so sparse reward signals are adopted instead; a minimal sketch of the intrinsic-motivation idea from the title appears below.

Imitation Learning
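As a rough illustration of the intrinsic-motivation idea named in the title above, the sketch below adds a count-based novelty bonus to a sparse extrinsic reward. The observation hashing and the 1/sqrt(count) decay are standard choices assumed here for illustration; they are not claimed to be this paper's exact formulation.

# Minimal sketch of an intrinsic-motivation signal: a count-based novelty
# bonus added to the (sparse) extrinsic reward. Illustrative assumptions only.
from collections import defaultdict
from math import sqrt


class CountBasedBonus:
    def __init__(self, scale=0.1):
        self.scale = scale
        self.counts = defaultdict(int)

    def reward(self, observation, extrinsic_reward):
        key = hash(observation)      # observation must be hashable (e.g. a tuple)
        self.counts[key] += 1
        intrinsic = self.scale / sqrt(self.counts[key])
        return extrinsic_reward + intrinsic


if __name__ == "__main__":
    bonus = CountBasedBonus(scale=0.1)
    print(bonus.reward((0, 0), 0.0))  # first visit: largest bonus
    print(bonus.reward((0, 0), 0.0))  # repeated visits decay toward zero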

An Evaluation Study of Intrinsic Motivation Techniques applied to Reinforcement Learning over Hard Exploration Environments

1 code implementation · 23 May 2022 · Alain Andres, Esther Villar-Rodriguez, Javier Del Ser

In the last few years, research activity around reinforcement learning tasks formulated over environments with sparse rewards has been especially notable.

Reinforcement Learning (RL)
