Search Results for author: Laetitia Matignon

Found 10 papers, 3 papers with code

Task-conditioned adaptation of visual features in multi-task policy learning

no code implementations • 12 Feb 2024 • Pierre Marza, Laetitia Matignon, Olivier Simonin, Christian Wolf

We evaluate the method on a wide variety of tasks from the CortexBench benchmark and show that, compared to existing work, these tasks can be addressed with a single policy.

Decision Making

AutoNeRF: Training Implicit Scene Representations with Autonomous Agents

1 code implementation • 21 Apr 2023 • Pierre Marza, Laetitia Matignon, Olivier Simonin, Dhruv Batra, Christian Wolf, Devendra Singh Chaplot

Empirical results show that NeRFs can be trained on actively collected data using just a single episode of experience in an unseen environment, that they can be used for several downstream robotic tasks, and that modularly trained exploration models outperform classical and end-to-end baselines.

Novel View Synthesis

An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey

no code implementations • 19 Sep 2022 • Arthur Aubret, Laetitia Matignon, Salima Hassas

The reinforcement learning (RL) research area is very active, with a large number of new contributions, especially in the emergent field of deep RL (DRL).

Reinforcement Learning (RL)
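The survey covers intrinsic-motivation signals that reward an agent for visiting novel states. As a minimal illustration of that idea (a classic count-based bonus from the broader literature, not this paper's specific contribution; the function name and `beta` parameter are assumptions), a bonus can decay with the visitation count of a state:

```python
import math
from collections import defaultdict

def intrinsic_bonus(counts, state, beta=1.0):
    """Count-based exploration bonus beta / sqrt(N(s)).

    Illustrative sketch of one classic intrinsic-reward family:
    the more often a state has been visited, the smaller the bonus.
    """
    counts[state] += 1
    return beta / math.sqrt(counts[state])

counts = defaultdict(int)
r1 = intrinsic_bonus(counts, "s0")  # first visit: full bonus of 1.0
r2 = intrinsic_bonus(counts, "s0")  # repeat visit: bonus shrinks
```

In practice such a bonus is added to the environment reward, nudging the policy toward under-explored states.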

Teaching Agents how to Map: Spatial Reasoning for Multi-Object Navigation

2 code implementations • 13 Jul 2021 • Pierre Marza, Laetitia Matignon, Olivier Simonin, Christian Wolf

In the context of visual navigation, the capacity to map a novel environment is necessary for an agent to exploit its observation history in the considered place and efficiently reach known goals.

Reinforcement Learning (RL) · Visual Navigation

DisTop: Discovering a Topological representation to learn diverse and rewarding skills

no code implementations • 6 Jun 2021 • Arthur Aubret, Laetitia Matignon, Salima Hassas

The optimal way for a deep reinforcement learning (DRL) agent to explore is to learn a set of skills that achieves a uniform distribution of states.

Hierarchical Reinforcement Learning · Reinforcement Learning (RL) +2
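The abstract frames optimal exploration as reaching a uniform distribution over states, which is equivalent to maximizing the entropy of the state-visitation distribution. A small sketch of that connection (an illustrative metric only, not DisTop's actual algorithm; function and state names are assumptions):

```python
import math
from collections import Counter

def visitation_entropy(states):
    """Shannon entropy of the empirical state-visitation distribution.

    Uniform visitation maximizes this quantity, matching the
    exploration objective described in the abstract.
    """
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

uniform = visitation_entropy(["s0", "s1", "s2", "s3"])  # each state once
skewed = visitation_entropy(["s0", "s0", "s0", "s1"])   # mostly one state
# uniform visitation yields strictly higher entropy than skewed visitation
```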

ELSIM: End-to-end learning of reusable skills through intrinsic motivation

no code implementations • ICML Workshop LifelongML 2020 • Arthur Aubret, Laetitia Matignon, Salima Hassas

We then show that our approach scales to more difficult MuJoCo environments, in which our agent is able to build a representation of skills that improves both transfer learning and exploration over a baseline when rewards are sparse.

Developmental Learning · Transfer Learning
