Search Results for author: Chris Reinke

Found 8 papers, 1 paper with code

Univariate Radial Basis Function Layers: Brain-inspired Deep Neural Layers for Low-Dimensional Inputs

1 code implementation • 7 Nov 2023 • Daniel Jost, Basavasagar Patil, Xavier Alameda-Pineda, Chris Reinke

Deep Neural Networks (DNNs) have become the standard tool for function approximation, with most of the proposed architectures being designed for high-dimensional input data.
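The title refers to radial basis function (RBF) layers applied per input dimension. Below is a minimal sketch of such a layer, assuming Gaussian basis functions with learnable centers and widths; the class name, initialization, and hyperparameters are illustrative and may differ from the paper's formulation.

```python
# Minimal sketch of a univariate Gaussian RBF layer (illustrative only; the
# paper's exact formulation and initialization may differ).
import torch
import torch.nn as nn

class UnivariateRBFLayer(nn.Module):
    """Encodes each scalar input dimension with its own set of Gaussian RBFs."""

    def __init__(self, in_features: int, num_basis: int = 16,
                 low: float = -1.0, high: float = 1.0):
        super().__init__()
        # One row of centers per input dimension, spread evenly over [low, high].
        centers = torch.linspace(low, high, num_basis).repeat(in_features, 1)
        self.centers = nn.Parameter(centers)                   # (in_features, num_basis)
        self.log_widths = nn.Parameter(torch.zeros(in_features, num_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> activations: (batch, in_features * num_basis)
        diff = x.unsqueeze(-1) - self.centers                  # (batch, in_features, num_basis)
        act = torch.exp(-(diff ** 2) / (2 * torch.exp(self.log_widths) ** 2))
        return act.flatten(start_dim=1)

# Usage: the RBF encoding can be prepended to an ordinary MLP head.
layer = UnivariateRBFLayer(in_features=4, num_basis=16)
features = layer(torch.randn(32, 4))                           # shape (32, 64)
```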

Variational Meta Reinforcement Learning for Social Robotics

no code implementations • 7 Jun 2022 • Anand Ballou, Xavier Alameda-Pineda, Chris Reinke

We demonstrate the benefits of the RBF layer and of meta-RL for social robotics on four robotic simulation tasks.

Meta Reinforcement Learning • Navigate • +2

Successor Feature Neural Episodic Control

no code implementations • 4 Nov 2021 • David Emukpere, Xavier Alameda-Pineda, Chris Reinke

A longstanding goal in reinforcement learning is to build intelligent agents that show fast learning and flexible transfer of skills, akin to humans and animals.

reinforcement-learning • Reinforcement Learning (RL) • +1

Successor Feature Representations

no code implementations • 29 Oct 2021 • Chris Reinke, Xavier Alameda-Pineda

Successor Representations (SR) and their extension Successor Features (SF) are prominent transfer mechanisms in domains where reward functions change between tasks.

Transfer Learning
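Successor Features, as described in the abstract above, factor the action-value function into task-independent feature expectations and task-specific reward weights, so that policies can be quickly re-evaluated when the reward changes. The sketch below illustrates this under the standard SF assumption that the reward is linear in features (r = φ·w); the function names and shapes are illustrative, not taken from the paper.

```python
# Minimal sketch of successor-feature transfer with generalized policy
# improvement (GPI); names and shapes are illustrative, not from the paper.
import numpy as np

def q_values(psi: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Q^pi(s, a) = psi^pi(s, a) . w  when the reward is linear in features.

    psi: (num_policies, num_actions, feature_dim) successor features at state s
    w:   (feature_dim,) reward weights of the current task
    Returns Q-values of shape (num_policies, num_actions).
    """
    return psi @ w

def gpi_action(psi: np.ndarray, w: np.ndarray) -> int:
    """Generalized policy improvement: act greedily over all stored policies."""
    q = q_values(psi, w)          # (num_policies, num_actions)
    return int(q.max(axis=0).argmax())

# Example: two previously learned policies, 3 actions, 4-dim features.
psi = np.random.rand(2, 3, 4)     # successor features for the current state
w_new_task = np.array([1.0, 0.0, -0.5, 0.2])
print(gpi_action(psi, w_new_task))
```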

Progressive growing of self-organized hierarchical representations for exploration

no code implementations • 13 May 2020 • Mayalen Etcheverry, Pierre-Yves Oudeyer, Chris Reinke

A central challenge is how to incrementally learn representations in order to progressively build a map of the discovered structures and re-use it for further exploration.

Representation Learning

Time Adaptive Reinforcement Learning

no code implementations • 18 Apr 2020 • Chris Reinke

Reinforcement learning (RL) makes it possible to solve complex tasks, such as Go, often with stronger performance than humans.

reinforcement-learning • Reinforcement Learning (RL)

Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems

no code implementations • ICLR 2020 • Chris Reinke, Mayalen Etcheverry, Pierre-Yves Oudeyer

Using a continuous Game of Life (GOL) as a testbed, we show that recent intrinsically motivated machine learning algorithms (POP-IMGEPs), initially developed for learning inverse models in robotics, can be transposed and used in this novel application area.
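The abstract above refers to population-based intrinsically motivated goal exploration processes (POP-IMGEPs). The following is a minimal sketch of a generic goal-exploration loop of this kind; the helper callables (run_system, behavior_of, sample_goal, mutate) are placeholders standing in for the paper's learned goal spaces and the continuous GOL system, not its actual implementation.

```python
# Minimal sketch of a goal-exploration (IMGEP-style) loop; the helper
# callables are placeholders, not the paper's actual implementation.
import random

def distance(behavior, goal):
    return sum((b - g) ** 2 for b, g in zip(behavior, goal))

def explore(run_system, behavior_of, sample_goal, sample_params,
            mutate, n_initial=50, n_budget=500):
    """Collect diverse behaviors by sampling goals and reusing close past solutions."""
    history = []  # list of (parameters, observed behavior)

    # Bootstrap with random system parameters.
    for _ in range(n_initial):
        params = sample_params()
        history.append((params, behavior_of(run_system(params))))

    # Goal-directed exploration.
    for _ in range(n_budget):
        goal = sample_goal()
        # Pick the past parameters whose behavior is closest to the goal...
        closest_params, _ = min(history, key=lambda pb: distance(pb[1], goal))
        # ...mutate them, run the system, and store the new outcome.
        params = mutate(closest_params)
        history.append((params, behavior_of(run_system(params))))
    return history

# Toy usage on a dummy "system" whose behavior is its 2-D parameter vector.
if __name__ == "__main__":
    hist = explore(
        run_system=lambda p: p,
        behavior_of=lambda obs: obs,
        sample_goal=lambda: [random.uniform(-1, 1), random.uniform(-1, 1)],
        sample_params=lambda: [random.uniform(-1, 1), random.uniform(-1, 1)],
        mutate=lambda p: [x + random.gauss(0, 0.1) for x in p],
    )
    print(len(hist))
```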
