no code implementations • 2 Sep 2022 • Chang Rajani, Karol Arndt, David Blanco-Mulero, Kevin Sebastian Luck, Ville Kyrki
To this end, we propose a co-imitation methodology for adapting both behaviour and morphology by matching the state distributions of the demonstrator.
1 code implementation • 12 Jul 2022 • Mhairi Dunion, Trevor McInroe, Kevin Sebastian Luck, Josiah P. Hanna, Stefano V. Albrecht
Reinforcement Learning (RL) agents are often unable to generalise well to environment variations in the state space that were not observed during training.
no code implementations • 3 Nov 2021 • Kevin Sebastian Luck, Roberto Calandra, Michael Mistry
The co-adaptation of robot morphology and behaviour becomes increasingly important with the advent of fast 3D-manufacturing methods and efficient deep reinforcement learning algorithms.
no code implementations • 18 Aug 2020 • Todor Davchev, Kevin Sebastian Luck, Michael Burke, Franziska Meier, Stefan Schaal, Subramanian Ramamoorthy
Dynamic Movement Primitives (DMPs) are a popular way of extracting such policies through behaviour cloning (BC), but can struggle in the context of insertion tasks.
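For readers unfamiliar with DMPs: a discrete DMP is a critically damped spring-damper system pulled toward a goal, modulated by a learned forcing term. Below is a minimal one-degree-of-freedom sketch (not the authors' implementation); the gain values and RBF parameterisation of the forcing term are common defaults, chosen here for illustration.

```python
import numpy as np

def dmp_rollout(x0, g, weights, n_steps=200, dt=0.01,
                alpha=25.0, beta=6.25, alpha_s=4.0, tau=1.0):
    # One-DOF discrete DMP: spring-damper dynamics toward goal g, plus a
    # forcing term f(s) built from RBF basis functions weighted by `weights`
    # (in behaviour cloning, these weights are fit to a demonstration).
    centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, len(weights)))
    widths = len(weights) ** 1.5 / centers
    x, v, s = x0, 0.0, 1.0  # position, velocity, canonical phase
    traj = [x]
    for _ in range(n_steps):
        psi = np.exp(-widths * (s - centers) ** 2)
        # Forcing term decays with the phase s, so the goal attractor
        # dominates at the end of the movement.
        f = s * (g - x0) * (psi @ weights) / (psi.sum() + 1e-10)
        v += dt / tau * (alpha * (beta * (g - x) - v) + f)
        x += dt / tau * v
        s += dt / tau * (-alpha_s * s)
        traj.append(x)
    return np.array(traj)
```

With zero weights the forcing term vanishes and the rollout simply converges to the goal; a cloned demonstration shapes the path in between. The struggle in insertion arises because such open-loop shape encoding reacts poorly to contact.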
no code implementations • 15 Nov 2019 • Kevin Sebastian Luck, Heni Ben Amor, Roberto Calandra
Key to our approach is the possibility of leveraging previously tested morphologies and behaviors to estimate the performance of new candidate morphologies.
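A natural mechanism for reusing previously tested designs is a regression model over morphology parameters. The abstract does not specify the estimator, so this is only a sketch using plain Gaussian-process regression with an RBF kernel and unit prior variance; `length` and `noise` are assumed hyperparameters.

```python
import numpy as np

def gp_predict(X_tested, y_perf, X_candidates, length=1.0, noise=1e-4):
    # Gaussian-process regression: reuse (morphology, performance) pairs
    # from previously evaluated designs to predict the performance of new
    # candidate morphologies, with an uncertainty estimate.
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))

    K = k(X_tested, X_tested) + noise * np.eye(len(X_tested))
    Ks = k(X_candidates, X_tested)
    alpha = np.linalg.solve(K, y_perf)
    mean = Ks @ alpha
    # Predictive variance under a unit-variance prior.
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The predictive mean and variance can then rank candidate morphologies before committing to an expensive evaluation, in the spirit of Bayesian optimization.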
no code implementations • 15 Nov 2019 • Kevin Sebastian Luck, Mel Vecerik, Simon Stepputtis, Heni Ben Amor, Jonathan Scholz
This work evaluates the use of model-based trajectory optimization methods for exploration in Deep Deterministic Policy Gradient (DDPG) when trained on a latent image embedding.
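As an illustration of model-based trajectory optimization in a latent space, here is a minimal random-shooting planner: sample candidate action sequences, roll them through a (learned) latent dynamics model, and execute the first action of the best sequence. This is a generic sketch, not the paper's specific planner; `latent_model` and `reward_fn` stand in for learned components.

```python
import numpy as np

def random_shooting(latent_model, reward_fn, z0, horizon=5,
                    n_samples=64, action_dim=2, rng=None):
    # Random-shooting trajectory optimization over a latent embedding z:
    # sample action sequences, score each by rolling it through the latent
    # dynamics model, and return the first action of the highest-return one.
    rng = np.random.default_rng() if rng is None else rng
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    returns = np.zeros(n_samples)
    for i in range(n_samples):
        z = z0
        for t in range(horizon):
            z = latent_model(z, actions[i, t])  # predicted next latent state
            returns[i] += reward_fn(z)
    return actions[returns.argmax(), 0]
```

Actions proposed this way can be mixed into the replay buffer or used in place of noise-based exploration for an off-policy learner such as DDPG.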