no code implementations • 28 Mar 2024 • Norman Di Palo, Edward Johns
We show that off-the-shelf text-based Transformers, with no additional training, can perform few-shot in-context visual imitation learning, mapping visual observations to action sequences that emulate the demonstrator's behaviour.
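A minimal sketch of what "in-context visual imitation with a text Transformer" could look like in practice: demonstrations are serialised as (observation, action) token strings in a prompt, and a new observation is appended so the model completes the missing action. All names and formats here are illustrative assumptions, not the paper's actual tokenisation; observations are assumed to be pre-extracted 2D keypoints.

```python
# Hypothetical sketch: serialising demonstrations into a text prompt so an
# off-the-shelf text Transformer can perform in-context imitation.
# The encoding scheme below is an assumption for illustration only.

def encode(vector):
    """Render a numeric vector as a compact token string, e.g. '12,-3'."""
    return ",".join(str(int(round(v))) for v in vector)

def build_prompt(demos, new_observation):
    """Turn (observation, action) demo pairs plus a new observation into a
    single prompt; the Transformer is asked to complete the missing action."""
    lines = [f"obs: {encode(obs)} -> act: {encode(act)}" for obs, act in demos]
    lines.append(f"obs: {encode(new_observation)} -> act:")
    return "\n".join(lines)

demos = [([10, 20], [1, 0]), ([30, 40], [0, 1])]
print(build_prompt(demos, [50, 60]))
```

The key design point is that no weights are updated: the demonstrations live entirely in the prompt, so adding a new task only means building a new prompt.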
no code implementations • 20 Feb 2024 • Norman Di Palo, Edward Johns
We propose DINOBot, a novel imitation learning framework for robot manipulation, which leverages the image-level and pixel-level capabilities of features extracted from Vision Transformers trained with DINO.
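The pixel-level capability mentioned above can be illustrated with a small sketch (not the DINOBot implementation): given per-patch descriptors such as those a DINO-trained Vision Transformer produces, correspondences between two images are found by nearest-neighbour matching under cosine similarity. Random vectors stand in for real DINO features here.

```python
import numpy as np

# Illustrative sketch: pixel/patch-level correspondence via cosine similarity
# of patch descriptors. Random features stand in for real DINO features.

def match_patches(feats_a, feats_b):
    """For each patch descriptor in feats_a (N, D), return the index of the
    most similar descriptor in feats_b (M, D) under cosine similarity."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T  # (N, M) matrix of cosine similarities
    return sim.argmax(axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 64))      # 16 patches, 64-dim descriptors
matches = match_patches(feats, feats)  # sanity check: match image to itself
print(matches)
```

Matching an image against itself recovers the identity assignment, which is a quick sanity check before matching across a demonstration image and a live image.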
no code implementations • 19 Dec 2023 • Norman Di Palo, Edward Johns
And third, a replay phase, which informs the robot how to interact with the object.
no code implementations • 17 Oct 2023 • Teyun Kwon, Norman Di Palo, Edward Johns
Large Language Models (LLMs) have recently shown promise as high-level planners for robots when given access to a selection of low-level skills.
no code implementations • 18 Jul 2023 • Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier, Nicolas Heess, Martin Riedmiller
Language Models and Vision Language Models have recently demonstrated unprecedented capabilities in understanding human intentions, reasoning, scene understanding, and planning-like behaviour in text form, among many others.
no code implementations • 6 Apr 2022 • Eugene Valassakis, Georgios Papagiannis, Norman Di Palo, Edward Johns
We present DOME, a novel method for one-shot imitation learning, where a task can be learned from just a single demonstration and then be deployed immediately, without any further data collection or training.
no code implementations • 14 Nov 2021 • Norman Di Palo, Edward Johns
In this work, we introduce a novel method to learn everyday-like multi-stage tasks from a single human demonstration, without requiring any prior object knowledge.
no code implementations • 24 May 2021 • Eugene Valassakis, Norman Di Palo, Edward Johns
In this paper, we study the problem of zero-shot sim-to-real when the task requires both highly precise control with sub-millimetre error tolerance, and wide task space generalisation.
no code implementations • 18 Nov 2020 • Norman Di Palo, Edward Johns
We empirically demonstrate that this method improves performance on a set of manipulation tasks compared to passive Imitation Learning, by gathering more informative demonstrations and by minimising state-distribution shift at test time.
no code implementations • NeurIPS 2019 • Rinu Boney, Norman Di Palo, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, Harri Valpola
Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning.
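A minimal sketch of trajectory optimization with a learned model, in the random-shooting style: sample candidate action sequences, roll each out through the model, and keep the sequence with the lowest predicted cumulative cost. A simple point-mass dynamics function stands in for the learned model; the cost and sampling ranges are illustrative assumptions.

```python
import numpy as np

def model(state, action):
    """Stand-in for a learned dynamics model: next_state = f(state, action)."""
    return state + action

def cost(state, goal):
    """Squared distance to the goal state."""
    return np.sum((state - goal) ** 2)

def plan(state, goal, horizon=5, n_samples=256, rng=None):
    """Random-shooting planner: return the best action sequence found."""
    rng = rng or np.random.default_rng(0)
    best_seq, best_cost = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s, total = state.copy(), 0.0
        for a in seq:
            s = model(s, a)          # roll the sequence through the model
            total += cost(s, goal)   # accumulate predicted cost
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq

actions = plan(np.zeros(2), np.array([2.0, 2.0]))
```

In practice this inner loop runs inside model-predictive control: only the first action of the best sequence is executed, then the planner is called again from the new state, which is exactly where model inaccuracies force frequent re-planning.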
no code implementations • 10 Dec 2018 • Norman Di Palo, Harri Valpola
Model-based predictions of future trajectories of a dynamical system often suffer from inaccuracies, forcing model-based control algorithms to re-plan frequently, which makes them computationally expensive, suboptimal, and unreliable.