Search Results for author: Rituraj Kaushik

Found 5 papers, 5 papers with code

SafeAPT: Safe Simulation-to-Real Robot Learning using Diverse Policies Learned in Simulation

1 code implementation • 27 Jan 2022 • Rituraj Kaushik, Karol Arndt, Ville Kyrki

In this work, we introduce a novel learning algorithm called SafeAPT that leverages a diverse repertoire of policies evolved in simulation and transfers the most promising safe policy to the real robot through episodic interaction.

Bayesian Optimization
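The selection step the abstract describes — preferring the most promising policy among those predicted to be safe — can be sketched as follows. This is an illustrative toy, not SafeAPT itself: the function name, the stand-in reward/safety models, and the threshold are all invented here.

```python
# Toy sketch: pick the best policy from a simulated repertoire, subject
# to a safety estimate. All names and models below are invented stand-ins.

def select_safe_policy(repertoire, reward_model, safety_model, safety_threshold=0.9):
    """Return the highest-predicted-reward policy whose predicted
    safety score meets the threshold, or None if none qualifies."""
    safe = [p for p in repertoire if safety_model(p) >= safety_threshold]
    if not safe:
        return None
    return max(safe, key=reward_model)

# Policies are just parameter tuples in this toy.
repertoire = [(0.1, 0.2), (0.5, 0.5), (0.9, 0.1)]
reward = lambda p: p[0] + p[1]             # stand-in for a learned reward model
safety = lambda p: 1.0 - abs(p[0] - p[1])  # stand-in for a learned safety model

best = select_safe_policy(repertoire, reward, safety)  # -> (0.5, 0.5)
```

In the real setting both models would be refined from episodic interaction with the robot; here they are fixed lambdas so the example stays self-contained.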

Fast Online Adaptation in Robotics through Meta-Learning Embeddings of Simulated Priors

1 code implementation • 10 Mar 2020 • Rituraj Kaushik, Timothée Anne, Jean-Baptiste Mouret

Meta-learning algorithms can accelerate model-based reinforcement learning (MBRL) by finding an initial set of parameters for the dynamical model such that the model can be trained to match the actual dynamics of the system with only a few data points.

Meta-Learning • Model-based Reinforcement Learning
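The general idea in the abstract — meta-training an initialization of the dynamical model so that a few gradient steps suffice on a new system — can be sketched in one dimension. Everything below (the 1-D linear models, the first-order meta-update, the learning rates) is an invented toy, not the paper's algorithm.

```python
# Toy first-order meta-learning sketch: find an initialization theta0
# such that a few gradient steps fit any task drawn from a family of
# linear dynamics y = slope * x. All details here are invented.

def adapt(theta, task_data, lr=0.1, steps=5):
    """A few gradient steps on squared error for the model y = theta * x."""
    for _ in range(steps):
        grad = sum(2 * (theta * x - y) * x for x, y in task_data) / len(task_data)
        theta -= lr * grad
    return theta

def meta_train(tasks, theta0=0.0, meta_lr=0.05, iters=200):
    """First-order meta-update: nudge theta0 toward each task's adapted solution."""
    for _ in range(iters):
        for data in tasks:
            theta0 += meta_lr * (adapt(theta0, data) - theta0)
    return theta0

# Two "systems" with slopes 1.8 and 2.2; the meta-init lands near 2.0,
# from which either task is matched in just a few gradient steps.
tasks = [[(x, 1.8 * x) for x in (1.0, 2.0)],
         [(x, 2.2 * x) for x in (1.0, 2.0)]]
theta0 = meta_train(tasks)
```

The paper works with learned embeddings of simulated priors and full dynamical models; this sketch only illustrates why a good initialization makes adaptation data-efficient.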

Adaptive Prior Selection for Repertoire-based Online Adaptation in Robotics

1 code implementation • 16 Jul 2019 • Rituraj Kaushik, Pierre Desreumaux, Jean-Baptiste Mouret

Repertoire-based learning is a data-efficient adaptation approach based on a two-step process in which (1) a large and diverse set of policies is learned in simulation, and (2) a planning or learning algorithm chooses the most appropriate policy for the current situation (e.g., a damaged robot or a new object).

Meta-Learning • RTE
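Step (2) of the two-step process — choosing the most appropriate policy for the current situation — can be sketched with a tiny repertoire. The policy names, simulated outcomes, and the correction term (standing in for what would be learned from real-robot trials) are all invented for illustration.

```python
# Toy repertoire: policy name -> outcome (x, y displacement) predicted
# in simulation. Names and numbers are invented stand-ins.
repertoire = {
    "crawl": (0.2, 0.0),
    "walk":  (0.5, 0.0),
    "turn":  (0.3, 0.3),
}

def choose_policy(goal, corrections=None):
    """Pick the policy whose (sim outcome + learned correction) is
    closest to the goal; corrections model the sim-to-real gap."""
    corrections = corrections or {}
    def dist(name):
        ox, oy = repertoire[name]
        cx, cy = corrections.get(name, (0.0, 0.0))
        return ((ox + cx - goal[0]) ** 2 + (oy + cy - goal[1]) ** 2) ** 0.5
    return min(repertoire, key=dist)

choose_policy((0.5, 0.0))                         # undamaged robot: "walk"
choose_policy((0.5, 0.0), {"walk": (-0.4, 0.0)})  # "walk" underperforms on the
                                                  # damaged robot, so: "crawl"
```

The paper selects among several repertoires (priors) adaptively; this sketch only shows why situation-dependent selection from a pre-learned set is data-efficient.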

Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards

1 code implementation • 25 Jun 2018 • Rituraj Kaushik, Konstantinos Chatzilygeroudis, Jean-Baptiste Mouret

The most data-efficient algorithms for reinforcement learning in robotics are model-based policy search algorithms, which alternate between learning a dynamical model of the robot and optimizing a policy to maximize the expected return given the model and its uncertainties.

Continuous Control • Efficient Exploration
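With sparse rewards, the expected return alone gives little gradient, which is why a second objective (exploration) helps; the multi-objective angle can be illustrated with a Pareto filter over (expected return, exploration score) pairs. The scoring values below are toy stand-ins, not the paper's actual objectives.

```python
# Toy Pareto filter: keep policy candidates not dominated in BOTH
# objectives by any other candidate. Scores below are invented.

def pareto_front(candidates):
    """Return candidates for which no other candidate is at least as
    good in both objectives and different."""
    def dominated(a):
        return any(b[0] >= a[0] and b[1] >= a[1] and b != a for b in candidates)
    return [c for c in candidates if not dominated(c)]

# (expected return under the model, predicted novelty of visited states)
scored = [(1.0, 0.1), (0.8, 0.5), (0.2, 0.9), (0.1, 0.2)]
front = pareto_front(scored)  # (0.1, 0.2) is dominated by (0.8, 0.5)
```

Keeping the whole front, rather than a single scalarized optimum, preserves candidates that explore well even when their predicted return is low — useful when the reward signal is sparse.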

Black-Box Data-efficient Policy Search for Robotics

1 code implementation • 21 Mar 2017 • Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, Jean-Baptiste Mouret

The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties.

Continuous Control • Reinforcement Learning (RL)
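The alternation the abstract describes — roll out on the real system, learn a dynamical model from the transitions, then optimize the policy against that model — can be sketched end to end in one dimension. Everything below (the 1-D system, the exact linear model fit, random search as the optimizer) is a toy stand-in for the paper's machinery, which uses uncertain models and a black-box optimizer.

```python
# Toy model-based loop: episode -> fit model -> policy search, repeated.
# All components here are invented stand-ins.
import random

real_step = lambda s, a: 0.9 * s + a  # "unknown" true dynamics
reward = lambda s: -abs(s - 1.0)      # objective: reach and hold s = 1

def rollout(step, k, horizon=20):
    """Run the 1-parameter policy a = k * (1 - s); return (return, transitions)."""
    s, total, data = 0.0, 0.0, []
    for _ in range(horizon):
        a = k * (1.0 - s)
        s2 = step(s, a)
        data.append((s, a, s2))
        s, total = s2, total + reward(s2)
    return total, data

def fit_model(data):
    """Recover s' = c1*s + c2*a from two noise-free transitions (Cramer's rule)."""
    (s0, a0, t0), (s1, a1, t1) = data[0], data[-1]
    det = s0 * a1 - s1 * a0
    c1 = (t0 * a1 - t1 * a0) / det
    c2 = (s0 * t1 - s1 * t0) / det
    return lambda s, a: c1 * s + c2 * a

random.seed(0)
k = random.uniform(0.5, 1.9)           # initial policy parameter
for _ in range(3):                     # a few episodes of the alternation
    _, data = rollout(real_step, k)    # interact with the "real" system
    model = fit_model(data)            # model-learning step
    candidates = [k] + [random.uniform(0.5, 1.9) for _ in range(200)]
    k = max(candidates, key=lambda c: rollout(model, c)[0])  # policy search
best_return, _ = rollout(real_step, k)
```

Because the toy system is noise-free and linear, the fitted model is exact, so optimizing against it is as good as optimizing on the real system — the data-efficiency argument in the abstract hinges on the model capturing (and quantifying uncertainty about) the dynamics from very few episodes.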
