Imitation Learning

509 papers with code • 0 benchmarks • 18 datasets

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for the corresponding state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
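As a rough illustration, Behavior Cloning reduces the problem to supervised regression. The sketch below is a minimal example assuming PyTorch, with a synthetic linear "expert" standing in for real demonstrations; the names and dimensions are arbitrary.

```python
# Minimal behavior cloning sketch: supervised regression from states to actions.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2

# Placeholder "demonstrations": random states labeled by a fixed linear expert.
# In practice these (state, action) pairs come from recorded expert trajectories.
expert_weights = torch.randn(state_dim, action_dim)
states = torch.randn(1024, state_dim)
actions = states @ expert_weights

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pred = policy(states)                          # predicted actions
    loss = nn.functional.mse_loss(pred, actions)   # imitate the expert's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```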

Finally, a newer approach, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy can be expressed as a Boltzmann distribution, similar to soft Q-learning.
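As a minimal sketch of that last step, assuming the Q-values for a single state have already been learned (a random placeholder here) and a temperature of 1.0, the Boltzmann policy weights each action by exp(Q(s, a) / temperature):

```python
# Boltzmann (softmax) policy derived from a Q-function.
import torch

num_actions = 4
temperature = 1.0

q_values = torch.randn(num_actions)                    # stand-in for learned Q(s, ·)
policy = torch.softmax(q_values / temperature, dim=0)  # action probabilities ∝ exp(Q / T)
action = torch.multinomial(policy, num_samples=1).item()
```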

Source: Learning to Imitate

Most implemented papers

ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst

aidriver/ChauffeurNet 7 Dec 2018

Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle.

Proof Artifact Co-training for Theorem Proving with Language Models

jesse-michael-han/lean-step-public ICLR 2022

Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built.

A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

zhejz/carla-roach 2 Nov 2010

Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning.

The Arcade Learning Environment: An Evaluation Platform for General Agents

mgbellemare/Arcade-Learning-Environment 19 Jul 2012

We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning.

A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models

justinjfu/inverse_rl 11 Nov 2016

In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator.

One-Shot Visual Imitation Learning via Meta-Learning

tianheyu927/mil 14 Sep 2017

In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration.

Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation

h2r/ImitateLearning-Movo 12 Oct 2017

Imitation learning is a powerful paradigm for robot skill acquisition.

Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation

ShaniGam/RL-GAN ICLR 2019

Despite the remarkable success of Deep RL in learning control policies from raw pixels, the resulting models do not generalize.

Bipedal Walking Robot using Deep Deterministic Policy Gradient

nav74neet/rl4biped 16 Jul 2018

The control systems community has started to show interest in several machine learning algorithms from sub-domains such as supervised learning, imitation learning and reinforcement learning to achieve autonomous control and intelligent decision making.

Sample-Efficient Imitation Learning via Generative Adversarial Nets

lionelblonde/sam-tf 6 Sep 2018

GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs.
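As a rough sketch of the adversarial idea GAIL builds on (not the paper's exact algorithm): a discriminator is trained to separate expert (state, action) pairs from pairs generated by the current policy, and the policy is then rewarded for fooling it. Toy tensors stand in for rollouts below, and the policy optimization step (e.g. with TRPO/PPO) is omitted.

```python
# Minimal sketch of adversarial imitation: discriminator update + surrogate reward.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2

# Discriminator scores (state, action) pairs: closer to 1 for expert-like, 0 for policy-like.
disc = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

expert_sa = torch.randn(256, state_dim + action_dim)   # placeholder expert transitions
policy_sa = torch.randn(256, state_dim + action_dim)   # placeholder policy rollouts

# One discriminator update: label expert pairs 1, policy pairs 0.
logits = torch.cat([disc(expert_sa), disc(policy_sa)])
labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
loss = bce(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()

# Surrogate imitation reward for the policy, e.g. r(s, a) = -log(1 - D(s, a)).
with torch.no_grad():
    imitation_reward = -torch.log(1 - torch.sigmoid(disc(policy_sa)) + 1e-8)
```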