Imitation Learning

509 papers with code • 0 benchmarks • 18 datasets

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
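
The BC formulation reduces to ordinary supervised learning. Below is a minimal sketch in PyTorch, assuming discrete actions; the dimensions, network size, and the `expert_states` / `expert_actions` tensors are illustrative placeholders, not data from any particular benchmark.

```python
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4  # assumed sizes for illustration

# A small MLP policy that maps states to action logits.
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Random stand-ins for expert state-action pairs.
expert_states = torch.randn(1024, state_dim)
expert_actions = torch.randint(0, n_actions, (1024,))

for epoch in range(100):
    logits = policy(expert_states)
    # BC treats the demonstrated action as the supervised target label.
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```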

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards; under the learned Q-function, the optimal policy is given as a Boltzmann distribution, similar to soft Q-learning.
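
The sketch below shows how a Q-function induces such a Boltzmann (softmax) policy. The `temperature` parameter and the example Q-values are illustrative assumptions; methods in this family recover the Q-function itself from expert data rather than from environment reward.

```python
import torch

def boltzmann_policy(q_values: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """pi(a|s) proportional to exp(Q(s, a) / temperature)."""
    return torch.softmax(q_values / temperature, dim=-1)

q_values = torch.tensor([1.0, 2.0, 0.5])  # Q(s, .) for a single state
probs = boltzmann_policy(q_values)         # distribution over actions
action = torch.multinomial(probs, 1)       # sample an action from it
```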

Source: Learning to Imitate

Latest papers with no code

Uncertainty-Aware Deployment of Pre-trained Language-Conditioned Imitation Learning Policies

no code yet • 27 Mar 2024

Large-scale robotic policies trained on data from diverse tasks and robotic platforms hold great promise for enabling general-purpose robots; however, reliable generalization to new environment conditions remains a major challenge.

Imitating Cost-Constrained Behaviors in Reinforcement Learning

no code yet • 26 Mar 2024

Generally speaking, imitation learning is designed to learn either the reward (or preference) model or directly the behavioral policy by observing the behavior of an expert.

Grounding Language Plans in Demonstrations Through Counterfactual Perturbations

no code yet • 25 Mar 2024

Grounding the common-sense reasoning of Large Language Models in physical domains remains a pivotal yet unsolved problem for embodied AI.

Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination

no code yet • 25 Mar 2024

In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution and sample dynamic obstacles from it, so the generated training data can be used to learn a motion planner to navigate in dynamic environments.

Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling

no code yet • 24 Mar 2024

In this work, we present a framework called IRL (Interpretable Reinforcement Learning) to address the issue of interpretability of DRL scheduling.

IBCB: Efficient Inverse Batched Contextual Bandit for Behavioral Evolution History

no code yet • 24 Mar 2024

This poses a new challenge for existing imitation learning approaches that can only utilize data from experienced experts.

Automated Feature Selection for Inverse Reinforcement Learning

no code yet • 22 Mar 2024

Inverse reinforcement learning (IRL) is an imitation learning approach to learning reward functions from expert demonstrations.
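
As a generic illustration of the IRL idea (not this paper's method), a classic linear formulation updates reward weights so that expected features under the learned policy match the expert's. The `policy_feature_expectations` callable and all numbers below are hypothetical placeholders; a real implementation needs an RL or soft value-iteration inner loop there.

```python
import numpy as np

def irl_step(w, expert_features, policy_feature_expectations, lr=0.1):
    # Reward is assumed linear in features: r(s) = w . phi(s).
    mu_pi = policy_feature_expectations(w)  # placeholder inner loop
    grad = expert_features - mu_pi          # feature-matching gradient
    return w + lr * grad

phi_dim = 5
w = np.zeros(phi_dim)
mu_expert = np.array([0.8, 0.1, 0.0, 0.1, 0.0])  # illustrative only
w = irl_step(w, mu_expert, lambda w: np.full(phi_dim, 0.2))
```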

Self-Improvement for Neural Combinatorial Optimization: Sample without Replacement, but Improvement

no code yet • 22 Mar 2024

Current methods for end-to-end constructive neural combinatorial optimization usually train a policy using behavior cloning from expert solutions or policy gradient methods from reinforcement learning.

Information-Theoretic Distillation for Reference-less Summarization

no code yet • 20 Mar 2024

The current winning recipe for automatic summarization is using proprietary large-scale language models (LLMs) such as ChatGPT as is, or imitation learning from them as teacher models.

Augmented Reality Demonstrations for Scalable Robot Imitation Learning

no code yet • 20 Mar 2024

Our framework facilitates scalable and diverse demonstration collection for real-world tasks.