Imitation Learning
520 papers with code • 0 benchmarks • 18 datasets
Imitation Learning is a framework for learning a behavior policy from demonstrations. Usually, demonstrations are presented in the form of state-action trajectories, with each pair indicating the action to take at the state being visited. In order to learn the behavior policy, the demonstrated actions are usually utilized in two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state, and then learns a generalized mapping from states to actions in a supervised manner. Another way, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions, and aims at finding a reward/cost function under which the demonstrated decisions are optimal.
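To make the Behavior Cloning reduction concrete, here is a minimal sketch in PyTorch, assuming a discrete action space. The state dimension, network size, and the randomly generated "demonstrations" are placeholder assumptions for illustration, not part of any specific benchmark or library.

```python
# Minimal Behavior Cloning sketch: treat each expert action as the
# target label for its state and fit the mapping with supervised learning.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 8, 4  # hypothetical dimensions

# Placeholder "expert demonstrations"; in practice these come from
# recorded state-action trajectories.
states = torch.randn(1000, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (1000,))

# A small policy network mapping states to action logits.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):
    logits = policy(states)
    loss = loss_fn(logits, actions)  # expert action = target label per state
    opt.zero_grad()
    loss.backward()
    opt.step()

# Acting at deployment: pick the most likely expert action for a new state.
with torch.no_grad():
    action = policy(torch.randn(1, STATE_DIM)).argmax(dim=-1)
```

IRL, by contrast, would use the same trajectories not as labels but as evidence for a reward function under which the expert's decisions are optimal, and then derive a policy from that reward.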
Finally, a newer methodology, Inverse Q-Learning, aims at directly learning Q-functions from expert data; the learned Q-function implicitly represents a reward, and the corresponding optimal policy is given as a Boltzmann distribution over Q-values, as in soft Q-learning.
Source: Learning to Imitate
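As a sketch of the Boltzmann policy mentioned above: given a learned Q-function over a discrete action set, the policy puts probability proportional to exp(Q(s, a)/alpha) on each action, where the temperature alpha is an illustrative parameter (not something fixed by the source).

```python
# Reading a Boltzmann (softmax) policy off a learned Q-function,
# in the style of soft Q-learning: pi(a|s) ~ exp(Q(s,a)/alpha).
import numpy as np

def boltzmann_policy(q_values: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Return action probabilities proportional to exp(Q(s,a)/alpha)."""
    z = q_values / alpha
    z -= z.max()            # subtract max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

# Example: hypothetical Q-values for three actions in some state s.
q_s = np.array([1.0, 2.5, 0.3])
pi_s = boltzmann_policy(q_s, alpha=0.5)
action = np.random.choice(len(pi_s), p=pi_s)
```

Lower temperatures concentrate probability on the highest-valued action, while higher temperatures approach a uniform policy.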
Benchmarks
These leaderboards are used to track progress in Imitation Learning
Libraries
Use these libraries to find Imitation Learning models and implementations
Datasets
Latest papers
JUICER: Data-Efficient Imitation Learning for Robotic Assembly
While learning from demonstrations is powerful for acquiring visuomotor policies, high-performance imitation without large demonstration datasets remains challenging for tasks requiring precise, long-horizon manipulation.
Human-compatible driving partners through data-regularized self-play reinforcement learning
Incorporating realistic human agents is essential for scalable training and evaluation of autonomous driving systems in simulation.
Uncertainty-Aware Deployment of Pre-trained Language-Conditioned Imitation Learning Policies
Large-scale robotic policies trained on data from diverse tasks and robotic platforms hold great promise for enabling general-purpose robots; however, reliable generalization to new environment conditions remains a major challenge.
Imitating Cost-Constrained Behaviors in Reinforcement Learning
Generally speaking, imitation learning is designed to learn either a reward (or preference) model or the behavioral policy directly by observing the behavior of an expert.
Self-Improvement for Neural Combinatorial Optimization: Sample without Replacement, but Improvement
Current methods for end-to-end constructive neural combinatorial optimization usually train a policy using behavior cloning from expert solutions or policy gradient methods from reinforcement learning.
Rethinking Adversarial Inverse Reinforcement Learning: From the Angles of Policy Imitation and Transferable Reward Recovery
A third criticism concerns an unsatisfactory proof from the perspective of potential equilibrium.
3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations
Imitation learning provides an efficient way to teach robots dexterous skills; however, learning complex skills robustly and generalizably usually requires large amounts of human demonstrations.
Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and Benchmarking
The imitation learning field requires expert data to train agents on a task.
HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent Pathfinding
With a simple training scheme and implementation, HiMAP demonstrates competitive success rates and scalability among imitation-learning-only MAPF methods, showing the potential of imitation learning equipped with inference techniques.
Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions
This work offers a hands-on reference for the research progress in deep generative models for offline policy learning, and aims to inspire improved DGM-based offline RL or IL algorithms.