Behavioural cloning
11 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Behavioural cloning.
Libraries
Use these libraries to find Behavioural cloning models and implementations
Latest papers with no code
Policy Improvement using Language Feedback Models
We introduce Language Feedback Models (LFMs) that identify desirable behaviour - actions that help achieve tasks specified in the instruction - for imitation learning in instruction following.
OIL-AD: An Anomaly Detection Framework for Sequential Decision Sequences
Our offline learning model is an adaptation of behavioural cloning with a transformer policy network, where we modify the training process to learn a Q function and a state value function from normal trajectories.
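At its core, behavioural cloning treats policy learning as supervised regression from states to demonstrated actions. A minimal sketch of that objective (plain NumPy, a linear policy in place of the paper's transformer network, and synthetic trajectories — all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" trajectories: states and the expert actions taken in them.
states = rng.normal(size=(256, 4))      # 256 states, 4 features each
W_true = rng.normal(size=(4, 2))        # hidden expert mapping (illustration only)
actions = states @ W_true               # 2-dimensional continuous expert actions

# Behavioural cloning: fit the policy by minimising the mean squared error
# between predicted and demonstrated actions via gradient descent.
W = np.zeros((4, 2))
lr = 0.05
for _ in range(500):
    pred = states @ W
    grad = 2 * states.T @ (pred - actions) / len(states)
    W -= lr * grad

mse = float(np.mean((states @ W - actions) ** 2))
print(f"final imitation MSE: {mse:.6f}")
```

OIL-AD builds on this objective but additionally learns a Q function and a state value function from the normal trajectories; that extension is beyond this sketch.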
Robust Imitation Learning for Automated Game Testing
Game development is a long process that involves many stages before a product is ready for the market.
Behavioural Cloning in VizDoom
This paper describes methods for training autonomous agents to play the game "Doom 2" through Imitation Learning (IL) using only pixel data as input.
On the Effectiveness of Retrieval, Alignment, and Replay in Manipulation
Third, a replay phase informs the robot how to interact with the object.
RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks
In this work, we present a Hybrid Hierarchical Learning framework, the Robotic Manipulation Network (ROMAN), to address the challenge of solving multiple complex tasks over long time horizons in robotic manipulation.
Behavioral Cloning via Search in Embedded Demonstration Dataset
Actions from a selected similar situation can be performed by the agent until representations of the agent's current situation and the selected experience diverge in the latent space.
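The retrieve-then-replay loop described above can be sketched as follows (NumPy, with random vectors standing in for learned latent embeddings of game situations and a hand-picked divergence threshold — all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Demonstration dataset embedded in a latent space: each step has a latent
# representation and the action the demonstrator took there. The latents
# form a smooth trajectory, as consecutive situations tend to be similar.
demo_latents = np.cumsum(0.2 * rng.normal(size=(100, 8)), axis=0)
demo_actions = [f"action_{i}" for i in range(100)]

def nearest_demo_step(latent):
    """Retrieve the demonstration step closest to the current latent state."""
    dists = np.linalg.norm(demo_latents - latent, axis=1)
    return int(np.argmin(dists))

threshold = 2.0  # assumed divergence threshold in latent space

# Start in a situation close to demonstration step 10 and replay that
# experience's actions until the agent's representation diverges from it.
current = demo_latents[10] + 0.1 * rng.normal(size=8)
idx = nearest_demo_step(current)
replayed = []
while idx < len(demo_actions):
    if np.linalg.norm(current - demo_latents[idx]) > threshold:
        break  # representations diverged: time to retrieve a new situation
    replayed.append(demo_actions[idx])
    # Environment step, simulated here as small latent drift off the demo.
    current = demo_latents[idx] + 0.3 * rng.normal(size=8)
    idx += 1

print(f"replayed {len(replayed)} actions, starting with {replayed[0]}")
```

The divergence check is what distinguishes this from blind replay: once the agent's situation no longer matches the retrieved experience, it re-queries the demonstration dataset instead of continuing to copy stale actions.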
Quantum Imitation Learning
Despite remarkable successes in solving various complex decision-making tasks, training an imitation learning (IL) algorithm with deep neural networks (DNNs) incurs a high computational burden.
Improving Behavioural Cloning with Positive Unlabeled Learning
Learning control policies offline from pre-recorded datasets is a promising avenue for solving challenging real-world problems.
Model-based trajectory stitching for improved behavioural cloning and its applications
Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining TS with two existing offline learning methodologies reliant on BC, model-based offline planning (MBOP) and policy constraint (TD3+BC).
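Trajectory stitching (TS) generally splices segments of different recorded trajectories together at states that lie close to one another, producing new trajectories for BC to clone. A toy illustration of the splicing step (NumPy, using Euclidean proximity between raw states rather than the paper's learned dynamics model — an assumption made for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two recorded state trajectories (each row is a state). Trajectory B is
# constructed to pass near the middle of trajectory A.
traj_a = np.cumsum(rng.normal(size=(50, 3)), axis=0)
traj_b = traj_a[24] + np.cumsum(rng.normal(size=(50, 3)), axis=0)

# Find the pair of states (i in A, j in B) that are closest to each other.
dists = np.linalg.norm(traj_a[:, None, :] - traj_b[None, :, :], axis=2)
i, j = np.unravel_index(np.argmin(dists), dists.shape)

# Stitch: follow trajectory A up to state i, then continue along B from j.
stitched = np.concatenate([traj_a[: i + 1], traj_b[j:]], axis=0)
print(f"stitched length {len(stitched)}, join gap {dists[i, j]:.3f}")
```

In the actual TS method the join is validated by a learned model rather than raw state distance, and the stitched dataset is then used to retrain a BC policy or to augment MBOP and TD3+BC.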