Search Results for author: Daniele Reda

Found 8 papers, 2 papers with code

Flexible Motion In-betweening with Diffusion Models

no code implementations • 17 May 2024 • Setareh Cohan, Guy Tevet, Daniele Reda, Xue Bin Peng, Michiel Van de Panne

To this end, we propose Conditional Motion Diffusion In-betweening (CondMDI) which allows for arbitrary dense-or-sparse keyframe placement and partial keyframe constraints while generating high-quality motions that are diverse and coherent with the given keyframes.

Physics-based Motion Retargeting from Sparse Inputs

no code implementations • 4 Jul 2023 • Daniele Reda, Jungdam Won, Yuting Ye, Michiel Van de Panne, Alexander Winkler

We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.

motion retargeting

Learning to Brachiate via Simplified Model Imitation

1 code implementation • 8 May 2022 • Daniele Reda, Hung Yu Ling, Michiel Van de Panne

Key to our method is the use of a simplified model, a point mass with a virtual arm, for which we first learn a policy that can brachiate across handhold sequences with a prescribed order.

Humanoid Control · Motion Synthesis +1

Evaluating Vision Transformer Methods for Deep Reinforcement Learning from Pixels

no code implementations • 11 Apr 2022 • Tianxin Tao, Daniele Reda, Michiel Van de Panne

Vision Transformers (ViT) have recently demonstrated the significant potential of transformer architectures for computer vision.

Contrastive Learning · reinforcement-learning +1

Urban Driving with Conditional Imitation Learning

no code implementations • 30 Nov 2019 • Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall

As our main contribution, we present an end-to-end conditional imitation learning approach, combining both lateral and longitudinal control on a real vehicle for following urban routes with simple traffic.

Autonomous Driving · Imitation Learning +1
