Search Results for author: Tenglong Ao

Found 4 papers, 2 papers with code

MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations

no code implementations • 16 Oct 2023 • Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu

In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations.

In-Context Learning • Model-based Reinforcement Learning
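
The "scalable discrete representations" in MoConVQ are vector-quantized motion latents. Below is a minimal sketch of the nearest-neighbor codebook lookup at the heart of such a representation, assuming a PyTorch setting; the shapes, names, and toy data are illustrative, not the paper's actual code:

```python
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor):
    """Map each latent in z (N, D) to its nearest entry in codebook (K, D)."""
    dists = torch.cdist(z, codebook)       # (N, K) pairwise distances
    indices = dists.argmin(dim=1)          # one discrete token per latent
    z_q = codebook[indices]                # quantized continuous vectors
    # Straight-through estimator: gradients bypass the non-differentiable lookup
    z_q = z + (z_q - z).detach()
    return z_q, indices

# Toy usage: 8 motion latents of dim 16, codebook with 64 entries
z = torch.randn(8, 16, requires_grad=True)
codebook = torch.randn(64, 16)
z_q, idx = quantize(z, codebook)
print(idx.shape, z_q.shape)  # torch.Size([8]) torch.Size([8, 16])
```

The discrete indices make the motion space amenable to sequence models (e.g., the in-context learning tagged above), while the straight-through estimator keeps the encoder trainable end to end.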

Robust Dancer: Long-term 3D Dance Synthesis Using Unpaired Data

1 code implementation • 29 Mar 2023 • Bin Feng, Tenglong Ao, Zequn Liu, Wei Ju, Libin Liu, Ming Zhang

How to automatically synthesize natural-looking dance movements based on a piece of music is an increasingly popular yet challenging task.

Disentanglement

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents

no code implementations • 26 Mar 2023 • Tenglong Ao, Zeyi Zhang, Libin Liu

We leverage the power of the large-scale Contrastive Language-Image Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video.

Contrastive Learning • Gesture Generation
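
As context for the CLIP-guided style mechanism, here is a hedged sketch of extracting a text-based style embedding with a public CLIP checkpoint via Hugging Face transformers. This shows only the generic CLIP text encoder, not GestureDiffuCLIP's actual multi-modal style extractor; the prompt and checkpoint are placeholders:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def text_style_embedding(prompt: str) -> torch.Tensor:
    """Encode a style prompt into CLIP's joint text-image embedding space."""
    inputs = processor(text=[prompt], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    # L2-normalize so styles can be compared by cosine similarity
    return feats / feats.norm(dim=-1, keepdim=True)

style = text_style_embedding("an energetic, cheerful speaking style")
print(style.shape)  # torch.Size([1, 512]) for the ViT-B/32 checkpoint
```

Because CLIP embeds text and images (and, frame by frame, video) into one shared space, a single conditioning pathway can in principle accept any of the modalities the abstract lists.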
