no code implementations • 16 Oct 2023 • Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu
In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations.
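The "scalable discrete representations" here refer to quantizing continuous motion features against a learned codebook, so each motion segment becomes a discrete token. The snippet below is a minimal sketch of that generic vector-quantization step using NumPy with random stand-in data; the function and array names are illustrative, not MoConVQ's actual implementation.

```python
import numpy as np

def quantize(features, codebook):
    """Map each continuous feature vector to its nearest codebook entry.

    This is the generic vector-quantization step behind discrete motion
    representations: an illustrative sketch, not the paper's code.
    """
    # Pairwise squared distances between features (N, D) and codes (K, D).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = d2.argmin(axis=1)  # one discrete token index per feature
    return codebook[tokens], tokens

# Toy example: 4 motion-frame features quantized against an 8-entry codebook.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))
codes = rng.normal(size=(8, 16))
quantized, tokens = quantize(feats, codes)
```

Downstream models can then operate on the integer `tokens` sequence rather than raw continuous features, which is what makes the representation scalable.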
1 code implementation • 29 Mar 2023 • Bin Feng, Tenglong Ao, Zequn Liu, Wei Ju, Libin Liu, Ming Zhang
How to automatically synthesize natural-looking dance movements from a piece of music is an increasingly popular yet challenging task.
no code implementations • 26 Mar 2023 • Tenglong Ao, Zeyi Zhang, Libin Liu
We leverage the power of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video.
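Because CLIP maps text and images into a shared embedding space, a style prompt from any supported modality can be reduced to a single embedding vector and fused with others. The sketch below illustrates that idea with random stand-in vectors in place of real CLIP encoder outputs; the `blend_styles` weighted-average fusion is a hypothetical choice for illustration, not necessarily the paper's mechanism.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, the standard way to compare CLIP-style embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def blend_styles(embeddings, weights):
    """Fuse per-modality style embeddings into one style code via a
    weighted average (an illustrative fusion scheme, not the paper's)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights form a convex combination
    return (w[:, None] * np.stack(embeddings)).sum(axis=0)

# Stand-ins for embeddings that would come from CLIP's text/image encoders.
rng = np.random.default_rng(1)
text_style = rng.normal(size=512)   # e.g. from a text prompt
video_style = rng.normal(size=512)  # e.g. pooled over video frames

style = blend_styles([text_style, video_style], [0.7, 0.3])
```

The resulting `style` vector could then condition a gesture generator, which is the role the extracted style representation plays in the described pipeline.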
1 code implementation • 4 Oct 2022 • Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, Libin Liu
We present a novel co-speech gesture synthesis method that achieves convincing results in both rhythm and semantics.
Ranked #2 on Gesture Generation on the TED Gesture Dataset