Search Results for author: Jenny Sheng

Found 4 papers, 2 papers with code

Exploring Text-to-Motion Generation with Human Preference

1 code implementation • 15 Apr 2024 • Jenny Sheng, Matthieu Lin, Andrew Zhao, Kevin Pruvost, Yu-Hui Wen, Yangguang Li, Gao Huang, Yong-Jin Liu

This paper presents an exploration of preference learning in text-to-motion generation.
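
The listing gives only this one-line summary, so the following is a hedged sketch rather than the paper's method: preference learning is commonly set up as a Bradley-Terry pairwise loss over a learned reward model that scores generated motions against their text prompt. MotionRewardModel, its GRU encoder, and the tensor shapes below are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: a generic Bradley-Terry pairwise preference loss, one common
# preference-learning formulation. Model and field names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionRewardModel(nn.Module):
    """Scores a motion sequence conditioned on its text prompt (assumed architecture)."""
    def __init__(self, motion_dim: int, text_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(motion_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden + text_dim, 1)

    def forward(self, motion: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # motion: (B, T, motion_dim), text_emb: (B, text_dim)
        _, h = self.encoder(motion)            # h: (1, B, hidden)
        feat = torch.cat([h.squeeze(0), text_emb], dim=-1)
        return self.head(feat).squeeze(-1)     # (B,) scalar reward per motion

def preference_loss(model, text_emb, motion_preferred, motion_rejected):
    """Push the preferred motion's reward above the rejected motion's reward."""
    r_pos = model(motion_preferred, text_emb)
    r_neg = model(motion_rejected, text_emb)
    return -F.logsigmoid(r_pos - r_neg).mean()
```

A reward model trained this way can then be used to rank or fine-tune a text-to-motion generator; whether the paper uses a reward model, direct preference optimization, or another variant is not stated in this listing.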

Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation

no code implementations • 19 Dec 2023 • Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu

By lifting pre-trained 2D diffusion models into Neural Radiance Fields (NeRFs), text-to-3D generation methods have made great progress.

3D Generation • Text to 3D
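
"Lifting" a pre-trained 2D diffusion model into a NeRF is typically done with a score-distillation style objective (popularized by DreamFusion); whether this paper follows that exact recipe is not stated here. The sketch below shows only the generic update; render_nerf, diffusion_eps, and the weighting are hypothetical placeholders, not this paper's API.

```python
# Hedged sketch of a generic score-distillation (SDS-style) step that lifts a
# frozen 2D diffusion prior into a NeRF. All function names are stand-ins.
import torch

def sds_step(nerf_params, camera, text_emb, diffusion_eps, render_nerf,
             alphas_cumprod, optimizer):
    """Render the NeRF, noise the render, query the frozen diffusion model's
    noise prediction, and backpropagate the residual into the NeRF parameters."""
    image = render_nerf(nerf_params, camera)                 # (1, 3, H, W), differentiable
    t = torch.randint(20, 980, (1,), device=image.device)    # random diffusion timestep
    alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(image)
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise

    with torch.no_grad():                                    # the 2D prior stays frozen
        eps_pred = diffusion_eps(noisy, t, text_emb)

    w = 1.0 - alpha_bar                                      # one common weighting choice
    grad = w * (eps_pred - noise)                            # treated as d(loss)/d(image)
    loss = (grad.detach() * image).sum()                     # surrogate loss with that gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```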

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

no code implementations • 30 Sep 2023 • Zhiyao Sun, Tian Lv, Sheng Ye, Matthieu Gaetan Lin, Jenny Sheng, Yu-Hui Wen, MinJing Yu, Yong-Jin Liu

The generation of stylistic 3D facial animations driven by speech poses a significant challenge as it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion.
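
The summary states the challenge (a many-to-many mapping between speech, style, and facial motion) but not the architecture. Purely as an assumed illustration of how a speech- and style-conditioned diffusion denoiser can be structured, not DiffPoseTalk's actual design:

```python
# Hedged sketch: a generic denoiser for speech- and style-conditioned facial
# motion diffusion. Shapes and module choices are assumptions for illustration.
import torch
import torch.nn as nn

class SpeechStyleDenoiser(nn.Module):
    def __init__(self, motion_dim=64, audio_dim=128, style_dim=32, hidden=256):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim + audio_dim, hidden)
        self.cond = nn.Linear(style_dim + 1, hidden)   # style embedding + timestep
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True),
            num_layers=4,
        )
        self.out_proj = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_motion, audio_feat, style_emb, t):
        # noisy_motion: (B, T, motion_dim), audio_feat: (B, T, audio_dim)
        # style_emb: (B, style_dim), t: (B,) diffusion timestep
        x = self.in_proj(torch.cat([noisy_motion, audio_feat], dim=-1))
        c = self.cond(torch.cat([style_emb, t.float().unsqueeze(-1)], dim=-1))
        x = x + c.unsqueeze(1)                          # broadcast conditioning over time
        x = self.backbone(x)
        return self.out_proj(x)                         # predicted noise (or clean motion)
```

Conditioning on a separate style embedding is one way to keep the many-to-many mapping tractable, since the same speech can then drive different motion styles.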
