Search Results for author: Yongkang Cheng

Found 3 papers, 1 paper with code

Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness

no code implementations • 7 Jan 2024 • Sicheng Yang, Zunnan Xu, Haiwei Xue, Yongkang Cheng, Shaoli Huang, Mingming Gong, Zhiyong Wu

To tackle these issues, we introduce FreeTalker, which, to the best of our knowledge, is the first framework for the generation of both spontaneous (e.g., co-speech gestures) and non-spontaneous (e.g., moving around the podium) speaker motions.

Gesture Generation

SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark

no code implementations • 31 Oct 2023 • Zhengdi Yu, Shaoli Huang, Yongkang Cheng, Tolga Birdal

We present SignAvatars, the first large-scale, multi-prompt 3D sign language (SL) motion dataset designed to bridge the communication gap for Deaf and hard-of-hearing individuals.

Sign Language Production • Sign Language Recognition

BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation

1 code implementation • 21 Mar 2023 • Yongkang Cheng, Shaoli Huang, Jifeng Ning, Ying Shan

This paper presents a novel approach for estimating human body shape and pose from monocular images that effectively addresses the challenges of occlusions and depth ambiguity.

3D Human Pose Estimation • Occlusion Handling
