Search Results for author: Zhaohan Wang

Found 2 papers, 0 papers with code

Audio is all in one: speech-driven gesture synthetics using WavLM pre-trained model

no code implementations • 11 Aug 2023 • Fan Zhang, Naye Ji, Fuxing Gao, Siyuan Zhao, Zhaohan Wang, Shunman Li

Firstly, considering that speech audio not only contains acoustic and semantic features but also conveys personality traits, emotions, and more subtle information related to accompanying gestures, we pioneer the adaptation of WavLM, a large-scale pre-trained model, to extract low-level and high-level audio information.

Gesture Generation
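
The abstract describes adapting WavLM, a pre-trained speech model, to extract both low-level (acoustic) and high-level (semantic) audio information. The paper's own code is not available, but a common way to mix information across a pre-trained model's layers is a softmax-weighted sum of its hidden states. The sketch below is a minimal, hypothetical illustration of that idea using random arrays in place of real WavLM hidden states; the layer count, frame count, and feature dimension are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical stand-in for WavLM hidden states: 13 layers (embedding output
# plus 12 transformer layers), T frames, D feature dimensions. Lower layers
# tend to carry acoustic (low-level) detail; upper layers carry more
# semantic (high-level) content.
rng = np.random.default_rng(0)
num_layers, T, D = 13, 50, 768
hidden_states = rng.standard_normal((num_layers, T, D))

def weighted_layer_sum(states, weights):
    """Combine per-layer features with softmax-normalized weights -- a common
    way to blend low- and high-level information from a pre-trained speech
    model. In practice the weights would be learned; here they are fixed."""
    w = np.exp(weights - weights.max())  # numerically stable softmax
    w = w / w.sum()
    return np.tensordot(w, states, axes=1)  # -> (T, D) fused features

weights = np.zeros(num_layers)  # uniform weighting for this illustration
features = weighted_layer_sum(hidden_states, weights)
print(features.shape)  # (50, 768)
```

With zero (uniform) weights this reduces to a plain average over layers; training the weights lets a downstream gesture model pick which layers matter most.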
