no code implementations • 20 Apr 2024 • Xi Wang, Yichen Peng, Heng Fang, Haoran Xie, Xi Yang, Chuntao Li
Achieving this requires effectively decoupling the key attributes within the input image data in order to obtain accurate representations.
2 code implementations • 31 Dec 2023 • Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, Michael J. Black
We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hands, and global movements.
Ranked #1 on 3D Face Animation on BEAT2
1 code implementation • 14 Feb 2023 • Yichen Peng, Chunqi Zhao, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata
We then introduce Stochastic Region Abstraction (SRA), a data-augmentation approach that improves the robustness of SGLDM in handling sketch inputs with arbitrary levels of abstraction.
2 code implementations • 10 Mar 2022 • Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, Bo Zheng
Achieving realistic, vivid, and human-like synthesized conversational gestures conditioned on multi-modal data remains an unsolved problem due to the lack of available datasets, models, and standard evaluation metrics.
Ranked #1 on Gesture Generation on BEAT
1 code implementation • 26 Apr 2021 • Zhengyu Huang, Yichen Peng, Tomohiro Hibino, Chunqi Zhao, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata
In the local-guidance stage, we synthesize detailed portrait images from user-drawn contour lines with a deep generative model and use the synthesized results as detailed drawing guidance.