no code implementations • 10 May 2024 • Kebing Xue, Hyewon Seo
To address this issue, we propose a Shape-conditioned Motion Diffusion model (SMD), which enables the generation of motion sequences directly in mesh format, conditioned on a specified target mesh.
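The abstract describes motion generation with a conditional diffusion model. As a rough illustration of the generic mechanism (not the SMD architecture itself — the function names, the DDPM schedule, and the placeholder `eps_model` are all assumptions for this sketch), one reverse denoising step conditioned on a shape embedding could look like:

```python
import numpy as np

def ddpm_step(x_t, t, cond, eps_model, betas):
    """One DDPM reverse step: `eps_model(x_t, t, cond)` predicts the noise,
    where `cond` stands in for the target-mesh conditioning signal.
    Illustrative only; the paper's actual model is a learned network."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps = eps_model(x_t, t, cond)                      # predicted noise
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:                                          # add noise except at t = 0
        mean = mean + np.sqrt(betas[t]) * np.random.randn(*x_t.shape)
    return mean
```

Iterating this step from pure noise down to t = 0, with the mesh embedding held fixed, yields a sample conditioned on that shape.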
no code implementations • 20 Mar 2024 • Diwei Wang, Kun Yuan, Candice Muller, Frédéric Blanc, Nicolas Padoy, Hyewon Seo
Based on a large-scale pre-trained Vision Language Model (VLM), our model learns and improves visual, textual, and numerical representations of patient gait videos through collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters.
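Aligning representations across modalities with a pre-trained VLM is commonly done with a CLIP-style similarity objective. The following is a minimal sketch of that generic mechanism, assuming nothing about the paper's actual loss (function name and temperature value are illustrative):

```python
import numpy as np

def contrastive_logits(video_emb, text_emb, temperature=0.07):
    """Cosine-similarity logits between N video and N text embeddings,
    as used in CLIP-style alignment. Matched pairs sit on the diagonal;
    cross-entropy over rows/columns pulls them together."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return (v @ t.T) / temperature
```

A third embedding stream (numerical gait parameters) could be aligned the same way, pairwise against the other two.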
1 code implementation • 29 Mar 2023 • Kaifeng Zou, Sylvain Faisan, Boyang Yu, Sébastien Valette, Hyewon Seo
In this paper, we introduce a generative framework that produces 3D facial expression sequences (i.e., 4D faces) and can be conditioned on different inputs to animate an arbitrary 3D face mesh.
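One simple way to animate an arbitrary mesh from a generated sequence is to apply per-frame, per-vertex displacements to the neutral geometry. A minimal sketch under that assumption (the paper's decoder may operate on a different representation):

```python
import numpy as np

def animate_mesh(neutral_verts, offset_seq):
    """Broadcast a generated displacement sequence onto a neutral mesh.
    neutral_verts: (V, 3) vertices; offset_seq: (T, V, 3) displacements.
    Returns (T, V, 3): one deformed copy of the mesh per frame."""
    neutral_verts = np.asarray(neutral_verts)
    offset_seq = np.asarray(offset_seq)
    return neutral_verts[None, :, :] + offset_seq
```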
no code implementations • 26 Nov 2021 • Hyewon Seo, Kaifeng Zou, Frederic Cordier
Our network also learns to predict the variation of skin dynamics across different individuals with varying body shapes.
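Predicting skin dynamics that vary with body shape amounts to regressing per-vertex offsets from both a motion signal and shape parameters. The toy linear model below only illustrates that input/output structure (all names and the linear form are assumptions; the paper uses a learned network):

```python
import numpy as np

def dynamic_skin(static_verts, pose_feat, shape_param, W_pose, W_shape):
    """Per-vertex dynamic offsets from pose features, modulated by body
    shape. static_verts: (V, 3); W_pose: (3V, P); W_shape: (3V, S).
    Returns the displaced (V, 3) vertices for one frame."""
    offsets = (W_pose @ pose_feat + W_shape @ shape_param).reshape(-1, 3)
    return static_verts + offsets
```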