Search Results for author: Anna Deichler

Found 3 papers, 1 paper with code

Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis

no code implementations · 30 Apr 2024 · Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, Gustav Eje Henter, Simon Alexanderson

Specifically, we use unimodal synthesis models trained on large datasets to create multimodal (but synthetic) parallel training data, and then pre-train a joint synthesis model on that material.
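The snippet above describes generating synthetic parallel speech-and-gesture data from unimodal models and pre-training a joint model on it. Below is a minimal sketch of that data-generation step, assuming hypothetical stand-in classes (StubTTS, StubGestureModel) rather than the authors' actual models or code.

```python
# Minimal sketch of the "fake it to make it" idea: use unimodal
# synthesizers (stand-in stubs here) to turn a text-only corpus into
# synthetic parallel speech-and-gesture data, which a joint model would
# then be pre-trained on. All class and function names are hypothetical.

import numpy as np


class StubTTS:
    """Stand-in for a unimodal text-to-speech model."""

    def synthesize(self, text: str) -> np.ndarray:
        # Return fake acoustic features (e.g. mel frames) for illustration.
        return np.random.randn(len(text) * 4, 80)


class StubGestureModel:
    """Stand-in for a unimodal speech-to-gesture model."""

    def synthesize(self, audio_features: np.ndarray) -> np.ndarray:
        # Return fake pose frames aligned one-to-one with the audio frames.
        return np.random.randn(audio_features.shape[0], 45)


def build_synthetic_parallel_corpus(texts, tts, gesture_model):
    """Create synthetic (text, audio, gesture) triples from text alone."""
    corpus = []
    for text in texts:
        audio = tts.synthesize(text)
        gestures = gesture_model.synthesize(audio)
        corpus.append({"text": text, "audio": audio, "gesture": gestures})
    return corpus


if __name__ == "__main__":
    texts = ["hello there", "nice to meet you"]
    corpus = build_synthetic_parallel_corpus(texts, StubTTS(), StubGestureModel())
    # A joint speech-and-gesture model would be pre-trained on `corpus`,
    # then fine-tuned on the smaller real multimodal dataset.
    print(len(corpus), corpus[0]["audio"].shape, corpus[0]["gesture"].shape)
```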

Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation

no code implementations · 11 Sep 2023 · Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow

The output of the CSMP module is used as a conditioning signal in the diffusion-based gesture synthesis model in order to achieve semantically-aware co-speech gesture generation.

Gesture Generation · Motion Synthesis
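The abstract snippet above describes feeding a joint text-and-audio representation (the CSMP output) as a conditioning signal into a diffusion-based gesture model. The sketch below illustrates that conditioning pattern in generic PyTorch; the module, dimensions, and names are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of conditioning a diffusion-based gesture denoiser on a
# per-frame joint text-and-audio embedding (standing in for CSMP output).
# Dimensions and module structure are hypothetical.

import torch
import torch.nn as nn


class ConditionalDenoiser(nn.Module):
    """Predicts the noise added to a pose sequence, given the noisy poses,
    the diffusion timestep, and a per-frame conditioning signal."""

    def __init__(self, pose_dim=45, cond_dim=256, hidden=512):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.net = nn.Sequential(
            nn.Linear(pose_dim + cond_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_poses, t, cond):
        # noisy_poses: (B, T, pose_dim), cond: (B, T, cond_dim), t: (B,)
        te = self.time_embed(t[:, None].float())            # (B, hidden)
        te = te[:, None, :].expand(-1, noisy_poses.size(1), -1)
        x = torch.cat([noisy_poses, cond, te], dim=-1)
        return self.net(x)                                   # predicted noise


if __name__ == "__main__":
    B, T = 2, 100
    denoiser = ConditionalDenoiser()
    noisy = torch.randn(B, T, 45)
    cond = torch.randn(B, T, 256)   # stand-in for joint text-audio embeddings
    t = torch.randint(0, 1000, (B,))
    eps_hat = denoiser(noisy, t, cond)
    print(eps_hat.shape)            # torch.Size([2, 100, 45])
```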
