Search Results for author: Jonas Beskow

Found 23 papers, 9 papers with code

Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis

no code implementations • 30 Apr 2024 • Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, Gustav Eje Henter, Simon Alexanderson

Specifically, we use unimodal synthesis models trained on large datasets to create multimodal (but synthetic) parallel training data, and then pre-train a joint synthesis model on that material.
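The data recipe described here — two unimodal synthesisers fabricating multimodal parallel training material — can be sketched in a few lines. The function name and the toy synthesisers below are illustrative stand-ins, not the paper's actual code:

```python
def make_synthetic_parallel(texts, tts, gesture_gen):
    """Fabricate (text, speech, gesture) triples by running two
    unimodal synthesisers over the same text; a joint model is then
    pre-trained on these synthetic parallel triples. `tts` and
    `gesture_gen` are placeholders for the actual unimodal models."""
    return [(t, tts(t), gesture_gen(t)) for t in texts]

# toy stand-ins: any callables taking text will do for the sketch
demo = make_synthetic_parallel(["hello there"], len, str.upper)
```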

Unified speech and gesture synthesis using flow matching

no code implementations • 8 Oct 2023 • Shivam Mehta, Ruibo Tu, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter

As text-to-speech technologies achieve remarkable naturalness in read-aloud tasks, there is growing interest in multimodal synthesis of verbal and non-verbal communicative behaviour, such as spontaneous speech and associated body gestures.

Audio Synthesis Motion Synthesis +1

Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation

no code implementations • 11 Sep 2023 • Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow

The output of the CSMP module is used as a conditioning signal in the diffusion-based gesture synthesis model in order to achieve semantically-aware co-speech gesture generation.

Gesture Generation Motion Synthesis
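The conditioning step in this abstract can be illustrated with a generic diffusion (DDPM-style) forward-noising sketch: the CSMP embedding is supplied to the denoiser alongside the noisy pose. All names and values below are illustrative, not from the paper's implementation:

```python
import math
import random

def q_sample(x0, alpha_bar_t, eps):
    """Diffuse a clean gesture frame x0 to noise level t:
    x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps."""
    a, b = math.sqrt(alpha_bar_t), math.sqrt(1.0 - alpha_bar_t)
    return [a * x + b * e for x, e in zip(x0, eps)]

def training_input(x_t, cond):
    """Conditioning: the joint text-and-audio embedding (the CSMP
    output in the paper) is appended to the noisy pose so the
    denoiser sees both when predicting the noise."""
    return x_t + cond

random.seed(1)
x0 = [0.2, -0.4, 0.7]               # toy pose frame
cond = [0.9, 0.1]                   # stand-in for a CSMP embedding
eps = [random.gauss(0, 1) for _ in x0]
x_t = q_sample(x0, alpha_bar_t=0.5, eps=eps)
net_in = training_input(x_t, cond)  # denoiser input, length 5
```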

Matcha-TTS: A fast TTS architecture with conditional flow matching

1 code implementation • 6 Sep 2023 • Shivam Mehta, Ruibo Tu, Jonas Beskow, Éva Székely, Gustav Eje Henter

We introduce Matcha-TTS, a new encoder-decoder architecture for speedy TTS acoustic modelling, trained using optimal-transport conditional flow matching (OT-CFM).

Ranked #1 on Text-To-Speech Synthesis on LJSpeech (MOS metric)

Acoustic Modelling Speech Synthesis +1
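The OT-CFM objective named above admits a compact sketch: training pairs lie on straight-line (optimal-transport) paths from noise to data, and the network regresses the constant target vector field along that path. This is a minimal illustration of the general technique, with illustrative names, not Matcha-TTS's code:

```python
import random

def ot_cfm_pair(x0, x1, t, sigma_min=1e-4):
    """Build one OT-CFM training pair: a point x_t on the straight
    path from noise sample x0 to data sample x1, and the target
    vector field u_t that a network v_theta(x_t, t) regresses."""
    xt = [(1.0 - (1.0 - sigma_min) * t) * a + t * b for a, b in zip(x0, x1)]
    ut = [b - (1.0 - sigma_min) * a for a, b in zip(x0, x1)]
    return xt, ut

# toy usage: a Gaussian noise sample and a "data" vector
random.seed(0)
x0 = [random.gauss(0, 1) for _ in range(4)]
x1 = [1.0, 2.0, 3.0, 4.0]
xt, ut = ot_cfm_pair(x0, x1, t=0.5)
```

With `sigma_min = 0` the path interpolates exactly from `x0` at `t = 0` to `x1` at `t = 1`, which is what makes the sampling ODE cheap to integrate in few steps.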

Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis

no code implementations • 15 Jun 2023 • Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter

With read-aloud speech synthesis achieving high naturalness scores, there is a growing research interest in synthesising spontaneous speech.

Denoising Speech Synthesis

Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models

1 code implementation • 17 Nov 2022 • Simon Alexanderson, Rajmund Nagy, Jonas Beskow, Gustav Eje Henter

Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models.

Gesture Generation Motion Synthesis

OverFlow: Putting flows on top of neural transducers for better TTS

2 code implementations • 13 Nov 2022 • Shivam Mehta, Ambika Kirkland, Harm Lameris, Jonas Beskow, Éva Székely, Gustav Eje Henter

Neural HMMs are a type of neural transducer recently proposed for sequence-to-sequence modelling in text-to-speech.

Ranked #11 on Text-To-Speech Synthesis on LJSpeech (using extra training data)

Normalising Flows Speech Synthesis +1

Neural HMMs are all you need (for high-quality attention-free TTS)

2 code implementations • 30 Aug 2021 • Shivam Mehta, Éva Székely, Jonas Beskow, Gustav Eje Henter

Neural sequence-to-sequence TTS has achieved significantly better output quality than statistical speech synthesis using HMMs.

Speech Synthesis
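The core of a neural HMM is still the classic lattice that sequence models marginalise over; a left-to-right, no-skip forward recursion in the log domain can be sketched as below. This is the textbook algorithm, not the paper's implementation, and the emission/transition log-probabilities would come from a neural network rather than being given directly:

```python
import math

def logsumexp(vals):
    m = max(vals)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(v - m) for v in vals))

def forward_loglik(log_emit, log_stay, log_move):
    """Log-likelihood of a frame sequence under a left-to-right,
    no-skip HMM. log_emit[t][s] is the emission log-prob of frame t
    from state s; log_stay/log_move are transition log-probs."""
    T, S = len(log_emit), len(log_emit[0])
    NEG = float("-inf")
    alpha = [NEG] * S
    alpha[0] = log_emit[0][0]        # must start in state 0
    for t in range(1, T):
        new = [NEG] * S
        for s in range(S):
            stay = alpha[s] + log_stay
            move = alpha[s - 1] + log_move if s > 0 else NEG
            new[s] = logsumexp([stay, move]) + log_emit[t][s]
        alpha = new
    return alpha[S - 1]              # must end in the last state
```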

Integrated Speech and Gesture Synthesis

1 code implementation • 25 Aug 2021 • Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely

Text-to-speech and co-speech gesture synthesis have until now been treated as separate areas by two different research communities, and applications merely stack the two technologies using a simple system-level pipeline.

Speech Synthesis

Transflower: probabilistic autoregressive dance generation with multimodal attention

no code implementations • 25 Jun 2021 • Guillermo Valle-Pérez, Gustav Eje Henter, Jonas Beskow, André Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson

First, we present a novel probabilistic autoregressive architecture that models the distribution over future poses with a normalizing flow conditioned on previous poses as well as music context, using a multimodal transformer encoder.
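The autoregressive rollout described here can be sketched with the flow reduced to a conditional Gaussian for brevity: each new pose is sampled from a distribution whose parameters depend on the pose history and the aligned music context. The encoder, coefficients, and window sizes below are illustrative assumptions, not Transflower's architecture:

```python
import random

def context_params(pose_history, music_window):
    """Toy stand-in for the multimodal transformer encoder: map
    (previous poses, music features) to the parameters of the
    next-pose distribution. A conditional Gaussian stands in for
    the normalizing flow here."""
    h = sum(pose_history) / len(pose_history)
    m = sum(music_window) / len(music_window)
    mu = 0.8 * h + 0.2 * m          # illustrative mixing coefficients
    sigma = 0.1
    return mu, sigma

def generate(seed_poses, music, n_frames, rng):
    """Autoregressive rollout: each new pose is sampled conditioned
    on the poses generated so far and the music context."""
    poses = list(seed_poses)
    for t in range(n_frames):
        mu, sigma = context_params(poses[-4:], music[t:t + 4])
        poses.append(rng.gauss(mu, sigma))
    return poses

rng = random.Random(0)
out = generate([0.0, 0.0, 0.0, 0.0], [1.0] * 32, n_frames=8, rng=rng)
```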

Generating coherent spontaneous speech and gesture from text

no code implementations • 14 Jan 2021 • Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow

In contrast to previous approaches for joint speech-and-gesture generation, we generate full-body gestures from speech synthesis trained on recordings of spontaneous speech from the same person as the motion-capture data.

Gesture Generation Speech Synthesis

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

1 code implementation • 11 Jun 2020 • Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow

Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities.

Motion Synthesis

MoGlow: Probabilistic and controllable motion synthesis using normalising flows

3 code implementations • 16 May 2019 • Gustav Eje Henter, Simon Alexanderson, Jonas Beskow

Data-driven modelling and synthesis of motion is an active research area with applications that include animation, games, and social robotics.

Motion Synthesis Normalising Flows
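The building block of a Glow-style flow such as this one is the affine coupling layer, which is invertible by construction and has a cheap log-determinant. In MoGlow the shift and scale come from a recurrent network reading the first half of the vector plus control inputs; here they are passed in directly to keep the sketch minimal:

```python
import math

def coupling_forward(x, shift, log_scale):
    """One affine coupling step: the second half of the vector is
    scaled and shifted; the first half passes through unchanged.
    Returns the transformed vector and log|det J| (sum of log-scales)."""
    half = len(x) // 2
    a, b = x[:half], x[half:]
    b2 = [v * math.exp(s) + t for v, s, t in zip(b, log_scale, shift)]
    return a + b2, sum(log_scale)

def coupling_inverse(y, shift, log_scale):
    """Exact inverse of coupling_forward."""
    half = len(y) // 2
    a, b2 = y[:half], y[half:]
    b = [(v - t) * math.exp(-s) for v, t, s in zip(b2, shift, log_scale)]
    return a + b
```

Invertibility is what lets such models both sample motion and evaluate exact log-likelihoods, which is the property the abstract's "probabilistic and controllable" claim rests on.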

A Neural Network Approach to Missing Marker Reconstruction in Human Motion Capture

1 code implementation • 7 Mar 2018 • Taras Kucherenko, Jonas Beskow, Hedvig Kjellström

Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc.

3D Reconstruction Missing Markers Reconstruction
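A common way to train a network for this kind of gap-filling is to corrupt complete motion-capture frames by dropping markers and use the original frame as the regression target. The sketch below shows that pair-construction idea generically; the corruption scheme is an assumption, not necessarily the paper's exact one:

```python
import random

def mask_markers(frame, drop_rate, rng):
    """Build a (corrupted, target) training pair by zeroing out
    random marker coordinates, mimicking occlusion gaps; a network
    then learns to map the corrupted frame back to the complete one."""
    corrupted = [0.0 if rng.random() < drop_rate else v for v in frame]
    return corrupted, list(frame)

rng = random.Random(42)
pair = mask_markers([1.2, -0.3, 0.8, 2.1], drop_rate=0.5, rng=rng)
```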

Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

no code implementations • 24 Nov 2017 • Kalin Stefanov, Jonas Beskow, Giampiero Salvi

Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings.

Language Acquisition

Machine Learning and Social Robotics for Detecting Early Signs of Dementia

no code implementations • 5 Sep 2017 • Patrik Jonell, Joseph Mendelson, Thomas Storskog, Göran Hagman, Per Östberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, Hedvig Kjellström

This paper presents the EACare project, an ambitious multi-disciplinary collaboration with the aim to develop an embodied system, capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease.

BIG-bench Machine Learning

3rd party observer gaze as a continuous measure of dialogue flow

no code implementations • LREC 2012 • Jens Edlund, Simon Alexanderson, Jonas Beskow, Lisa Gustavsson, Mattias Heldner, Anna Hjalmarsson, Petter Kallionen, Ellen Marklund

We present an attempt at using 3rd party observer gaze to get a measure of how appropriate each segment in a dialogue is for a speaker change.

Action Detection
