Search Results for author: Jouni Paulus

Found 6 papers, 0 papers with code

Geometrically-Motivated Primary-Ambient Decomposition With Center-Channel Extraction

no code implementations • 5 Jun 2022 • Jouni Paulus, Matteo Torcoli

A geometrically-motivated method for primary-ambient decomposition is proposed and evaluated in an up-mixing application.
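Primary-ambient decomposition splits a stereo signal into a directionally dominant (primary) component and a diffuse (ambient) residual. A minimal PCA-based sketch of the general idea follows; this is a classic correlation-based formulation for illustration only, not the geometrically-motivated method proposed in the paper, and the function name is made up:

```python
import numpy as np

def pca_primary_ambient(left, right):
    # Stack the stereo channels and estimate their covariance.
    x = np.stack([left, right])            # shape (2, n_samples)
    cov = x @ x.T / x.shape[1]
    # The principal eigenvector points along the dominant (primary) direction.
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]     # unit-norm primary direction
    # Project onto the primary direction; the residual is the ambience.
    primary = np.outer(v, v @ x)           # rank-1 primary component
    ambient = x - primary
    return primary, ambient
```

By construction the two components sum back to the input mix, and for strongly correlated channels most of the energy ends up in the primary part.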

Dialog+ in Broadcasting: First Field Tests Using Deep-Learning-Based Dialogue Enhancement

no code implementations • 17 Dec 2021 • Matteo Torcoli, Christian Simon, Jouni Paulus, Davide Straninger, Alfred Riedel, Volker Koch, Stefan Wits, Daniela Rieger, Harald Fuchs, Christian Uhle, Stefan Meltzer, Adrian Murtaza

To address this, Fraunhofer IIS has developed a deep-learning solution called Dialog+, capable of enabling speech level personalization also for content with only the final audio tracks available.
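Speech level personalization on final audio tracks amounts to estimating the dialogue from the mix and remixing it at a user-chosen level. A minimal sketch of that remixing step, assuming the mix is approximately dialogue plus background (the separation itself, which Dialog+ does with deep learning, is taken as given; the function name and parameter are illustrative):

```python
import numpy as np

def personalize_dialog(mix, dialog_estimate, dialog_gain_db=6.0):
    # Assume mix ≈ dialogue + background; keep the background
    # and rescale only the separated dialogue.
    background = mix - dialog_estimate
    g = 10 ** (dialog_gain_db / 20)        # dB gain to linear factor
    return g * dialog_estimate + background
```

With `dialog_gain_db=0` the original mix is returned unchanged; positive values boost the dialogue relative to the background.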


Controlling the Perceived Sound Quality for Dialogue Enhancement with Deep Learning

no code implementations • 22 Jul 2021 • Christian Uhle, Matteo Torcoli, Jouni Paulus

Speech enhancement attenuates interfering sounds in speech signals but may introduce artifacts that perceivably deteriorate the output signal.

Speech Enhancement
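The trade-off described above, stronger attenuation of interference versus more audible artifacts, can be illustrated with a classic Wiener-style spectral gain. This is a generic textbook sketch, not the paper's deep-learning approach; the function name and parameters are made up:

```python
import numpy as np

def spectral_gain(speech_mag, noise_mag, aggressiveness=1.0, floor=0.05):
    # Wiener-like gain per time-frequency bin: `aggressiveness` scales the
    # noise estimate (stronger suppression), while `floor` limits the
    # maximum attenuation to reduce musical-noise artifacts.
    snr = speech_mag**2 / (aggressiveness * noise_mag**2 + 1e-12)
    gain = snr / (1.0 + snr)
    return np.maximum(gain, floor)
```

Raising `aggressiveness` attenuates interference more but distorts the speech; raising `floor` does the opposite. Controlling the perceived quality means steering exactly this kind of trade-off.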

Controlling the Remixing of Separated Dialogue with a Non-Intrusive Quality Estimate

no code implementations • 21 Jul 2021 • Matteo Torcoli, Jouni Paulus, Thorsten Kastner, Christian Uhle

The 2f-model requires the reference target source as an input, but this is not available in many applications.

A Hands-on Comparison of DNNs for Dialog Separation Using Transfer Learning from Music Source Separation

no code implementations • 16 Jun 2021 • Martin Strauss, Jouni Paulus, Matteo Torcoli, Bernd Edler

The music separation models are selected because they share the channel count (2) and sampling rate (44.1 kHz or higher) with the considered broadcast content, and vocals separation in music is considered a parallel task to dialog separation in the target application domain.

Music Source Separation • Transfer Learning
