Search Results for author: Bernd T. Meyer

Found 11 papers, 3 papers with code

Binaural multichannel blind speaker separation with a causal low-latency and low-complexity approach

no code implementations · 8 Dec 2023 · Nils L. Westhausen, Bernd T. Meyer

In this paper, we introduce a causal low-latency low-complexity approach for binaural multichannel blind speaker separation in noisy reverberant conditions.

Speaker Separation

Low bit rate binaural link for improved ultra low-latency low-complexity multichannel speech enhancement in Hearing Aids

no code implementations · 17 Jul 2023 · Nils L. Westhausen, Bernd T. Meyer

In terms of objective metrics, even a unilateral configuration of the GCFSnet can match the performance of an oracle binaural LCMV beamformer in a non-low-latency configuration.

Quantization Speech Enhancement
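For context, the oracle LCMV beamformer used as a reference above has a standard closed-form solution. A minimal numerical sketch of that textbook formula follows; the covariance matrix and steering vectors are simulated placeholders, not the paper's hearing-aid setup:

```python
import numpy as np

# Narrowband LCMV beamformer (textbook form, not the GCFSnet):
#   w = Phi^-1 C (C^H Phi^-1 C)^-1 g
rng = np.random.default_rng(0)
M = 4  # microphones

# Simulated noise covariance matrix (Hermitian positive definite)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Phi = A @ A.conj().T + M * np.eye(M)

# Constraint matrix: unit gain on the target direction, a null on an interferer
d_target = rng.standard_normal(M) + 1j * rng.standard_normal(M)
d_interf = rng.standard_normal(M) + 1j * rng.standard_normal(M)
C = np.stack([d_target, d_interf], axis=1)  # M x 2 constraint matrix
g = np.array([1.0, 0.0])                    # desired responses

Phi_inv_C = np.linalg.solve(Phi, C)
w = Phi_inv_C @ np.linalg.solve(C.conj().T @ Phi_inv_C, g)

# The linear constraints hold by construction: C^H w == g
print(np.allclose(C.conj().T @ w, g))  # True
```

The "oracle" qualifier in the abstract means the beamformer is given the true covariance and steering information, which in practice must be estimated.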

Multilingual Query-by-Example Keyword Spotting with Metric Learning and Phoneme-to-Embedding Mapping

no code implementations · 19 Apr 2023 · Paul M. Reuter, Christian Rollwage, Bernd T. Meyer

Our system achieves a promising accuracy for streaming keyword spotting and keyword search on Common Voice audio using just 5 examples per keyword.

Keyword Spotting Metric Learning +1
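The few-shot enrollment step described above ("just 5 examples per keyword") can be made concrete with a small sketch. The embedding network is replaced here by random vectors; only the enrollment and cosine-scoring logic is illustrated, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64  # embedding dimensionality (illustrative)

def normalize(x):
    """Project embeddings onto the unit sphere for cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Enroll a keyword from 5 spoken examples: average their embeddings.
# The +3.0 offset simulates a tight cluster that a metric-learned
# embedding space would produce for one keyword.
keyword_examples = normalize(rng.standard_normal((5, dim)) + 3.0)
keyword_template = normalize(keyword_examples.mean(axis=0))

def score(query_embedding, template):
    """Cosine similarity between a query embedding and the template."""
    return float(normalize(query_embedding) @ template)

match = score(keyword_examples[0], keyword_template)           # enrolled keyword
non_match = score(rng.standard_normal(dim), keyword_template)  # unrelated audio
print(match > non_match)
```

A detection threshold on this score turns the comparison into a streaming keyword-spotting decision.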

tPLCnet: Real-time Deep Packet Loss Concealment in the Time Domain Using a Short Temporal Context

1 code implementation · 4 Apr 2022 · Nils L. Westhausen, Bernd T. Meyer

The lowest-complexity model described in this paper achieves robust PLC performance and consistent improvements over the zero-filling baseline on all metrics.

Packet Loss Concealment
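The zero-filling baseline that tPLCnet is compared against is simple enough to sketch directly: lost packets are replaced by silence, whereas the network instead predicts the missing frame from a short buffer of past audio. A minimal illustration with a simulated burst loss:

```python
import numpy as np

sr = 16000
frame = sr // 100  # 10 ms packets at 16 kHz (assumed packet size)
signal = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

packets = signal.reshape(-1, frame)
lost = np.zeros(len(packets), dtype=bool)
lost[10:13] = True  # simulate a 30 ms burst loss

# Zero-filling baseline: lost packets become silence
zero_filled = packets.copy()
zero_filled[lost] = 0.0
output = zero_filled.reshape(-1)

print(np.abs(output[10 * frame:13 * frame]).max())  # 0.0 in the lost region
```

The audible gap this leaves is exactly what a learned concealment model is trained to fill.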

Prediction of speech intelligibility with DNN-based performance measures

no code implementations · 17 Mar 2022 · Angel Mario Castro Martinez, Constantin Spille, Jana Roßbach, Birger Kollmeier, Bernd T. Meyer

This paper presents a speech intelligibility model based on automatic speech recognition (ASR), combining phoneme probabilities from deep neural networks (DNN) and a performance measure that estimates the word error rate from these probabilities.

Automatic Speech Recognition (ASR) +2
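The idea of deriving a scalar performance measure from DNN phoneme probabilities can be sketched with a simplified proxy: the mean entropy of the phoneme posteriorgram, which rises as input degradation flattens the posteriors. This is only an illustration in the spirit of the approach; the posteriors are simulated, and the paper uses its own measure to estimate word error rate:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_phonemes = 200, 40

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(posteriors):
    """Average per-frame entropy; flatter posteriors -> higher entropy."""
    p = np.clip(posteriors, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

logits = rng.standard_normal((n_frames, n_phonemes))
clean = softmax(logits * 5.0)   # confident (peaked) posteriors
noisy = softmax(logits * 0.5)   # uncertain (flat) posteriors

print(mean_entropy(noisy) > mean_entropy(clean))  # True
```

Mapping such an uncertainty measure to predicted intelligibility is then a calibration step against listening-test data.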

Reduction of Subjective Listening Effort for TV Broadcast Signals with Recurrent Neural Networks

no code implementations · 2 Nov 2021 · Nils L. Westhausen, Rainer Huber, Hannah Baumgartner, Ragini Sinha, Jan Rennies, Bernd T. Meyer

Listening to the audio of TV broadcast signals can be challenging for hearing-impaired as well as normal-hearing listeners, especially when background sounds are prominent or too loud compared to the speech signal.

Audio Source Separation Speech Enhancement

Acoustic echo cancellation with the dual-signal transformation LSTM network

1 code implementation · 27 Oct 2020 · Nils L. Westhausen, Bernd T. Meyer

This paper applies the dual-signal transformation LSTM network (DTLN) to the task of real-time acoustic echo cancellation (AEC).

Acoustic Echo Cancellation Data Augmentation
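To make the AEC task concrete, here is a classical NLMS echo canceller as a hedged baseline sketch. This is not the paper's DTLN approach; it only illustrates the problem the network solves: estimating the echo of the far-end signal and subtracting it from the microphone signal. All signals are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
n, taps, mu, eps = 4000, 16, 0.5, 1e-6

far_end = rng.standard_normal(n)
echo_path = 0.5 * rng.standard_normal(taps)  # simulated acoustic echo path
mic = np.convolve(far_end, echo_path)[:n]    # echo only, no near-end talker

w = np.zeros(taps)
err = np.zeros(n)
for i in range(taps - 1, n):
    x = far_end[i - taps + 1:i + 1][::-1]    # newest far-end sample first
    e = mic[i] - w @ x                       # residual echo
    w += mu * e * x / (x @ x + eps)          # normalized LMS update
    err[i] = e

# After adaptation the residual echo energy is far below the echo energy
print(np.mean(err[-500:] ** 2) < 1e-6 * np.mean(mic ** 2))  # True
```

Linear adaptive filters like this struggle with nonlinear loudspeaker distortion and double-talk, which is the gap learned approaches such as the DTLN target.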

EEG-based Auditory Attention Decoding: Towards Neuro-Steered Hearing Devices

no code implementations · 11 Aug 2020 · Simon Geirnaert, Servaas Vandecappelle, Emina Alickovic, Alain de Cheveigné, Edmund Lalor, Bernd T. Meyer, Sina Miran, Tom Francart, Alexander Bertrand

People suffering from hearing impairment often have difficulty participating in conversations in so-called 'cocktail party' scenarios with multiple people talking simultaneously.

EEG Speaker Separation

Dual-Signal Transformation LSTM Network for Real-Time Noise Suppression

2 code implementations · Interspeech 2020 · Nils L. Westhausen, Bernd T. Meyer

This paper introduces a dual-signal transformation LSTM network (DTLN) for real-time speech enhancement as part of the Deep Noise Suppression Challenge (DNS-Challenge).

Speech Enhancement Audio and Speech Processing Sound
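The DTLN cascades two stages; the first applies a predicted mask to the STFT magnitude. A minimal sketch of that stage's signal flow follows, with the LSTM mask estimator replaced by an oracle ratio-mask placeholder and a single analysis frame (frame size and hop are illustrative, not the paper's exact configuration):

```python
import numpy as np

n_fft = 512

def stft_frame(x, start):
    """One windowed STFT analysis frame."""
    frame = x[start:start + n_fft] * np.hanning(n_fft)
    return np.fft.rfft(frame)

sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * np.random.default_rng(3).standard_normal(sr)
noisy = speech + noise

S, N, X = (stft_frame(s, 0) for s in (speech, noise, noisy))

# Placeholder for the LSTM output: an ideal ratio mask in [0, 1)
mask = np.abs(S) / (np.abs(S) + np.abs(N) + 1e-9)
enhanced = mask * X  # mask scales the complex spectrum; noisy phase is kept

print(np.abs(enhanced).sum() < np.abs(X).sum())  # masking only attenuates
```

In the full model, a second stage then refines this output with a mask in a learned feature domain rather than the STFT domain.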

DNN-Based Speech Presence Probability Estimation for Multi-Frame Single-Microphone Speech Enhancement

no code implementations · 21 May 2019 · Marvin Tammen, Dörte Fischer, Bernd T. Meyer, Simon Doclo

In contrast to single-frame approaches such as the Wiener gain, it has been shown that multi-frame approaches achieve a substantial noise reduction with hardly any speech distortion, provided that an accurate estimate of the correlation matrices and especially the speech interframe correlation (IFC) vector is available.

Speech Enhancement
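For context, a standard multi-frame MVDR filter built from the noise correlation matrix and the speech interframe correlation (IFC) vector mentioned in the abstract takes the following textbook form (notation assumed here, not copied from the paper):

```latex
\mathbf{w}_{\mathrm{MFMVDR}} =
  \frac{\boldsymbol{\Phi}_{n}^{-1}\,\boldsymbol{\gamma}_{x}}
       {\boldsymbol{\gamma}_{x}^{\mathsf{H}}\,\boldsymbol{\Phi}_{n}^{-1}\,\boldsymbol{\gamma}_{x}}
```

where $\boldsymbol{\Phi}_{n}$ is the noise correlation matrix across consecutive STFT frames and $\boldsymbol{\gamma}_{x}$ is the speech IFC vector. The filter passes the correlated speech component undistorted while minimizing residual noise, which is why the abstract stresses that an accurate IFC estimate is essential.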

On the Relevance of Auditory-Based Gabor Features for Deep Learning in Automatic Speech Recognition

no code implementations · 14 Feb 2017 · Angel Mario Castro Martinez, Sri Harish Mallidi, Bernd T. Meyer

Previous studies support the idea of merging auditory-based Gabor features with deep learning architectures to achieve robust automatic speech recognition; however, the cause of the gain from this combination is still unknown.

Automatic Speech Recognition (ASR) +1
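A single spectro-temporal Gabor feature of the kind such feature sets build on can be sketched as a complex sinusoid under a 2-D Hann envelope, swept over a log-mel spectrogram via a sliding inner product. The filter parameters and spectrogram below are illustrative placeholders, not the paper's filterbank:

```python
import numpy as np

def gabor_filter(size=11, omega_t=0.25, omega_f=0.25):
    """2-D complex Gabor kernel: Hann envelope times a plane wave."""
    n = np.arange(size) - size // 2
    hann = 0.5 + 0.5 * np.cos(np.pi * n / (size // 2 + 1))
    envelope = np.outer(hann, hann)
    phase = omega_f * n[:, None] + omega_t * n[None, :]
    return envelope * np.exp(1j * 2 * np.pi * phase)

rng = np.random.default_rng(4)
log_mel = rng.standard_normal((40, 100))  # (mel bands, time frames), simulated

# Valid 2-D correlation via explicit sliding windows
g = gabor_filter()
k = g.shape[0]
F, T = log_mel.shape
out = np.empty((F - k + 1, T - k + 1), dtype=complex)
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = (log_mel[i:i + k, j:j + k] * g).sum()

features = np.abs(out)  # magnitude response used as the feature map
print(features.shape)   # (30, 90)
```

Varying the temporal and spectral modulation frequencies yields a bank of such filters tuned to different spectro-temporal patterns.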
