Search Results for author: Yeunju Choi

Found 7 papers, 0 papers with code

Learning to Maximize Speech Quality Directly Using MOS Prediction for Neural Text-to-Speech

no code implementations • 2 Nov 2020 • Yeunju Choi, Youngmoon Jung, Youngjoo Suh, Hoirin Kim

Although recent neural text-to-speech (TTS) systems have achieved high-quality speech synthesis, there are cases where a TTS system generates low-quality speech, mainly caused by limited training data or information loss during knowledge distillation.

Knowledge Distillation • Speech Synthesis +1
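The title suggests using a MOS predictor directly as a training signal for the TTS model. Below is a minimal sketch of that idea, assuming a pretrained, differentiable MOS predictor is available; the module and parameter names (`mos_predictor`, `mos_weight`) are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MOSAwareTTSLoss(nn.Module):
    """Combine a standard TTS reconstruction loss with a MOS-prediction term.

    The MOS predictor is frozen and used only to provide a quality signal:
    maximizing the predicted MOS is expressed as minimizing its negative.
    """
    def __init__(self, mos_predictor: nn.Module, mos_weight: float = 0.1):
        super().__init__()
        self.mos_predictor = mos_predictor.eval()
        for p in self.mos_predictor.parameters():
            p.requires_grad_(False)          # keep the quality model fixed
        self.recon = nn.L1Loss()
        self.mos_weight = mos_weight

    def forward(self, generated_mel: torch.Tensor, target_mel: torch.Tensor) -> torch.Tensor:
        recon_loss = self.recon(generated_mel, target_mel)
        # Assumes the predictor maps a mel-spectrogram batch to per-utterance MOS scores.
        predicted_mos = self.mos_predictor(generated_mel).mean()
        return recon_loss - self.mos_weight * predicted_mos
```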

A Unified Deep Learning Framework for Short-Duration Speaker Verification in Adverse Environments

no code implementations • 6 Oct 2020 • Youngmoon Jung, Yeunju Choi, Hyungjun Lim, Hoirin Kim

At the same time, the demands on an SV system are increasing: it should be robust to short speech segments, especially in noisy and reverberant environments.

Action Detection • Activity Detection +2

Deep MOS Predictor for Synthetic Speech Using Cluster-Based Modeling

no code implementations • 9 Aug 2020 • Yeunju Choi, Youngmoon Jung, Hoirin Kim

While deep learning has made impressive progress in speech synthesis and voice conversion, the assessment of the synthesized speech is still carried out by human participants.

Speech Synthesis • Voice Conversion

Neural MOS Prediction for Synthesized Speech Using Multi-Task Learning With Spoofing Detection and Spoofing Type Classification

no code implementations • 16 Jul 2020 • Yeunju Choi, Youngmoon Jung, Hoirin Kim

In this paper, we propose a multi-task learning (MTL) method to improve the performance of a MOS prediction model using the following two auxiliary tasks: spoofing detection (SD) and spoofing type classification (STC).

Multi-Task Learning • Voice Conversion
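The abstract above describes a shared model trained on one main task (MOS regression) and two auxiliary tasks (SD and STC). A minimal sketch of that multi-task structure follows; the encoder choice, layer sizes, and class counts are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskMOSPredictor(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 128, n_spoof_types: int = 6):
        super().__init__()
        # Shared frame-level encoder over mel-spectrogram input (batch, time, n_mels)
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        feat = 2 * hidden
        self.mos_head = nn.Linear(feat, 1)              # main task: MOS regression
        self.sd_head = nn.Linear(feat, 2)               # auxiliary: spoofed vs. bona fide
        self.stc_head = nn.Linear(feat, n_spoof_types)  # auxiliary: spoofing type

    def forward(self, mel: torch.Tensor):
        frames, _ = self.encoder(mel)
        utt = frames.mean(dim=1)                        # utterance-level pooling
        return self.mos_head(utt).squeeze(-1), self.sd_head(utt), self.stc_head(utt)

# Training would typically combine the three losses with weights, e.g.
# loss = mse(mos_pred, mos) + a * ce(sd_logits, sd_label) + b * ce(stc_logits, stc_label)
```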

Improving Multi-Scale Aggregation Using Feature Pyramid Module for Robust Speaker Verification of Variable-Duration Utterances

no code implementations • 7 Apr 2020 • Youngmoon Jung, Seong Min Kye, Yeunju Choi, Myunghun Jung, Hoirin Kim

In this approach, we obtain a speaker embedding vector by pooling single-scale features that are extracted from the last layer of a speaker feature extractor.

Text-Independent Speaker Verification
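The snippet above describes the conventional single-scale approach that the paper improves on: frame-level features from the last layer of a speaker feature extractor are pooled over time into one embedding. The sketch below illustrates that baseline, using statistics pooling (mean and standard deviation) as one common pooling choice; the extractor, dimensions, and names are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class SingleScaleEmbedding(nn.Module):
    def __init__(self, feature_extractor: nn.Module, feat_dim: int, emb_dim: int = 256):
        super().__init__()
        self.feature_extractor = feature_extractor      # any frame-level front-end
        self.fc = nn.Linear(2 * feat_dim, emb_dim)      # mean + std -> embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frames = self.feature_extractor(x)              # (batch, time, feat_dim)
        mean = frames.mean(dim=1)
        std = frames.std(dim=1)
        return self.fc(torch.cat([mean, std], dim=-1))  # statistics pooling
```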

Self-Adaptive Soft Voice Activity Detection using Deep Neural Networks for Robust Speaker Verification

no code implementations • 26 Sep 2019 • Youngmoon Jung, Yeunju Choi, Hoirin Kim

The first approach is soft VAD, which performs a soft selection of frame-level features extracted from a speaker feature extractor.

Action Detection • Activity Detection +2
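The "soft selection" described above can be read as weighting each frame-level speaker feature by a speech posterior from the VAD network instead of discarding non-speech frames outright. A minimal sketch of that weighting, with illustrative shapes and names:

```python
import torch

def soft_vad_pooling(frame_feats: torch.Tensor, speech_logits: torch.Tensor) -> torch.Tensor:
    """frame_feats: (batch, time, dim); speech_logits: (batch, time) from a VAD network."""
    weights = torch.sigmoid(speech_logits).unsqueeze(-1)               # soft speech probabilities
    weighted = frame_feats * weights                                   # soft selection of frames
    return weighted.sum(dim=1) / weights.sum(dim=1).clamp(min=1e-6)    # weighted average over time
```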
