Style Generalization

4 papers with code • 0 benchmarks • 0 datasets


Latest papers with no code

GenerTTS: Pronunciation Disentanglement for Timbre and Style Generalization in Cross-Lingual Text-to-Speech

no code yet • 27 Jun 2023

Cross-lingual timbre- and style-generalizable text-to-speech (TTS) aims to synthesize speech with a specific reference timbre or style that was never seen during training in the target language.

Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias

no code yet • 6 Jun 2023

The authors further use a VQGAN-based acoustic model to generate the spectrogram and a latent-code language model to fit the distribution of prosody, since prosody changes quickly over time within a sentence and language models can capture both local and long-range dependencies.

Domain Generalization for Mammographic Image Analysis with Contrastive Learning

no code yet • 20 Apr 2023

Training an efficacious deep learning model requires a large dataset with diverse styles and qualities.

Discrepancy-Optimal Meta-Learning for Domain Generalization

no code yet • 29 Sep 2021

This work attempts to tackle the problem of domain generalization (DG) via learning to reduce domain shift with an episodic training procedure.
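The episodic training idea mentioned in the excerpt can be illustrated with a minimal sketch: in each episode, one source domain is held out as a simulated "unseen" target while the rest serve as meta-train data. The function and domain names below are illustrative assumptions, not details from the paper.

```python
def episodic_dg_schedule(domains, num_episodes=3):
    """Build an episodic schedule for domain generalization (DG).

    Each episode holds out one source domain to play the role of the
    unseen target, simulating domain shift during training.
    """
    schedule = []
    for episode in range(num_episodes):
        # Rotate which source domain is held out as meta-test.
        meta_test = domains[episode % len(domains)]
        meta_train = [d for d in domains if d != meta_test]
        schedule.append((meta_train, meta_test))
    return schedule

schedule = episodic_dg_schedule(["photo", "sketch", "cartoon"])
# Episode 0 trains on sketch+cartoon and evaluates on photo, and so on.
```

A real implementation would run an inner optimization step on the meta-train domains and use the meta-test loss to guide the update; this sketch only shows the episode construction.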

A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts

no code yet • 15 Jan 2020

To solve these problems, we introduce a unified and robust multi-modal deep learning architecture that works for both the English code-mixed dataset and the uni-lingual English dataset. The devised system uses psycho-linguistic features and very basic linguistic features.

Adapting a FrameNet Semantic Parser for Spoken Language Understanding Using Adversarial Learning

no code yet • 7 Oct 2019

We show that adversarial learning improves all models' generalization capabilities, both on manual and automatic speech transcriptions and on encyclopedic data.

Complementary Attributes: A New Clue to Zero-Shot Learning

no code yet • 17 Apr 2018

Extensive experiments on five ZSL benchmark datasets and the large-scale ImageNet dataset demonstrate that the proposed complementary attributes and rank aggregation significantly and robustly improve existing ZSL methods and achieve state-of-the-art performance.
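The rank aggregation step mentioned above can be sketched generically: class rankings obtained from the original attributes and from the complementary attributes are fused by summing per-class ranks (a Borda-style scheme). This is an illustrative assumption about the fusion step, not the paper's exact method.

```python
def aggregate_ranks(rankings):
    """Fuse several class rankings by total rank (lower is better).

    Ties are broken alphabetically for determinism.
    """
    scores = {}
    for ranking in rankings:
        for rank, cls in enumerate(ranking):
            scores[cls] = scores.get(cls, 0) + rank
    return sorted(scores, key=lambda c: (scores[c], c))

rank_a = ["zebra", "horse", "donkey"]  # ranking from original attributes
rank_b = ["horse", "zebra", "donkey"]  # ranking from complementary attributes
fused = aggregate_ranks([rank_a, rank_b])
# "donkey" is ranked last by both lists, so it stays last after fusion.
```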

(Almost) No Label No Cry

no code yet • NeurIPS 2014

In Learning with Label Proportions (LLP), the objective is to learn a supervised classifier when, instead of labels, only label proportions for bags of observations are known.
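The LLP setting described above can be made concrete with a small sketch: the learner observes bags of unlabeled instances together with the fraction of positives in each bag, never the individual labels. The helper below only computes the per-bag supervision signal; it is an illustration of the problem setup, not the paper's algorithm.

```python
def bag_proportions(labels_per_bag):
    """Compute the label-proportion supervision available in LLP."""
    return [sum(labels) / len(labels) for labels in labels_per_bag]

# Individual binary labels exist but are hidden from the learner;
# only the per-bag proportions are observed as supervision.
hidden_labels = [[1, 0, 1, 1], [0, 0, 1, 0]]
proportions = bag_proportions(hidden_labels)  # [0.75, 0.25]
```

A classifier trained in this setting must predict instance-level labels while its loss is computed only against these bag-level proportions.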