Text Style Transfer
81 papers with code • 2 benchmarks • 6 datasets
Text Style Transfer is the task of controlling certain attributes of generated text. State-of-the-art methods fall into two main categories, depending on whether they operate on parallel or non-parallel data. Methods for parallel data are typically supervised, using a neural sequence-to-sequence model with an encoder-decoder architecture. Methods for non-parallel data are usually unsupervised, relying on disentanglement, prototype editing, or pseudo-parallel corpus construction.
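Prototype editing can be illustrated with a toy "delete-retrieve-generate" pipeline: delete source-style attribute markers from the sentence, then fill the gaps with target-style words. The marker lists and the one-to-one filler table below are made up for illustration; real systems learn attribute markers from data and use a retrieval step plus a neural generator.

```python
# Toy prototype editing (delete-retrieve-generate) for negative-to-positive
# sentiment transfer. NEG_MARKERS and POS_FILLERS are hypothetical; learned
# systems identify markers statistically and generate fillers with a model.
NEG_MARKERS = {"terrible", "awful", "rude"}
POS_FILLERS = {"terrible": "delicious", "awful": "wonderful", "rude": "friendly"}

def neg_to_pos(sentence):
    out = []
    for token in sentence.split():
        if token in NEG_MARKERS:
            # "delete" the source-style marker, "retrieve/generate" a
            # target-style filler for the same slot
            out.append(POS_FILLERS[token])
        else:
            out.append(token)  # content words are preserved unchanged
    return " ".join(out)

print(neg_to_pos("the food was terrible and the staff were rude"))
# → the food was delicious and the staff were friendly
```

The appeal of this family of methods is exactly what the toy shows: content preservation comes for free, because only the attribute-bearing tokens are edited.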
The most popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with sentiment accuracy, BLEU, and perplexity (PPL).
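Of these metrics, BLEU measures overlap between the transferred text and a reference. A minimal sentence-level sketch, written from scratch so the ingredients (clipped n-gram precision and a brevity penalty) are explicit, looks like this; published evaluations normally use a standard corpus-level implementation such as sacreBLEU rather than hand-rolled code.

```python
# Minimal sentence-level BLEU: geometric mean of clipped n-gram
# precisions (n = 1..4) times a brevity penalty. Illustrative only.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c & r).values())        # clipped n-gram matches
        total = max(sum(c.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(log_prec)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))
# → 1.0
```

In style-transfer evaluation, BLEU against the source (or a human reference) proxies content preservation, while a separately trained classifier supplies the sentiment-accuracy score.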
Latest papers
Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients
In this paper, we propose a counterfactual solution, MedSeqCF, for preventing the mortality of three cohorts of ICU patients, by representing their electronic health records as medical event sequences and generating counterfactuals with a text style-transfer technique.
Pay Attention to Your Tone: Introducing a New Dataset for Polite Language Rewrite
We introduce PoliteRewrite, a dataset for polite language rewriting, a novel sentence rewrite task.
Replacing Language Model for Style Transfer
The new span is generated via a non-autoregressive masked language model, which can better preserve the local-contextual meaning of the replaced token.
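The non-autoregressive idea can be sketched with a toy mask-filling routine: every masked position is predicted in one parallel pass from the original context, rather than conditioning on previously generated tokens. The tiny context-to-token table stands in for a real masked language model and is entirely made up; it is not the paper's actual method.

```python
# Toy non-autoregressive masked infilling. TOY_MLM is a hypothetical
# stand-in for a masked language model: it maps (left, right) context
# words to a predicted token.
TOY_MLM = {
    ("the", "was"): "film",
    ("was", "."): "great",
}

def fill_masks(tokens):
    filled = list(tokens)
    for i, t in enumerate(tokens):
        if t == "[MASK]":
            # Key point: predictions read the ORIGINAL `tokens`, never the
            # partially `filled` output, so all masks are filled in parallel.
            left = tokens[i - 1] if i > 0 else "<s>"
            right = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
            filled[i] = TOY_MLM.get((left, right), "<unk>")
    return filled

print(fill_masks(["the", "[MASK]", "was", "[MASK]", "."]))
# → ['the', 'film', 'was', 'great', '.']
```

Because each mask is conditioned only on the surrounding (unmasked) context, local meaning around the replaced token is preserved, which is the property the snippet above highlights.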
StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing
Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of source texts into generation.
Composable Text Controls in Latent Space with ODEs
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
Studying the role of named entities for content preservation in text style transfer
Text style transfer techniques are gaining popularity in Natural Language Processing, finding various applications such as text detoxification, sentiment transfer, and formality transfer.
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward.
Learning to Model Editing Processes
We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits.
Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer
To take advantage of both supervised and unsupervised paradigms and tackle the challenges, in this work, we propose a semi-supervised framework for text style transfer.
So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
Automatic transfer of text between domains has become popular in recent times.