DAML-ST5: Low Resource Style Transfer via Domain Adaptive Meta Learning

ACL ARR November 2021 · Anonymous

Text style transfer (TST) without parallel data has achieved some practical success. However, most existing unsupervised TST methods suffer from (i) requiring massive amounts of non-parallel data to guide the transfer between different text styles, and (ii) severe performance degradation when the model is fine-tuned on new domains. In this work, we propose DAML-ST5, which consists of two parts: DAML and ST5. DAML is a domain adaptive meta-learning approach that refines general knowledge from multiple heterogeneous source domains and can adapt to new, unseen domains with a small amount of data. In addition, we propose a new unsupervised TST model, Style-T5 (ST5), which builds on the sequence-to-sequence pre-trained language model T5 and uses style adversarial training for better content preservation and style transfer. Results on multi-domain datasets demonstrate that our approach generalizes well to unseen low-resource domains, achieving state-of-the-art results against ten strong baselines.
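The abstract describes DAML as extracting shared knowledge from heterogeneous source domains so the model can adapt to an unseen domain from little data, but does not spell out the update rule here. Below is a minimal first-order MAML-style sketch of such a loop; the names `domain_tasks`, `loss_fn`, and the support/query split are illustrative assumptions, not the paper's exact procedure.

```python
import copy
import torch

def meta_train_step(model, meta_opt, domain_tasks, loss_fn,
                    inner_lr=1e-3, inner_steps=1):
    """One first-order meta-update over a batch of source-domain tasks.

    domain_tasks: iterable of (support_batch, query_batch) pairs, one
    pair per sampled source domain (an assumption about the setup).
    """
    meta_opt.zero_grad()
    for support_batch, query_batch in domain_tasks:
        # Adapt a temporary copy of the model on the domain's support set.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            loss_fn(fast, support_batch).backward()
            inner_opt.step()
        # Evaluate the adapted copy on the query set; its gradients act as
        # first-order meta-gradients for the shared initialization.
        query_loss = loss_fn(fast, query_batch)
        grads = torch.autograd.grad(query_loss, fast.parameters())
        for p, g in zip(model.parameters(), grads):
            p.grad = g.detach() if p.grad is None else p.grad + g.detach()
    meta_opt.step()
```

At test time, the same inner loop runs once on the small support set of the unseen target domain, which is what "adapting with a small amount of data" amounts to in this reading.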

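ST5 couples T5 with style adversarial training. One plausible reading, sketched below with Hugging Face `transformers`, pools the encoder states and trains a style classifier through a gradient-reversal layer so the encoder preserves content while shedding style cues; the mean pooling, the linear classifier head, and the weight `lamb` are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips and scales the gradient."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class StyleT5(nn.Module):
    def __init__(self, num_styles=2, lamb=1.0):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained("t5-small")
        self.style_clf = nn.Linear(self.t5.config.d_model, num_styles)
        self.lamb = lamb

    def forward(self, input_ids, attention_mask, labels, style_labels):
        out = self.t5(input_ids=input_ids, attention_mask=attention_mask,
                      labels=labels)
        # Mean-pool encoder states as a sentence representation (assumption).
        enc = out.encoder_last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (enc * mask).sum(1) / mask.sum(1)
        # Adversarial branch: the classifier tries to predict the style,
        # while the reversed gradient pushes the encoder to discard it.
        logits = self.style_clf(GradReverse.apply(pooled, self.lamb))
        adv_loss = nn.functional.cross_entropy(logits, style_labels)
        # Reconstruction loss plus adversarial style loss.
        return out.loss + adv_loss
```

The gradient-reversal trick is a standard way to implement adversarial objectives without a separate discriminator optimizer; the paper's actual adversarial formulation may differ.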