Context-aware Style Learning and Content Recovery Networks for Neural Style Transfer

Neural text transfer aims to change the style of a text sequence while preserving its original content. Owing to the lack of parallel data, unsupervised learning-based approaches have developed considerably. However, these approaches still suffer from two problems: (1) the generated sequences sometimes exhibit inconsistencies between the transferred style and the content, and (2) it is difficult to ensure sufficient preservation of the core semantics of the original sequence in the transferred sequence. To address these problems, we propose Context-aware Style Learning and Content Recovery networks (CSLCR) for neural text transfer. Specifically, to improve the consistency between the transferred style and content, the designed context-aware style learning layer (CSL) retrieves target-style samples that are semantically similar to the original sequence and deeply fuses them with it, so as to generate a transferred sequence with a context-aware style. To tackle the second problem, we design a content constraint recovery layer (CCR) that approaches content preservation from an indirect perspective: it decodes and recovers the core content semantics of the original and transferred sequences through two recovery decoding layers, and strengthens the preservation of the core semantics of both sequences via a multi-level constraint mechanism. Experiments on two public datasets demonstrate the superiority of the proposed method.
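The CSL layer's first step is retrieving target-style samples that are semantically close to the original sequence. The abstract does not specify the retrieval mechanism, so the sketch below assumes a common choice: cosine similarity over precomputed sentence embeddings of a target-style sample bank. The function name, the embeddings, and `k` are all illustrative, not from the paper.

```python
import numpy as np

def retrieve_style_samples(query_emb, style_bank, k=3):
    """Return indices of the k target-style samples whose embeddings are
    most similar (by cosine similarity) to the original-sequence embedding.
    Hypothetical retrieval step; the paper's actual mechanism may differ."""
    q = query_emb / np.linalg.norm(query_emb)
    bank = style_bank / np.linalg.norm(style_bank, axis=1, keepdims=True)
    sims = bank @ q                      # cosine similarities, shape (N,)
    return np.argsort(-sims)[:k]         # indices, most similar first

# Toy example: four target-style sentence embeddings (2-d for illustration).
bank = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([0.9, 0.1])
print(retrieve_style_samples(query, bank, k=2))  # -> [0 2]
```

The retrieved samples would then be fused with the original sequence (e.g. via attention) before decoding, per the CSL description above.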
