Search Results for author: Ruoyu Jia

Found 3 papers, 1 paper with code

Denoising based Sequence-to-Sequence Pre-training for Text Generation

no code implementations · IJCNLP 2019 · Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu

This paper presents a new sequence-to-sequence (seq2seq) pre-training method PoDA (Pre-training of Denoising Autoencoders), which learns representations suitable for text generation tasks.

Abstractive Text Summarization · Denoising · +2
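The idea behind denoising-based pre-training is to corrupt clean text and train a seq2seq model to reconstruct the original, so the encoder-decoder learns generation-friendly representations without labeled data. The sketch below shows only the data side of this setup, with an illustrative corruption function (random token masking and deletion); it is not PoDA's actual noising scheme, and the function name, mask symbol, and probabilities are assumptions for illustration.

```python
import random

def add_noise(tokens, mask_token="<mask>", p_mask=0.15, p_drop=0.1, seed=None):
    """Corrupt a token sequence by randomly masking and dropping tokens.

    Each (noisy, original) pair is one training example for a denoising
    seq2seq model: the encoder reads the corrupted text and the decoder
    is trained to emit the clean text.
    """
    rng = random.Random(seed)
    noisy = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue                   # drop the token entirely
        elif r < p_drop + p_mask:
            noisy.append(mask_token)   # replace the token with a mask symbol
        else:
            noisy.append(tok)          # keep the token unchanged
    return noisy

clean = "the quick brown fox jumps over the lazy dog".split()
corrupted = add_noise(clean, seed=0)
# training pair: source = corrupted, target = clean
```

After pre-training on such pairs, the model can be fine-tuned directly on downstream generation tasks such as abstractive summarization.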
