Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.
Ranked #9 on Text Summarization on DUC 2004 Task 1
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization.
Ranked #3 on Text Summarization on arXiv
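A minimal sketch of using such a pre-trained-then-fine-tuned model for abstractive summarization, assuming the google/pegasus-arxiv checkpoint on the Hugging Face Hub and the transformers summarization pipeline (neither is part of the original listing):

    # Assumed setup: summarize a document with a PEGASUS checkpoint
    # fine-tuned on arXiv, via the Hugging Face transformers pipeline.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="google/pegasus-arxiv")
    document = "Recent work pre-training Transformers with self-supervised objectives ..."
    # Generate a short abstractive summary; lengths here are illustrative only.
    print(summarizer(document, max_length=64, min_length=16, do_sample=False)[0]["summary_text"])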
We further confirm the flexibility of our model by showing a Levenshtein Transformer trained by machine translation can straightforwardly be used for automatic post-editing.
Ranked #3 on Machine Translation on WMT2016 Romanian-English
Current pre-training work in natural language generation pays little attention to the problem of exposure bias on downstream tasks.
Ranked #1 on Text Summarization on GigaWord-10k (using extra training data)
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
Ranked #10 on Abstractive Text Summarization on CNN / Daily Mail
As part of this survey, we also develop an open-source library, the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization.
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
Ranked #2 on Generative Question Answering on CoQA (using extra training data)
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks.
Summarization of speech is a difficult problem due to the spontaneity of the flow, disfluencies, and other issues that are not usually encountered in written texts.
Ranked #1 on Text Summarization on WikiHow