1 code implementation • 6 Mar 2023 • David Wan, Mengwen Liu, Kathleen McKeown, Markus Dreyer, Mohit Bansal
We present a systematic study of the effect of generation techniques such as beam search and nucleus sampling on faithfulness in abstractive summarization.
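The study's own setup is not reproduced here, but as a minimal illustration of one of the named decoding techniques, nucleus (top-p) sampling keeps only the smallest set of tokens whose cumulative probability exceeds a threshold p, then samples within that set. A pure-Python sketch (function name and values are illustrative, not from the paper):

```python
import math
import random

def nucleus_sample(logits, p=0.9, rng=None):
    """Sample a token index via nucleus (top-p) sampling:
    restrict to the smallest set of tokens whose cumulative
    probability mass reaches p, then sample within it."""
    rng = rng or random.Random(0)
    # Softmax over the logits (subtract max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Order token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # Keep the smallest prefix whose cumulative mass reaches p.
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:
            break
    # Sample within the nucleus, weighted by the original probabilities.
    weights = [probs[i] for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]
```

Beam search, by contrast, is deterministic: it keeps the k highest-scoring partial hypotheses at each step, which tends toward safer but potentially less faithful or less diverse outputs.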
2 code implementations • NAACL 2022 • Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal
Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely limiting the trust placed in them and their use in real-world applications.
no code implementations • 5 Aug 2021 • Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, Sujith Ravi
Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents.
1 code implementation • NAACL 2021 • Ramakanth Pasunuru, Mengwen Liu, Mohit Bansal, Sujith Ravi, Markus Dreyer
We also show improvements in a transfer-only setup on the DUC-2004 dataset.
no code implementations • 17 Apr 2021 • Arthur Bražinskas, Mengwen Liu, Ramesh Nallapati, Sujith Ravi, Markus Dreyer
This applies to scenarios such as a news publisher training a summarizer on dated news and then summarizing incoming recent news.
no code implementations • ACL 2019 • Shiva Pentyala, Mengwen Liu, Markus Dreyer
We present methods for multi-task learning that take advantage of natural groupings of related tasks.
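The paper's architecture is not specified in this snippet; as a hypothetical sketch of the general idea of exploiting task groupings, parameters can be composed of globally shared, group-shared, and task-specific components (all names and structure below are illustrative assumptions, not the authors' method):

```python
from collections import defaultdict

class GroupedMultiTaskParams:
    """Hypothetical parameter store for multi-task learning with
    task groups: every task sees global parameters, parameters
    shared within its group, and its own task-specific parameters."""

    def __init__(self, task_groups):
        # task_groups: mapping of group name -> list of task names.
        self.task_to_group = {
            task: group
            for group, tasks in task_groups.items()
            for task in tasks
        }
        self.global_params = {"encoder": 0.0}                 # shared by all tasks
        self.group_params = defaultdict(lambda: {"adapter": 0.0})  # shared per group
        self.task_params = defaultdict(lambda: {"head": 0.0})      # task-specific

    def params_for(self, task):
        """Assemble the full parameter set used when training on `task`."""
        group = self.task_to_group[task]
        return {
            **self.global_params,
            **self.group_params[group],
            **self.task_params[task],
        }
```

Under this sketch, updating a group-level parameter is immediately visible to every task in that group, which is the mechanism by which related tasks share statistical strength.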