Text Summarization

369 papers with code • 33 benchmarks • 87 datasets

Text Summarization is a natural language processing (NLP) task that condenses a long document into a shorter version while retaining the most important information and meaning. The goal is a summary that accurately and concisely represents the content of the original text.

Approaches to text summarization fall into two broad families: extractive methods, which identify and copy the most important sentences or phrases from the source text, and abstractive methods, which generate new text that conveys the content of the original.
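
A minimal sketch contrasting the two approaches. The extractive scorer below is a toy word-frequency heuristic; the abstractive example assumes the Hugging Face transformers library and its public facebook/bart-large-cnn checkpoint.

```python
# Toy extractive summarizer: keep the sentences whose words are most
# frequent in the document overall.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    keep = set(ranked[:n_sentences])
    # Re-emit in the original order so the summary stays coherent.
    return " ".join(s for s in sentences if s in keep)

document = (
    "The storm knocked out power across the region on Tuesday. "
    "Utility crews worked through the night to restore service. "
    "Officials said most customers had electricity back by morning. "
    "A public inquiry into grid resilience is planned for next month."
)
print(extractive_summary(document))

# Abstractive summarization instead generates new text conditioned on
# the source document.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
print(summarizer(document, max_length=40, min_length=10)[0]["summary_text"])
```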

The Radiation Oncology NLP Database

zl-liu/radiation-oncology-nlp-database 19 Jan 2024

ROND is designed to fill the lack of dedicated NLP resources in radiation oncology, a field that offers many opportunities for NLP exploration.

Hyperparameter-Free Approach for Faster Minimum Bayes Risk Decoding

CyberAgentAILab/adaptive-mbr 5 Jan 2024

Minimum Bayes-Risk (MBR) decoding has been shown to be a powerful alternative to beam search decoding for a wide range of text generation tasks.
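
For intuition, here is the baseline MBR computation that the paper accelerates: it is quadratic in the number of sampled candidates. This sketch uses a toy unigram-overlap utility as a stand-in for metrics such as BLEU or BLEURT, and shows standard MBR, not the paper's adaptive pruning.

```python
# Baseline MBR decoding: pick the candidate with the highest expected
# utility against all sampled candidates (used as pseudo-references).
from collections import Counter

def overlap_f1(hyp: str, ref: str) -> float:
    """Toy utility: unigram F1 between two strings."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    common = sum((h & r).values())
    if common == 0:
        return 0.0
    p, rec = common / sum(h.values()), common / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates: list[str]) -> str:
    def expected_utility(c: str) -> float:
        return sum(overlap_f1(c, other) for other in candidates) / len(candidates)
    return max(candidates, key=expected_utility)

# Candidates would normally be sampled from the model for one input.
samples = ["the cat sat on the mat", "a cat sat on a mat", "the dog barked"]
print(mbr_decode(samples))
```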

Lookahead: An Inference Acceleration Framework for Large Language Model with Lossless Generation Accuracy

alipay/PainlessInferenceAcceleration 20 Dec 2023

This paper presents a generic framework for accelerating the inference process, yielding a substantial speed increase and cost reduction for our RAG system with lossless generation accuracy.

Ascle: A Python Natural Language Processing Toolkit for Medical Text Generation

yale-lily/ascle 28 Nov 2023

This study introduces Ascle, a pioneering natural language processing (NLP) toolkit designed for medical text generation.

Exploring Prompting Large Language Models as Explainable Metrics

ghazaleh-mahmoodi/Prompting_LLMs_AS_Explainable_Metrics 20 Nov 2023

This paper describes the IUST NLP Lab submission to the Prompting Large Language Models as Explainable Metrics Shared Task at the Eval4NLP 2023 Workshop on Evaluation & Comparison of NLP Systems.
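
As a generic illustration of the prompting setup (not the IUST NLP Lab's actual prompt), an LLM can be used as an explainable metric by asking it for a quality score plus a rationale. The prompt wording and the query_llm helper below are hypothetical.

```python
# Prompting an LLM as an explainable evaluation metric: the model returns
# both a score and the reasoning behind it.
PROMPT = """You are a summarization judge.
Source document:
{source}

Candidate summary:
{summary}

Rate the summary's quality from 1 (poor) to 5 (excellent) and briefly
explain your rating. Answer as: "Score: <n>. Reason: <explanation>"."""

def score_summary(source: str, summary: str, query_llm) -> str:
    # query_llm: any callable mapping a prompt string to a completion.
    return query_llm(PROMPT.format(source=source, summary=summary))
```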

DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines

awslabs/optimizing-multitask-training-through-dynamic-pipelines 17 Nov 2023

This paper proposes a dynamic micro-batching approach to tackle sequence length variation and enable efficient multi-task model training.
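
A much-simplified illustration of the core packing idea behind dynamic micro-batching: sort sequences by length and group them under a padded-token budget, so short and long sequences are not all padded to one global maximum. DynaPipe's actual planner also optimizes the pipeline schedule; this sketch only shows the batching intuition.

```python
# Pack variable-length sequences into micro-batches under a token budget,
# where a batch's padded cost is (max length in batch) * (batch size).
def pack_microbatches(seq_lengths: list[int], token_budget: int) -> list[list[int]]:
    batches, current = [], []
    for length in sorted(seq_lengths):
        new_max = max(length, current[-1]) if current else length
        if current and new_max * (len(current) + 1) > token_budget:
            batches.append(current)
            current = []
        current.append(length)
    if current:
        batches.append(current)
    return batches

print(pack_microbatches([12, 480, 35, 96, 500, 20], token_budget=1024))
# -> [[12, 20, 35, 96], [480, 500]]
```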

Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization

yale-nlp/instrusum 15 Nov 2023

Our study reveals that instruction-controllable text summarization remains a challenging task for LLMs: (1) all evaluated LLMs still make factual and other types of errors in their summaries; (2) no LLM-based evaluation method achieves strong alignment with human annotators when judging the quality of candidate summaries; and (3) different LLMs show large performance gaps in both summary generation and evaluation.
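
To make the task concrete, here is a hypothetical example of an instruction-controllable summarization input: the summary must satisfy a user-specific requirement rather than merely compress the article. The wording is invented for illustration.

```python
# An instruction-controllable summarization prompt pairs the source
# article with a user-specific constraint on the summary.
article_text = "..."  # the source article goes here

instruction = (
    "Summarize the article in two sentences, focusing only on the "
    "financial implications and avoiding technical jargon."
)
prompt = f"{instruction}\n\nArticle:\n{article_text}\n\nSummary:"
```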

Controllable Text Summarization: Unraveling Challenges, Approaches, and Prospects -- A Survey

ashokurlana/controllable_text_summarization_survey 15 Nov 2023

Generic text summarization approaches often fail to address the specific intent and needs of individual users.

GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization

nc0der/greekt5 13 Nov 2023

The proposed models were thoroughly evaluated on the same dataset against GreekBART, the state-of-the-art model for Greek abstractive news summarization.

Boosting Summarization with Normalizing Flows and Aggressive Training

yuyangstat/flowsum 1 Nov 2023

This paper presents FlowSUM, a normalizing flows-based variational encoder-decoder framework for Transformer-based summarization.
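
As rough intuition for the flow component, here is a textbook planar normalizing flow (Rezende and Mohamed, 2015) in PyTorch. FlowSUM's actual architecture and training regime differ, so treat this purely as a sketch of the building block used to enrich a variational posterior.

```python
# Minimal planar normalizing flow: f(z) = z + u * tanh(w.z + b), an
# invertible transform with a cheap log-det-Jacobian. Invertibility
# requires a constraint on u.w, omitted here for brevity.
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):  # z: (batch, dim)
        lin = z @ self.w + self.b                          # (batch,)
        f = z + self.u * torch.tanh(lin)[:, None]          # transformed sample
        psi = (1 - torch.tanh(lin) ** 2)[:, None] * self.w # (batch, dim)
        log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)
        return f, log_det  # log_det adjusts the ELBO's KL term

# Stacking flows turns a simple Gaussian posterior into a richer one.
z = torch.randn(4, 16)
z_k, log_det = PlanarFlow(16)(z)
```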
