Text Summarization

369 papers with code • 33 benchmarks • 88 datasets

Text Summarization is a natural language processing (NLP) task that condenses a lengthy document into a shorter version while retaining its most important information and meaning. The goal is to produce a summary that represents the content of the original text accurately and concisely.

There are two broad approaches to text summarization: extractive methods, which identify and extract important sentences or phrases directly from the source text, and abstractive methods, which generate new text that conveys the content of the original, as sketched below.
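
To make the distinction concrete, here is a minimal, dependency-free sketch of the extractive approach, ranking sentences by average word frequency and keeping the top-k in document order. The function name and scoring heuristic are illustrative stand-ins for the sentence-ranking methods used in practice, not any particular paper's method.

```python
# Minimal extractive summarization sketch: rank sentences by the average
# corpus frequency of their words, then emit the top-k in original order.
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s: str) -> float:
        tokens = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return " ".join(s for s in sentences if s in top)

doc = ("Text summarization condenses a long document into a short one. "
       "Extractive methods select important sentences from the source. "
       "Abstractive methods generate entirely new sentences instead. "
       "Frequency of key terms is one simple signal of importance.")
print(extractive_summary(doc, k=2))
```

An abstractive system would instead generate a paraphrase, today typically with a sequence-to-sequence model or an LLM.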

Latest papers with no code

Neural Sequence-to-Sequence Modeling with Attention by Leveraging Deep Learning Architectures for Enhanced Contextual Understanding in Abstractive Text Summarization

no code yet • 8 Apr 2024

A deep sequence-to-sequence (seq2seq) model with an attention mechanism is employed to predict a generalized summary based on the vector representation.
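
As background, the sketch below shows the attention step such a seq2seq decoder relies on: at each decoding step, the decoder state is scored against every encoder state, and the resulting weights form a context vector used to predict the next summary token. This is generic dot-product attention under assumed shapes, not the paper's code.

```python
# Generic dot-product attention for one seq2seq decoder step (illustrative).
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    # decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    weights = F.softmax(scores, dim=1)           # attention over source tokens
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
    return context, weights                      # context feeds the next decoder step

context, weights = attend(torch.randn(4, 256), torch.randn(4, 50, 256))
print(context.shape, weights.shape)  # torch.Size([4, 256]) torch.Size([4, 50])
```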

FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping

no code yet • 5 Apr 2024

In this work, we observed the saturation of computationally expensive feed-forward blocks in LLM layers and proposed FFN-SkipLLM, a novel fine-grained skip strategy for autoregressive LLMs.
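
The snippet below is a hedged sketch of the general idea of adaptive feed-forward skipping; the skip criterion shown (cosine similarity between a block's input and its residual output) is an assumption for illustration, and FFN-SkipLLM's actual policy may differ.

```python
# Illustrative sketch of skipping a "saturated" FFN block during decoding.
import torch
import torch.nn.functional as F

def maybe_skip_ffn(hidden, ffn, threshold=0.99):
    updated = hidden + ffn(hidden)  # standard residual FFN update
    # If the update barely moves the hidden state, treat the block as
    # saturated. In a real decoder this decision would be cached from
    # earlier tokens so that skipped blocks are never computed at all.
    sim = F.cosine_similarity(hidden, updated, dim=-1).mean()
    return hidden if sim > threshold else updated

ffn = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.GELU(),
                          torch.nn.Linear(256, 64))
h = torch.randn(2, 10, 64)
print(maybe_skip_ffn(h, ffn).shape)  # torch.Size([2, 10, 64])
```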

Hallucination Diversity-Aware Active Learning for Text Summarization

no code yet • 2 Apr 2024

Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., text that is factually incorrect or unsupported.

Transformer-Lite: High-efficiency Deployment of Large Language Models on Mobile Phone GPUs

no code yet • 29 Mar 2024

Large Language Models (LLMs) are widely employed on mobile phones for tasks such as intelligent assistants, text summarization, translation, and multimodal applications.

Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction

no code yet • 26 Mar 2024

Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation.

Improving Sequence-to-Sequence Models for Abstractive Text Summarization Using Meta Heuristic Approaches

no code yet • 24 Mar 2024

As human society transitions into the information age, attention spans are shrinking: the number of people willing to read lengthy news articles is decreasing rapidly, and the need for succinct information is higher than ever.

Optimal path for Biomedical Text Summarization Using Pointer GPT

no code yet • 22 Mar 2024

To address these limitations, we replaced the attention mechanism in the GPT model with a pointer network.
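
For intuition, the sketch below shows the core pointer-network step: unlike standard attention, which folds its weights into a context vector, a pointer network uses the softmax over source positions directly as the output distribution, so the model copies a token from the input. This is a generic illustration, not the paper's Pointer GPT implementation.

```python
# Generic pointer-network step: the attention distribution over source
# positions is itself the output distribution (illustrative shapes).
import torch
import torch.nn.functional as F

def pointer_step(decoder_state, encoder_states):
    # decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    return F.softmax(scores, dim=1)   # distribution over *source positions*

dist = pointer_step(torch.randn(2, 128), torch.randn(2, 30, 128))
copy_idx = dist.argmax(dim=1)         # index of the source token to copy
print(dist.shape, copy_idx)           # torch.Size([2, 30]) and two indices
```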

Automatic Summarization of Doctor-Patient Encounter Dialogues Using Large Language Model through Prompt Tuning

no code yet • 19 Mar 2024

We examined prompt-tuning strategies, the size of soft prompts, and the few-shot learning ability of GatorTronGPT, a generative clinical LLM with up to 20 billion parameters developed using 277 billion clinical and general English words.
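
A hedged sketch of the soft-prompt mechanism is given below: a small set of learnable embeddings is prepended to the input embeddings while the LLM's own weights stay frozen. GatorTronGPT itself is not assumed available here; a generic embedding dimension stands in.

```python
# Soft prompt tuning sketch: only the prepended prompt embeddings are trained;
# the base LLM's weights stay frozen. Dimensions are illustrative.
import torch

class SoftPrompt(torch.nn.Module):
    def __init__(self, n_tokens: int, hidden: int):
        super().__init__()
        self.prompt = torch.nn.Parameter(torch.randn(n_tokens, hidden) * 0.02)

    def forward(self, input_embeds):  # input_embeds: (batch, seq, hidden)
        prefix = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

soft = SoftPrompt(n_tokens=20, hidden=768)  # the only trainable parameters
x = torch.randn(2, 50, 768)                 # frozen model's token embeddings
print(soft(x).shape)                        # torch.Size([2, 70, 768])
```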

Aligning Uncertainty: Leveraging LLMs to Analyze Uncertainty Transfer in Text Summarization

no code yet • Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024)

The method capitalizes on a small number of expert annotations and on the capabilities of Large Language Models (LLMs) to evaluate how the uncertainty of the source text aligns with the uncertainty expressions in the summary.

Read between the lines -- Functionality Extraction From READMEs

no code yet • 15 Mar 2024

While text summarization is a well-known NLP task, in this paper, we introduce a novel and useful variant of it called functionality extraction from Git README files.