Text Generation

1495 papers with code • 21 benchmarks • 150 datasets

Text Generation is the task of generating text with the goal of being indistinguishable from human-written text. In the literature, this task is more formally known as natural language generation (NLG).

Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods for text generation include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like BLEU, METEOR, and ROUGE.
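As a minimal illustration of the Markov-process approach mentioned above, the sketch below builds a word-level Markov chain from a toy corpus and samples from it. The corpus, function names, and `order` parameter are illustrative choices, not from any particular paper:

```python
import random
from collections import defaultdict

def build_markov_model(text, order=2):
    """Map each context of `order` consecutive words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Sample a word sequence by repeatedly drawing a successor for the current context."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    output = list(context)
    for _ in range(length):
        successors = model.get(tuple(output[-len(context):]))
        if not successors:  # dead end: this context was never followed by anything
            break
        output.append(rng.choice(successors))
    return " ".join(output)

corpus = "the cat sat on the mat and the cat ran to the mat"
model = build_markov_model(corpus, order=2)
print(generate(model, length=10))
```

Higher-order models copy longer verbatim spans from the training text; neural models like LSTMs and Transformers replace the lookup table with a learned conditional distribution over the next token.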


(Image credit: Adversarial Ranking for Language Generation)

Libraries

Use these libraries to find Text Generation models and implementations

Latest papers with no code

Related Work and Citation Text Generation: A Survey

no code yet • 17 Apr 2024

To convince readers of the novelty of their research paper, authors must perform a literature review and compose a coherent story that connects and relates prior works to the current work.

A Survey on Retrieval-Augmented Text Generation for Large Language Models

no code yet • 17 Apr 2024

Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of up-to-date external information.
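The dynamic-integration idea behind RAG can be sketched in a few lines: retrieve the passages most similar to the query, then prepend them to the prompt so the LLM conditions on them. The bag-of-words retriever, toy document store, and prompt template below are simplifying assumptions; production systems use dense embeddings and a vector database:

```python
from collections import Counter
from math import sqrt

# Toy document store; a real RAG system would use a vector database.
documents = [
    "RAG augments a language model with retrieved passages.",
    "BLEU measures n-gram overlap with reference translations.",
    "Markov chains model text as a sequence of state transitions.",
]

def bow(text):
    """Bag-of-words term counts (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = bow(query)
    ranked = sorted(documents, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Prepend retrieved context so the LLM can condition on up-to-date information."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("how does rag help a language model"))
```

Because the document store can be updated without retraining, the model's effective knowledge stays current, which is the "static limitations" problem the abstract refers to.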

Modeling Low-Resource Health Coaching Dialogues via Neuro-Symbolic Goal Summarization and Text-Units-Text Generation

no code yet • 16 Apr 2024

Health coaching helps patients achieve personalized and lifestyle-related goals, effectively managing chronic conditions and alleviating mental health issues.

Generative Text Steganography with Large Language Model

no code yet • 16 Apr 2024

In this paper, we explore LLM-Stega, a black-box generative text steganography method that operates through the user interfaces of large language models.

KG-CTG: Citation Generation through Knowledge Graph-guided Large Language Models

no code yet • 15 Apr 2024

Citation Text Generation (CTG) is a task in natural language processing (NLP) that aims to produce text that accurately cites or references a cited document within a source document.

Unveiling LLM Evaluation Focused on Metrics: Challenges and Solutions

no code yet • 14 Apr 2024

The overarching goal is to furnish researchers with a pragmatic guide for effective LLM evaluation and metric selection, thereby advancing the understanding and application of these large language models.

Gaining More Insight into Neural Semantic Parsing with Challenging Benchmarks

no code yet • 12 Apr 2024

The Parallel Meaning Bank (PMB) serves as a corpus for semantic processing with a focus on semantic parsing and text generation.

Language Generation in the Limit

no code yet • 10 Apr 2024

An adversary enumerates the strings of an unknown language L, and a computational agent tries to learn to generate from it; we say that the agent generates from L in the limit if, after some finite point in the enumeration of L, the agent is able to produce new elements that come exclusively from L and that have not yet been presented by the adversary.
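This definition can be made concrete with a toy example (not the paper's construction): take L = { "a"*n : n ≥ 1 }, let the adversary enumerate it in order, and let the agent emit a fresh string one symbol longer than anything it has seen:

```python
def adversary_enumeration():
    """Enumerate the toy language L = { 'a'*n : n >= 1 } in order of length."""
    n = 1
    while True:
        yield "a" * n
        n += 1

def agent_generate(seen):
    """Produce a new element of L: one 'a' longer than anything observed so far."""
    longest = max(len(s) for s in seen)
    return "a" * (longest + 1)

seen = set()
stream = adversary_enumeration()
for _ in range(5):          # finite prefix of the enumeration
    seen.add(next(stream))

candidate = agent_generate(seen)
print(candidate)            # a string in L that was not yet presented
```

After the finite prefix, every string the agent emits is in L and unseen, so this agent generates from L in the limit; the difficulty studied in the paper is doing this when L is an unknown member of a large class of candidate languages.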

GraSAME: Injecting Token-Level Structural Information to Pretrained Language Models via Graph-guided Self-Attention Mechanism

no code yet • 10 Apr 2024

Pretrained Language Models (PLMs) benefit from external knowledge stored in graph structures for various downstream tasks.

Less is More for Improving Automatic Evaluation of Factual Consistency

no code yet • 9 Apr 2024

Assessing the factual consistency of automatically generated texts in relation to source context is crucial for developing reliable natural language generation applications.