Text Generation

1438 papers with code • 167 benchmarks • 149 datasets

Text Generation is the task of generating text with the goal of making it indistinguishable from human-written text. The task is more formally known as "natural language generation" in the literature.

Text generation can be addressed with Markov processes or deep generative models such as LSTMs. More recently, some of the most advanced methods have been Transformer-based pretrained models such as BART and GPT, alongside GAN-based approaches such as SeqGAN. Text generation systems are evaluated either through human ratings or with automatic evaluation metrics such as BLEU, ROUGE, and METEOR.
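As a minimal sketch of this workflow, the snippet below generates a continuation with a pretrained causal language model and scores a candidate output against a reference with corpus-level BLEU. It assumes the Hugging Face transformers pipeline and the sacrebleu package; the model name and example strings are placeholders, and ROUGE or METEOR can be computed analogously with their own packages.

```python
# Sketch: generate text with a pretrained causal LM and score a candidate
# against a reference with BLEU. Model name and strings are illustrative.
from transformers import pipeline
import sacrebleu

# Autoregressive generation with GPT-2 (any causal LM checkpoint works here).
generator = pipeline("text-generation", model="gpt2")
out = generator("The quick brown fox", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])

# Automatic evaluation: corpus-level BLEU between system outputs and references.
candidates = ["the cat sat on the mat"]
references = [["a cat was sitting on the mat"]]  # one reference stream
bleu = sacrebleu.corpus_bleu(candidates, references)
print(f"BLEU: {bleu.score:.2f}")
```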

(Image credit: Adversarial Ranking for Language Generation)

Libraries

Use these libraries to find Text Generation models and implementations

Most implemented papers

Show and Tell: A Neural Image Caption Generator

karpathy/neuraltalk CVPR 2015

Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.

Generating Sequences With Recurrent Neural Networks

karpathy/char-rnn 4 Aug 2013

This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time.
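The sketch below illustrates this one-step-at-a-time idea with a character-level LSTM sampling loop. It assumes PyTorch; the tiny vocabulary and untrained weights are placeholders, so the output is only meaningful after the model is trained with next-character cross-entropy on a text corpus.

```python
# Sketch: character-level autoregressive sampling from an LSTM (PyTorch).
# Vocabulary and weights are placeholders; a real char-rnn is trained first.
import torch
import torch.nn as nn

vocab = list("abcdefghijklmnopqrstuvwxyz ")
stoi = {c: i for i, c in enumerate(vocab)}

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, idx, state=None):
        out, state = self.lstm(self.embed(idx), state)
        return self.head(out), state

model = CharLSTM(len(vocab))
model.eval()

# Generate by repeatedly sampling the next character and feeding it back in.
idx = torch.tensor([[stoi["t"]]])
state, chars = None, ["t"]
with torch.no_grad():
    for _ in range(50):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)
        chars.append(vocab[nxt.item()])
        idx = nxt.view(1, 1)
print("".join(chars))
```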

Learning Transferable Visual Models From Natural Language Supervision

openai/CLIP 26 Feb 2021

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

huggingface/transformers ACL 2020

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
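The sketch below shows the in-filling idea at inference time: a span replaced by BART's single mask token is regenerated by the model. It assumes the facebook/bart-large checkpoint from huggingface/transformers; the input sentence is a placeholder.

```python
# Sketch: BART text in-filling, regenerating a masked span at inference time.
# Checkpoint and sentence are illustrative.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

text = "The storm caused <mask> across the entire coast."
inputs = tokenizer(text, return_tensors="pt")
ids = model.generate(inputs["input_ids"], num_beams=4, max_length=30)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```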

Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models

ashwinkalyan/dbs 7 Oct 2016

We observe that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
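One widely available implementation of this idea is the grouped beam search in the transformers generate method: the sketch below splits six beams into three groups and applies a diversity penalty so the groups avoid repeating each other's tokens. The checkpoint and prompt are placeholders.

```python
# Sketch: diverse beam search via grouped beams with a diversity penalty.
# Model checkpoint and prompt are illustrative placeholders.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("The city council met on Tuesday to discuss the new budget.",
                   return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    num_beams=6,             # total beams
    num_beam_groups=3,       # beams split into diverse groups
    diversity_penalty=1.0,   # penalize tokens already chosen by other groups
    num_return_sequences=3,
    max_length=30,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```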

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

LantaoYu/SeqGAN 18 Sep 2016

As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data.

Language Models are Unsupervised Multitask Learners

openai/gpt-2 Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

BERTScore: Evaluating Text Generation with BERT

Tiiiger/bert_score ICLR 2020

We propose BERTScore, an automatic evaluation metric for text generation.
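The listed repository ships a pip-installable bert-score package; the sketch below uses it to score candidate generations against references (the strings are placeholders).

```python
# Sketch: scoring candidate generations against references with BERTScore.
# Strings are illustrative placeholders.
from bert_score import score

candidates = ["the weather is cold today"]
references = ["it is freezing today"]

# Returns precision, recall, and F1 tensors, one entry per candidate.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1[0].item():.3f}")
```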

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

salesforce/lavis 30 Jan 2023

The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models.

Prefix-Tuning: Optimizing Continuous Prompts for Generation

XiangLi1999/PrefixTuning ACL 2021

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks.
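As a hedged sketch of the idea, the snippet below uses the Hugging Face peft library (not the paper's original XiangLi1999/PrefixTuning code) to attach trainable prefix vectors to GPT-2 while keeping the pretrained parameters frozen; the model name and number of virtual tokens are illustrative.

```python
# Sketch: prefix-tuning with the peft library (a stand-in for the paper's
# original code). Model name and num_virtual_tokens are illustrative.
from peft import PrefixTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)

# Only the prefix parameters are trainable; the pretrained LM stays frozen.
model.print_trainable_parameters()
```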