Text Generation
1454 papers with code • 167 benchmarks • 149 datasets
Text Generation is the task of generating text with the goal of being indistinguishable from human-written text. This task is more formally known as "natural language generation" (NLG) in the literature.
Text generation can be addressed with Markov processes or deep generative models such as LSTMs. More recently, some of the most advanced methods include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or with automatic evaluation metrics such as METEOR, ROUGE, and BLEU.
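The Markov-process approach mentioned above can be illustrated with a short, self-contained sketch. The toy corpus, function names, and first-order chain below are illustrative choices, not part of any benchmark or library:

```python
# Minimal sketch: first-order Markov chain text generation (illustrative only).
import random
from collections import defaultdict

def build_chain(tokens):
    """Map each token to the list of tokens that follow it in the corpus."""
    chain = defaultdict(list)
    for current, nxt in zip(tokens, tokens[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Sample a sequence by repeatedly drawing a random successor token."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no observed successor for this token
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(generate(build_chain(corpus), start="the"))
```

Automatic evaluation with BLEU can likewise be sketched in a few lines, assuming NLTK is installed; the reference and hypothesis sentences are toy examples:

```python
# Minimal sketch: sentence-level BLEU with NLTK (toy data).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the cat sat on the mat".split()]  # list of reference token lists
hypothesis = "the cat is on the mat".split()

# Smoothing avoids a zero score when higher-order n-grams have no overlap.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```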
(Image credit: Adversarial Ranking for Language Generation)
Libraries
Use these libraries to find Text Generation models and implementations.
Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Headline Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- News Generation
- Question-Answer Generation
- Story Completion
- Code Documentation Generation
- Concept-To-Text Generation
- Paper Generation
- Sonnet Generation
- Profile Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Molecular Description Generation
- Natural Language Landmark Navigation Instructions Generation
Latest papers
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler
State-of-the-art intent classification (IC) and slot filling (SF) methods often rely on data-intensive deep learning models, limiting their practicality for industry applications.
ToXCL: A Unified Framework for Toxic Speech Detection and Explanation
This motivates the need for unified frameworks that can effectively detect and explain implicit toxic speech.
LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
Efficient fine-tuning is vital for adapting large language models (LLMs) to downstream tasks.
Dynamic Reward Adjustment in Multi-Reward Reinforcement Learning for Counselor Reflection Generation
In this paper, we study the problem of multi-reward reinforcement learning to jointly optimize for multiple text qualities for natural language generation.
Evaluating Named Entity Recognition: Comparative Analysis of Mono- and Multilingual Transformer Models on Brazilian Corporate Earnings Call Transcriptions
By curating a comprehensive dataset comprising 384 transcriptions and leveraging weak supervision techniques for annotation, we evaluate the performance of monolingual models trained on Portuguese (BERTimbau and PTT5) and multilingual models (mBERT and mT5).
ConvSDG: Session Data Generation for Conversational Search
Conversational search provides a more convenient interface for users to search by allowing multi-turn interaction with the search engine.
DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers
Scaling large models with long sequences across applications like language generation, video generation and multimodal tasks requires efficient sequence parallelism.
DRAGIN: Dynamic Retrieval Augmented Generation based on the Real-time Information Needs of Large Language Models
Our framework is specifically designed to make decisions on when and what to retrieve based on the LLM's real-time information needs during the text generation process.
Whose Side Are You On? Investigating the Political Stance of Large Language Models
Large Language Models (LLMs) have gained significant popularity for their application in various everyday tasks such as text generation, summarization, and information retrieval.
Generative Pretrained Structured Transformers: Unsupervised Syntactic Language Models at Scale
A syntactic language model (SLM) incrementally generates a sentence with its syntactic tree in a left-to-right manner.