Dialogue Generation
229 papers with code • 13 benchmarks • 31 datasets
Dialogue generation is the natural language processing task of "understanding" natural language inputs and producing conversational output. Such systems are usually intended to converse with humans, for instance in back-and-forth dialogue with a conversational agent like a chatbot. Example benchmarks for this task (see others under Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated with metrics such as BLEU, ROUGE, and METEOR, although these correlate only weakly with human judgment, a limitation that newer metrics such as UnSupervised and Reference-free (USR) and the Metric for automatic Unreferenced dialog evaluation (MaUdE) aim to address.
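To make the weak-correlation point concrete, here is a minimal, simplified sketch of sentence-level BLEU (single reference, up to 4-grams, with ad-hoc smoothing; the names `ngrams` and `sentence_bleu` are illustrative, not from any specific library). It shows why n-gram overlap metrics struggle with dialogue: a paraphrased but perfectly acceptable reply shares few n-grams with the reference and scores near zero.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of smoothed
    n-gram precisions, times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        overlap = sum((cand_ngrams & ngrams(ref, n)).values())
        total = max(sum(cand_ngrams.values()), 1)
        # Tiny additive smoothing avoids log(0) when no n-gram matches.
        precisions.append((overlap + 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages trivially short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

# An exact match scores close to 1.0 ...
print(sentence_bleu("i am doing well thanks", "i am doing well thanks"))
# ... but a fluent paraphrase scores near zero, despite being a fine reply.
print(sentence_bleu("i am doing well thanks", "pretty good thank you"))
```

Reference-free metrics like USR and MaUdE sidestep exactly this failure mode by scoring a response against the dialogue context rather than against a single gold reply.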
Libraries
Use these libraries to find Dialogue Generation models and implementations
Most implemented papers
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.
PanGu-$\alpha$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1 TB of high-quality Chinese data from a wide range of domains to pretrain the model.
Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation
However, previous work in dialogue response generation has shown that these metrics do not correlate strongly with human judgment in the non task-oriented dialogue setting.
End-to-end Adversarial Learning for Generative Conversational Agents
This paper presents a new adversarial learning method for generative conversational agents (GCA), along with a new GCA model.
DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text
Existing text generation methods tend to produce repeated and "boring" expressions.
Personalized Dialogue Generation with Diversified Traits
In this paper, we investigate the problem of incorporating explicit personality traits in dialogue generation to deliver personalized dialogues.
Rethinking Action Spaces for Reinforcement Learning in End-to-end Dialog Agents with Latent Variable Models
Defining action spaces for conversational agents and optimizing their decision-making process with reinforcement learning is an enduring challenge.
Text Generation from Knowledge Graphs with Graph Transformers
Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce.
PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable
Pre-training models have proven effective for a wide range of natural language processing tasks.
PLATO-XL: Exploring the Large-scale Pre-training of Dialogue Generation
To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations.