Dialogue Generation

230 papers with code • 13 benchmarks • 30 datasets

Dialogue generation is the task of producing a natural language response to a natural language input, typically for back-and-forth conversation with a human, as with a chatbot or other conversational agent. Example benchmarks for this task (see others such as Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models are commonly evaluated with overlap metrics such as BLEU, ROUGE, and METEOR, although these correlate weakly with human judgement; newer metrics such as UnSupervised and Reference-free (USR) and the Metric for automatic Unreferenced dialog evaluation (MaUde) aim to address this weakness.
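To make the overlap metrics above concrete, the following self-contained sketch computes a simplified sentence-level BLEU score (clipped n-gram precision combined with a brevity penalty). This is an illustration of the idea only; production toolkits such as NLTK or sacreBLEU add smoothing, tokenization handling, and corpus-level aggregation.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions, multiplied by a brevity penalty for short candidates."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped overlap: each n-gram counts at most as often as in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_prec / max_n)

ref = "i am doing well thanks for asking".split()
cand = "i am doing fine thanks".split()
print(round(bleu(cand, ref), 3))  # → 0.424
```

A perfect match scores 1.0, but note the weakness discussed above: "i am doing great thanks" would score similarly despite being an equally valid reply, which is why reference-free metrics like USR and MaUde were proposed.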

Libraries

Use these libraries to find Dialogue Generation models and implementations

Most implemented papers

Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation

snakeztc/NeuralDialog-LAED ACL 2018

The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains.

Zero-Shot Dialog Generation with Cross-Domain Latent Actions

snakeztc/NeuralDialog-ZSDG WS 2018

This paper introduces zero-shot dialog generation (ZSDG), as a step towards neural dialog systems that can instantly generalize to new situations with minimal data.

Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation

AuCson/SEDST 31 Aug 2018

However, the expensive nature of state labeling and the weak interpretability make dialogue state tracking a challenging problem for both task-oriented and non-task-oriented dialogue generation. For generating responses in task-oriented dialogues, state tracking is usually learned from manually annotated corpora, where the human annotation is expensive for training; for generating responses in non-task-oriented dialogues, most existing work neglects explicit state tracking due to the unlimited number of dialogue states.

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

facebookresearch/ParlAI ICLR 2019

In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date.

ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

zhanghainan/ReCoSa ACL 2019

Then, the self-attention mechanism is utilized to update both the context and masked response representation.
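The self-attention step can be illustrated generically. The sketch below is plain Python and is not ReCoSa's actual multi-head implementation; it shows the core scaled dot-product weighting that lets a query (e.g. a response representation) attend over per-utterance context vectors.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector:
    weight each value by how well its key matches the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Two context "utterance" vectors; the query is closer to the first,
# so the output is pulled toward it.
keys = values = [[1.0, 0.0], [0.0, 1.0]]
out = attention([1.0, 0.0], keys, values)
print([round(x, 3) for x in out])  # → [0.67, 0.33]
```

In a model like ReCoSa this computation runs in parallel over all context positions, so the attention weights directly expose which previous utterances the generated response relies on.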

A Pre-training Based Personalized Dialogue Generation Model with Persona-sparse Data

ghosthamlet/persona 12 Nov 2019

Further, to incorporate the target persona in the decoding process and to balance its contribution, an attention routing structure is devised in the decoder to merge features extracted from the target persona and dialogue contexts using dynamically predicted weights.

A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation

Minghong-Xu/Neural_Topical_Expansion_for_UPDS 6 Feb 2020

To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing them to generate dialogue responses.

Towards Controllable Biases in Language Generation

ewsheng/nlg-bias Findings of the Association for Computational Linguistics 2020

We present a general approach towards controllable societal biases in natural language generation (NLG).

A Large-Scale Chinese Short-Text Conversation Dataset

thu-coai/CDial-GPT 10 Aug 2020

The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling.