Open-Domain Dialog
32 papers with code • 1 benchmark • 11 datasets
Latest papers
Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation
These models also suffer from posterior collapse, i.e., the decoder tends to ignore the latent variables and instead directly accesses the information captured by the encoder through the cross-attention mechanism.
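Posterior collapse is usually diagnosed by watching the KL term of the ELBO: when the approximate posterior matches the prior almost exactly, the KL goes to zero and the latent carries no information. A minimal sketch (assuming diagonal Gaussians and a standard-normal prior; this is generic VAE math, not the Dior-CVAE implementation):

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

# A collapsed posterior matches the prior: KL -> 0, so the decoder
# can reconstruct the target while ignoring the latent entirely.
collapsed = kl_diag_gaussian([0.0, 0.0], [0.0, 0.0])

# An informative posterior diverges from the prior: KL stays well above 0.
informative = kl_diag_gaussian([1.5, -0.8], [-1.0, -1.0])
```

Monitoring this quantity per batch (and, e.g., annealing its weight) is a common way to detect and mitigate collapse during training.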
Open-Domain Dialog Evaluation using Follow-Ups Likelihood
Automatic evaluation of open-domain dialogs remains an unsolved problem.
Re2G: Retrieve, Rerank, Generate
As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger.
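The title describes a three-stage pipeline: retrieve candidate passages, rerank them with a finer-grained scorer, then generate conditioned on the top evidence. A toy sketch of that control flow (lexical-overlap scoring and a template "generator" stand in for the dense retriever, cross-attention reranker, and seq2seq model a real Re2G-style system would use):

```python
def retrieve(query, corpus, k=2):
    """Stage 1: cheap first-pass retrieval over the whole corpus."""
    overlap = lambda doc: len(set(query.split()) & set(doc.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def rerank(query, passages):
    """Stage 2: re-score the small shortlist with a costlier criterion
    (here, length-normalized overlap as a placeholder)."""
    score = lambda p: len(set(query.split()) & set(p.split())) / len(p.split())
    return sorted(passages, key=score, reverse=True)

def generate(query, passages):
    """Stage 3: condition generation on the top reranked passage."""
    return f"Q: {query} | evidence: {passages[0]}"

corpus = [
    "the capital of france is paris",
    "paris is known for the eiffel tower",
    "berlin is the capital of germany",
]
query = "what is the capital of france"
answer = generate(query, rerank(query, retrieve(query, corpus)))
```

The point of the intermediate rerank stage is that the retriever can stay fast and recall-oriented while a slower, more precise model decides which of the few surviving passages the generator actually sees.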
GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
We introduce GODEL (Grounded Open Dialogue Language Model), a large pre-trained language model for dialog.
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI
Finally, we provide baseline systems for these tasks and examine the influence of speakers' personalities and emotions on conversation.
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.
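Casting heterogeneous dialogue tasks into one text-to-text format means serializing an instruction plus the dialogue context into a single input string with the target as the output string. A minimal sketch of such a serializer (the field names and separators are illustrative, not the actual InstructDial templates):

```python
def to_text2text(instruction, history, response):
    """Serialize one dialogue example into a (source, target) text pair.
    [SEP] and [EOT] are hypothetical separator tokens for illustration."""
    src = f"Instruction: {instruction} [SEP] Dialogue: {' [EOT] '.join(history)}"
    return src, response

src, tgt = to_text2text(
    "Generate a coherent next response for the conversation.",
    ["Hi, how are you?", "Doing well, thanks! You?"],
    "Pretty good, just got back from a run.",
)
```

Because every task reduces to the same string-in, string-out shape, a single seq2seq model can be trained on all 48 tasks jointly and prompted zero-shot with unseen instructions.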
What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation
Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect.
Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks
Research on open-domain dialog systems has greatly prospered thanks to neural models trained on large-scale corpora; however, such corpora often introduce various safety problems (e.g., offensive language, biases, and toxic behaviors) that significantly hinder the deployment of dialog systems in practice.
Investigating Robustness of Dialog Models to Popular Figurative Language Constructs
Humans often employ figurative language in communication, including during interactions with dialog systems.
GenSF: Simultaneous Adaptation of Generative Pre-trained Models and Slot Filling
We instead achieve strong alignment by simultaneously modifying both the pre-trained model and the formulation of the downstream task, which is more efficient and preserves the scalability of transfer learning.