Dialogue Generation
229 papers with code • 14 benchmarks • 31 datasets
Dialogue generation is the natural language processing task of "understanding" natural language inputs in order to produce a response. Such systems are usually intended for conversing with humans, for instance in back-and-forth dialogue with a conversational agent such as a chatbot. Example benchmarks for this task (see related tasks such as Natural Language Understanding) include FusedChat and the Ubuntu Dialogue Corpus (UDC). Models can be evaluated with metrics such as BLEU, ROUGE, and METEOR, though these correlate weakly with human judgement; newer metrics such as the Unsupervised and Reference-free metric (USR) and the Metric for automatic Unreferenced dialog evaluation (MaUde) aim to address this.
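For context, word-overlap metrics like BLEU and ROUGE can be computed with off-the-shelf libraries. The sketch below scores one generated response against a single reference using NLTK and the rouge-score package; the example strings are illustrative, and real evaluations average over a test set.

```python
# Minimal sketch: scoring a generated dialogue response against a reference
# with BLEU (via NLTK) and ROUGE-L (via the rouge-score package).
# Requires: pip install nltk rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "i am doing well thanks for asking"
hypothesis = "i am doing fine thank you"

# BLEU expects tokenized text: a list of references, each a list of tokens.
smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
bleu = sentence_bleu([reference.split()], hypothesis.split(),
                     smoothing_function=smooth)

# ROUGE-L works on raw strings and reports precision/recall/F1.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, hypothesis)["rougeL"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-L F1: {rouge_l:.3f}")
```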
Latest papers with no code
Research on emotionally intelligent dialogue generation based on automatic dialogue system
The model can detect and understand a wide range of emotions and specific pain signals in real time, enabling the system to provide empathetic interaction.
Modeling Low-Resource Health Coaching Dialogues via Neuro-Symbolic Goal Summarization and Text-Units-Text Generation
Health coaching helps patients achieve personalized and lifestyle-related goals, effectively managing chronic conditions and alleviating mental health issues.
Impact of Preference Noise on the Alignment Performance of Generative Language Models
A key requirement in developing Generative Language Models (GLMs) is to have their values aligned with human values.
DiffusionDialog: A Diffusion Model for Diverse Dialog Generation with Latent Space
Previous studies attempted to introduce discrete or Gaussian-based continuous latent variables to address the one-to-many problem, but the diversity is limited.
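The "Gaussian-based continuous latent variable" approach the abstract refers to is typically implemented CVAE-style: a network predicts a mean and variance from the dialogue context, a latent is sampled via the reparameterization trick, and the decoder conditions on it. Below is a minimal, hypothetical PyTorch sketch of that sampling step; module names and dimensions are illustrative and are not taken from the DiffusionDialog paper.

```python
# Hedged sketch of CVAE-style Gaussian latent sampling for one-to-many
# dialogue generation (illustrative, not DiffusionDialog's architecture).
import torch
import torch.nn as nn

class GaussianLatent(nn.Module):
    def __init__(self, ctx_dim: int = 512, z_dim: int = 64):
        super().__init__()
        self.mu = nn.Linear(ctx_dim, z_dim)      # predicts latent mean
        self.logvar = nn.Linear(ctx_dim, z_dim)  # predicts log-variance

    def forward(self, ctx: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.mu(ctx), self.logvar(ctx)
        eps = torch.randn_like(mu)                 # reparameterization trick
        return mu + torch.exp(0.5 * logvar) * eps  # z ~ N(mu, sigma^2)

# Sampling different z for the same context yields different responses,
# which is how latent-variable models address the one-to-many problem.
ctx = torch.randn(1, 512)            # encoded dialogue context (stand-in)
sampler = GaussianLatent()
z1, z2 = sampler(ctx), sampler(ctx)  # two distinct latents, same context
```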
CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations
CoVoMix first converts dialogue text into multiple streams of discrete tokens, with each token stream representing the semantic information of an individual talker.
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation
Recently, knowledge-grounded dialogue generation models, which intentionally invoke external knowledge resources to produce more informative responses, have also proven effective in reducing hallucination.
PSYDIAL: Personality-based Synthetic Dialogue Generation using Large Language Models
Experimental results indicate that while pre-trained models and those fine-tuned with a chit-chat dataset struggle to generate responses reflecting personality, models trained with PSYDIAL show significant improvements.
Controllable and Diverse Data Augmentation with Large Language Model for Low-Resource Open-Domain Dialogue Generation
To evaluate the efficacy of data augmentation methods for open-domain dialogue, we designed a clustering-based metric to characterize the semantic diversity of the augmented dialogue data.
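The paper's exact metric is not specified in this excerpt, but one common way to operationalize a "clustering-based" semantic-diversity score is to embed the augmented utterances, cluster them, and measure how evenly the data spreads across clusters. A hypothetical scikit-learn sketch of that general idea:

```python
# Hypothetical clustering-based diversity score: embed utterances, cluster
# them, and report the normalized entropy of the cluster-assignment
# distribution (higher = more semantically diverse). An illustration of
# the general idea, not the paper's actual metric.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def diversity_score(utterances: list[str], k: int = 4) -> float:
    X = TfidfVectorizer().fit_transform(utterances)  # cheap text embeddings
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    probs = np.bincount(labels, minlength=k) / len(labels)
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs)).sum() / np.log(k))

print(diversity_score([
    "how are you today", "what is the weather like",
    "i love this movie", "can you recommend a restaurant",
    "the weather is sunny", "tell me a joke",
    "what time is it", "do you like pizza",
]))
```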
BP4ER: Bootstrap Prompting for Explicit Reasoning in Medical Dialogue Generation
This approach eliminates the need for entity annotation and increases the transparency of the MDG process by explicitly generating the intermediate reasoning chain.
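Setting BP4ER's specifics aside, the general "generate an explicit reasoning chain, then condition the response on it" pattern can be sketched as two chained model calls. In the sketch below, `llm` is a hypothetical stand-in for any completion API, and the prompts are illustrative; this shows the two-step structure, not the paper's method.

```python
# Hedged sketch of explicit-reasoning prompting for medical dialogue:
# first elicit an intermediate reasoning chain, then generate the reply
# conditioned on it. `llm` is a hypothetical stand-in for a model call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def respond_with_reasoning(dialogue_history: str) -> tuple[str, str]:
    reasoning = llm(
        "Given this medical dialogue, list the step-by-step reasoning "
        f"(symptoms -> candidate causes -> next question):\n{dialogue_history}"
    )
    response = llm(
        f"Dialogue:\n{dialogue_history}\n\nReasoning:\n{reasoning}\n\n"
        "Write the doctor's next utterance, consistent with the reasoning."
    )
    # Returning the chain alongside the reply keeps the process transparent.
    return reasoning, response
```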
Empowering Segmentation Ability to Multi-modal Large Language Models
Multi-modal large language models (MLLMs) can understand image-language prompts and demonstrate impressive reasoning ability.