Multi-Modal Open-Domain Dialogue

EMNLP 2021 · Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston

Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of engaging humans in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its predecessor (text-only) BlenderBot (Roller et al., 2020) in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to engagingness metrics.
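The "image fusion schemes" mentioned in the abstract are ways of injecting image-encoder features into the dialogue transformer. The snippet below is a minimal sketch of one such scheme (late fusion of a pooled image feature into the encoder's token sequence), written in PyTorch; the module names, dimensions, and the choice of a single 2048-d pooled feature are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a "late fusion" style encoder: project pre-extracted image
# features into the text embedding space and prepend them to the token
# embeddings before the transformer layers. All sizes are assumptions.
import torch
import torch.nn as nn


class LateFusionEncoder(nn.Module):
    def __init__(self, vocab_size=8008, d_model=512, n_heads=8,
                 n_layers=2, image_feat_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Hypothetical projection from pooled CNN features to d_model.
        self.img_proj = nn.Linear(image_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, image_feats):
        # token_ids: (batch, seq_len); image_feats: (batch, image_feat_dim)
        text = self.tok_emb(token_ids)                 # (B, T, d_model)
        img = self.img_proj(image_feats).unsqueeze(1)  # (B, 1, d_model)
        fused = torch.cat([img, text], dim=1)          # image "token" first
        return self.encoder(fused)


if __name__ == "__main__":
    model = LateFusionEncoder()
    out = model(torch.randint(0, 8008, (2, 16)), torch.randn(2, 2048))
    print(out.shape)  # torch.Size([2, 17, 512])
```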

No code implementations yet.

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Dialog | BlendedSkillTalk | Multi-Modal BlenderBot | F1 | 17.8 | #1 |
| | | | BLEU-4 | 1 | #1 |
| | | | ROUGE-L | 19.3 | #1 |
| Visual Dialog | ConvAI2 | Multi-Modal BlenderBot | F1 | 18.4 | #1 |
| | | | BLEU-4 | 1.1 | #1 |
| | | | ROUGE-L | 22.6 | #1 |
| Visual Dialog | EmpatheticDialogues | Multi-Modal BlenderBot | F1 | 19.2 | #1 |
| | | | BLEU-4 | 1.5 | #1 |
| | | | ROUGE-L | 24.5 | #1 |
| Visual Dialog | Image-Chat | Multi-Modal BlenderBot | F1 | 13.1 | #1 |
| | | | BLEU-4 | 40 | #1 |
| | | | ROUGE-L | 18 | #1 |
| Visual Dialog | Wizard of Wikipedia | Multi-Modal BlenderBot | F1 | 18.6 | #1 |
| | | | BLEU-4 | 2.2 | #1 |
| | | | ROUGE-L | 17.4 | #1 |
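The F1 reported above is the word-overlap F1 commonly used to evaluate dialogue generation. As a rough reference only (the exact tokenization and normalization used for these numbers are an assumption here), a minimal unigram-F1 computation looks like:

```python
# Sketch of unigram word-overlap F1 between a generated response and a
# reference; assumes simple lowercased whitespace tokenization, which may
# differ from the paper's evaluation pipeline.
from collections import Counter


def unigram_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


print(round(unigram_f1("a cute dog in the park", "the dog is in a park"), 3))
# 0.833
```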

Methods


No methods listed for this paper.