Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation

Classic pipeline models for task-oriented dialogue systems require explicit modeling of dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map the dialogue history to the response of the current turn without explicit knowledge base querying. In this work, we propose a novel framework that combines the advantages of classic pipeline models and sequence-to-sequence models. Our framework models the dialogue state as a fixed-size distributed representation and uses this representation to query a knowledge base via an attention mechanism. Experiments on the Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset show that our framework significantly outperforms other sequence-to-sequence baseline models in both automatic and human evaluation.
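
To make the mechanism concrete, below is a minimal sketch of how a fixed-size dialogue-state vector might query embedded knowledge-base rows with additive attention. The class name `KBAttention`, the dimensions, and the scoring function are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KBAttention(nn.Module):
    """Soft knowledge-base lookup driven by a distributed dialogue state.

    Hypothetical sketch: the layer names and the additive scoring
    function are assumptions, not the paper's exact formulation.
    """

    def __init__(self, state_dim: int, kb_dim: int, hidden_dim: int):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, hidden_dim)  # project dialogue state
        self.kb_proj = nn.Linear(kb_dim, hidden_dim)        # project KB row embeddings
        self.score = nn.Linear(hidden_dim, 1)               # additive attention score

    def forward(self, state: torch.Tensor, kb_rows: torch.Tensor) -> torch.Tensor:
        # state:   (batch, state_dim)          fixed-size dialogue-state representation
        # kb_rows: (batch, num_rows, kb_dim)   embedded knowledge-base entries
        q = self.state_proj(state).unsqueeze(1)             # (batch, 1, hidden)
        k = self.kb_proj(kb_rows)                           # (batch, rows, hidden)
        scores = self.score(torch.tanh(q + k)).squeeze(-1)  # (batch, rows)
        weights = F.softmax(scores, dim=-1)                 # soft query over KB rows
        # Weighted sum of KB rows: a summary vector the decoder can condition on.
        return torch.bmm(weights.unsqueeze(1), kb_rows).squeeze(1)  # (batch, kb_dim)

# Toy usage: a batch of 2 dialogues, each with a 10-row embedded KB.
attn = KBAttention(state_dim=128, kb_dim=64, hidden_dim=100)
kb_summary = attn(torch.randn(2, 128), torch.randn(2, 10, 64))  # -> (2, 64)
```

The attended summary stands in for the hand-crafted query of a classic pipeline: because the softmax is differentiable, the whole model can be trained end-to-end from dialogue transcripts.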

PDF Abstract (COLING 2018)
No code implementations yet.

Datasets

KVRET (Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset)

Task                            Dataset  Model  Metric Name  Metric Value  Global Rank
Task-Oriented Dialogue Systems  KVRET    DSR    Entity F1    51.9          #7
Task-Oriented Dialogue Systems  KVRET    DSR    BLEU         12.7          #8

Methods


No methods listed for this paper.