Dialogue State Tracking
126 papers with code • 7 benchmarks • 11 datasets
Dialogue state tracking consists of determining, at each turn of a dialogue, the full representation of what the user wants at that point. This representation contains a goal constraint, a set of requested slots, and the user's dialogue act.
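The three components of the state can be sketched as a simple data structure. This is an illustrative minimal sketch, not the schema of any particular dataset or tracker; the field and slot names are assumptions for the example.

```python
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    """Illustrative dialogue state: the three components named above."""
    goal_constraints: dict = field(default_factory=dict)  # slot -> value the user wants
    requested_slots: set = field(default_factory=set)     # slots the user asked about
    dialogue_act: str = "inform"                          # user's act at this turn


state = DialogueState()
# Turn 1: "I'd like a cheap Italian restaurant."
state.goal_constraints.update({"price": "cheap", "food": "italian"})
# Turn 2: "What's the phone number?"
state.requested_slots.add("phone")
state.dialogue_act = "request"
```

A tracker updates such a state after every user turn, so downstream policy and response modules always see the user's cumulative goal.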
Libraries
Use these libraries to find Dialogue State Tracking models and implementations.
Most implemented papers
Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State Tracking
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System
Pre-trained language models have been recently shown to benefit task-oriented dialogue (TOD) systems.
Few-Shot Bot: Prompt-Based Learning for Dialogue Systems
A simple yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020) which does not require gradient-based fine-tuning but instead uses a few examples in the LM context as the only source of learning.
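The mechanism described here, placing a few labeled examples directly in the language-model context instead of fine-tuning, can be sketched as prompt construction. The example format and slot names below are hypothetical, not taken from the paper.

```python
# Few labeled (utterance, state) pairs serve as the only source of learning;
# no gradient updates are applied to the language model.
examples = [
    ("I need a train to Cambridge on Monday.",
     "train-destination=cambridge; train-day=monday"),
    ("Book a cheap hotel in the north.",
     "hotel-pricerange=cheap; hotel-area=north"),
]


def build_prompt(examples, query):
    """Concatenate in-context examples, then the new turn for completion."""
    parts = [f"User: {utt}\nState: {state}" for utt, state in examples]
    parts.append(f"User: {query}\nState:")
    return "\n\n".join(parts)


prompt = build_prompt(examples, "Find an expensive restaurant in the centre.")
# `prompt` would be sent to a frozen language model, whose completion
# after the final "State:" is parsed as the predicted dialogue state.
```

The design choice is that supervision lives entirely in the prompt, so switching domains only requires swapping the in-context examples.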
Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics
Recent works that revealed the vulnerability of dialogue state tracking (DST) models to distributional shifts have made holistic comparisons on robustness and qualitative analyses increasingly important for understanding their relative performance.
MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking
Zero-shot dialogue state tracking (DST) transfers knowledge to unseen domains, reducing the cost of annotating new datasets.
Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding
However, the previous model only paid attention to the content in history utterances without considering their temporal information and speaker roles.
Scalable Multi-Domain Dialogue State Tracking
We introduce a novel framework for state tracking which is independent of the slot value set, and represent the dialogue state as a distribution over a set of values of interest (candidate set) derived from the dialogue history or knowledge.
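The idea of representing the state as a distribution over a candidate set drawn from the dialogue history, rather than over a fixed slot-value vocabulary, can be sketched as follows. The scorer outputs below are made-up numbers; a real tracker would produce them from a learned model.

```python
import math


def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


# Candidate set: values mentioned so far in the dialogue, plus a "none" option.
# Because candidates come from the history, no fixed value vocabulary is needed.
candidates = ["none", "italian", "thai"]
scores = [0.1, 2.3, 0.5]  # hypothetical per-candidate scores from a tracker

belief = dict(zip(candidates, softmax(scores)))
best = max(belief, key=belief.get)  # highest-probability value for the slot
```

Keeping the distribution over a dynamic candidate set is what makes the framework independent of the slot value set: new values enter the state simply by being mentioned.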
Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems
To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.
Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical Resources
Word vector specialisation (also known as retrofitting) is a portable, light-weight approach to fine-tuning arbitrary distributional word vector spaces by injecting external knowledge from rich lexical resources such as WordNet.
Fully Statistical Neural Belief Tracking
This paper proposes an improvement to the existing data-driven Neural Belief Tracking (NBT) framework for Dialogue State Tracking (DST).