Dialogue State Tracking
127 papers with code • 7 benchmarks • 11 datasets
Dialogue state tracking consists of determining, at each turn of a dialogue, the full representation of what the user wants at that point in the dialogue: a goal constraint, a set of requested slots, and the user's dialogue act.
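The three components of the definition above can be sketched as a simple data structure that accumulates information across turns. This is a minimal illustration, not any particular tracker's implementation; the restaurant-domain slot names and values are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Sketch of a dialogue state: goal constraints, requested slots, dialogue act."""
    goal_constraints: dict = field(default_factory=dict)  # slot -> value the user wants
    requested_slots: set = field(default_factory=set)     # slots the user asked about
    dialogue_act: str = ""                                # e.g. "inform", "request"

    def update(self, act, constraints=None, requests=None):
        """Update the state at each turn: constraints persist (and can be
        overwritten), while requested slots and the act reflect the latest turn."""
        self.dialogue_act = act
        if constraints:
            self.goal_constraints.update(constraints)
        self.requested_slots = set(requests) if requests else set()

state = DialogueState()
# Turn 1: "I want a cheap Italian restaurant."
state.update("inform", constraints={"food": "italian", "price": "cheap"})
# Turn 2: "What's the phone number and address?"
state.update("request", requests={"phone", "address"})
print(state.goal_constraints)  # constraints from turn 1 persist
```

Note that the goal constraints carry over from earlier turns even though the second turn mentions none, which is exactly what makes tracking the *full* state at each turn non-trivial.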
Latest papers with no code
S3-DST: Structured Open-Domain Dialogue Segmentation and State Tracking in the Era of LLMs
The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations.
Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues?
The capabilities of pretrained language models have opened opportunities to explore new application areas, but applications involving human-human interaction are limited by the fact that most data is protected from public release for privacy reasons.
Span-Selective Linear Attention Transformers for Effective and Robust Schema-Guided Dialogue State Tracking
We demonstrate the effectiveness of our model on the Schema-Guided Dialogue (SGD) and MultiWOZ datasets.
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity?
Recent research on dialogue state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas.
Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data.
Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer in Prompt Tuning
In this paper, we focus on improving prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision, functioning as a medium that connects the distinct source and target tasks and helps the model better consume dialogue state information.
A Preliminary Evaluation of ChatGPT for Zero-shot Dialogue Understanding
Zero-shot dialogue understanding aims to enable dialogue systems to track the user's needs without any training data, a setting that has gained increasing attention.
More Robust Schema-Guided Dialogue State Tracking via Tree-Based Paraphrase Ranking
The schema-guided paradigm overcomes scalability issues inherent in building task-oriented dialogue (TOD) agents with static ontologies.
AUTODIAL: Efficient Asynchronous Task-Oriented Dialogue Model
As large dialogue models become commonplace in practice, the problems of high compute requirements for training and inference and a large memory footprint still persist.
Dialogue State Distillation Network with Inter-slot Contrastive Learning for Dialogue State Tracking
In this paper, we propose a Dialogue State Distillation Network (DSDN) to utilize relevant information from previous dialogue states and to mitigate the gap in their utilization between training and testing.