Dialogue State Tracking

127 papers with code • 7 benchmarks • 11 datasets

Dialogue state tracking consists of determining, at each turn of a dialogue, the full representation of what the user wants at that point in the dialogue, which comprises a goal constraint, a set of requested slots, and the user's dialogue act.
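
Concretely, the dialogue state can be viewed as a small structured record updated turn by turn. The Python sketch below illustrates this representation; the class, field, and slot names are illustrative assumptions and are not tied to any particular benchmark or system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DialogueState:
    """Minimal sketch of a dialogue state (illustrative schema, not from a specific dataset)."""
    # Slot-value constraints expressing the user's goal, e.g. {"food": "italian"}
    goal_constraints: Dict[str, str] = field(default_factory=dict)
    # Slots whose values the user has asked the system to provide, e.g. ["phone"]
    requested_slots: List[str] = field(default_factory=list)
    # The user's dialogue act for the current turn, e.g. "inform" or "request"
    user_dialogue_act: str = "inform"

    def update(self, new_constraints: Dict[str, str], new_requests: List[str], act: str) -> None:
        """Accumulate goal constraints across turns; record the latest requests and act."""
        self.goal_constraints.update(new_constraints)
        self.requested_slots = new_requests
        self.user_dialogue_act = act

# Example: two turns of a hypothetical restaurant-booking dialogue.
state = DialogueState()
state.update({"food": "italian", "area": "centre"}, [], "inform")
state.update({"pricerange": "cheap"}, ["phone", "address"], "request")
print(state)
```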

Latest papers with no code

S3-DST: Structured Open-Domain Dialogue Segmentation and State Tracking in the Era of LLMs

no code yet • 16 Sep 2023

The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations.

Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues?

no code yet • 13 Jul 2023

The capabilities of pretrained language models have opened opportunities to explore new application areas, but applications involving human-human interaction are limited by the fact that most data is protected from public release for privacy reasons.

Span-Selective Linear Attention Transformers for Effective and Robust Schema-Guided Dialogue State Tracking

no code yet • 15 Jun 2023

We demonstrate the effectiveness of our model on the Schema-Guided Dialogue (SGD) and MultiWOZ datasets.

ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an Opportunity?

no code yet • 2 Jun 2023

Recent research on dialogue state tracking (DST) focuses on methods that allow few- and zero-shot transfer to new domains or schemas.

Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking

no code yet • 1 Jun 2023

Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data.

Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer in Prompt Tuning

no code yet • 20 May 2023

In this paper, we focus on improving prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that serves as a medium connecting the distinct source and target tasks, resulting in better use of dialogue state information by the model.
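
For readers unfamiliar with prompt transfer, the sketch below is a generic illustration (not the SAPT implementation) of reusing a soft prompt tuned on a source task, assumed here to be dialogue state tracking, to initialize the prompt for a target task such as dialogue summarization; the tensor shapes and variable names are hypothetical.

```python
import torch
import torch.nn as nn

prompt_length, hidden_size = 20, 768  # assumed prompt size for a 768-dim encoder

# Pretend these prompt embeddings were learned on the source task (DST).
source_prompt = nn.Parameter(torch.randn(prompt_length, hidden_size))

# Initialize the target-task (summarization) prompt from the source prompt
# and continue tuning only these parameters on the target task.
target_prompt = nn.Parameter(source_prompt.detach().clone())
optimizer = torch.optim.AdamW([target_prompt], lr=1e-3)

# At each step, the soft prompt is prepended to the input token embeddings, e.g.:
# inputs_embeds = torch.cat(
#     [target_prompt.unsqueeze(0).expand(batch_size, -1, -1), token_embeds], dim=1
# )
```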

A Preliminary Evaluation of ChatGPT for Zero-shot Dialogue Understanding

no code yet • 9 Apr 2023

Zero-shot dialogue understanding aims to enable dialogue systems to track the user's needs without any training data, and it has gained increasing attention.

More Robust Schema-Guided Dialogue State Tracking via Tree-Based Paraphrase Ranking

no code yet • 17 Mar 2023

The schema-guided paradigm overcomes scalability issues inherent in building task-oriented dialogue (TOD) agents with static ontologies.

AUTODIAL: Efficient Asynchronous Task-Oriented Dialogue Model

no code yet • 10 Mar 2023

As large dialogue models become commonplace in practice, the problems of high compute requirements for training and inference and a large memory footprint still persist.

Dialogue State Distillation Network with Inter-slot Contrastive Learning for Dialogue State Tracking

no code yet • 16 Feb 2023

In this paper, we propose a Dialogue State Distillation Network (DSDN) to utilize relevant information from previous dialogue states and mitigate the gap in their utilization between training and testing.