Dialogue Understanding

29 papers with code • 0 benchmarks • 9 datasets

Dialogue understanding covers models and datasets for interpreting multi-turn conversations, spanning tasks such as dialogue-based question answering, dialogue act classification, and dialogue summarization.

Most implemented papers

Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization

dascim/acl2018_abssumm ACL 2018

We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
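The budgeted step can be approximated with a cost-sensitive greedy selection. Below is a minimal sketch of budgeted submodular maximization, assuming a simple word-coverage objective and word-count costs as illustrative stand-ins for the paper's actual objective:

```python
# Minimal sketch of budgeted submodular maximization via cost-sensitive
# greedy selection. The coverage objective and word-count cost are
# illustrative assumptions, not the paper's exact formulation.

def summarize(candidates, budget):
    """Greedily pick sentences under a total word budget, maximizing
    marginal word-coverage gain per unit cost."""
    selected, covered, spent = [], set(), 0
    remaining = list(candidates)
    while remaining:
        best, best_ratio = None, 0.0
        for sent in remaining:
            words = set(sent.lower().split())
            gain = len(words - covered)      # marginal coverage gain
            cost = len(sent.split())         # word-count cost
            if spent + cost <= budget and gain / cost > best_ratio:
                best, best_ratio = sent, gain / cost
        if best is None:                     # nothing fits or adds coverage
            break
        selected.append(best)
        covered |= set(best.lower().split())
        spent += len(best.split())
        remaining.remove(best)
    return selected

summary = summarize(
    ["the team agreed to ship the prototype next week",
     "shipping the prototype was agreed for next week",
     "lunch options were also discussed briefly"],
    budget=15,
)
```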

A Repository of Conversational Datasets

PolyAI-LDN/conversational-datasets WS 2019

Progress in machine learning is often driven by the availability of large datasets and consistent evaluation metrics for comparing modeling approaches.

TEACh: Task-driven Embodied Agents that Chat

alexa/teach 1 Oct 2021

Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions, and using conversation to resolve ambiguity and recover from mistakes.

ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications

idiap/atco2-corpus 8 Nov 2022

In this paper, we introduce the ATCO2 corpus, a dataset that aims to foster research in the challenging ATC field, which has lagged behind due to a lack of annotated data.

Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning

wangtianyiftd/dialogue_pretrain 27 Feb 2020

Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization, etc.
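As a rough illustration of the masked-utterance pretraining signal the title refers to, the sketch below builds one training example from a multi-role dialogue; the [MASK] convention and flat input format are assumptions, not the paper's exact masking orchestration:

```python
import random

# Illustrative masked-utterance pretraining example for a multi-role
# dialogue; [MASK] token and the flat "speaker: text" format are assumed.

def make_masked_example(dialogue, rng=random):
    """dialogue: list of (speaker, utterance) pairs."""
    idx = rng.randrange(len(dialogue))
    inputs = [
        f"{spk}: {'[MASK]' if i == idx else utt}"
        for i, (spk, utt) in enumerate(dialogue)
    ]
    target = dialogue[idx][1]          # the model must reconstruct this turn
    return " | ".join(inputs), target

example = make_masked_example([
    ("agent", "how can I help you today?"),
    ("user", "my order never arrived"),
    ("agent", "let me check the tracking number"),
])
```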

Utterance-level Dialogue Understanding: An Empirical Study

declare-lab/dialogue-understanding 29 Sep 2020

Most of these approaches account for the dialogue context to achieve effective understanding.
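A minimal sketch of the context idea, assuming a fixed number of preceding utterances is paired with each target utterance before classification (the window size and separator are illustrative choices, not the paper's setup):

```python
# Sketch of building context windows for utterance-level classification.

def build_context_windows(utterances, window=2, sep=" [SEP] "):
    """For each utterance, attach up to `window` preceding utterances."""
    examples = []
    for i, target in enumerate(utterances):
        context = utterances[max(0, i - window):i]
        examples.append((sep.join(context), target))
    return examples

examples = build_context_windows([
    "I failed my exam.",
    "Oh no, I'm sorry to hear that.",
    "It's fine, I can retake it next month.",
])
# Each (context, utterance) pair would feed an emotion or act classifier.
```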

DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension

nlpdata/dream 1 Feb 2019

DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge.

A Natural Language Corpus of Common Grounding under Continuous and Partially-Observable Context

Alab-NII/onecommon 8 Jul 2019

Finally, we evaluate and analyze baseline neural models on a simple subtask that requires recognition of the created common ground.

Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks

xcfcode/DHGN CCL 2021

In detail, we consider utterances and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) to model both types of information.
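A minimal sketch of a heterogeneous dialogue graph with utterance and commonsense-knowledge nodes, using networkx for illustration; the node types, relations, and example concepts are assumptions, not the paper's D-HGN construction:

```python
import networkx as nx

# Illustrative heterogeneous graph: utterance nodes plus commonsense nodes,
# linked when a knowledge concept is associated with an utterance.

G = nx.Graph()

utterances = ["A: I lost my keys again.", "B: Check your coat pocket."]
for i, utt in enumerate(utterances):
    G.add_node(f"utt_{i}", type="utterance", text=utt)

for concept in ["forgetful", "storage place"]:
    G.add_node(concept, type="commonsense")

# Link utterances to the commonsense concepts they evoke (assumed relation).
G.add_edge("utt_0", "forgetful", relation="evokes")
G.add_edge("utt_1", "storage place", relation="evokes")

print(G.nodes(data=True))
```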