Instructive Dialogue Summarization with Query Aggregations

17 Oct 2023  ·  Bin Wang, Zhengyuan Liu, Nancy F. Chen ·

Conventional dialogue summarization methods directly generate summaries and do not consider users' specific interests. This poses challenges when users are focused on particular topics or aspects. With the advancement of instruction-finetuned language models, we introduce instruction tuning to dialogues to expand the capability set of dialogue summarization models. To overcome the scarcity of instructive dialogue summarization data, we propose a three-step approach to synthesize high-quality query-based summarization triples: summary-anchored query generation, query filtering, and query-based summary generation. By training a unified model, InstructDS (Instructive Dialogue Summarization), on three summarization datasets with multi-purpose instructive triples, we expand the capability of dialogue summarization models. We evaluate our method on four datasets covering dialogue summarization and dialogue reading comprehension. Experimental results show that our approach outperforms state-of-the-art models, including models of larger size. Additionally, our model exhibits higher generalizability and faithfulness, as confirmed by human subjective evaluations.
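The three-step synthesis pipeline from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are ours, and each step uses a crude lexical stand-in where the paper would call an instruction-finetuned language model.

```python
# Hypothetical sketch of the three-step triple-synthesis pipeline.
# Each stand-in below replaces an LLM call from the actual method.

PREFIX = "What does the dialogue say about:"

def content_words(query: str) -> list[str]:
    body = query.removeprefix(PREFIX).rstrip("?").lower()
    return [w for w in body.split() if len(w) > 3]

def generate_queries(summary: str) -> list[str]:
    # Step 1: summary-anchored query generation.
    # Stand-in: derive one query per summary sentence.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    return [f"{PREFIX} {s}?" for s in sentences]

def filter_queries(dialogue: str, queries: list[str]) -> list[str]:
    # Step 2: query filtering.
    # Stand-in: keep queries whose content words appear in the dialogue
    # (a crude answerability check).
    return [q for q in queries
            if any(w in dialogue.lower() for w in content_words(q))]

def generate_query_summary(dialogue: str, query: str) -> str:
    # Step 3: query-based summary generation.
    # Stand-in: keep dialogue turns sharing a content word with the query.
    words = set(content_words(query))
    turns = [t for t in dialogue.split("\n")
             if any(w in t.lower() for w in words)]
    return " ".join(turns)

def synthesize_triples(dialogue: str, summary: str) -> list[tuple]:
    # Produce (dialogue, query, query-based summary) training triples.
    queries = filter_queries(dialogue, generate_queries(summary))
    return [(dialogue, q, generate_query_summary(dialogue, q))
            for q in queries]
```

A unified model is then trained on these triples alongside standard (dialogue, summary) pairs, so one model serves both generic and query-focused summarization.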


Results from the Paper


Task                           Dataset    Model       Metric        Value   Global Rank
Text Summarization             DialogSum  InstructDS  ROUGE-1       47.8    #1
                                                      ROUGE-2       22.2    #1
                                                      ROUGE-L       39.4    #2
Machine Reading Comprehension  DREAM      InstructDS  Accuracy      65.9    #2
Text Summarization             SAMSum     InstructDS  ROUGE-1       55.3    #1
                                                      ROUGE-2       31.3    #1
                                                      ROUGE-L       46.7    #4
                                                      BERTScore F1  55.5    #2
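For reference, ROUGE-1 (the headline metric above) is the unigram-overlap F1 between a candidate summary and a reference. A minimal from-scratch sketch is below; the function name is ours, and published scores use the standard ROUGE toolkit, whose stemming and tokenization this sketch omits.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1 on whitespace tokens (no stemming)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```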
