Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots

7 Apr 2020 · Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu

In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker change information, which is an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to handle entangled dialogues. This strategy selects a small number of the most important utterances as the filtered context according to the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language models. Experiments on five public datasets show that our proposed model outperforms existing models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.
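To make the speaker-awareness idea concrete, below is a minimal sketch (not the authors' released code) of adding a learned speaker embedding on top of BERT's token, position, and segment embeddings. The class name `SpeakerAwareBert`, the `num_speakers` parameter, and the per-token speaker-id convention are assumptions for illustration only.

```python
# Minimal sketch of the speaker-embedding idea (not the authors' implementation).
# Assumption: speaker_ids assigns each token an id (e.g. 0/1) marking which
# speaker produced the utterance it belongs to; the extra embedding is simply
# added to BERT's input word embeddings.
import torch
import torch.nn as nn
from transformers import BertModel

class SpeakerAwareBert(nn.Module):  # hypothetical name
    def __init__(self, model_name="bert-base-uncased", num_speakers=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One vector per speaker, initialized near zero so training starts
        # close to vanilla BERT.
        self.speaker_embeddings = nn.Embedding(num_speakers, hidden)
        nn.init.normal_(self.speaker_embeddings.weight, std=0.02)

    def forward(self, input_ids, attention_mask, token_type_ids, speaker_ids):
        # Add speaker embeddings to the word embeddings; BERT still adds its
        # own position and segment embeddings internally when inputs_embeds
        # is supplied.
        word_emb = self.bert.embeddings.word_embeddings(input_ids)
        spk_emb = self.speaker_embeddings(speaker_ids)
        outputs = self.bert(
            inputs_embeds=word_emb + spk_emb,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        # The [CLS] representation can then be scored for context-response matching.
        return outputs.last_hidden_state[:, 0]
```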

Task: Conversational Response Selection (all rows)

Dataset | Model | Metric | Value | Global Rank
Douban | SA-BERT | MAP | 0.619 | #8
Douban | SA-BERT | MRR | 0.659 | #8
Douban | SA-BERT | P@1 | 0.496 | #7
Douban | SA-BERT | R10@1 | 0.313 | #7
Douban | SA-BERT | R10@2 | 0.481 | #9
Douban | SA-BERT | R10@5 | 0.847 | #8
E-commerce | SA-BERT | R10@1 | 0.704 | #7
E-commerce | SA-BERT | R10@2 | 0.879 | #7
E-commerce | SA-BERT | R10@5 | 0.985 | #7
RRS | SA-BERT+BERT-FP | R10@1 | 0.497 | #1
RRS | SA-BERT+BERT-FP | MAP | 0.701 | #2
RRS | SA-BERT+BERT-FP | MRR | 0.715 | #1
RRS | SA-BERT+BERT-FP | P@1 | 0.555 | #1
RRS | SA-BERT+BERT-FP | R10@2 | 0.685 | #2
RRS | SA-BERT+BERT-FP | R10@5 | 0.931 | #1
RRS Ranking Test | SA-BERT+BERT-FP | NDCG@3 | 0.674 | #2
RRS Ranking Test | SA-BERT+BERT-FP | NDCG@5 | 0.753 | #2
Ubuntu Dialogue (v1, Ranking) | SA-BERT | R10@1 | 0.855 | #9
Ubuntu Dialogue (v1, Ranking) | SA-BERT | R10@2 | 0.928 | #9
Ubuntu Dialogue (v1, Ranking) | SA-BERT | R10@5 | 0.983 | #10
Ubuntu Dialogue (v1, Ranking) | SA-BERT | R2@1 | 0.965 | #2
