ArgSciChat: A Dataset for Argumentative Dialogues on Scientific Papers

14 Feb 2022 · Federico Ruggeri, Mohsen Mesgar, Iryna Gurevych

The applications of conversational agents in scientific disciplines (as expert domains) are understudied due to the lack of dialogue data to train such agents. While most data collection frameworks, such as Amazon Mechanical Turk, foster data collection for generic domains by connecting crowd workers and task designers, they are not optimized for data collection in expert domains. Scientists are rarely present in these frameworks due to their limited time budgets. We therefore introduce a novel framework to collect dialogues on scientific papers between scientists as domain experts. Our framework lets scientists present their scientific papers as grounding for dialogues and join dialogues whose paper titles interest them. We use our framework to collect a novel argumentative dialogue dataset, ArgSciChat, consisting of 498 messages from 41 dialogues on 20 scientific papers. Alongside an extensive analysis of ArgSciChat, we evaluate a recent conversational agent on our dataset. Experimental results show that this agent performs poorly on ArgSciChat, motivating further research on argumentative scientific agents. We release our framework and the dataset.
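For concreteness, here is a minimal sketch of how one grounded dialogue turn in such a dataset might be represented; the field and role names are illustrative assumptions, not the released schema.

from dataclasses import dataclass

@dataclass
class DialogueTurn:
    paper_title: str         # the scientific paper grounding the dialogue
    speaker: str             # e.g., paper author vs. interlocutor; role labels assumed here
    message: str             # the argumentative message text
    supporting_facts: list   # paper sentences the message is grounded in

turn = DialogueTurn(
    paper_title="ArgSciChat: A Dataset for Argumentative Dialogues on Scientific Papers",
    speaker="interlocutor",
    message="How does the framework attract scientists as participants?",
    supporting_facts=[],
)
print(turn.speaker, "->", turn.message)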


Datasets


Introduced in the Paper:

ArgSciChat

Used in the Paper:

CoQA, QuAC

Results from the Paper


Fact Selection (ArgSciChat)

Model        Fact-F1   Global Rank
TF-IDF       16.22     #1
S-BERT       13.65     #2
LED(Q,P)     10.58     #3
LED(Q,P,H)    8.50     #4

Response Generation (ArgSciChat)

Model        Message-F1   BScore   Mover   Global Rank
LED(Q,F)     19.54        86.64    8.53    #1
LED(Q,P,H)   16.14        86.00    4.54    #2
LED(Q,P)     14.25        85.85    2.25    #3
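The TF-IDF row above suggests a retrieval-style fact-selection baseline: rank paper sentences by lexical similarity to the partner's last message. Below is a minimal sketch of such a baseline using scikit-learn; the function name, top-k cutoff, and example facts are assumptions for illustration, not the authors' implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_facts(query: str, facts: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate facts by TF-IDF cosine similarity to the query."""
    vectorizer = TfidfVectorizer()
    # Fit on facts and query together so both share one vocabulary.
    matrix = vectorizer.fit_transform(facts + [query])
    fact_vecs, query_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(fact_vecs, query_vec).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [facts[i] for i in ranked]

facts = [
    "We collect 41 dialogues grounded in 20 scientific papers.",
    "The dataset contains 498 messages in total.",
    "We evaluate a pre-trained LED model on our dataset.",
]
print(select_facts("How many dialogues does the dataset contain?", facts))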

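Message-F1 and Fact-F1 are plausibly token-overlap F1 scores between a generated message (or selected facts) and the reference, in the style of SQuAD evaluation. A minimal sketch, assuming whitespace tokenization and lowercasing; the authors' exact normalization may differ.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("the dataset has 41 dialogues", "it has 41 dialogues"), 4))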
Methods


No methods listed for this paper.