A simple but tough-to-beat baseline for the Fake News Challenge stance detection task

11 Jul 2017  ·  Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, Sebastian Riedel

Identifying public misinformation is a complicated and challenging task. An important part of checking the veracity of a specific claim is to evaluate the stance different news sources take towards the assertion. Automatic stance evaluation, i.e. stance detection, would arguably facilitate the process of fact checking. In this paper, we present our stance detection system which claimed third place in Stage 1 of the Fake News Challenge. Despite our straightforward approach, our system performs at a competitive level with the complex ensembles of the top two winning teams. We therefore propose our system as the 'simple but tough-to-beat baseline' for the Fake News Challenge stance detection task.
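
As described in the paper, the system is a single-hidden-layer feed-forward network over simple lexical features of each headline-body pair: term-frequency vectors of the headline and body, plus the TF-IDF cosine similarity between them. The sketch below illustrates that kind of pipeline with scikit-learn; the toy data, vocabulary size, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of an FNC-1-style stance classifier: bag-of-words term
# frequencies for headline and body, plus the TF-IDF cosine similarity between
# them, fed to a small MLP. Feature sizes and hyperparameters are assumptions,
# not the authors' exact configuration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Toy headline-body pairs standing in for the FNC-1 training data.
headlines = ["Robert Plant ripped up $800M Led Zeppelin reunion contract",
             "Robert Plant ripped up $800M Led Zeppelin reunion contract"]
bodies = ["Led Zeppelin's Robert Plant reportedly turned down the contract.",
          "A new study shows coffee consumption is rising worldwide."]
labels = ["discuss", "unrelated"]

# Fit shared vocabularies on all available text.
tf_vec = CountVectorizer(max_features=5000, stop_words="english")
tfidf_vec = TfidfVectorizer(max_features=5000, stop_words="english")
tf_vec.fit(headlines + bodies)
tfidf_vec.fit(headlines + bodies)

def featurize(heads, bods):
    """Concatenate headline TF, body TF, and TF-IDF cosine similarity."""
    head_tf = tf_vec.transform(heads).toarray()
    body_tf = tf_vec.transform(bods).toarray()
    head_tfidf = tfidf_vec.transform(heads).toarray()
    body_tfidf = tfidf_vec.transform(bods).toarray()
    # Row-wise cosine similarity between headline and body TF-IDF vectors.
    num = (head_tfidf * body_tfidf).sum(axis=1)
    denom = (np.linalg.norm(head_tfidf, axis=1) *
             np.linalg.norm(body_tfidf, axis=1)) + 1e-9
    return np.hstack([head_tf, body_tf, (num / denom)[:, None]])

# Single hidden layer, softmax output over the four stance classes.
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
clf.fit(featurize(headlines, bodies), labels)
print(clf.predict(featurize(headlines, bodies)))
```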

Datasets

FNC-1

Results from the Paper


Task:    Fake News Detection
Dataset: FNC-1
Model:   3rd place at FNC-1 - Team UCL Machine Reading (Riedel et al., 2017)

Metric Name                      Metric Value   Global Rank
Weighted Accuracy                81.72          # 5
Per-class Accuracy (Agree)       44.04          # 5
Per-class Accuracy (Disagree)     6.60          # 5
Per-class Accuracy (Discuss)     81.38          # 5
Per-class Accuracy (Unrelated)   97.90          # 3
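
The Weighted Accuracy above is the FNC-1 relative score: each headline-body pair earns 0.25 points for a correct related/unrelated decision and a further 0.75 points for the correct stance among the related classes, and a submission is reported as a percentage of the maximum attainable score. A minimal sketch of that scheme follows; the function names are illustrative, not the official scorer's API.

```python
# Sketch of the FNC-1 relative "Weighted Accuracy": 0.25 credit for a correct
# related/unrelated decision, plus 0.75 more for the correct stance among the
# related classes, reported as a percentage of the maximum attainable score.
RELATED = {"agree", "disagree", "discuss"}

def pair_score(gold, pred):
    score = 0.0
    if (gold in RELATED) == (pred in RELATED):
        score += 0.25                       # related vs. unrelated correct
        if gold in RELATED and gold == pred:
            score += 0.75                   # exact related stance correct
    return score

def weighted_accuracy(gold_labels, pred_labels):
    achieved = sum(pair_score(g, p) for g, p in zip(gold_labels, pred_labels))
    best = sum(pair_score(g, g) for g in gold_labels)  # perfect submission
    return 100.0 * achieved / best

print(weighted_accuracy(["agree", "unrelated", "discuss"],
                        ["discuss", "unrelated", "discuss"]))  # ~66.7
```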

Methods


No methods listed for this paper.