MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset

23 Feb 2022 · Dan Saattrup Nielsen, Ryan McConville

Misinformation is becoming increasingly prevalent on social media and in news articles. It has become so widespread that we require algorithmic assistance utilising machine learning to detect such content. Training these machine learning models requires datasets of sufficient scale, diversity and quality. However, datasets in the field of automatic misinformation detection are predominantly monolingual, cover a limited number of modalities and are not of sufficient scale and quality. To address this, we develop a data collection and linking system (MuMiN-trawl) to build a public misinformation graph dataset (MuMiN), containing rich social media data (tweets, replies, users, images, articles, hashtags) spanning 21 million tweets belonging to 26 thousand Twitter threads, each of which has been semantically linked to 13 thousand fact-checked claims across dozens of topics, events and domains, in 41 different languages, spanning more than a decade. The dataset is made available as a heterogeneous graph via a Python package (mumin). We provide baseline results for two node classification tasks related to the veracity of a claim involving social media, and demonstrate that these are challenging tasks, with the highest macro-average F1-scores being 62.55% and 61.45% for the two tasks, respectively. The MuMiN ecosystem is available at https://mumin-dataset.github.io/, including the data, documentation, tutorials and leaderboards.
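Since the dataset ships as a heterogeneous graph via the mumin Python package, a minimal loading sketch is shown below. The class and method names (MuminDataset, compile, to_dgl) and the size argument follow the package's public tutorials, but are assumptions here; a Twitter API bearer token is needed to rehydrate the tweets, and exact arguments may differ in your installed version.

```python
# Minimal sketch (assumed from the mumin package's documented workflow,
# not taken from the paper itself): build the dataset and export it as a
# DGL heterogeneous graph.
from mumin import MuminDataset

dataset = MuminDataset(
    twitter_bearer_token='<YOUR-TWITTER-BEARER-TOKEN>',  # placeholder
    size='small',  # one of 'small', 'medium', 'large'
)
dataset.compile()          # downloads, rehydrates and links the graph data
graph = dataset.to_dgl()   # heterogeneous graph with claim/tweet/user/... node types
print(graph)
```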


Datasets

MuMiN-small, MuMiN-medium, MuMiN-large
Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Node Classification | MuMiN-large | HeteroGraphSAGE | Claim Classification Macro-F1 | 0.5980 | #1 |
| Node Classification | MuMiN-large | HeteroGraphSAGE | Tweet Classification Macro-F1 | 0.6145 | #1 |
| Node Classification | MuMiN-large | Random | Claim Classification Macro-F1 | 0.3879 | #4 |
| Node Classification | MuMiN-large | Random | Tweet Classification Macro-F1 | 0.3690 | #4 |
| Node Classification | MuMiN-large | Majority class | Claim Classification Macro-F1 | 0.4813 | #3 |
| Node Classification | MuMiN-large | Majority class | Tweet Classification Macro-F1 | 0.4887 | #3 |
| Node Classification | MuMiN-large | LaBSE | Claim Classification Macro-F1 | 0.5790 | #2 |
| Node Classification | MuMiN-large | LaBSE | Tweet Classification Macro-F1 | 0.5280 | #2 |
| Node Classification | MuMiN-medium | Random | Claim Classification Macro-F1 | 0.3896 | #4 |
| Node Classification | MuMiN-medium | Random | Tweet Classification Macro-F1 | 0.3772 | #4 |
| Node Classification | MuMiN-medium | LaBSE | Claim Classification Macro-F1 | 0.5585 | #2 |
| Node Classification | MuMiN-medium | LaBSE | Tweet Classification Macro-F1 | 0.5745 | #1 |
| Node Classification | MuMiN-medium | HeteroGraphSAGE | Claim Classification Macro-F1 | 0.5770 | #1 |
| Node Classification | MuMiN-medium | HeteroGraphSAGE | Tweet Classification Macro-F1 | 0.5410 | #2 |
| Node Classification | MuMiN-medium | Majority class | Claim Classification Macro-F1 | 0.4806 | #3 |
| Node Classification | MuMiN-medium | Majority class | Tweet Classification Macro-F1 | 0.4856 | #3 |
| Node Classification | MuMiN-small | HeteroGraphSAGE | Claim Classification Macro-F1 | 0.5795 | #2 |
| Node Classification | MuMiN-small | HeteroGraphSAGE | Tweet Classification Macro-F1 | 0.5605 | #1 |
| Node Classification | MuMiN-small | Random | Claim Classification Macro-F1 | 0.4007 | #4 |
| Node Classification | MuMiN-small | Random | Tweet Classification Macro-F1 | 0.3718 | #4 |
| Node Classification | MuMiN-small | Majority class | Claim Classification Macro-F1 | 0.4756 | #3 |
| Node Classification | MuMiN-small | Majority class | Tweet Classification Macro-F1 | 0.4877 | #3 |
| Node Classification | MuMiN-small | LaBSE | Claim Classification Macro-F1 | 0.6255 | #1 |
| Node Classification | MuMiN-small | LaBSE | Tweet Classification Macro-F1 | 0.5450 | #2 |
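The HeteroGraphSAGE baseline in the table refers to a graph neural network with relation-specific GraphSAGE convolutions over the heterogeneous MuMiN graph. The sketch below shows one common way to build such a model in DGL; it is an illustrative reconstruction, not the paper's exact architecture or hyperparameters, and it assumes all node types share the same input feature dimension.

```python
# Illustrative DGL sketch of a heterogeneous GraphSAGE node classifier,
# in the spirit of the HeteroGraphSAGE baseline (assumed architecture,
# not necessarily the one used for the reported scores).
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn


class HeteroGraphSAGE(nn.Module):
    def __init__(self, in_feats, hidden_feats, num_classes, rel_names):
        super().__init__()
        # One GraphSAGE convolution per edge type, aggregated across relations.
        self.conv1 = dglnn.HeteroGraphConv(
            {rel: dglnn.SAGEConv(in_feats, hidden_feats, 'mean') for rel in rel_names},
            aggregate='sum')
        self.conv2 = dglnn.HeteroGraphConv(
            {rel: dglnn.SAGEConv(hidden_feats, num_classes, 'mean') for rel in rel_names},
            aggregate='sum')

    def forward(self, graph, node_feats):
        # node_feats: dict mapping node type -> feature tensor
        h = self.conv1(graph, node_feats)
        h = {ntype: F.relu(feat) for ntype, feat in h.items()}
        h = self.conv2(graph, h)
        return h  # dict of per-node-type logits, e.g. h['claim'] or h['tweet']


# Usage sketch, assuming `graph` comes from MuminDataset.to_dgl() and all node
# features have been projected to a common dimensionality beforehand:
# model = HeteroGraphSAGE(in_feats=768, hidden_feats=256, num_classes=2,
#                         rel_names=graph.etypes)
# logits = model(graph, {ntype: graph.nodes[ntype].data['feat']
#                        for ntype in graph.ntypes})
# loss = F.cross_entropy(logits['claim'][train_mask],
#                        graph.nodes['claim'].data['label'][train_mask])
```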

Methods


No methods listed for this paper.