N24News: A New Dataset for Multimodal News Classification

LREC 2022 · Zhen Wang, Xu Shan, Xiangxie Zhang, Jie Yang

Current news datasets focus mainly on the textual features of news articles and rarely leverage image features, leaving out numerous cues that are essential for news classification. In this paper, we propose a new dataset, N24News, which is generated from the New York Times, covers 24 categories, and contains both text and image information for each news article. We use a multitask multimodal method, and the experimental results show that multimodal news classification performs better than text-only news classification. Depending on the length of the text, classification accuracy can be increased by up to 8.11%. Our research reveals the relationship between the performance of a multimodal classifier and its sub-classifiers, as well as the possible improvements from applying multimodal methods to news classification. N24News is shown to have great potential to promote multimodal news studies.
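Below is a minimal, illustrative sketch (PyTorch + Hugging Face Transformers) of the kind of two-tower ViT+BERT classifier the abstract and the result table refer to: each encoder yields a pooled feature, the two features are fused by concatenation, and a linear head predicts one of the 24 categories. The encoder names and concatenation fusion come from the result table; the class name, checkpoints, and pooling choice are assumptions, not the paper's released code.

    import torch
    import torch.nn as nn
    from transformers import BertModel, ViTModel

    class MultimodalNewsClassifier(nn.Module):
        # Two-tower text+image classifier (illustrative, not the paper's code).
        def __init__(self, num_classes: int = 24):
            super().__init__()
            self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
            self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
            fused_dim = self.text_encoder.config.hidden_size + self.image_encoder.config.hidden_size
            self.classifier = nn.Linear(fused_dim, num_classes)

        def forward(self, input_ids, attention_mask, pixel_values):
            # Pooled ([CLS]) representation from each encoder.
            text_feat = self.text_encoder(
                input_ids=input_ids, attention_mask=attention_mask
            ).last_hidden_state[:, 0]
            image_feat = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
            # Concatenate the two modalities and classify into 24 categories.
            fused = torch.cat([text_feat, image_feat], dim=-1)
            return self.classifier(fused)

A multitask variant, as the abstract mentions, could additionally train text-only and image-only heads alongside the fused head and sum their losses; that detail is omitted from this sketch.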


Datasets


Introduced in the Paper:

N24News

Used in the Paper:

AG News, Fakeddit

Results from the Paper


Task: News Classification    Dataset: N24News    Metric: Accuracy

Model                                                          Accuracy  Global Rank
Multimodal (ViT+BERT, Input: Image + Body)                     0.9249    # 1
Multimodal (ViT+BERT, Input: Image + Abstract)                 0.8610    # 3
Multimodal (ViT+BERT, Input: Image + Caption) - Concatenate    0.7951    # 6
Multimodal (ViT+BERT, Input: Image + Headline) - Dot           0.8202    # 5
BERT (Input: Body)                                             0.9203    # 2
BERT (Input: Abstract)                                         0.8471    # 4
BERT (Input: Caption)                                          0.7792    # 7
BERT (Input: Headline)                                         0.7727    # 8
ViT (Input: Image)                                             0.6065    # 9
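The "- Concatenate" and "- Dot" suffixes above name the operation used to fuse the text and image features before classification. A hedged sketch of the two operations follows; the "Dot" variant is assumed here to be an element-wise product of dimension-matched features, which may differ from the paper's exact formulation.

    import torch

    def fuse_concatenate(text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # (batch, d_text) and (batch, d_img) -> (batch, d_text + d_img)
        return torch.cat([text_feat, image_feat], dim=-1)

    def fuse_dot(text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # Assumes both features were projected to a common dimension d:
        # (batch, d) * (batch, d) -> (batch, d), element-wise product.
        return text_feat * image_feat

Whichever fusion is used, every multimodal row in the table outperforms the text-only model given the same text field, with the largest gain for the shortest input (the headline).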

Methods


No methods listed for this paper.