An Analysis of the Semantic Annotation Task on the Linked Data Cloud

13 Nov 2018  ·  Michel Gagnon, Amal Zouaq, Francisco Aranha, Faezeh Ensan, Ludovic Jean-Louis

Semantic annotation, the process of identifying key phrases in texts and linking them to concepts in a knowledge base, is an important basis for semantic information retrieval and the uptake of the Semantic Web. Despite the emergence of semantic annotation systems, very few comparative studies have been published on their performance. In this paper, we evaluate the performance of existing systems over three tasks: full semantic annotation, named entity recognition, and keyword detection. More specifically, the spotting capability (recognition of relevant surface forms in a text) is evaluated for all three tasks, whereas disambiguation (correctly associating an entity from Wikipedia or DBpedia with the spotted surface forms) is evaluated only for the first two tasks. Our evaluation is twofold: first, we compute standard precision and recall on the output of semantic annotators on diverse datasets, each best suited to one of the identified tasks; second, we build a statistical model using logistic regression to identify significant performance differences. Our results show that systems that provide full annotation perform better than named entity annotators and keyword extractors on all three tasks. However, there is still much room for improvement in the identification of the most relevant entities described in a text.
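To make the precision and recall computation concrete, here is a minimal sketch of how spotting could be scored, assuming gold and predicted mentions are represented as sets of (start, end) character offsets and that strict offset matching is used; the data layout and matching criterion are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of spotting evaluation: precision and recall over
# (start, end) character offsets of surface forms. The strict-match
# criterion and the data layout are illustrative assumptions, not the
# exact protocol used in the paper.

def precision_recall(gold, predicted):
    """gold, predicted: sets of (start, end) offset pairs for one document.

    For the disambiguation step, the same scoring applies to sets of
    (start, end, entity_uri) triples, so a mention only counts as correct
    if the linked Wikipedia/DBpedia entity also matches.
    """
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Example: one gold mention missed, one spurious mention predicted.
gold = {(0, 12), (20, 27), (40, 52)}
predicted = {(0, 12), (20, 27), (60, 70)}
p, r = precision_recall(gold, predicted)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```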
