REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking

19 Apr 2024 · Nacime Bouziani, Shubhi Tyagi, Joseph Fisher, Jens Lehmann, Andrea Pierleoni

Extracting structured information from unstructured text is critical for many downstream NLP applications and is traditionally achieved by closed information extraction (cIE). However, existing approaches for cIE suffer from two limitations: (i) they are often pipelines, which makes them prone to error propagation, and/or (ii) they are restricted to the sentence level, which prevents them from capturing long-range dependencies and results in expensive inference times. We address these limitations by proposing REXEL, a highly efficient and accurate model for the joint task of document-level cIE (DocIE). REXEL performs mention detection, entity typing, entity disambiguation, coreference resolution and document-level relation classification in a single forward pass to yield facts fully linked to a reference knowledge graph. It is on average 11 times faster than competitive existing approaches in a similar setting, and it performs competitively both when optimised for any of the individual subtasks and for a variety of combinations of joint tasks, surpassing the baselines by an average of more than 6 F1 points. The combination of speed and accuracy makes REXEL an accurate, cost-efficient system for extracting structured information at web scale. We also release an extension of the DocRED dataset to enable benchmarking of future work on DocIE, which is available at https://github.com/amazon-science/e2e-docie.
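To make the "single forward pass" framing concrete, below is a minimal, purely illustrative sketch of a joint DocIE-style model: one shared encoder whose output feeds separate heads for mention detection, entity typing, entity disambiguation, coreference and relation classification. All class names, dimensions, label counts and head designs here are hypothetical assumptions for illustration; this is not the REXEL architecture or code from the released repository.

```python
# Illustrative sketch only: a toy multi-task model with a shared encoder and
# per-subtask heads run in a single forward pass, loosely mirroring the joint
# setup described in the abstract. All names, dimensions, and label counts
# are hypothetical and not taken from the REXEL implementation.
import torch
import torch.nn as nn


class JointDocIEModel(nn.Module):
    def __init__(self, hidden=256, n_mention_tags=3, n_types=7,
                 n_entities=1000, n_relations=97):
        super().__init__()
        # Shared contextual encoder (stand-in for a document encoder).
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Task-specific heads, all applied to the same encoder output.
        self.mention_head = nn.Linear(hidden, n_mention_tags)         # mention detection (BIO tags)
        self.type_head = nn.Linear(hidden, n_types)                   # entity typing
        self.ed_head = nn.Linear(hidden, n_entities)                  # entity disambiguation (toy fixed KB)
        self.coref_bilinear = nn.Bilinear(hidden, hidden, 1)          # coreference: token-pair scores
        self.rel_bilinear = nn.Bilinear(hidden, hidden, n_relations)  # relation classification

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, hidden)
        h = self.encoder(token_embeddings)
        b, n, d = h.shape
        # Pairwise representations for coreference / relation scoring.
        left = h.unsqueeze(2).expand(b, n, n, d).reshape(-1, d)
        right = h.unsqueeze(1).expand(b, n, n, d).reshape(-1, d)
        return {
            "mentions": self.mention_head(h),
            "types": self.type_head(h),
            "entity_ids": self.ed_head(h),
            "coref_scores": self.coref_bilinear(left, right).view(b, n, n),
            "relation_scores": self.rel_bilinear(left, right).view(b, n, n, -1),
        }


if __name__ == "__main__":
    model = JointDocIEModel()
    dummy = torch.randn(1, 12, 256)   # one short "document" of 12 tokens
    outputs = model(dummy)            # all subtask predictions from one pass
    print({k: tuple(v.shape) for k, v in outputs.items()})
```

The point of the sketch is the shared-encoder design: because every subtask reads the same document representation, the extra heads add little cost over a single task, which is where the speed advantage of a joint model over a pipeline comes from.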


Datasets


Introduced in the Paper:

DocRED-IE

Used in the Paper:

DocRED
DWIE

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Joint Entity and Relation Extraction | DocRED | REXEL | Relation F1 | 39.06 | # 5
Document-level Closed Information Extraction | DocRED | REXEL | Relation F1 | 27.96 | # 1
Document-level Relation Extraction | DocRED-IE | REXEL | Relation F1 | 60.10 | # 1
Entity Disambiguation | DocRED-IE | REXEL | Avg F1 | 86.74 | # 1
Coreference Resolution | DocRED-IE | REXEL | Avg F1 | 90.93 | # 1
Entity Typing | DocRED-IE | REXEL | Avg F1 | 96.01 | # 1
Document-level Closed Information Extraction | DocRED-IE | REXEL | Relation F1 | 27.96 | # 1
Joint Entity and Relation Extraction | DocRED-IE | REXEL | Relation F1 | 39.06 | # 1
Document-level Closed Information Extraction | DWIE | REXEL | F1-Hard | 53.77 | # 1
Named Entity Recognition (NER) | DWIE | REXEL | F1-Hard | 90.59 | # 1
Relation Extraction | DWIE | REXEL | F1-Hard | 65.8 | # 1
Coreference Resolution | DWIE | REXEL | Avg. F1 | 95.12 | # 1
