Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction

IJCAI 2020  ·  Tianyang Zhao, Zhao Yan, Yunbo Cao, Zhoujun Li

Recent advances cast entity-relation extraction as a multi-turn question answering (QA) task and provide effective solutions based on machine reading comprehension (MRC) models. However, they use a single question to characterize the meaning of entities and relations, which is intuitively insufficient given the variety of context semantics. Meanwhile, existing models enumerate all relation types to generate questions, which is inefficient and easily leads to confusing questions. In this paper, we improve the existing MRC-based entity-relation extraction model through diverse question answering. First, a diverse question answering mechanism is introduced to detect entity spans, and two answer-selection strategies are designed to integrate the different answers. Then, we propose to predict a subset of potential relations and filter out irrelevant ones so that questions are generated effectively. Finally, entity and relation extraction are integrated in an end-to-end framework and optimized through joint learning. Experimental results show that the proposed method significantly outperforms baseline models, improving relation F1 to 62.1% (+1.9%) on ACE05 and 71.9% (+3.0%) on CoNLL04. Our implementation is available at https://github.com/TanyaZhao/MRC4ERE.
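To make the pipeline concrete, below is a minimal Python sketch of the multi-turn loop described in the abstract: several paraphrased questions per entity type are answered by an MRC reader, their answers are integrated by a simple voting strategy, and relation questions are generated only for a predicted subset of relation types. All names here (DIVERSE_TEMPLATES, answer_question, vote_on_spans, predict_relation_subset) are illustrative assumptions, not the released MRC4ERE++ code; see the linked repository for the authors' implementation.

from collections import Counter
from typing import Dict, List, Tuple

# Several paraphrased templates per entity type stand in for the "diverse questions".
DIVERSE_TEMPLATES: Dict[str, List[str]] = {
    "PER": ["Who is mentioned in the text?",
            "Which person names appear in the text?"],
    "ORG": ["Which organizations are mentioned in the text?",
            "Which companies or institutions appear in the text?"],
}

def answer_question(sentence: str, question: str) -> List[Tuple[int, int]]:
    # Placeholder for the MRC reader; a real system would run a BERT-style
    # span extractor and return (start, end) token offsets.
    return []

def vote_on_spans(span_lists: List[List[Tuple[int, int]]]) -> List[Tuple[int, int]]:
    # One plausible answer-selection strategy: keep spans that at least half
    # of the diverse questions agree on.
    counts = Counter(s for spans in span_lists for s in spans)
    return [s for s, c in counts.items() if c * 2 >= len(span_lists)]

def predict_relation_subset(sentence: str, head_type: str) -> List[str]:
    # Placeholder for the relation-filtering step: return only the relation
    # types judged potentially relevant, instead of enumerating all of them.
    return ["ORG-AFF"]

def extract(sentence: str):
    # Turn 1: detect entity spans by asking several diverse questions per
    # entity type and integrating their answers.
    entities = {t: vote_on_spans([answer_question(sentence, q) for q in qs])
                for t, qs in DIVERSE_TEMPLATES.items()}
    # Turn 2: for each detected entity, ask relation questions only for the
    # predicted relation subset rather than all relation types.
    triples = []
    for ent_type, spans in entities.items():
        for head in spans:
            for rel in predict_relation_subset(sentence, ent_type):
                question = f"What has a {rel} relation with the entity at {head}?"
                for tail in answer_question(sentence, question):
                    triples.append((head, rel, tail))
    return entities, triples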


Datasets

ACE 2005, CoNLL04

Results from the Paper


Task: Relation Extraction
Dataset: ACE 2005
Model: MRC4ERE++

Metric             Value       Global Rank
NER Micro F1       85.5        #13
RE+ Micro F1       62.1        #8
Sentence Encoder   BERT base   #1
Cross Sentence     No          #1

Methods


BERT (base), used as the sentence encoder