Knowledge base population is the task of filling in missing elements of a given knowledge base by automatically processing a large text corpus.
The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance.
KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts expressed in natural language text on the web.
We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials.
State-of-the-art relation extraction approaches can only recognize relationships between entity mentions that are stated explicitly in the text, typically within the same sentence.
In this work, we propose a model which alleviates the need for such disambiguators by jointly learning NER and MD taggers in languages for which one can provide a list of candidate morphological analyses.
State-of-the-art knowledge base completion (KBC) models predict a score for every known or unknown fact via a latent factorization over entity and relation embeddings.
If not, what characteristics of a dataset determine the relative performance of matrix factorization (MF) and tensor factorization (TF) models?
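The scoring scheme described above can be made concrete with a small sketch. The following is an illustrative, untrained example contrasting the two factorization styles for KBC: a DistMult-style tensor factorization, which shares entity embeddings across all triples, versus a matrix factorization that assigns a separate embedding to each (head, tail) entity pair. All names, the toy vocabulary, and the embedding dimension are assumptions for illustration, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny KB vocabulary (illustrative only).
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of", "located_in"]
d = 8  # embedding dimension (arbitrary choice)

# Tensor factorization (DistMult-style): one latent vector per entity
# and per relation; every triple's score reuses these shared embeddings.
E = rng.normal(size=(len(entities), d))
R = rng.normal(size=(len(relations), d))

def tf_score(h: int, r: int, t: int) -> float:
    """DistMult score: sum_k E[h,k] * R[r,k] * E[t,k]."""
    return float(np.sum(E[h] * R[r] * E[t]))

# Matrix factorization: each (head, tail) pair gets its own row embedding,
# so no information is shared between pairs such as (paris, france)
# and (france, paris).
pairs = [(h, t) for h in range(len(entities))
         for t in range(len(entities)) if h != t]
pair_index = {p: i for i, p in enumerate(pairs)}
P = rng.normal(size=(len(pairs), d))

def mf_score(h: int, r: int, t: int) -> float:
    """MF score: dot product of the (h, t) pair embedding with the relation embedding."""
    return float(P[pair_index[(h, t)]] @ R[r])

# Score the (hypothetical) fact (paris, capital_of, france) under both models.
print(tf_score(0, 0, 1), mf_score(0, 0, 1))
```

The structural difference is the point: TF models can generalize to entity pairs never observed together during training, because head and tail embeddings are shared across triples, whereas MF models need each entity pair to appear in the training data.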