Knowledge Base Completion
66 papers with code • 0 benchmarks • 2 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning over the information already present in a knowledge base. A knowledge base is a collection of relational facts, commonly represented as (subject, relation, object) triples.
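The triple representation above can be sketched in a few lines. This is a toy illustration (the entities, relations, and the symmetric-relation rule are invented for the example), not any particular system's method:

```python
# A knowledge base as a set of (subject, relation, object) triples,
# with a toy completion rule: symmetric relations imply their inverse.
# All names here are illustrative.

kb = {
    ("alice", "married_to", "bob"),
    ("bob", "lives_in", "paris"),
}

SYMMETRIC = {"married_to"}  # assumed set of symmetric relations

def complete(triples):
    """Infer missing facts: for symmetric relations, add the inverse triple."""
    inferred = set()
    for s, r, o in triples:
        if r in SYMMETRIC and (o, r, s) not in triples:
            inferred.add((o, r, s))
    return inferred

print(complete(kb))  # → {('bob', 'married_to', 'alice')}
```

Real KBC systems learn such regularities from data rather than hard-coding them, but the input/output contract is the same: given known triples, propose plausible missing ones.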
Benchmarks
These leaderboards are used to track progress in Knowledge Base Completion
Most implemented papers
Temporal Knowledge Base Completion: New Algorithms and Evaluation Protocols
Temporal knowledge bases associate relational (s, r, o) triples with a set of times (or a single time instant) when the relation is valid.
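A temporal KB as described here can be sketched as a mapping from triples to their validity times. The facts and years below are illustrative only:

```python
# Sketch of a temporal fact store: each (s, r, o) triple maps to the set of
# years (time instants) during which it is recorded as valid.

facts = {
    ("obama", "president_of", "usa"): set(range(2009, 2017)),
    ("bush", "president_of", "usa"): set(range(2001, 2009)),
}

def holds_at(s, r, o, year):
    """True iff the triple is recorded as valid at the given time."""
    return year in facts.get((s, r, o), set())

print(holds_at("obama", "president_of", "usa", 2010))  # → True
print(holds_at("obama", "president_of", "usa", 2020))  # → False
```

Temporal KBC then asks queries such as (s, r, ?, t): which object makes the triple valid at time t.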
BoxE: A Box Embedding Model for Knowledge Base Completion
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
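A much-simplified sketch of the box-embedding idea: entities are points, each relation defines one axis-aligned box per argument position, and a triple is plausible when each (translated, or "bumped") entity point falls inside its box. This omits BoxE's width-dependent distance scaling and all training details:

```python
# Simplified box-embedding scoring. A box is a (low, high) pair of
# coordinate lists; entities are coordinate lists. Illustrative only.

def dist_to_box(point, low, high):
    """0 if the point lies inside the axis-aligned box; otherwise the
    L1 distance to the nearest box surface."""
    return sum(max(l - x, 0.0) + max(x - h, 0.0)
               for x, l, h in zip(point, low, high))

def boxe_score(h_pos, h_bump, t_pos, t_bump, head_box, tail_box):
    """Each entity's point is translated ("bumped") by the other entity,
    then checked against the relation's box for its argument position.
    Score is the negative total distance: 0 is maximally plausible."""
    bumped_h = [x + b for x, b in zip(h_pos, t_bump)]
    bumped_t = [x + b for x, b in zip(t_pos, h_bump)]
    return -(dist_to_box(bumped_h, *head_box) +
             dist_to_box(bumped_t, *tail_box))

box = ([0.0, 0.0], [1.0, 1.0])
s = boxe_score([0.4, 0.4], [0.0, 0.0], [0.6, 0.6], [0.0, 0.0], box, box)
print(s == 0.0)  # → True: both bumped points lie inside their boxes
```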
Explaining Neural Matrix Factorization with Gradient Rollback
Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent.
BERTnesia: Investigating the capture and forgetting of knowledge in BERT
We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.
IntKB: A Verifiable Interactive Framework for Knowledge Base Completion
Our system is designed to learn continuously during the KB completion task and therefore significantly improves its performance over time on initially zero- and few-shot relations.
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce
K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board. This demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.
Ranking vs. Classifying: Measuring Knowledge Base Completion Quality
We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB.
QuatDE: Dynamic Quaternion Embedding for Knowledge Graph Completion
Knowledge graph embedding has been an active research topic for knowledge graph completion (KGC), with progressive improvements from the initial TransE, TransH, and RotatE to the current state-of-the-art QuatE.
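The quaternion machinery behind QuatE-style scoring can be sketched as follows: the head embedding is rotated by a unit relation quaternion via the Hamilton product, then scored against the tail by an inner product. Real models use one quaternion per embedding dimension; this sketch uses a single quaternion per entity:

```python
import math

def hamilton(p, q):
    """Hamilton product of two quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def normalize(q):
    """Scale a quaternion to unit norm so it acts as a pure rotation."""
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

def quat_score(head, relation, tail):
    """Rotate the head by the normalized relation, then take the inner
    product with the tail (a simplified QuatE-style score)."""
    rotated = hamilton(head, normalize(relation))
    return sum(x * y for x, y in zip(rotated, tail))

# Sanity check of the algebra: i * j = k
print(hamilton((0, 1, 0, 0), (0, 0, 1, 0)))  # → (0, 0, 0, 1)
```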