Knowledge Base Completion

66 papers with code • 0 benchmarks • 2 datasets

Knowledge base completion is the task of automatically inferring missing facts by reasoning over the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
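
As a minimal sketch of this setup (a toy knowledge base and a placeholder score function standing in for a learned model; all names here are hypothetical), completion can be framed as ranking candidate objects for a query (subject, relation, ?):

```python
# Toy knowledge base: a set of (subject, relation, object) triples.
kb = {
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
    ("Marie_Curie", "field", "Physics"),
}

entities = {e for s, _, o in kb for e in (s, o)}

def score(s, r, o):
    # Placeholder for a learned triple scorer (e.g. an embedding model).
    return 1.0 if (s, r, o) in kb else 0.0

def complete(s, r, k=3):
    # Knowledge base completion as ranking: answer the query (s, r, ?).
    return sorted(entities, key=lambda o: score(s, r, o), reverse=True)[:k]

print(complete("Marie_Curie", "born_in"))
```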

Most implemented papers

Temporal Knowledge Base Completion: New Algorithms and Evaluation Protocols

dair-iitd/tkbi EMNLP 2020

Temporal knowledge bases associate relational (s, r, o) triples with a set of times (or a single time instant) when the relation is valid.
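
One illustrative way to represent such temporal facts (a toy layout for intuition, not the data format used by dair-iitd/tkbi) is to attach a validity interval to each triple:

```python
from typing import NamedTuple, Tuple

class TemporalFact(NamedTuple):
    subject: str
    relation: str
    obj: str
    valid: Tuple[int, int]  # (start_year, end_year) during which (s, r, o) holds

facts = [
    TemporalFact("Barack_Obama", "president_of", "USA", (2009, 2017)),
    TemporalFact("Angela_Merkel", "chancellor_of", "Germany", (2005, 2021)),
]

def holds_at(fact, year):
    # A fact is valid at a given year if the year falls inside its interval.
    start, end = fact.valid
    return start <= year <= end

print([f.subject for f in facts if holds_at(f, 2010)])
```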

BoxE: A Box Embedding Model for Knowledge Base Completion

ralphabb/BoxE NeurIPS 2020

Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
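
The box intuition can be sketched roughly as follows; this is a heavily simplified illustration (entities as points, one box per argument position, a hinge-style distance) and omits BoxE's translational bumps and exact scoring function:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Entities as points; each relation gets one box per argument position
# (here a head box and a tail box), stored as (lower corner, upper corner).
entity_emb = {e: rng.normal(size=dim) for e in ("paris", "france", "berlin")}

def random_box():
    center = rng.normal(size=dim)
    return center - 1.0, center + 1.0

relation_boxes = {"capital_of": (random_box(), random_box())}

def box_distance(point, box):
    lower, upper = box
    # Zero when the point lies inside the box, growing with distance outside it.
    return np.sum(np.maximum(0.0, lower - point) + np.maximum(0.0, point - upper))

def score(s, r, o):
    head_box, tail_box = relation_boxes[r]
    return -(box_distance(entity_emb[s], head_box) + box_distance(entity_emb[o], tail_box))

print(score("paris", "capital_of", "france"))
```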

Explaining Neural Matrix Factorization with Gradient Rollback

carolinlawrence/gradient-rollback 12 Oct 2020

Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent.
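
A toy sketch of the rollback idea (not the authors' implementation; linear regression stands in for the matrix-factorization model): record the cumulative parameter update contributed by each training example during SGD, then estimate an example's influence on a prediction by re-evaluating with that contribution subtracted:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)

w = np.zeros(3)
lr = 0.05
contrib = np.zeros((len(X), 3))  # cumulative update caused by each training example

for _ in range(5):  # a few SGD epochs
    for i in range(len(X)):
        grad = 2 * (X[i] @ w - y[i]) * X[i]  # squared-error gradient
        update = -lr * grad
        w += update
        contrib[i] += update  # remember what this example did to the parameters

def influence(i, x_test):
    # Change in the prediction for x_test if example i's updates are rolled back.
    return x_test @ w - x_test @ (w - contrib[i])

x_test = np.array([1.0, 1.0, 1.0])
print(max(range(len(X)), key=lambda i: abs(influence(i, x_test))))
```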

IntKB: A Verifiable Interactive Framework for Knowledge Base Completion

bernhard2202/intkb COLING 2020

Our system is designed such that it continuously learns during the KB completion task and therefore significantly improves its performance on initially zero- and few-shot relations over time.

Ranking vs. Classifying: Measuring Knowledge Base Completion Quality

marina-sp/classification_lp AKBC 2020

We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB.
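
A minimal sketch of this evaluation setup, with a hypothetical toy KB (not the authors' exact protocol): hold out a random fraction of the known correct triples as the "missing" facts a completion model should recover:

```python
import random

random.seed(0)

kb = [
    ("berlin", "capital_of", "germany"),
    ("paris", "capital_of", "france"),
    ("rome", "capital_of", "italy"),
    ("madrid", "capital_of", "spain"),
]

holdout_fraction = 0.25
n_removed = max(1, int(len(kb) * holdout_fraction))
removed = random.sample(kb, n_removed)          # facts the KB is "missing"
observed = [t for t in kb if t not in removed]  # what the model gets to see

print("observed:", observed)
print("to recover:", removed)
```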

K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce

xu-song/k-plug Findings (EMNLP) 2021

K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board; this demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.

QuatDE: Dynamic Quaternion Embedding for Knowledge Graph Completion

hopkin-ghp/QuatDE 19 May 2021

Knowledge graph embedding has been an active research topic for knowledge graph completion (KGC), with progressive improvements from the initial TransE, TransH, and RotatE to the current state-of-the-art QuatE.
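
For reference, the TransE scoring function mentioned in the excerpt can be sketched in a few lines (this illustrates the translational baseline with toy embeddings, not QuatDE's quaternion model):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for e in ("tokyo", "japan", "kyoto")}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(s, r, o):
    # A triple (s, r, o) is plausible when s + r is close to o in embedding
    # space; higher (less negative) score means a more plausible triple.
    return -np.linalg.norm(entities[s] + relations[r] - entities[o])

print(transe_score("tokyo", "capital_of", "japan"))
```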

BERTnesia: Investigating the capture and forgetting of knowledge in BERT

jwallat/knowledge-probing EMNLP (BlackboxNLP) 2020

We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.
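
In the spirit of such probing (a hedged sketch, not the authors' code; it assumes Hugging Face transformers and reuses BERT's MLM head on intermediate layers via the model's internal `cls` module), one can check at which layer a fact becomes predictable from the [MASK] representation:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # embeddings + one tensor per layer

for layer, h in enumerate(hidden_states[1:], start=1):
    # Apply the (final-layer) MLM head to each intermediate layer's [MASK] state.
    logits = model.cls(h[:, mask_pos])
    token = tokenizer.decode(logits.argmax(-1))
    print(f"layer {layer:2d}: {token}")
```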