Triple Classification

21 papers with code • 1 benchmark • 4 datasets

Triple classification aims to judge whether a given triple (h, r, t), i.e. a head entity, relation, and tail entity, is correct with respect to the knowledge graph.

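In embedding-based approaches, the decision typically comes down to scoring the triple and comparing the score against a relation-specific threshold tuned on validation data. Below is a minimal sketch of that idea, assuming a TransE-style scoring function; the embeddings and the threshold value are hypothetical inputs, not tied to any particular paper listed here.

```python
# Minimal sketch of threshold-based triple classification with a
# TransE-style score; embeddings and threshold are illustrative.
import numpy as np

def transe_score(h_emb, r_emb, t_emb):
    """Lower is more plausible under the TransE assumption h + r ≈ t."""
    return float(np.linalg.norm(h_emb + r_emb - t_emb, ord=1))

def classify_triple(h_emb, r_emb, t_emb, threshold):
    """Label (h, r, t) as correct if its score beats a relation-specific
    threshold tuned on a validation set."""
    return transe_score(h_emb, r_emb, t_emb) < threshold

# Toy usage with random 50-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))
print(classify_triple(h, r, t, threshold=5.0))
```
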
Most implemented papers

KG-BERT: BERT for Knowledge Graph Completion

yao8839836/kg-bert 7 Sep 2019

Knowledge graphs are important resources for many artificial intelligence tasks but often suffer from incompleteness.

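KG-BERT treats a triple as text: the surface forms of head, relation, and tail are packed into an input sequence and scored by a BERT sequence classifier. The sketch below is an illustrative approximation using Hugging Face transformers; the checkpoint name and the untrained classification head are placeholders rather than the released KG-BERT weights, and the paper's three-segment input is simplified to a two-segment pair here.

```python
# Sketch of KG-BERT-style triple classification with Hugging Face
# transformers; checkpoint and classifier head are illustrative.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def plausibility(head_text: str, relation_text: str, tail_text: str) -> float:
    """Return the model's probability that the triple is a correct fact."""
    inputs = tokenizer(head_text, f"{relation_text} {tail_text}",
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(plausibility("William Shakespeare", "author of", "Hamlet"))
```
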
Knowledge Representation Learning: A Quantitative Review

thunlp/OpenKE 28 Dec 2018

Knowledge representation learning (KRL) aims to represent the entities and relations of a knowledge graph in a low-dimensional semantic space; such representations have been widely used in a wide range of knowledge-driven tasks.

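As a concrete illustration of the KRL setting the review surveys, here is a minimal sketch of training translation-based (TransE-style) embeddings with a margin ranking loss; the toy data, dimensions, and negative-sampling scheme are illustrative and not taken from OpenKE.

```python
# Minimal sketch of TransE-style knowledge representation learning with a
# margin ranking loss; data and hyperparameters are toy values.
import torch
import torch.nn as nn

num_entities, num_relations, dim = 100, 10, 50
ent = nn.Embedding(num_entities, dim)
rel = nn.Embedding(num_relations, dim)
optimizer = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=1e-3)
margin_loss = nn.MarginRankingLoss(margin=1.0)

def score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means more plausible."""
    return (ent(h) + rel(r) - ent(t)).norm(p=1, dim=-1)

# One toy training step on random positive triples and corrupted negatives.
h = torch.randint(0, num_entities, (32,))
r = torch.randint(0, num_relations, (32,))
t = torch.randint(0, num_entities, (32,))
t_neg = torch.randint(0, num_entities, (32,))   # corrupt the tail entity

pos, neg = score(h, r, t), score(h, r, t_neg)
# Target -1 asks the loss to push positive scores below negative scores.
loss = margin_loss(pos, neg, target=-torch.ones_like(pos))
loss.backward()
optimizer.step()
print(loss.item())
```
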
Reasoning on Knowledge Graphs with Debate Dynamics

m-hildebrandt/R2D2 2 Jan 2020

The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal of promoting the fact as true (thesis) or false (antithesis), respectively.

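The skeleton below is a hypothetical, much-simplified rendering of that debate setup: two path-sampling agents argue for and against a query triple and a judge aggregates their arguments. The uniform random sampling and the counting-based judge are toy stand-ins for the paper's learned RL policies and classifier.

```python
# Hypothetical skeleton of debate-style triple classification in the
# spirit of R2D2; sampling and judging rules are toy simplifications.
import random
from typing import List, Tuple

Triple = Tuple[str, str, str]

class PathAgent:
    """Extracts argument paths from the KG, starting at the query's head.
    In R2D2 each agent's policy is trained to promote its stance; here we
    just sample outgoing edges uniformly at random."""
    def __init__(self, kg: List[Triple]):
        self.kg = kg

    def argue(self, query: Triple, num_paths: int = 3, max_len: int = 2) -> List[List[Triple]]:
        paths = []
        for _ in range(num_paths):
            node, path = query[0], []
            for _ in range(max_len):
                candidates = [t for t in self.kg if t[0] == node]
                if not candidates:
                    break
                step = random.choice(candidates)
                path.append(step)
                node = step[2]
            paths.append(path)
        return paths

def judge(query: Triple, thesis: List[List[Triple]], antithesis: List[List[Triple]]) -> bool:
    """Toy judge: a thesis argument counts if its path ends at the query
    tail, an antithesis argument counts if it ends somewhere else."""
    tail = query[2]
    pro = sum(1 for p in thesis if p and p[-1][2] == tail)
    con = sum(1 for p in antithesis if p and p[-1][2] != tail)
    return pro > con

kg = [("Shakespeare", "wrote", "Hamlet"), ("Hamlet", "genre", "tragedy")]
query = ("Shakespeare", "wrote", "Hamlet")
print(judge(query,
            PathAgent(kg).argue(query, max_len=1),
            PathAgent(kg).argue(query, max_len=1)))
```
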
CoDEx: A Comprehensive Knowledge Graph Completion Benchmark

tsafavi/codex EMNLP 2020

We present CoDEx, a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty.

Image-embodied Knowledge Representation Learning

thunlp/IKRL 22 Sep 2016

More specifically, we first construct representations for all images of an entity with a neural image encoder.

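A rough sketch of that image-side pipeline is shown below: each image of an entity is encoded, projected into the entity embedding space, and the projections are aggregated with attention. The tiny CNN and layer sizes are illustrative stand-ins for the paper's pretrained image encoder, and this simplified version computes attention from the image projections alone rather than from the structure-based embeddings.

```python
# Hedged sketch of an IKRL-style image-based entity encoder; the encoder
# architecture and dimensions are illustrative.
import torch
import torch.nn as nn

class ImageEntityEncoder(nn.Module):
    def __init__(self, embed_dim: int = 50):
        super().__init__()
        # Toy convolutional encoder producing one vector per image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(16, embed_dim)   # image space -> entity space
        self.attn = nn.Linear(embed_dim, 1)    # attention over an entity's images

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        """images: (num_images, 3, H, W) for a single entity."""
        per_image = self.proj(self.cnn(images))            # (num_images, embed_dim)
        weights = torch.softmax(self.attn(per_image), dim=0)
        return (weights * per_image).sum(dim=0)            # aggregated image embedding

# Toy usage: five 64x64 RGB images of one entity.
encoder = ImageEntityEncoder()
print(encoder(torch.randn(5, 3, 64, 64)).shape)  # torch.Size([50])
```
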
Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning with Confidence

thunlp/CKRL 9 May 2017

Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, which confirms the capability of CKRL to model confidence with structural information in both KG noise detection and knowledge representation learning.

Differentiating Concepts and Instances for Knowledge Graph Embedding

davidlvxin/TransC EMNLP 2018

Most conventional knowledge embedding methods encode entities (both concepts and instances) and relations as vectors in a low-dimensional semantic space in the same way, ignoring the difference between concepts and instances.

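One way to make that difference explicit, in the spirit of TransC, is to embed instances as points and concepts as spheres, scoring instanceOf by whether the instance falls inside the concept's sphere and subClassOf by sphere containment. The snippet below is a simplified reading of that idea, not the repository's code; all names and values are illustrative.

```python
# Simplified sketch of differentiating concepts and instances: instances
# are vectors, concepts are spheres (center, radius); illustrative only.
import torch

dim = 50
instance_emb = torch.randn(dim)      # an instance as a point
concept_center = torch.randn(dim)    # a concept as a sphere center
concept_radius = torch.tensor(2.0)   # ... with a (learnable) radius

def instance_of_score(instance, center, radius):
    """Non-positive when the instance lies inside the concept sphere."""
    return torch.norm(instance - center, p=2) - radius

def sub_class_of_score(center_i, radius_i, center_j, radius_j):
    """Non-positive when sphere i is contained in sphere j (concept i is a subclass of j)."""
    return torch.norm(center_i - center_j, p=2) + radius_i - radius_j

print(instance_of_score(instance_emb, concept_center, concept_radius))
```
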
A Relational Memory-based Embedding Model for Triple Classification and Search Personalization

daiquocnguyen/R-MeN ACL 2020

Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems.

On the Role of Conceptualization in Commonsense Knowledge Graph Construction

mutiann/ccc 6 Mar 2020

Commonsense knowledge graphs (CKGs) like Atomic and ASER are substantially different from conventional KGs: they consist of a much larger number of nodes formed by loosely structured text, which enables them to handle highly diverse natural-language queries related to commonsense but also leads to unique challenges for automatic KG construction methods.