Translating Embeddings for Modeling Multi-relational Data

We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large-scale dataset with 1M entities, 25k relationships and more than 17M training samples.
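
The translation assumption described above can be written as h + r ≈ t for a true triple (h, r, t). As a rough illustration of that idea, the sketch below (not the authors' code; the embedding dimension, random initialization and the L2 distance are assumptions chosen for the example) scores a triple by the distance between h + r and t and evaluates a margin-based ranking loss against a corrupted triple.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50                       # embedding dimension (illustrative choice)
n_entities, n_relations = 1000, 10

# Randomly initialized entity and relation embeddings, purely for illustration;
# entity embeddings are L2-normalized, as in the paper's training procedure.
bound = 6 / np.sqrt(dim)
E = rng.uniform(-bound, bound, (n_entities, dim))
R = rng.uniform(-bound, bound, (n_relations, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)

def energy(h, r, t):
    """Dissimilarity d(h + r, t); a plausible triple should score low."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def margin_loss(positive, corrupted, gamma=1.0):
    """Margin-based ranking loss between a true triple and a corrupted one
    (head or tail replaced by a random entity)."""
    return max(0.0, gamma + energy(*positive) - energy(*corrupted))

# Hypothetical example: corrupt the tail of a triple and compare energies.
pos = (0, 3, 42)
neg = (0, 3, int(rng.integers(n_entities)))
print(energy(*pos), energy(*neg), margin_loss(pos, neg))
```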


Datasets


Introduced in the Paper:

FB15k, WN18, WN18RR

Used in the Paper:

FB15k-237, UMLS, FB122

Results from the Paper


Task              Dataset     Model    Metric    Value     Global Rank
Link Prediction   FB122       TransE   Hits@3    58.9      # 5
Link Prediction   FB122       TransE   Hits@5    64.2      # 5
Link Prediction   FB122       TransE   Hits@10   70.2      # 5
Link Prediction   FB122       TransE   MRR       48.0      # 5
Link Prediction   FB15k       TransE   MR        125       # 12
Link Prediction   FB15k       TransE   Hits@10   0.471     # 25
Link Prediction   FB15k-237   TransE   MRR       0.2904    # 55
Link Prediction   FB15k-237   TransE   Hits@10   0.4709    # 55
Link Prediction   FB15k-237   TransE   Hits@1    0.1987    # 48
Link Prediction   WN18        TransE   Hits@10   0.754     # 34
Link Prediction   WN18        TransE   MR        263       # 10
Link Prediction   WN18RR      TransE   MRR       0.4659    # 48
Link Prediction   WN18RR      TransE   Hits@10   0.5555    # 45
Link Prediction   WN18RR      TransE   Hits@1    0.4226    # 46
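
For reference, the metrics in these tables are standard ranking measures: for each test triple the model ranks the correct entity among all candidate entities, and MR, MRR and Hits@k summarize the resulting ranks. A minimal sketch, assuming the per-triple ranks have already been computed under the usual filtered evaluation protocol:

```python
import numpy as np

def ranking_metrics(ranks, ks=(1, 3, 5, 10)):
    """Summarize link-prediction ranks: MR, MRR, and Hits@k."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {
        "MR": ranks.mean(),            # mean rank of the correct entity
        "MRR": (1.0 / ranks).mean(),   # mean reciprocal rank
    }
    for k in ks:
        metrics[f"Hits@{k}"] = float((ranks <= k).mean())  # share ranked in top k
    return metrics

# Hypothetical ranks for six test triples:
print(ranking_metrics([1, 2, 5, 12, 1, 3]))
```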

Results from Other Papers


Task              Dataset   Model    Metric    Value   Rank
Link Prediction   UMLS      TransE   Hits@10   0.989   # 8
Link Prediction   UMLS      TransE   MR        1.84    # 8

Methods