SC-Block: Supervised Contrastive Blocking within Entity Resolution Pipelines

6 Mar 2023 · Alexander Brinkmann, Roee Shraga, Christian Bizer

The goal of entity resolution is to identify records in multiple datasets that represent the same real-world entity. However, comparing all records across datasets can be computationally intensive, leading to long runtimes. To reduce these runtimes, entity resolution pipelines consist of two parts: a blocker that applies a computationally cheap method to select candidate record pairs, and a matcher that afterwards identifies matching pairs within this set using more expensive methods. This paper presents SC-Block, a blocking method that uses supervised contrastive learning to position records in the embedding space and nearest neighbour search to build the candidate set. We benchmark SC-Block against eight state-of-the-art blocking methods. In order to relate the training time of SC-Block to the reduction of the overall runtime of the entity resolution pipeline, we combine SC-Block with four matching methods into complete pipelines. For measuring the overall runtime, we determine candidate sets with 99.5% pair completeness and pass them to the matcher. The results show that SC-Block creates smaller candidate sets, and pipelines with SC-Block execute 1.5 to 2 times faster than pipelines with other blockers, without sacrificing F1 score. Blockers are often evaluated on relatively small datasets, which can cause runtime effects stemming from a large vocabulary size to be overlooked. To measure runtimes in a more challenging setting, we introduce a new benchmark dataset that requires large numbers of product offers to be blocked. On this large-scale benchmark, pipelines using SC-Block and the best-performing matcher execute 8 times faster than pipelines using another blocker with the same matcher, reducing the runtime from 2.5 hours to 18 minutes and clearly compensating for the 5 minutes required to train SC-Block.
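The two building blocks named in the abstract can be illustrated with a short sketch: a supervised contrastive (SupCon) loss that pulls records of the same entity together in the embedding space, and a k-nearest-neighbour search that turns the embeddings into a candidate set. The sketch below is not the authors' implementation; the loss formulation follows Khosla et al. (2020), and the FAISS index type, the temperature, k = 5, and the random stand-in embeddings are illustrative assumptions.

```python
import numpy as np
import faiss
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss: records sharing an entity label are
    pulled together, all other in-batch records are pushed apart."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                      # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                           # anchors with at least one positive
    mean_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_pos.mean()


def build_candidate_set(emb_a, emb_b, k=5):
    """For every record of table A, retrieve the k nearest records of table B
    in embedding space and return the resulting candidate pairs."""
    emb_a = np.ascontiguousarray(emb_a, dtype="float32")
    emb_b = np.ascontiguousarray(emb_b, dtype="float32")
    faiss.normalize_L2(emb_a)
    faiss.normalize_L2(emb_b)
    index = faiss.IndexFlatIP(emb_b.shape[1])        # inner product = cosine after normalisation
    index.add(emb_b)
    _, neighbours = index.search(emb_a, k)
    return {(i, int(j)) for i, row in enumerate(neighbours) for j in row}


# Toy usage: random vectors stand in for the output of the fine-tuned encoder.
batch = torch.randn(8, 64)
entity_ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(batch, entity_ids))

candidates = build_candidate_set(np.random.rand(10, 64), np.random.rand(100, 64), k=5)
print(len(candidates))                               # at most 10 * 5 = 50 candidate pairs
```

In the actual pipeline the record encoder would first be fine-tuned with this loss on the labelled training data, and only then would its embeddings be indexed for the nearest neighbour search.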


Datasets


Introduced in the Paper:

WDC Block

Used in the Paper:

Amazon-Google, Abt-Buy

Results from the Paper


Task     | Dataset            | Model    | Recall | Recall Rank | Candidate Set Size | Size Rank
---------|--------------------|----------|--------|-------------|--------------------|----------
Blocking | Abt-Buy            | BM25     | 94.7%  | # 4         | 8,000              | # 3
Blocking | Abt-Buy            | SC-Block | 99.5%  | # 1         | 5,000              | # 2
Blocking | Amazon-Google      | BM25     | 98.7%  | # 3         | 40,000             | # 3
Blocking | Amazon-Google      | SC-Block | 99.6%  | # 1         | 11,000             | # 1
Blocking | WDC Block - large  | SC-Block | 89.5%  | # 2         | 5,000,000          | # 1
Blocking | WDC Block - large  | BM25     | 95.5%  | # 1         | 20,000,000         | # 2
Blocking | WDC Block - medium | BM25     | 97.8%  | # 1         | 500,000            | # 2
Blocking | WDC Block - medium | SC-Block | 91.9%  | # 2         | 100,000            | # 1
Blocking | WDC Block - small  | SC-Block | 93.5%  | # 2         | 70,000             | # 1
Blocking | WDC Block - small  | BM25     | 96.9%  | # 1         | 250,000            | # 2

Methods