TURL: Table Understanding through Representation Learning

26 Jun 2020 · Xiang Deng, Huan Sun, Alyssa Lees, You Wu, Cong Yu

Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task-specific features and model architectures. In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables. During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner. Its universal model design with pre-trained representations can be applied to a wide range of tasks with minimal task-specific fine-tuning. Specifically, we propose a structure-aware Transformer encoder to model the row-column structure of relational tables, and present a new Masked Entity Recovery (MER) objective for pre-training to capture the semantics and knowledge in large-scale unlabeled data. We systematically evaluate TURL with a benchmark consisting of 6 different tasks for table understanding (e.g., relation extraction, cell filling). We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances.
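To make the structure-aware encoding concrete, below is a minimal, illustrative sketch (not the authors' released code) of how a visibility-masked self-attention layer can restrict each cell to attend only to cells in the same row or column. The function names, tensor shapes, and the exact visibility rule are assumptions for illustration; the actual TURL encoder also incorporates table captions, headers, and linked entities.

```python
# Illustrative sketch of visibility-masked self-attention for tables.
# Assumption: a cell is "visible" to another iff they share a row or
# a column; all names and shapes here are hypothetical.
import torch
import torch.nn.functional as F

def visibility_matrix(rows, cols):
    """Build an (n, n) 0/1 mask: entry (i, j) is 1 iff cell j is
    visible to cell i, i.e., they share a row or a column."""
    rows = torch.as_tensor(rows)
    cols = torch.as_tensor(cols)
    same_row = rows.unsqueeze(0) == rows.unsqueeze(1)
    same_col = cols.unsqueeze(0) == cols.unsqueeze(1)
    return (same_row | same_col).float()

def masked_self_attention(x, mask):
    """Single-head scaled dot-product attention restricted by `mask`.
    x: (n, d) cell embeddings; mask: (n, n) visibility matrix."""
    d = x.size(-1)
    scores = x @ x.transpose(0, 1) / d ** 0.5
    # Invisible positions get -inf before softmax, so they receive
    # zero attention weight.
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ x

# Toy 2x2 table flattened row-major: cell i sits at (rows[i], cols[i]).
rows = [0, 0, 1, 1]
cols = [0, 1, 0, 1]
x = torch.randn(4, 16)  # random 16-dim cell embeddings
out = masked_self_attention(x, visibility_matrix(rows, cols))
print(out.shape)  # torch.Size([4, 16])
```

The Masked Entity Recovery objective would then, in the same spirit as masked language modeling, replace a subset of entity cells with a mask embedding and train the encoder to recover the original entities from this structure-restricted context.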

Datasets

Introduced in the Paper:

WikiTables-TURL

Used in the Paper:

DBpedia
T2Dv2
WikipediaGS
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Column Type Annotation | T2Dv2 | TURL | Accuracy (%) | 96.2 | #2 |
| Cell Entity Annotation | WikipediaGS | TURL | F1 (%) | 67 | #1 |
| Column Type Annotation | WikipediaGS-CTA | TURL | Accuracy (%) | 74.6 | #1 |
| Cell Entity Annotation | WikiTables-TURL-CEA | TURL | F1 (%) | 68 | #1 |
| Columns Property Annotation | WikiTables-TURL-CPA | TURL | F1 (%) | 94.91 | #1 |
| Column Type Annotation | WikiTables-TURL-CTA | TURL | F1 (%) | 94.75 | #1 |
