Content Enhanced BERT-based Text-to-SQL Generation

16 Oct 2019 · Tong Guo, Huilin Gao ·

We present a simple method that leverages table content in a BERT-based model to solve the text-to-SQL problem. Based on the observation that some of the table content and some of the table headers match words in the question string, we encode two additional feature vectors for the deep model. Our method also benefits model inference at test time, since the tables are almost the same at training and test time. We evaluate our model on the WikiSQL dataset, outperform the BERT-based baseline by 3.7% in logical form accuracy and 3.7% in execution accuracy, and achieve state-of-the-art results.
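The two extra feature vectors described above are per-token match marks over the question: one marks question words that appear in table cell values (content match), the other marks words that appear in column headers (header match). The sketch below illustrates this idea with simple lowercase word overlap; it is not the authors' exact implementation, and the function and variable names are illustrative assumptions.

```python
# Minimal sketch of the two match-feature vectors (assumed names, not the paper's code):
# for each question token, mark whether it occurs in any table cell (content match)
# or in any column header (header match).

from typing import List, Tuple


def build_match_features(
    question_tokens: List[str],
    headers: List[str],
    cells: List[str],
) -> Tuple[List[int], List[int]]:
    header_words = {w.lower() for h in headers for w in h.split()}
    cell_words = {w.lower() for c in cells for w in str(c).split()}

    # 1 if the question token matches table content / a header word, else 0
    content_marks = [1 if tok.lower() in cell_words else 0 for tok in question_tokens]
    header_marks = [1 if tok.lower() in header_words else 0 for tok in question_tokens]
    return content_marks, header_marks


if __name__ == "__main__":
    question = "what position does bob smith play".split()
    headers = ["Player", "Position", "Team"]
    cells = ["Bob Smith", "Guard", "Lakers"]
    print(build_match_features(question, headers, cells))
    # -> ([0, 0, 0, 1, 1, 0], [0, 1, 0, 0, 0, 0])
```

In a BERT-based text-to-SQL model, such mark vectors would typically be fed alongside the question tokens as extra input features; the exact way they are combined with the BERT encoding follows the paper, not this sketch.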


Datasets

WikiSQL
Results from the Paper


Task              Dataset   Model         Metric Name            Metric Value   Global Rank
Code Generation   WikiSQL   NL2SQL-RULE   Execution Accuracy     89.2           #1
Code Generation   WikiSQL   NL2SQL-RULE   Exact Match Accuracy   83.7           #1
Semantic Parsing  WikiSQL   NL2SQL-BERT   Accuracy               89             #1
