Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning

ICLR 2018  ·  Victor Zhong, Caiming Xiong, Richard Socher

A significant amount of the world's knowledge is stored in relational databases. However, the ability of users to retrieve facts from a database is limited by a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy for generating the unordered parts of the query, which we show are less suitable for optimization via cross-entropy loss. In addition, we publish WikiSQL, a dataset of 80,654 hand-annotated examples of questions and SQL queries distributed across 24,241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence-to-sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.
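The core idea of the in-the-loop execution reward can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a toy SQLite table (hypothetical schema), the paper's three-level reward scheme (roughly: invalid SQL, valid but wrong result, correct result), and a bandit-style REINFORCE update over a fixed set of candidate queries in place of a full sequence-to-sequence policy.

```python
import math
import random
import sqlite3

# Toy database standing in for a WikiSQL table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("Oslo", 700000), ("Bergen", 285000)])

def execution_reward(candidate_sql, gold_sql):
    """In-the-loop execution reward, loosely following the paper's scheme:
    invalid SQL -> -2, valid but wrong result -> -1, correct result -> +1."""
    gold = conn.execute(gold_sql).fetchall()
    try:
        pred = conn.execute(candidate_sql).fetchall()
    except sqlite3.Error:
        return -2
    return 1 if pred == gold else -1

# A drastically simplified "policy": a softmax over three fixed candidates,
# instead of token-by-token generation.
candidates = [
    "SELECT name FROM city WHERE population > 500000",  # correct
    "SELECT name FROM city WHERE population < 500000",  # valid, wrong result
    "SELECT name FROM city WHERE",                      # invalid SQL
]
gold = "SELECT name FROM city WHERE population > 500000"
logits = [0.0] * len(candidates)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(candidates)), weights=probs)[0]
    r = execution_reward(candidates[i], gold)
    # REINFORCE update: grad of log pi(i) w.r.t. logit j is 1[j == i] - probs[j].
    for j in range(len(logits)):
        logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])

best = candidates[max(range(len(logits)), key=logits.__getitem__)]
```

Because only the correct query yields a positive reward, the policy's probability mass shifts toward it; this is the same signal that lets Seq2SQL train the unordered WHERE clause without a cross-entropy target on token order.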


Datasets


Introduced in the Paper:

WikiSQL

Used in the Paper:

WikiTableQuestions

Results from the Paper


Task             Dataset  Model                         Metric                Value  Global Rank
Code Generation  WikiSQL  Seq2SQL (Zhong et al., 2017)  Execution Accuracy    59.4   #9
Code Generation  WikiSQL  Seq2SQL (Zhong et al., 2017)  Exact Match Accuracy  48.3   #7
Code Generation  WikiSQL  Seq2Seq (Zhong et al., 2017)  Execution Accuracy    35.9   #10
Code Generation  WikiSQL  Seq2Seq (Zhong et al., 2017)  Exact Match Accuracy  23.4   #8

Methods


No methods listed for this paper.