(Image credit: SyntaxSQLNet)
Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations.
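The idea of rewarding any equivalent serialization can be made concrete with an execution-based reward: instead of string-matching the generated SQL against one canonical form, run both queries and compare their result sets. The sketch below is a minimal illustration of that principle, not the reward function of any particular paper; `execution_reward` and the toy schema are hypothetical names introduced here.

```python
import sqlite3

def execution_reward(pred_sql, gold_sql, conn):
    """Reward 1.0 iff the predicted query returns the same rows as the gold query."""
    try:
        pred = set(conn.execute(pred_sql).fetchall())
    except sqlite3.Error:
        return 0.0  # an unexecutable prediction earns no reward
    gold = set(conn.execute(gold_sql).fetchall())
    return 1.0 if pred == gold else 0.0

# Toy database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT, b INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 2), (3, 4)])

# Two serializations that differ only in condition order earn the same reward.
r = execution_reward(
    "SELECT a FROM t WHERE b = 4 AND a = 3",
    "SELECT a FROM t WHERE a = 3 AND b = 4",
    conn,
)
# r == 1.0
```

Because the reward is computed from execution results, any syntactic variant of the correct query (reordered conditions, reordered SELECT columns under a set comparison, etc.) receives full credit.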
Ranked #2 on Text-to-SQL on WikiSQL
A significant amount of the world's knowledge is stored in relational databases.
Ranked #7 on Code Generation on WikiSQL
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.
Ranked #1 on Semantic Parsing on WikiTableQuestions
We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in train and test sets.
Ranked #1 on Semantic Parsing on Spider
Second, we show that the current division of data into training and test sets measures robustness to variations in the way questions are asked, but only partially tests how well systems generalize to new queries; therefore, we propose a complementary dataset split for evaluation of future work.
We present a neural approach called IRNet for complex and cross-domain Text-to-SQL.
In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task.
In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment.
The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query.
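A minimal baseline for point (b), the column–mention alignment, is exact n-gram matching between question tokens and column names. The sketch below is a hypothetical `schema_link` helper for illustration only; learned systems replace this string match with relation-aware attention over the schema.

```python
def schema_link(question, columns):
    """Map each column name to the question token spans that mention it verbatim."""
    tokens = question.lower().split()
    links = {}
    for col in columns:
        # Treat underscores as word separators: "singer_id" -> ["singer", "id"].
        col_words = col.lower().replace("_", " ").split()
        for i in range(len(tokens) - len(col_words) + 1):
            if tokens[i:i + len(col_words)] == col_words:
                links.setdefault(col, []).append((i, i + len(col_words)))
    return links

links = schema_link(
    "how many singers are from each country",
    ["singer_id", "name", "country", "age"],
)
# links == {"country": [(6, 7)]}
```

Exact matching already surfaces the easy alignments ("country" above), but it misses paraphrases and morphological variants ("singers" vs. the `singer_id` column), which is precisely where learned alignment models are needed.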