TrustSQL: A Reliability Benchmark for Text-to-SQL Models with Diverse Unanswerable Questions

23 Mar 2024  ·  Gyubok Lee, Woosog Chay, Seonhee Cho, Edward Choi

Recent advances in large language models (LLMs) have led to significant improvements in translating natural language questions into SQL queries. While achieving high accuracy in SQL generation is crucial, little is known about the extent to which these text-to-SQL models can reliably handle diverse types of questions encountered during real-world deployment, including unanswerable ones. To explore this aspect, we introduce TrustSQL, a new benchmark designed to assess the reliability of text-to-SQL models in both single-database and cross-database settings. TrustSQL requires models to provide one of two outputs: 1) an SQL prediction or 2) abstention from making an SQL prediction, either due to potential errors in the generated SQL or when faced with unanswerable questions. For model evaluation, we explore various modeling approaches specifically designed for this task: 1) optimizing separate models for answerability detection, SQL generation, and error detection, which are then integrated into a single pipeline; and 2) developing a unified approach that uses a single model to solve this task. Experimental results using our new reliability score show that addressing this challenge involves many different areas of research and opens new avenues for model development. However, none of the methods consistently surpasses the reliability scores of a naive baseline that abstains from SQL predictions for all questions, with varying penalties.
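The abstract describes a reliability score that rewards correct SQL on answerable questions and correct abstention on unanswerable ones, while penalizing erroneous predictions. The exact formula is defined in the full paper; the sketch below shows one plausible penalty-based scheme consistent with the abstract. The function name `reliability_score` and the fields `answerable`, `abstained`, and `correct` are illustrative assumptions, not the paper's API.

```python
def reliability_score(predictions, penalty=1.0):
    """Average per-question score under a hypothetical penalty-based scheme.

    Each prediction is a dict with:
      answerable (bool): whether the question can be answered with SQL
      abstained  (bool): whether the model declined to predict SQL
      correct    (bool): whether the predicted SQL was correct (ignored if abstained)
    """
    total = 0.0
    for p in predictions:
        if p["abstained"]:
            # Credit abstention only when the question is truly unanswerable.
            total += 1.0 if not p["answerable"] else 0.0
        elif p["answerable"] and p["correct"]:
            # Correct SQL on an answerable question earns full credit.
            total += 1.0
        else:
            # Wrong SQL, or any SQL on an unanswerable question, is penalized.
            total -= penalty
    return total / len(predictions)
```

Under this scheme, a naive abstain-all baseline scores the fraction of unanswerable questions regardless of the penalty, which illustrates why larger penalties make that baseline harder for SQL-generating models to beat.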

