R^3-SQL: Ranking Reward and Resampling for Text-to-SQL

Hugging Face Daily Papers

Summary


Modern Text-to-SQL systems generate multiple candidate SQL queries and rank them to select a final prediction. However, existing methods face two limitations. First, they often score functionally equivalent SQL queries inconsistently despite identical execution results. Second, ranking cannot recover when the correct SQL is absent from the candidate pool. We propose R^3-SQL, a Text-to-SQL framework that addresses both issues through a unified reward for ranking and resampling. R^3-SQL first groups candidates by execution result and ranks the groups for consistency. To score each group, it combines a pairwise preference across groups with a pointwise utility derived from the best group's rank and size, capturing relative preference, consistency, and candidate quality. To improve candidate recall, R^3-SQL introduces agentic resampling, which judges the generated candidate pool and selectively resamples when the correct SQL is likely absent. R^3-SQL achieves 75.03% execution accuracy on BIRD-dev, a new state of the art among methods using models with disclosed sizes, with consistent gains across five benchmarks.
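The selection loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `execute`, `score_group`, `needs_resample`, and `resample` are hypothetical callables standing in for query execution, the unified (pairwise + pointwise) group reward, the agentic pool judge, and candidate regeneration, respectively.

```python
from collections import defaultdict

def select_sql(candidates, execute, score_group, needs_resample, resample,
               max_rounds=2):
    """Hedged sketch of the ranking-and-resampling loop from the abstract."""
    for _ in range(max_rounds):
        # 1. Group functionally equivalent candidates by execution result,
        #    so equivalent queries receive a single, consistent score.
        groups = defaultdict(list)
        for sql in candidates:
            groups[execute(sql)].append(sql)

        # 2. Score and rank the groups; the paper combines a pairwise
        #    preference across groups with a pointwise utility from the
        #    group's rank and size (abstracted here as score_group).
        ranked = sorted(groups.values(), key=score_group, reverse=True)

        # 3. Agentic resampling: if the judge decides the correct SQL is
        #    likely absent from the pool, regenerate candidates and retry.
        if needs_resample(ranked):
            candidates = candidates + resample(candidates)
            continue
        return ranked[0][0]  # any member of the top group is equivalent

    return ranked[0][0]
```

Grouping before scoring is what removes the inconsistency: two queries with identical execution results land in the same group and therefore share one score by construction.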

Cached at: 05/11/26, 10:44 AM


Source: https://huggingface.co/papers/2604.25325

Abstract

R^3-SQL addresses inconsistencies in scoring functionally equivalent SQL queries and improves candidate recall through unified reward ranking and agentic resampling techniques.



Similar Articles

Rethinking the Necessity of Adaptive Retrieval-Augmented Generation through the Lens of Adaptive Listwise Ranking

arXiv cs.CL

This paper proposes AdaRankLLM, an adaptive retrieval framework that challenges the necessity of adaptive RAG by using listwise ranking to dynamically filter retrieved passages. The work shows that adaptive retrieval serves as a noise filter for weaker models while acting as a cost-efficiency optimizer for stronger models, with extensive experiments across multiple datasets and LLMs.