GSM-SEM: Benchmark and Framework for Generating Semantically Variant Augmentations
Summary
This paper introduces GSM-SEM, a framework for generating semantically diverse benchmark variants to mitigate memorization in mathematical reasoning evaluations. The authors demonstrate that this approach reveals significant performance drops in current SOTA LLMs compared to static benchmarks.
# GSM-SEM: Benchmark and Framework for Generating Semantically Variant Augmentations

Source: [https://arxiv.org/html/2605.07053](https://arxiv.org/html/2605.07053)

Jyotika Singh, Fang Tu, Aziza Mirzadova, Amit Agarwal, Hitesh Laxmichand Patel, Sandip Ghoshal, Miguel Ballesteros, Yassine Benajiba, Weiyi Sun, Graham Horwood, Sujith Ravi, Dan Roth

Oracle AI

Correspondence: [[email protected]](https://arxiv.org/html/2605.07053v1/mailto:[email protected])

###### Abstract

Benchmarks like GSM8K are popular measures of mathematical reasoning, but leaderboard gains can overstate true capability due to memorization of fixed test sets. Most robustness variants apply surface-level perturbations (paraphrases, renamings, number swaps, distractors) that largely preserve the underlying facts, and static releases can themselves become memorization targets over time. We introduce GSM-SEM, a reusable and stochastic framework for generating *semantically diverse* benchmark variants with substantially higher semantic variance than prior approaches. GSM-SEM perturbs problem statements by modifying entities, attributes, and/or relationships, frequently altering underlying facts and requiring models to recompute solutions under new conditions, while constraining generation to preserve the original calculations/answer and approximate problem difficulty. GSM-SEM generates fresh variants on each run without requiring re-annotation, reducing reliance on static public benchmarks for evaluation and thereby lowering the bias of memorization. We apply GSM-SEM to GSM8K and two existing variation suites (GSM-Symbolic and GSM-Plus), producing GSM8K-SEM, GSM-Symbolic-SEM, and GSM-Plus-SEM. Evaluating 14 SOTA LLMs, we observe consistent performance drops, with larger declines when semantic perturbations are coupled with symbolic/plus variations (average drop rate ~28% in the maximum-strictness configuration of GSM-SEM).
We publicly release the three SEM variants as fully human-validated datasets. Finally, to demonstrate applicability beyond GSM-style math problems, we apply GSM-SEM to additional benchmarks including BigBenchHard, LogicBench, and NLR-BIRD.

## 1 Introduction

Benchmark leaderboards are widely used to track progress in language-model reasoning, with gains often interpreted as evidence of improved mathematical and logical competence. Yet a growing body of work suggests that high benchmark scores may reflect memorization, contamination, or exploitation of dataset-specific artifacts rather than robust generalization (Sclar et al., [2024](https://arxiv.org/html/2605.07053#bib.bib22); Gonen et al., [2023](https://arxiv.org/html/2605.07053#bib.bib7)). This concern is especially salient for widely used datasets such as GSM8K (Cobbe et al., [2021](https://arxiv.org/html/2605.07053#bib.bib5)), where repeated exposure through training mixtures and derivative resources can make static evaluation sets progressively less diagnostic.

Figure 1: Example perturbations and per-run accuracy. Top: original GSM8K problem. Middle: GSM-Symbolic and GSM-Plus rewrites. Bottom: GSM-SEM variants (orange highlights indicate edited spans). For each panel, accuracy across five independent runs is shown (✓ correct, ✗ incorrect), illustrating higher failure rates on SEM variants.
L3.1 = Llama-3.1-405B-Ins; GPT-5 uses medium/default reasoning effort.

A common response has been to introduce robustness variants via paraphrasing, entity renaming, and numerical substitutions (Mirzadeh et al., [2025](https://arxiv.org/html/2605.07053#bib.bib14); Li et al., [2024](https://arxiv.org/html/2605.07053#bib.bib9); Lunardi et al., [2025](https://arxiv.org/html/2605.07053#bib.bib11); Wang and Zhao, [2024](https://arxiv.org/html/2605.07053#bib.bib30); Rabinovich et al., [2023](https://arxiv.org/html/2605.07053#bib.bib17); Raj et al., [2022](https://arxiv.org/html/2605.07053#bib.bib18); Sclar et al., [2024](https://arxiv.org/html/2605.07053#bib.bib22)). While these perturbations are useful for testing sensitivity to wording and superficial cues, they typically preserve the original factual backbone and relational structure of each problem. As a result, they often probe invariance to surface form more than adaptability to meaningfully different semantic conditions. Moreover, most evaluations focus on older or smaller models, where effects are large, while the few newer SOTA models assessed at the time show smaller declines, leaving open whether recent SOTA systems have developed more robust reasoning capabilities. We address this gap with GSM-SEM, a reusable framework for generating *semantically diverse* variants with substantially higher semantic variance than prior transformations (Figure [1](https://arxiv.org/html/2605.07053#S1.F1)), frequently altering underlying facts and requiring models to recompute solutions under new conditions, while constraining generation to preserve the original ground-truth final answer (and associated computation).
Importantly, GSM-SEM is *stochastic and reusable*: it generates fresh variants on each run without re-annotation (since the ground-truth answer and calculation are preserved), reducing reliance on static public benchmarks and thereby lowering evaluation bias from memorization as models and training corpora evolve. Our main contributions are:

- **GSM-SEM**: A stochastic, reusable framework for generating semantically diverse benchmark variants while preserving the original calculations/answer and targeting similar difficulty.
- **Human-validated benchmark release**: Public release of three fully human-validated benchmarks: GSM8K-SEM, GSM-Symbolic-SEM, and GSM-Plus-SEM ([https://sites.google.com/view/gsm-sem/home](https://sites.google.com/view/gsm-sem/home)).
- **Comprehensive evaluation**: A study of 14 SOTA LLMs across model families, showing consistent performance degradation on GSM-SEM variants relative to the original benchmarks, with amplified failures when semantic perturbations are coupled with symbolic and plus variations.
- **Extension**: Demonstration of the framework's applicability to out-of-domain non-GSM samples from BigBenchHard (Suzgun et al., [2022](https://arxiv.org/html/2605.07053#bib.bib28)), LogicBench (Parmar et al., [2024](https://arxiv.org/html/2605.07053#bib.bib16)), and NLR-BIRD (Singh et al., [2025](https://arxiv.org/html/2605.07053#bib.bib25)).

## 2 Related Work

GSM8K is a widely used benchmark for evaluating LLMs on grade-school mathematical reasoning. However, its extensive adoption and relatively standardized problem format have raised concerns that leaderboard gains may partially reflect overfitting, contamination, or memorization rather than robust reasoning (Xie et al., [2025](https://arxiv.org/html/2605.07053#bib.bib33); Zhang et al., [2024](https://arxiv.org/html/2605.07053#bib.bib41)).
A large body of work shows that small changes to problem statements (including entity substitutions, paraphrasing, and the insertion of irrelevant context) can induce substantial performance drops (Shi et al., [2023b](https://arxiv.org/html/2605.07053#bib.bib24); Li et al., [2024](https://arxiv.org/html/2605.07053#bib.bib9); Wu et al., [2024](https://arxiv.org/html/2605.07053#bib.bib31); Zhou et al., [2024](https://arxiv.org/html/2605.07053#bib.bib42); Srivatsa and Kochmar, [2024](https://arxiv.org/html/2605.07053#bib.bib27); Bhuiya et al., [2024](https://arxiv.org/html/2605.07053#bib.bib3); Shi et al., [2023a](https://arxiv.org/html/2605.07053#bib.bib23); Karim et al., [2025](https://arxiv.org/html/2605.07053#bib.bib8)), suggesting that strong GSM8K accuracy alone can be a brittle indicator of generalization.

**Adversarial context and distractors:** One line of work probes fragility by introducing adversarial or irrelevant information. GSM-IC (Irrelevant Context) (Shi et al., [2023b](https://arxiv.org/html/2605.07053#bib.bib24)) demonstrates that LLMs may incorporate distractor sentences into intermediate reasoning, degrading performance. iGSM (Ye et al., [2025](https://arxiv.org/html/2605.07053#bib.bib35)) further studies sensitivity to changes in prompting and problem structure. While these are valuable for isolating specific failure modes, they often target a narrower set of perturbations relative to more recent, multi-phenomena suites.

**Comprehensive perturbation suites:** GSM-Plus (Li et al., [2024](https://arxiv.org/html/2605.07053#bib.bib9)) expands GSM8K with a large collection of derived variants spanning multiple perturbation categories (e.g., rephrasing, distractors, numerical variation, arithmetic/operation changes, and higher-order reasoning demands). This breadth makes GSM-Plus a useful modern testbed that covers distractor-style failures (including those emphasized by GSM-IC) while also probing additional robustness dimensions.
Many GSM-Plus transformations modify the underlying computation and therefore need new ground-truth answers.

**Symbolic synthesis:** A complementary line addresses the instability of single-point benchmarks by generating *distributions* of variants. GSM-Symbolic (Mirzadeh et al., [2025](https://arxiv.org/html/2605.07053#bib.bib14)) uses symbolic templates to synthesize controlled instantiations of problems by varying names and numbers, enabling statistical evaluation across many realizations. This template-based view reveals that model accuracy can be highly sensitive to seemingly minor changes (e.g., specific values and therefore calculations) and can support more rigorous measurement than small-scale iterative perturbations. Because GSM-Symbolic typically changes numbers, variants need updated ground-truth answers.

Figure 2: GSM-SEM: Semantic variant generation pipeline.

GSM-SEM targets a different axis than prior variants: *high semantic variance* via edits to the narrative, while constraining generation to preserve the original answer and approximate difficulty. This addresses two practical limitations of many existing benchmarks: (i) released variants are static and can become memorization targets over time; and (ii) extending them with fresh, comparable perturbations is often infeasible or requires re-annotating ground-truth answers, which is hard to do reliably at scale in an automated fashion. By generating fresh variants while keeping the final answer fixed, GSM-SEM supports re-runnable evaluation without re-annotation and enables *compositional* robustness testing: it can be applied directly to GSM8K and layered on top of existing robustness suites.
We focus in particular on GSM-Plus and GSM-Symbolic because they provide complementary substrates for such composition: a broad, multi-phenomena perturbation suite (GSM-Plus) and a template-driven framework enabling evaluation across different calculations for the same problem (GSM-Symbolic). Extended related work and methodological details are shared in Appendix [A](https://arxiv.org/html/2605.07053#A1).

## 3 Methods - GSM-SEM pipeline

The GSM-SEM framework (Figure [2](https://arxiv.org/html/2605.07053#S2.F2)) generates semantic variations of GSM-based datasets (GSM8K, GSM-Symbolic, GSM-Plus) through a dual curation and validation process, followed by pruning the generated set based on similarity-based redundancy removal and tunable strictness filtering. These perturbations maintain the same answers as the original data, without altering the calculation and values required for the solutions. The method is further applied to out-of-domain datasets (BigBenchHard, LogicBench, NLR-BIRD) to demonstrate potential for broader applicability beyond GSM. Appendix [B](https://arxiv.org/html/2605.07053#A2) contains details of prompts, models, and settings.

##### Semantic Variant Generation

Let $x=(q,a)$ denote an original sample with question $q$ and ground-truth numeric answer $a$. We generate semantically diverse variants $\tilde{x}=(\tilde{q},a)$ using two complementary augmentation strategies. One conditions on the solution and answer, anchoring the computation while enabling larger semantic variation. The other conditions on the original question, preserving fine-grained structural constraints while modifying narrative elements. These streams cover different points on the diversity-control tradeoff, and combining them reduces the failure modes of either alone.
First, given $(q,a)$, we prompt diverse LLMs to reverse-engineer a new question $\tilde{q}$ that preserves the underlying numerical computation implied by $a$ while allowing the surface form and contextual facts to change substantially (e.g., from 'new bacteria infects' to 'virus spreads'; Figure [1](https://arxiv.org/html/2605.07053#S1.F1)). Because the evaluation process compares only the final numeric answer (or also calculations, for LLM-judge-based evaluation), the ground-truth value $a$ remains valid for these variants. Second, we prompt the LLM to keep all numerical values fixed while altering the scenario/theme of the question, thereby preserving the required computation while increasing semantic diversity (e.g., from 'plague infects' to 'rumor spreads' and 'fire breaks out'; Figure [1](https://arxiv.org/html/2605.07053#S1.F1)). Finally, we apply a rule-based verifier to ensure that all numeric values appearing in each $\tilde{q}$ match those in the corresponding original question $q$.

##### Automated Quality Validation

We designed the automated validator by first having human reviewers assess 70 randomly sampled augmented variants, flagging both acceptable and flawed examples (details in Appendix [C.1](https://arxiv.org/html/2605.07053#A3.SS1)). Two independent reviewers labeled the data for quality and logical coherence, and these annotations (on samples distinct from those used in our main evaluation) were used to engineer a judge-LLM prompt that mirrors the same decision criteria for tuning the pipeline.

##### Similarity-Based Redundancy Pruning

Let $x$ denote a base sample and $\tilde{x}$ a candidate variant. We compute the cosine similarity $s(x,\tilde{x})$ based on term-frequency vectors and discard candidates with $s(x,\tilde{x}) > 0.85$, retaining only variants that are sufficiently divergent from the base sample.
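As an illustration, the rule-based numeric verifier and this first-stage term-frequency pruning could be sketched as follows (a minimal sketch; the function names, whitespace tokenizer, and example sentences are ours, not from the paper's released code):

```python
import re
from collections import Counter
from math import sqrt

def extract_numbers(text: str) -> Counter:
    """Multiset of numeric literals appearing in the text."""
    return Counter(re.findall(r"\d+(?:\.\d+)?", text))

def numbers_match(original: str, variant: str) -> bool:
    """Rule-based verifier: the variant must reuse exactly the original's numbers."""
    return extract_numbers(original) == extract_numbers(variant)

def tf_cosine(a: str, b: str) -> float:
    """Cosine similarity between term-frequency vectors of two strings."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = sqrt(sum(v * v for v in ta.values()))
    nb = sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb) if na and nb else 0.0

def keep_variant(original: str, variant: str, tau: float = 0.85) -> bool:
    """First-stage pruning: keep variants that reuse the numbers yet diverge in wording."""
    return numbers_match(original, variant) and tf_cosine(original, variant) <= tau

q = "A plague infects 2 people and doubles every 3 days."
v = "A rumor reaches 2 people and doubles every 3 days."
print(keep_variant(q, v))  # True: numbers preserved, similarity 0.8 <= 0.85
print(keep_variant(q, q))  # False: identical text has similarity 1.0
```

In practice the similarity computation would run over full question texts; the threshold 0.85 is the $\tau$ reported in the paper.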
To further promote diversity among retained variants, we additionally perform set-level de-duplication. Specifically, given the current retained set $\mathcal{S}$, we accept a candidate $\tilde{x}$ only if $s(\tilde{x}, x') \leq 0.85$ for all $x' \in \mathcal{S}$. This criterion ensures that each added variant is distinct both from the original sample and from previously retained variants.

##### Strictness Filter

This step applies a strictness filter to control how challenging the final samples should be, further filtering easy and redundant variants. We first evaluate $\tilde{x}$ using a held-out set of reference models $\mathcal{M}_{\text{hold}}$ and compute a cross-model consistency score $cm(\tilde{x})$ that captures agreement in correctness across $\mathcal{M}_{\text{hold}}$. Variants with unanimous agreement are treated as potentially *easy*, as they may either (i) oversimplify the underlying task and thus fail to probe robustness, or (ii) remain insufficiently perturbed relative to $x$. To mitigate both cases, for high-consistency variants we apply a stricter semantic pruning rule by constraining the semantic similarity $s(x,\tilde{x})$ to lie within an acceptance window $[\alpha,\beta]$; variants with $s(x,\tilde{x}) \notin [\alpha,\beta]$ are discarded. The held-out model set is $\mathcal{M}_{\text{hold}} = \{\text{Llama-3.3-70B}, \text{GPT-4o}, \text{GPT-4o-mini}\}$. These models are used exclusively for filtering and are excluded from the main evaluation in Section 5, which is conducted on a disjoint set of 14 models.
We summarize the validation and filtering pipeline as the following acceptance criterion for adding a candidate variant $\tilde{x}$ to the retained set $\mathcal{D}_{s}$:

$$
\tilde{x} \in \mathcal{D}_{s} \;\leftrightarrow\; s(x,\tilde{x}) \leq \tau \;\wedge\; \max_{x' \in \mathcal{S}} s(\tilde{x}, x') \leq \tau \;\wedge\; \Big( cm(\tilde{x}) < 1 \;\vee\; s(x,\tilde{x}) \in [\alpha,\beta] \Big)
$$

where $s(\cdot,\cdot)$ is the cosine similarity between a variant and the original question, and among different variants; $cm(\tilde{x}) \in [0,1]$ is the cross-model consistency score computed on the held-out model set $\mathcal{M}_{\text{hold}}$. We use $\tau = 0.85$ and present results across several configurations of $[\alpha,\beta]$ for filter strictness. Section [5](https://arxiv.org/html/2605.07053#S5) shows the impact of different settings; for the detailed results in the main paper we adopt a moderate $[\alpha,\beta]$ of 0.4–0.6 (rather than removing all unanimously-correct variants), with results for other configurations in the appendix. Figure [8](https://arxiv.org/html/2605.07053#A2.F8) in Appendix [B.2](https://arxiv.org/html/2605.07053#A2.SS2) details all *Strictness Filter* settings.

**Extendability and modes:** The *Strictness Filter* can be tightened to drop any variant that a held-out model set solves unanimously, producing a harder evaluation subset. The retained hard variants may be reused as seeds and passed through GSM-SEM to generate additional challenging variants for robustness testing.
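The acceptance criterion can be sketched in a few lines (an illustrative sketch with hypothetical names; the similarity and consistency scores are assumed to be precomputed):

```python
def accept_variant(sim_to_orig, sims_to_retained, consistency,
                   tau=0.85, alpha=0.40, beta=0.60):
    """Acceptance criterion for adding a candidate variant to D_s.

    sim_to_orig      -- cosine similarity s(x, x~) to the base sample
    sims_to_retained -- similarities s(x~, x') to all already-retained variants
    consistency      -- cross-model consistency cm(x~) in [0, 1] on M_hold
    """
    divergent_from_base = sim_to_orig <= tau
    divergent_from_set = all(s <= tau for s in sims_to_retained)
    # A unanimously-solved variant (cm == 1) must also fall inside [alpha, beta].
    hard_enough = consistency < 1.0 or alpha <= sim_to_orig <= beta
    return divergent_from_base and divergent_from_set and hard_enough

print(accept_variant(0.5, [0.3, 0.6], 2 / 3))  # True: divergent, not unanimous
print(accept_variant(0.7, [0.3], 1.0))         # False: unanimous, outside window
```

The defaults shown correspond to the *med* configuration ($\tau = 0.85$, $[\alpha,\beta] = [0.40, 0.60]$); tightening the window toward *max* rejects more unanimously-solved variants.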
The choice of held-out models $\mathcal{M}_{\text{hold}}$ further enables targeted analyses: older model versions can surface improvements in newer models on previously inconsistent cases, while non-reasoning models can help isolate whether reasoning-oriented models benefit specifically from explicit reasoning capability on the same filtered set.

## 4 Datasets

### 4.1 GSM8K-SEM, GSM-Symbolic-SEM, GSM-Plus-SEM

Using the GSM-SEM framework, we generated the GSM8K-SEM, GSM-Symbolic-SEM, and GSM-Plus-SEM datasets, which comprise multiple perturbed samples per original base data sample (Appendix [H](https://arxiv.org/html/2605.07053#A8)), resulting in totals of 685, 2525, and 1436 samples, respectively. Dataset size details and the breakdown by *Strictness Filter* are shared in Appendix [C.2](https://arxiv.org/html/2605.07053#A3.SS2).

##### Fully Human-Validated Datasets

We subjected the full datasets generated by the GSM-SEM pipeline to human validation at the sample level to ensure that all reported results are based exclusively on a vetted subset. Each sample was independently reviewed by two annotators for overall quality and logical coherence, drawing from a pool of six reviewers in total (see Appendix [C.1](https://arxiv.org/html/2605.07053#A3.SS1) for full statistics and details). Fewer than 3% of samples were rated below the high-quality threshold, further supporting the robustness of our generation approach. All published datasets and results include only samples that humans validated as good.

Figure 3: Cosine similarity distribution of GSM8K variants with respect to GSM8K, where GSM8K-SEM shows higher semantic divergence than other variants.

##### Semantic Similarity Distribution

As our goal is to create semantically different samples from the original dataset, Figure [3](https://arxiv.org/html/2605.07053#S4.F3) shows the count-based cosine similarity distribution between the GSM8K-SEM and GSM8K datasets.
For comparison, we also report the similarity distribution for a paraphrased version of GSM8K. GSM8K-SEM is shifted toward lower similarity, indicating that it differs more substantially from the original dataset and thus is not equivalent to paraphrasing. We also show the count-based cosine similarity distribution between GSM-Symbolic and GSM8K, and between GSM-Plus and GSM8K. As GSM-Symbolic essentially only swaps entities from the original dataset, its similarity tends to be higher than that of paraphrased queries. GSM-Plus shows high similarity, consistent with many variants introducing only minimal edits. Figure [3](https://arxiv.org/html/2605.07053#S4.F3) reports GSM8K-SEM with strictness filter $[\alpha,\beta] = [0,1]$. Figure [9](https://arxiv.org/html/2605.07053#A3.F9) in Appendix [C.3](https://arxiv.org/html/2605.07053#A3.SS3) shows how the similarity distributions change under different strictness filter settings. For a more fine-grained comparison, we also compare cosine similarity using sentence embeddings (all-MiniLM-L6-v2) (see Figure [10](https://arxiv.org/html/2605.07053#A3.F10), Appendix [C.3](https://arxiv.org/html/2605.07053#A3.SS3)). Paraphrased questions have the highest semantic similarity to the originals, as they convey the same content. GSM-Symbolic-SEM, which incorporates entity swaps and semantic modifications, exhibits reduced similarity, and SEM variants continue to show the lowest similarity.

#### 4.1.1 Evaluation set-up and metrics

We measure accuracy by running each model at least five times per sample ($R \geq 5$), averaging correctness across runs, and then averaging over all samples to obtain the final accuracy score. Let $\mathcal{D}_{s}$ be the set of SEM variants, where each element is a pair $(\tilde{x}, y)$.
$$
\mathrm{Acc} = \frac{1}{|\mathcal{D}_{s}|} \sum_{(\tilde{x},y) \in \mathcal{D}_{s}} \left( \frac{1}{R} \sum_{r=1}^{R} \mathbb{I}\left[ \mathrm{LM}_{r}(\tilde{x}) = y \right] \right)
$$

where $\mathbb{I}[\cdot]$ is the indicator function and $\mathrm{LM}_{r}(\tilde{x})$ denotes the model output on run $r$. Acc $\Delta$ is the variant accuracy minus the original accuracy. Correctness is determined using a hybrid evaluator: a rule-based numeric extractor complemented by an LLM-as-a-judge. We found that rule-based evaluators used in prior work at times fail to extract valid answers (yielding false negatives), whereas the LLM judge substantially improves coverage and reliability ([Yu et al.](https://arxiv.org/html/2605.07053#bib.bib38)). See Appendix [B](https://arxiv.org/html/2605.07053#A2) for prompts/settings. We also measure the performance drop rate (PDR) metric to show the relative performance decline on variants compared to the baseline samples:

$$
\mathrm{PDR} = 1 - \frac{\frac{1}{|\mathcal{D}_{s}|} \sum_{(\tilde{x},y) \in \mathcal{D}_{s}} \mathbb{I}\left[ \mathrm{LM}(\tilde{x}) = y \right]}{\frac{1}{|\mathcal{D}|} \sum_{(x,y) \in \mathcal{D}} \mathbb{I}\left[ \mathrm{LM}(x) = y \right]}
$$

where $\mathcal{D}_{s}$ and $\mathcal{D}$ represent the SEM variants and baseline datasets, respectively. We report these metrics on the original GSM8K dataset, GSM-Symbolic, GSM-Plus, and their SEM variants, all evaluated under zero-shot inference. Additional experiments using 8-shot Chain-of-Thought prompting yielded trends consistent with the zero-shot setting, so we focus on zero-shot results here (details in Appendix [D](https://arxiv.org/html/2605.07053#A4)).
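The two metrics reduce to a few lines of code; the sketch below assumes per-run correctness has already been judged by the hybrid evaluator (function names are ours):

```python
def accuracy(per_sample_runs):
    """Acc: mean over samples of run-averaged correctness (R >= 5 runs per sample).

    per_sample_runs: list of per-sample lists of booleans, one entry per run.
    """
    per_sample = [sum(runs) / len(runs) for runs in per_sample_runs]
    return sum(per_sample) / len(per_sample)

def pdr(variant_acc, baseline_acc):
    """Performance drop rate: relative decline of variant vs. baseline accuracy."""
    return 1.0 - variant_acc / baseline_acc

# Two samples, five runs each: the first always correct, the second correct on 3/5 runs.
acc_variant = accuracy([[True] * 5, [True, True, True, False, False]])
print(acc_variant)            # 0.8
print(pdr(acc_variant, 0.9))  # ~0.111, i.e. an ~11.1% relative drop
```

Note that a positive PDR indicates a decline on the variant set, matching the sign convention used in the tables.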
### 4.2 Out-of-domain datasets

To assess broader applicability of the GSM-SEM pipeline to non-GSM data, we applied it to several out-of-domain datasets, each representing a distinct type of data and reasoning difficulty. These included BigBench-Hard (Suzgun et al., [2022](https://arxiv.org/html/2605.07053#bib.bib28); bench authors, [2023](https://arxiv.org/html/2605.07053#bib.bib2)) (temporal sequences), LogicBench (Parmar et al., [2024](https://arxiv.org/html/2605.07053#bib.bib16)) (first-order logic), and NLR-BIRD (Singh et al., [2025](https://arxiv.org/html/2605.07053#bib.bib25)). Each dataset diverges from the math-oriented GSM benchmark: BigBench-Hard presents lengthy sequences of temporal events, demanding more complex reasoning to identify the correct response; LogicBench entails multiple-choice questions testing first-order logic skills, such as hypothetical and disjunctive syllogisms, as well as various logical dilemmas; NLR-BIRD, a newer dataset, focuses on nuanced linguistic reasoning over tables presented as natural language representations (NLR). For each dataset, example SEM variants are provided along with experimental details in Appendix [G](https://arxiv.org/html/2605.07053#A7).

## 5 Results and Discussion

Figure [4](https://arxiv.org/html/2605.07053#S5.F4) shows the performance drop rate for different tightness levels of the GSM-SEM pipeline's *Strictness Filter* step, at varying $[\alpha,\beta]$. Table [1](https://arxiv.org/html/2605.07053#S5.T1) presents the performance across 14 LLM settings on GSM8K, Symbolic, Plus, and their SEM variants for the *med* configuration of the *Strictness Filter*.

Figure 4: Effect of the *Strictness Filter* (Section [3](https://arxiv.org/html/2605.07053#S3.SS0.SSS0.Px4)) on PDR% (relative to GSM8K) and statistical significance.
Filter settings: none (all samples kept; $[\alpha,\beta]$ = 0–1), min ($[\alpha,\beta]$ = 0.30–0.70), min-med (0.35–0.65), med (0.40–0.60), med-max (0.45–0.55), and max (all such samples filtered out). Variant dataset sizes across filters are shared in Table [4](https://arxiv.org/html/2605.07053#A3.T4).

Each cell reports Acc Δ [PDR]:

| Model | Paraphr | GSM8K-SEM | GSM-Symb | Symb-SEM | GSM-Plus | Plus-SEM |
|---|---|---|---|---|---|---|
| Grok3 | 1.04 [-1.58] | -3.70 [2.89] ↓ | -0.76 [0.80] | -7.11 [7.52] ↓ | -3.77 [3.74] | -13.00 [13.46] ↓ |
| Llama3.1-405 | -1.07 [0.62] | -6.84 [6.01] ↓ | -3.40 [3.48] | -9.73 [9.97] ↓ | -5.81 [5.62] | -13.06 [13.11] ↓ |
| Llama4-Mav | -2.22 [1.79] | -3.79 [2.85] ↓ | -3.48 [3.55] | -7.34 [7.50] ↓ | -6.42 [6.37] | -16.34 [16.47] ↓ |
| Llama4-Scout | -0.51 [0.57] | -3.88 [4.10] ↓ | -2.76 [2.86] | -7.97 [8.26] ↓ | -6.93 [7.03] | -15.33 [15.81] ↓ |
| GPT4.1 | -0.95 [0.51] | -3.85 [2.98] ↓ | -3.33 [3.47] | -7.98 [8.31] ↓ | -4.47 [4.26] | -11.86 [11.87] ↓ |
| GPT4.1-mini | -1.77 [0.84] | -2.68 [1.73] ↓ | -2.42 [2.51] | -6.83 [7.06] ↓ | -4.25 [4.05] | -12.19 [12.35] ↓ |
| Gemini2.5-f | -2.51 [2.12] | -5.07 [4.23] ↓ | -3.11 [3.17] | -6.94 [7.16] ↓ | -6.66 [6.57] | -13.38 [13.34] ↓ |
| Gemini2.5-f-l | -1.95 [1.04] | -4.44 [3.64] ↓ | -0.52 [0.55] | -7.27 [7.62] ↓ | -6.40 [6.27] | -14.05 [14.53] ↓ |
| Gemini2.5-pro | -0.82 [0.36] | -3.49 [3.39] ↓ | -0.86 [0.88] | -6.66 [6.86] ↓ | -4.15 [4.07] | -11.53 [11.35] ↓ |
| O3 | -0.19 [-0.29] | -3.36 [2.68] ↓ | -2.18 [2.23] | -8.71 [8.91] ↓ | -5.06 [4.91] | -9.77 [9.45] ↓ |
| GPT5(mnml) | -1.16 [0.71] | -3.37 [2.42] ↓ | -1.69 [1.73] | -8.68 [8.89] ↓ | -4.84 [4.46] | -12.86 [12.43] ↓ |
| GPT5.1(mnml) | -1.99 [1.58] | -4.70 [3.83] ↓ | -2.15 [2.21] | -11.26 [11.61] ↓ | -6.02 [5.73] | -18.52 [18.69] ↓ |
| GPT5(med) | -0.07 [-0.42] | -2.59 [1.64] ↓ | -1.28 [1.32] | -7.60 [7.85] ↓ | -3.83 [3.46] | -9.09 [8.61] ↓ |
| GPT5.1(med) | -2.26 [1.84] | -4.47 [3.56] ↓ | -5.20 [5.32] | -11.80 [12.08] ↓ | -6.50 [6.18] | -18.40 [17.93] ↓ |
| Avg Acc Drop | -1.17% | -4.02% | -2.36% | -8.28% | -5.37% | -13.53% |
| Avg PDR | 0.69% | 3.28% | 2.43% | 8.54% | 5.19% | 13.53% |

Table 1: Accuracy drop (in %) and PDR compared to GSM8K across GSM-based datasets and our SEM variations. Positive PDR indicates performance decline. These results are on 100% human-validated data samples.

##### Performance drops in SEM variants

As can be seen in Table [1](https://arxiv.org/html/2605.07053#S5.T1), there is a consistent decline in performance on each SEM variant compared to the original baseline data. We performed a Wilcoxon test to determine whether SEM variants perform worse than the baselines with statistical significance. Our results show that 14/14 models demonstrate worse performance than baseline with statistical significance (p < 0.05) for all SEM variants in the *med* configuration (shown in Figure [4](https://arxiv.org/html/2605.07053#S5.F4)). A detailed per-model statistics breakdown is shared in Appendix [E.2](https://arxiv.org/html/2605.07053#A5.SS2).
##### Impact of Compounded Perturbations

Figure [4](https://arxiv.org/html/2605.07053#S5.F4) and Table [1](https://arxiv.org/html/2605.07053#S5.T1) show that GSM-Plus-SEM and GSM-Symbolic-SEM yield the highest accuracy drops across models among all variants, indicating that compounding semantic and other perturbations (symbolic, and the others in Plus) amplifies the challenge. Notably, GSM8K-SEM, GSM-Symbolic, and GSM-Plus alone also cause measurable accuracy reductions relative to the original GSM benchmark, but the semantic nature and effects of these perturbations differ considerably, leading to a compounded decline when combined.

##### Comparison of Perturbation Types

Although GSM8K-SEM, GSM-Symbolic, and GSM-Plus all induce accuracy drops, the underlying perturbations are fundamentally different. GSM-Symbolic primarily substitutes names or numbers, resulting in high semantic similarity to the original GSM (Figure [3](https://arxiv.org/html/2605.07053#S4.F3)). GSM-Plus, on the other hand, induces perturbations through numerical and arithmetic variations, problem rephrasing, and distractor insertion, which involves injecting topic-related but irrelevant sentences that do not contribute to the solution. In contrast, GSM8K-SEM produces variants with substantial semantic divergence, despite maintaining final-answer consistency, resulting in lower cosine similarity. Figure [5](https://arxiv.org/html/2605.07053#S5.F5) shows the difference in performance across these perturbation types in further detail: for each perturbation type, it breaks down the samples that change from right to wrong, wrong to right, and wrong to wrong compared to GSM8K, where SEM added on top of Symbolic and Plus causes much larger differences and more failed answer determinations. Together this shows that SEM variants and other existing variants probe distinct aspects of LLM generalization.

Figure 5: GSM8K vs. variant outcomes (all models pooled).
##### Relationship to Semantic Similarity

Figure [6](https://arxiv.org/html/2605.07053#S5.F6) illustrates model accuracy as a function of cosine similarity with the original GSM dataset. Accuracy generally improves with higher semantic similarity, implying that greater semantic shifts lead to larger performance drops, even when the required computations and numbers remain unchanged.

Figure 6: Average accuracy across models for GSM8K-SEM samples in different cosine similarity ranges w.r.t. GSM8K.

##### Model-Specific Effects

The perturbations challenge all models, with slight variation depending on architecture and the presence of specialized reasoning ability. In Table [1](https://arxiv.org/html/2605.07053#S5.T1), GSM8K-SEM shows 3.61% average degradation on the GPT family of models, compared to 4.33% on the Gemini and 4.84% on the Llama family. Figure [7](https://arxiv.org/html/2605.07053#S6.F7) shows the delta in performance for reasoning vs. non-reasoning models, averaged across all model families tested. While reasoning-optimized models exhibit a generally smaller overall performance drop (except on Symbolic-SEM), their improvement over non-reasoning models is marginal. This suggests the gap is broad-based rather than confined to non-reasoning models.

## 6 Analysis of samples

##### Samples that improved under variants

Some samples that originally yielded inaccurate results showed stronger performance in the variants. These tended to be cases where the original question contains a strong ambiguity that the variant clarifies. An example is as follows.

Example: Variants overcoming ambiguity

*Original:* Watson works a 10-hour shift each day, five days a week. He earns $10 per hour and gets a $300 bonus each week if the company performs well.
How much money did Watson make in April if the company performed very well for the whole month?

*Variant:* What is the total amount of money Watson earned in April, given that he makes $10 per hour, works 10 hours a day, 5 days a week, and received a $300 bonus for each of the 4 weeks of the month?

The variant clarifies that a month should be treated as 4 weeks, whereas the original question leaves this assumption ambiguous (e.g., some may instead assume a 30-day month).

Figure 7: Delta in performance for GSM-variants compared to baseline (GSM8K) by reasoning (O3, Gemini-2.5-Pro, GPT5x) vs. non-reasoning models (others).

##### Negative Result Analysis

Across GSM-derived benchmarks, SEM variants underperform their base datasets (Table [1](https://arxiv.org/html/2605.07053#S5.T1)) despite preserving the intended computation and answer. To understand why, we manually inspected items where some variants are solved reliably but other variants (from the same base problem) fail across most models, excluding uniformly always-correct/always-incorrect cases to isolate wording-driven effects. Key failure modes: (1) Added indirection increases cognitive load: variants that introduce extra conversions or indirect constraints (e.g., "7% remaining" vs.
an explicit remainder) often lead models to drop or mis-bind constraints at the final step. (2) Unit/currency cues trigger scale errors: mixed "cents"/$/¥ phrasing can cause implicit rescaling (e.g., treating $1200 as $12.00), yielding coherent-but-wrong solutions. (3) Reference-style rewrites destabilize state tracking: logically equivalent relational phrasing ("twice the previous interval until the third") is harder than direct numbers, frequently misplacing doubling/halving boundaries. (4) Template misretrieval under minor semantic shifts: small cue changes can switch the latent solution template, producing structured mistakes such as, for instance, assuming that the spoons available for setting the dinner table include spoons already used during dinner preparation, instead of counting those as unavailable due to prior use. (5) Overcomplication and "unmotivated" misses: some models add unnecessary steps even for similarly simple variants, and we observe repeatable identical errors at temperature 0.0, suggesting cue-driven brittleness rather than pure decoding noise.

Example: Direct vs. percent remainder

Direct remainder: Arjun went to a store with ¥1100. He bought crackers priced at ¥0.25 each and pretzels priced at ¥1.00 each, spending all but ¥11 in change. If he bought 4 crackers, how many pretzels did he purchase?

Percent remainder (often harder): In a market, fruit baskets are sold for 25 cents and vegetable baskets for 100 cents. Priya spent $1100 and bought 4 fruit baskets, ending up with 1% of her money as change. How many vegetable baskets did she purchase?
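For reference, both phrasings reduce to the same computation, assuming the percent version's "1% of her money" means 1% of the 1100 units (i.e., the same 11 units of change):

```python
# Direct-remainder phrasing: total 1100, change 11, 4 crackers at 0.25, pretzels at 1.00.
total, change = 1100, 11
crackers_cost = 4 * 0.25
pretzels_direct = (total - change - crackers_cost) / 1.00

# Percent-remainder phrasing: the change is 1% of the 1100 units.
# (Computed as total / 100 so the value is exactly 11.0 without float noise.)
change_pct = total / 100
pretzels_percent = (total - change_pct - 4 * 0.25) / 1.00

assert pretzels_direct == pretzels_percent == 1088
```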
Do different models fail differently? We see both shared and model-specific tendencies: unit/scale confusion, reference-based state drift, and target/remainder flips occur across model families, while some models, such as GPT5x, more often overcomplicate or mishandle narrative frames. Overall, SEM exposes a spectrum of cue-sensitive template failures, where even small semantic shifts redirect models into coherent-but-wrong reasoning paths.

##### Memorization problem

During quality filtering, we found items that appear "incorrect" to humans yet are answered correctly across models, suggesting memorization: even when a variant changes numbers and thereby the required calculations, models sometimes return a final answer that still matches the original's correct answer (see Appendix [F.2](https://arxiv.org/html/2605.07053#A6.SS2) for an example). Appendix [F](https://arxiv.org/html/2605.07053#A6) contains more details and examples.

## 7 Transferability to out-of-domain (non-GSM) datasets

We used GSM-SEM to generate semantic variants of non-GSM sets. Our results indicate that SEM variants show degraded performance on non-mathematical datasets as well, particularly those with more complex reasoning requirements that were released several years ago. On the BigBench-Hard subset (2022 release), SEM variants showed an average 5% drop in accuracy compared to the original, possibly due to its popularity and susceptibility to memorization. LogicBench, released mid-2024 and less widely used than BigBench, is recent enough to have avoided widespread memorization, but not obscure enough to fully rule it out; here, the performance drop was negligible. Finally, NLR-BIRD (a very recent release from November 2025) presented minimal memorization risk, and models showed no performance drop on the SEM variants. Detailed results, prompts, and examples of generated SEM variants are shared in Appendix [G](https://arxiv.org/html/2605.07053#A7).
## 8 Conclusion

We introduced GSM-SEM, a stochastic generate–validate–filter framework for constructing *answer-preserving* yet semantically diverse benchmark variants. GSM-SEM enables *re-runnable* evaluation, reducing reliance on static test sets that can become less diagnostic under contamination and memorization risk. Across 14 SOTA models, SEM variants consistently underperform relative to their corresponding base benchmarks, with the largest degradations observed when semantic perturbations are layered on top of Symbolic and Plus transformations. These results indicate that current models remain sensitive to semantic shifts, highlighting a gap not captured by surface-level robustness variants. GSM-SEM is reusable to create diverse augmentations each time, and can be adapted to other datasets beyond GSM for more robust evaluation and variant generation.

## Limitations

GSM-SEM is most directly applicable to datasets that include a question along with a ground-truth answer and supporting reasoning information. For datasets where the reasoning path is complex or unavailable, GSM-SEM may only be applied partially: in one of the dual curation steps, the curation stage may be unable to generate faithful new questions from the answer alone, or the generated variants may not match the original dataset's complexity profile if a reasoned solution is not available. As a result, such datasets may first need to be augmented with reasoning traces to make the most of the GSM-SEM pipeline. Additionally, parts of our filtering and evaluation rely on automated judging, which can introduce systematic biases despite high agreement with human labels. In terms of potential implications and future work, while advanced models have demonstrated improved performance on general reasoning tasks, mathematics remains a particularly challenging domain.
Incorporating semantically varied data during model training has the potential to promote true reasoning skills, reducing reliance on memorization\. ## References - Balakrishnan and Purwar \(2024\)Gautam Balakrishnan and Anupam Purwar\. 2024\.[Evaluating the efficacy of open\-source llms in enterprise\-specific rag systems: A comparative study of performance and scalability](https://doi.org/10.13140/RG.2.2.24790.25926)\. - bench authors \(2023\)BIG bench authors\. 2023\.[Beyond the imitation game: Quantifying and extrapolating the capabilities of language models](https://openreview.net/forum?id=uyTL5Bvosj)\.*Transactions on Machine Learning Research*\. - Bhuiya et al\. \(2024\)Neeladri Bhuiya, Viktor Schlegel, and Stefan Winkler\. 2024\.[Seemingly plausible distractors in multi\-hop reasoning: Are large language models attentive readers?](https://doi.org/10.18653/v1/2024.emnlp-main.147)In*Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing*, pages 2514–2528, Miami, Florida, USA\. Association for Computational Linguistics\. - Bilenko and Mooney \(2003\)Mikhail Bilenko and Raymond J\. Mooney\. 2003\.[Adaptive duplicate detection using learnable string similarity measures](https://doi.org/10.1145/956750.956759)\.In*Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD ’03, page 39–48, New York, NY, USA\. Association for Computing Machinery\. - Cobbe et al\. \(2021\)Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman\. 2021\.[Training verifiers to solve math word problems](https://arxiv.org/abs/2110.14168)\.*Preprint*, arXiv:2110\.14168\. - Crocetti \(2015\)Giancarlo Crocetti\. 2015\.[Textual spatial cosine similarity](https://arxiv.org/abs/1505.03934)\.*Preprint*, arXiv:1505\.03934\. - Gonen et al\. 
\(2023\)Hila Gonen, Srini Iyer, Terra Blevins, Noah Smith, and Luke Zettlemoyer\. 2023\.[Demystifying prompts in language models via perplexity estimation](https://doi.org/10.18653/v1/2023.findings-emnlp.679)\.In*Findings of the Association for Computational Linguistics: EMNLP 2023*, pages 10136–10148, Singapore\. Association for Computational Linguistics\. - Karim et al\. \(2025\)Aabid Karim, Abdul Karim, Bhoomika Lohana, Matt Keon, Jaswinder Singh, and Abdul Sattar\. 2025\.[Lost in cultural translation: Do llms struggle with math across cultural contexts?](https://arxiv.org/abs/2503.18018)*Preprint*, arXiv:2503\.18018\. - Li et al\. \(2024\)Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi\. 2024\.Gsm\-plus: A comprehensive benchmark for evaluating the robustness of llms as mathematical problem solvers\.In*Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics \(Volume 1: Long Papers\)*, pages 2961–2984\. Association for Computational Linguistics\. - Long et al\. \(2025\)Jikai Long, Zijian Hu, Xiaodong Yu, Jianwen Xie, and Zhaozhuo Xu\. 2025\.[Oat\-rephrase: Optimization\-aware training data rephrasing for zeroth\-order llm fine\-tuning](https://arxiv.org/abs/2506.17264)\.*Preprint*, arXiv:2506\.17264\. - Lunardi et al\. \(2025\)Riccardo Lunardi, Vincenzo Della Mea, Stefano Mizzaro, and Kevin Roitero\. 2025\.[On robustness and reliability of benchmark\-based evaluation of llms](https://arxiv.org/abs/2509.04013)\.*Preprint*, arXiv:2509\.04013\. - Maini et al\. \(2024\)Pratyush Maini, Skyler Seto, Richard Bai, David Grangier, Yizhe Zhang, and Navdeep Jaitly\. 2024\.[Rephrasing the web: A recipe for compute and data\-efficient language modeling](https://doi.org/10.18653/v1/2024.acl-long.757)\.In*Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics \(Volume 1: Long Papers\)*, pages 14044–14072, Bangkok, Thailand\. Association for Computational Linguistics\. - Minh\-Cong et al\. 
\(2023\)Nguyen\-Hoang Minh\-Cong, Nguyen Van Vinh, and Nguyen Le\-Minh\. 2023\.[A fast method to filter noisy parallel data WMT2023 shared task on parallel data curation](https://doi.org/10.18653/v1/2023.wmt-1.37)\.In*Proceedings of the Eighth Conference on Machine Translation*, pages 359–365, Singapore\. Association for Computational Linguistics\. - Mirzadeh et al\. \(2025\)Seyed Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar\. 2025\.Gsm\-symbolic: Understanding the limitations of mathematical reasoning in large language models\.In*The Thirteenth International Conference on Learning Representations*\. - OpenAI et al\. \(2024\)OpenAI, :, Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, A J Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker\-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others\. 2024\.[GPT\-4o system card](https://arxiv.org/abs/2410.21276)\. - Parmar et al\. \(2024\)Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, and Chitta Baral\. 2024\.[LogicBench: Towards systematic evaluation of logical reasoning ability of large language models](https://doi.org/10.18653/v1/2024.acl-long.739)\.In*Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics \(Volume 1: Long Papers\)*, pages 13679–13707, Bangkok, Thailand\. Association for Computational Linguistics\. - Rabinovich et al\. \(2023\)Ella Rabinovich, Samuel Ackerman, Orna Raz, Eitan Farchi, and Ateret Anaby Tavor\. 2023\.Predicting question\-answering performance of large language models through semantic consistency\.In*Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics \(GEM\)*, pages 138–154\. - Raj et al\. \(2022\)Harsh Raj, Domenic Rosati, and Subhabrata Majumdar\. 
2022\.Measuring reliability of large language models through semantic consistency\.In*NeurIPS ML Safety Workshop*\. - Rekabsaz et al\. \(2017\)Navid Rekabsaz, Mihai Lupu, Allan Hanbury, and Guido Zuccon\. 2017\.[Exploration of a threshold for similarity based on uncertainty in word embedding](https://doi.org/10.1007/978-3-319-56608-5_31)\.In*Advances in Information Retrieval: 39th European Conference on IR Research, ECIR 2017*, pages 396–409\. Springer\. - Safarzadeh et al\. \(2025\)Mohammadtaher Safarzadeh, Afshin Oroojlooyjadid, and Dan Roth\. 2025\.[Evaluating nl2sql via sql2nl](https://arxiv.org/abs/2509.04657)\.*Preprint*, arXiv:2509\.04657\. - Schlegel et al\. \(2021\)Viktor Schlegel, Goran Nenadic, and Riza Batista\-Navarro\. 2021\.[Semantics altering modifications for evaluating comprehension in machine reading](https://doi.org/10.1609/aaai.v35i15.17622)\.*Proceedings of the AAAI Conference on Artificial Intelligence*, 35:13762–13770\. - Sclar et al\. \(2024\)Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr\. 2024\.Quantifying language models’ sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting\.In*The Twelfth International Conference on Learning Representations*\. - Shi et al\. \(2023a\)Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, and Denny Zhou\. 2023a\.Large language models can be easily distracted by irrelevant context\.In*Proceedings of the 40th International Conference on Machine Learning*, ICML’23\. JMLR\.org\. - Shi et al\. \(2023b\)Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou\. 2023b\.Large language models can be easily distracted by irrelevant context\.In*International Conference on Machine Learning*, pages 31210–31227\. PMLR\. - Singh et al\. \(2025\)Jyotika Singh, Weiyi Sun, Amit Agarwal, Viji Krishnamurthy, Yassine Benajiba, Sujith Ravi, and Dan Roth\. 
2025\.[Can LLMs narrate tabular data? an evaluation framework for natural language representations of text\-to\-SQL system outputs](https://doi.org/10.18653/v1/2025.emnlp-industry.60)\.In*Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track*, pages 883–902, Suzhou \(China\)\. Association for Computational Linguistics\. - Singh et al\. \(2026\)Jyotika Singh, Fang Tu, Miguel Ballesteros, Weiyi Sun, Sandip Ghoshal, Michelle Yuan, Yassine Benajiba, Sujith Ravi, and Dan Roth\. 2026\.[Mt\-osc: Path for llms that get lost in multi\-turn conversation](https://arxiv.org/abs/2604.08782)\.*Preprint*, arXiv:2604\.08782\. - Srivatsa and Kochmar \(2024\)Kv Aditya Srivatsa and Ekaterina Kochmar\. 2024\.[What makes math word problems challenging for LLMs?](https://doi.org/10.18653/v1/2024.findings-naacl.72)In*Findings of the Association for Computational Linguistics: NAACL 2024*, pages 1138–1148, Mexico City, Mexico\. Association for Computational Linguistics\. - Suzgun et al\. \(2022\)Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei\. 2022\.Challenging big\-bench tasks and whether chain\-of\-thought can solve them\.*arXiv preprint arXiv:2210\.09261*\. - Touvron et al\. \(2023\)Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie\-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample\. 2023\.[Llama: Open and efficient foundation language models](https://arxiv.org/abs/2302.13971)\.*Preprint*, arXiv:2302\.13971\. - Wang and Zhao \(2024\)Yuqing Wang and Yun Zhao\. 2024\.[Rupbench: Benchmarking reasoning under perturbations for robustness evaluation in large language models](https://arxiv.org/abs/2406.11020)\.*Preprint*, arXiv:2406\.11020\. - Wu et al\. 
\(2024\)Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim\. 2024\.[Reasoning or reciting? exploring the capabilities and limitations of language models through counterfactual tasks](https://doi.org/10.18653/v1/2024.naacl-long.102)\.In*Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies \(Volume 1: Long Papers\)*, pages 1819–1862, Mexico City, Mexico\. Association for Computational Linguistics\. - Xiao et al\. \(2011\)Chuan Xiao, Wei Wang, Xuemin Lin, Jeffrey Xu Yu, and Guoren Wang\. 2011\.[Efficient similarity joins for near\-duplicate detection](https://doi.org/10.1145/2000824.2000825)\.*ACM Trans\. Database Syst\.*, 36\(3\)\. - Xie et al\. \(2025\)Chulin Xie, Yangsibo Huang, Chiyuan Zhang, Da Yu, Xinyun Chen, Bill Yuchen Lin, Bo Li, Badih Ghazi, and Ravi Kumar\. 2025\.On memorization of large language models in logical reasoning\.In*Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia\-Pacific Chapter of the Association for Computational Linguistics*, pages 2742–2785\. - Yang et al\. \(2023\)Shuo Yang, Wei\-Lin Chiang, Lianmin Zheng, Joseph E\. Gonzalez, and Ion Stoica\. 2023\.[Rethinking benchmark and contamination for language models with rephrased samples](https://arxiv.org/abs/2311.04850)\.*Preprint*, arXiv:2311\.04850\. - Ye et al\. \(2025\)Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen\-Zhu\. 2025\.[Physics of language models: Part 2\.1, grade\-school math and the hidden reasoning process](https://openreview.net/forum?id=Tn5B6Udq3E)\.In*The Thirteenth International Conference on Learning Representations*\. - Yosef et al\. \(2026\)Erez Yosef, Oron Anschel, Shunit Haviv Hakimi, Asaf Gendler, Adam Botach, Nimrod Berman, and Igor Kviatkovsky\. 
2026. [Rethinking math reasoning evaluation: A robust llm-as-a-judge framework beyond symbolic rigidity](https://arxiv.org/abs/2604.22597). *Preprint*, arXiv:2604.22597. - Yu et al. (2024a) Longhui Yu, Weisen Jiang, Han Shi, Jincheng YU, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2024a. [Metamath: Bootstrap your own mathematical questions for large language models](https://openreview.net/forum?id=N8N0hgNDRt). In *The Twelfth International Conference on Learning Representations*. - (38) Qingchen Yu, Zifan Zheng, Shichao Song, Zhiyu Li, Feiyu Xiong, Bo Tang, and Ding Chen. Xfinder: Large language models as automated evaluators for reliable evaluation. - Yu et al. (2024b) Xiaodong Yu, Ben Zhou, Hao Cheng, and Dan Roth. 2024b. [Reasonagain: Using extractable symbolic programs to evaluate mathematical reasoning](https://arxiv.org/abs/2410.19056). *CoRR*, arXiv:2410.19056. - Zeng et al. (2025) Zhongshen Zeng, Pengguang Chen, Shu Liu, Haiyun Jiang, and Jiaya Jia. 2025. [MR-GSM8k: A meta-reasoning benchmark for large language model evaluation](https://openreview.net/forum?id=br4H61LOoI). In *The Thirteenth International Conference on Learning Representations*. - Zhang et al. (2024) Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao, Pranav Raja, Charlotte Zhuang, Dylan Slack, and 1 others. 2024. A careful examination of large language model performance on grade school arithmetic. In *Proceedings of the 38th International Conference on Neural Information Processing Systems*, pages 46819–46836. - Zhou et al. (2024) Yue Zhou, Yada Zhu, Diego Antognini, Yoon Kim, and Yang Zhang.
2024. [Paraphrase and solve: Exploring and exploiting the impact of surface form on mathematical reasoning in large language models](https://doi.org/10.18653/v1/2024.naacl-long.153). In *Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)*, pages 2793–2804, Mexico City, Mexico. Association for Computational Linguistics. - Zhou et al. (2025) Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang, Qiufeng Wang, and Kaizhu Huang. 2025. [Is your model really a good math reasoner? evaluating mathematical reasoning with checklist](https://openreview.net/forum?id=nDvgHIBRxQ). In *The Thirteenth International Conference on Learning Representations*.

## Appendix A Extended Related Work

The GSM8K dataset (Cobbe et al., [2021](https://arxiv.org/html/2605.07053#bib.bib5)) represents a foundational benchmark comprising over 8,000 grade-school math word problems, partitioned into 7,473 training and 1,319 test samples. While these questions require only elementary arithmetic operations, recent investigations have revealed significant vulnerabilities in how LLMs approach these seemingly straightforward tasks. The widespread adoption of GSM8K has raised concerns regarding data contamination and overfitting, as evidenced by the fact that performance can fluctuate dramatically with minor modifications (Mirzadeh et al., [2025](https://arxiv.org/html/2605.07053#bib.bib14)).

##### Synthetic and Adversarial Benchmarks

Recent efforts have turned to synthetic data generation to produce more challenging and diverse evaluation frameworks. The iGSM (Ye et al., [2025](https://arxiv.org/html/2605.07053#bib.bib35)) dataset employs a hierarchical graph-based generation process to create math word problems with controllable complexity.
In iGSM, problem parameters and their dependencies are structured in graphs, enabling the construction of questions requiring up to 21 sequential operations. The framework distinguishes between direct dependencies (explicit quantitative relationships), instance dependencies (repeated structural patterns or categorical relationships), and implicit dependencies (unstated relationships requiring inference). By controlling these dependencies and the number of operations, iGSM provides fine-grained difficulty tuning and fully verifiable solutions, offering deep insights into how models handle multi-step reasoning. Orthogonal to synthetic complexity, other work has introduced adversarial distractions to probe LLMs' focus. GSM-IC (Grade School Math – Irrelevant Context) by Shi et al. ([2023a](https://arxiv.org/html/2605.07053#bib.bib23)) appends semantically plausible but logically irrelevant sentences to standard GSM8K questions. This dataset reveals that even state-of-the-art LLMs can be easily distracted: model accuracy drops significantly when extraneous context is present in the problem description. For example, a simple arithmetic question padded with an irrelevant narrative or additional numbers often confuses models into erroneous reasoning. These failures occur despite the irrelevant content having no effect on the actual solution, showing that conventional few-shot prompting or fine-tuning strategies struggle to make models robust to such superficial perturbations. "Lost in Cultural Translation" introduces variants that replace Western-centric names and objects in GSM8K with culturally diverse equivalents, revealing that LLM reasoning performance is significantly biased toward common Western lexical patterns (Karim et al., [2025](https://arxiv.org/html/2605.07053#bib.bib8)).
Several works have rephrased data samples, on arithmetic and more diverse datasets, showing drops in scores or using rephrased data to increase the variation and diversity of content seen by models: Maini et al. ([2024](https://arxiv.org/html/2605.07053#bib.bib12)); Long et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib10)); Lunardi et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib11)); Yang et al. ([2023](https://arxiv.org/html/2605.07053#bib.bib34)); Safarzadeh et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib20)); Schlegel et al. ([2021](https://arxiv.org/html/2605.07053#bib.bib21)). Distinct from dataset variations, there has been work on reasoning evaluations that dissect math reasoning gaps, such as Zeng et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib40)), which presents MR-GSM8K, an evaluation benchmark for reasoning, and ReasonAgain (Yu et al., [2024b](https://arxiv.org/html/2605.07053#bib.bib39)), which uses symbolic paradigms for mathematical reasoning evaluation. Yu et al. ([2024a](https://arxiv.org/html/2605.07053#bib.bib37)) present a fine-tuned LLM for mathematical reasoning. Zhou et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib43)) propose MathCheck, a checklist for testing task generalization and reasoning robustness.

## Appendix B Data curation and validation set-up

Across both *curation* steps, a temperature setting of 0.5 was used. Five different LLMs were used across the curation steps: Llama-3.3-70B-Instruct, Llama-4-Maverick, Llama-4-Scout (Touvron et al., [2023](https://arxiv.org/html/2605.07053#bib.bib29)), GPT-4.1, and GPT-4o (OpenAI et al., [2024](https://arxiv.org/html/2605.07053#bib.bib15)). Also see Appendix [H](https://arxiv.org/html/2605.07053#A8) for an example of what questions and answers look like in GSM8K.
Full hyperparameter settings are:

- temperature (temp): 0.5
- top_p: 1
- top_k: -1
- max_tokens: 10,000
- frequency_penalty: 1.0

The Curation #1 prompt is as follows:

> Given the following answer, write an appropriate question for which this answer would be correct. Make sure the question contains all specifications required to compute the answer correctly. Return only the question and no additional text. `<special-instruction>`
> Answer: [[answer]]
> Question:

`<special-instruction>` is a placeholder for any dataset-specific instructions. For all GSM-specific variants, it is left blank. The Curation #2 prompt is as follows:

> Given a question, return the same question changing core facts while keeping the numbers and core problem the same. Return only the question and no additional text.
> Question: Susan made 100 cookies for Christmas and was going to equally divide them between her 6 nephews. Before Susan could package them, her husband snuck 4 cookies for himself. How many cookies will each of Susan's nephews get?
> New Question: Mala has baked 100 cupcakes for her 6 cousins to enjoy equally, but on their arrival, she finds that 4 cupcakes are spoiled. How many cupcakes will each cousin get?
> `<special-instruction>`
> Question: [[question]]
> New Question:

For this prompt as well, `<special-instruction>` is a placeholder for any dataset-specific instructions. For all GSM-specific variants, it is left blank. For *automated quality evaluation*, we used GPT-4o as the LLM judge; the prompt is as follows:

> Given this question, does the provided answer look correct for this question? Say True if it looks correct, else say False. Don't return any extra text in your response.
> Question: [[question]]
> Answer: [[answer]]
> True or False:

Hyperparameters are set as follows.
- temperature (temp): 0.01
- top_p: 1
- top_k: -1
- max_tokens: 10,000
- frequency_penalty: 1.0

The LLM-as-a-judge model for *evaluating model-produced GSM answers* is GPT-4.1, and the prompt used is as follows:

> Question: [[question]]
> The correct final answer for this question is: [[answer]]
> Does the below model generated answer also conclude the same correct final answer? Return True if it contains the correct final answer, else return False. Only return True or False and no extra text.
> Model generated answer: [[generated_answer]]
> True or False:

The parameter settings are as follows:

- temperature (temp): 0.1
- top_p: 1
- top_k: -1
- max_tokens: 1,000
- frequency_penalty: 0.0

Our GSM-based evaluations rely on LLM-based judging and do not include a dedicated mechanical full computation-equivalence test. For GSM-style natural-language problems, solutions can follow multiple algebraically valid reasoning paths that arrive at the same correct answer, and evaluation is therefore grounded in outcome-level and semantic consistency rather than strict structural matching. Prior work on GSM-style benchmarks has commonly used final-answer-based evaluation, with more recent approaches incorporating LLM-based assessment to better handle linguistic and reasoning variability; our approach is consistent with this line of work (Yosef et al. ([2026](https://arxiv.org/html/2605.07053#bib.bib36)); Singh et al. ([2026](https://arxiv.org/html/2605.07053#bib.bib26))).

Generating model output: For GSM-based samples that are passed through the 14 LLMs, we use the following inference parameters to get the model outputs:

- temperature (temp): 0.01
- top_p: 1
- top_k: -1
- max_tokens: 10,000
- frequency_penalty: 1.0

### B.1 Similarity threshold choice

To ensure the integrity of our dataset and filter out redundant or highly similar entries, we employed a cosine similarity threshold of 0.85.
This value is widely recognized in information retrieval and natural language processing as an optimal high-precision cutoff for identifying near-duplicate content while minimizing semantic noise Xiao et al. ([2011](https://arxiv.org/html/2605.07053#bib.bib32)); Bilenko and Mooney ([2003](https://arxiv.org/html/2605.07053#bib.bib4)). In the context of modern vector embeddings, a threshold of 0.85 is considered a conservative benchmark that effectively isolates semantically redundant documents, ensuring that only distinct data remains for subsequent analysis Balakrishnan and Purwar ([2024](https://arxiv.org/html/2605.07053#bib.bib1)); Rekabsaz et al. ([2017](https://arxiv.org/html/2605.07053#bib.bib19)). By setting the threshold at this level, we mitigate the risk of *semantic drift*, where broadly related but distinct concepts are erroneously grouped together, while maintaining a robust and non-redundant representation of the baseline data, consistent with established methodologies in large-scale text filtering and similarity-based data cleaning Crocetti ([2015](https://arxiv.org/html/2605.07053#bib.bib6)); Minh-Cong et al. ([2023](https://arxiv.org/html/2605.07053#bib.bib13)).

### B.2 Strictness filter settings

As shown in Figure [8](https://arxiv.org/html/2605.07053#A2.F8), the strictness filter configurations are: none (no additional filtering; all samples kept; [α, β] range 0–1), min ([α, β] 0.30–0.70), min-med (0.35–0.65), med (0.40–0.60), med-max (0.45–0.55), and max (all such samples filtered out). These filters are applied to samples that yielded unanimous correctness evaluation across the held-out model set (GPT4o, GPT4o-mini, Llama-3.3-70B-Ins) in the GSM-SEM pipeline.

Figure 8: Strictness filter configurations.
Each setting corresponds to a band on [0,1] specifying which samples (by their [α, β] values) are retained: none keeps all samples [0,1]; min through med-max retain progressively narrower central bands; max filters out all such samples.

## Appendix C Dataset Attributes

### C.1 Dataset Quality Validation Methodology and Results

To evaluate dataset quality post-collection via the SEM pipeline, we implemented a two-stage manual review process.

- **Validation Criteria:** Each sample was evaluated on two binary criteria. *Quality* assessed whether the generated question correctly retained key numeric information from the original and preserved answer alignment. *Logical coherence* assessed whether the question was internally consistent and solvable (e.g., flagging physically implausible scenarios such as "a shark walking on a road").
- **Review Process:** In Stage 1, 70 randomly sampled items were evaluated by two independent professional annotators per subset, with six total reviewers across all subsets. This stage took place during pipeline development to validate the quality of the selected prompts. In Stage 2, all dataset samples were evaluated in full with the goal of reporting results and releasing datasets comprising only the samples deemed good by human reviewers: each sample was independently labeled by three expert annotators, who assigned a binary label of 1 (pass) or 0 (fail) for each criterion. Any sample receiving at least one negative vote was referred to a third reviewer, who re-derived the variant from scratch before assigning a final adjudicated label. Final labels were determined by majority vote.
- **Metrics:** For each criterion we report the pass rate, average pairwise agreement, and average pairwise Cohen's κ across all three annotator pairs.

Tables [2](https://arxiv.org/html/2605.07053#A3.T2) and [3](https://arxiv.org/html/2605.07053#A3.T3) report results for Stage 1 and Stage 2 respectively.
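The three reported metrics can be sketched as follows for one criterion and three annotators; the labels below are toy data and `annotation_metrics` is an illustrative helper, not the paper's code.

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two label lists: chance-corrected agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    p_e = sum((a.count(l) / n) * (b.count(l) / n)             # chance agreement
              for l in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

def annotation_metrics(labels_by_annotator):
    """labels_by_annotator: three equal-length lists of 0/1 labels.
    Returns pass rate (majority vote), average pairwise agreement,
    and average pairwise Cohen's kappa."""
    n = len(labels_by_annotator[0])
    passes = sum(1 for i in range(n)
                 if sum(ann[i] for ann in labels_by_annotator) >= 2)
    pairs = list(combinations(labels_by_annotator, 2))
    agree = [sum(x == y for x, y in zip(a, b)) / n for a, b in pairs]
    kappas = [cohens_kappa(a, b) for a, b in pairs]
    return {"pass_rate": passes / n,
            "avg_agreement": sum(agree) / len(agree),
            "avg_kappa": sum(kappas) / len(kappas)}
```

Because κ subtracts chance agreement, a heavily skewed label distribution (almost all "pass") drives the chance term near 1 and deflates κ even when raw agreement is high, which is the prevalence effect discussed below.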
In Stage 2, quality achieved a pass rate of 96.1% with an average κ of 0.725, indicating substantial agreement. Logical coherence achieved a pass rate of 99.4% with an average pairwise agreement of 98.1%. The lower κ of 0.279 for logical coherence is attributable to the well-documented prevalence effect: when labels are heavily skewed toward one class, Cohen's κ systematically underestimates true agreement even when raw agreement is high. We therefore report raw agreement alongside κ for a complete picture. Of the 685 evaluated samples for GSM8K-SEM, 20 (3.9%) were rejected, 17 failing on quality alone and 3 failing on both criteria.

| Metric | Quality | Logical Coherence |
| --- | --- | --- |
| Pass Rate | 96.4% | 97.9% |
| Agreement | 95.7% | 95.7% |
| Cohen's κ | 0.379 | 0.000† |

Table 2: Stage 1 annotation results on 70 randomly sampled items. †Cohen's κ of 0.000 reflects the prevalence effect: with very few negative cases, κ is artificially deflated despite high raw agreement.

| Metric | Quality | Logical Coherence |
| --- | --- | --- |
| Pass Rate | 96.1% | 99.4% |
| Avg Agreement | 98.3% | 98.1% |
| Avg Cohen's κ | 0.725 | 0.279† |

Table 3: Stage 2 annotation results on all GSM8K-SEM samples. †See prevalence effect note in Table [2](https://arxiv.org/html/2605.07053#A3.T2).

### C.2 Dataset size attributes

In this work, we present our results alongside those obtained using GSM-Symbolic and GSM-Plus. To ensure a fair comparison, we utilized the same 100 GSM8K samples for which the GSM-Symbolic dataset was initially released. For each of these 100 samples, we generated multiple augmentations to produce the GSM8K-SEM dataset, resulting in a total of 499 samples. We further applied the SEM augmentation process to GSM-Symbolic variants. The GSM-Symbolic dataset provides 50 variations for each of the 100 original GSM8K samples. From these, we randomly selected 5 GSM-Symbolic variants per sample, giving us 100 × 5 = 500 GSM-Symbolic base variants.
For each of these 500 variants, we then created multiple GSM-SEM-augmented versions, yielding several augmented samples per variant. This approach produced multiple variations for each of the 100 original GSM8K samples, in line with the GSM-Symbolic methodology. In total, our GSM-Symbolic-SEM dataset contains 2525 samples, derived from the SEM augmentation of selected GSM-Symbolic variants based on the original GSM8K test data.

For GSM-Plus, we used the same base 100 original samples to get all variants across their 8 variant types. We exclude the critical-thinking variant type because it requires evaluating whether a model asks follow-up questions when the prompt is underspecified; we omit it to remain consistent with the other benchmarks we use, which are single-turn tasks evaluated by final-answer/calculation correctness rather than by interactive follow-up behavior. Using the same 100 base GSM8K samples, we generated 1436 samples from GSM-Plus. For GSM-paraphrased, we paraphrased each of the 100 base GSM8K samples multiple times, leading to a total of 500 samples, of which manual inspection deemed 14 low quality (the problem or required calculation changed after paraphrasing); we excluded those, and our final dataset size was 685.

All results are first averaged across the variants for each of the 100 original GSM8K samples, yielding a single accuracy score per sample. These 100 per-sample accuracy scores are then averaged to produce the final, overall accuracy for the dataset. In this way, each original sample contributes equally to the final metric, regardless of the number of its variants. Table [4](https://arxiv.org/html/2605.07053#A3.T4) breaks down dataset sizes under the different *strictness filter* settings.
| Strictness Filter | GSM8K-SEM | GSM-Symbolic-SEM | GSM-Plus-SEM |
| --- | --- | --- | --- |
| none | 685 | 2525 | 1436 |
| min | 498 | 1866 | 981 |
| min-med | 386 | 1513 | 802 |
| med | 300 | 1150 | 637 |
| med-max | 199 | 791 | 456 |
| max | 111 | 391 | 279 |

Table 4: Dataset sizes by variant and strictness filter.

### C.3 Dataset analysis - cosine similarity

Figure 9: Cosine similarity distribution of GSM8K variants with respect to GSM8K for different *strictness filter* configurations. Filter settings: none (no additional filtering; all samples kept), min ([α, β] 0.30–0.70), med (0.40–0.60), and max (all such samples filtered out). The med set, for instance, shows a concentration around mid-range cosine values, as the filtering removes only certain extremes (rather than uniformly pruning across cosine similarity ranges). By contrast, the max set is filtered primarily by held-out model unanimity across a wider range of cosine similarities, which yields a more uniform cosine distribution.

We also compare the cosine similarity distribution using sentence-transformer (all-MiniLM-L6-v2) embeddings over the same sets to measure their semantic closeness to the original questions, as shown in Figure [10](https://arxiv.org/html/2605.07053#A3.F10). From a semantic perspective, paraphrased questions denote the same set of propositions as the original ones, hence the highest similarities among the four sets. The symbolic-semantic set, generated via two transformation steps (entity swap followed by semantic variation), understandably has the lowest similarities with the original. GSM-Symbolic largely overlaps with GSM8K-SEM but has a wider spread; since the number of entities in each query varies, this adds to the similarity variation within the GSM-Symbolic set.

Figure 10: Cosine similarity distribution using all-MiniLM-L6-v2 embeddings.

## Appendix D Results - Chain of Thought

Some existing literature Mirzadeh et al.
([2025](https://arxiv.org/html/2605.07053#bib.bib14)) presents results on GSM8K and its variants using 8-cot (chain-of-thought) prompts. We conducted experiments using 8-cot prompts as well and noticed high alignment with results using zero-shot prompts. Table [5](https://arxiv.org/html/2605.07053#A4.T5) shows 8-cot prompt results across GSM and some of its variants in comparison to zero-shot prompts. The difference in results was marginal, well under 1%.

| | 8-cot | zero-shot | \|diff\| |
| --- | --- | --- | --- |
| GSM8K | 89.57% | 89.86% | 0.29% |
| Paraphrased | 87.06% | 87.57% | 0.51% |
| GSM-Symbolic | 87.09% | 86.23% | 0.86% |

Table 5: Accuracy across 9 LLMs for 8-cot prompt and zero-shot prompt against GSM and its variants.

## Appendix E Results by Model

Results in this paper are shown using (1) the original GSM benchmark using the 100 samples for which GSM-Symbolic released their augmented dataset, compared against (2) paraphrased versions of the original GSM questions, (3) the GSM-Symbolic benchmark, (4) our new GSM8K-SEM benchmark applied to the original GSM 100 samples, (5) GSM-Symbolic-SEM, where our SEM augmentation pipeline is applied on top of the GSM-Symbolic dataset, and (6) GSM-Plus-SEM, where our SEM augmentation pipeline is applied on top of the GSM-Plus dataset for the same baseline 100 GSM8K samples. The results demonstrate that accuracy declines as semantic variation increases, with the largest drop observed on the GSM-SEM-augmented benchmarks. Table [1](https://arxiv.org/html/2605.07053#S5.T1) and Figure [4](https://arxiv.org/html/2605.07053#S5.F4) in the main paper contain results, and expanded results are shared in Tables [6](https://arxiv.org/html/2605.07053#A5.T6), [7](https://arxiv.org/html/2605.07053#A5.T7), [8](https://arxiv.org/html/2605.07053#A5.T8), and [9](https://arxiv.org/html/2605.07053#A5.T9).
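All reported accuracies use the per-sample averaging scheme described in Appendix C.2: variant-level correctness is first averaged within each original GSM8K sample, then those per-sample accuracies are averaged. A minimal sketch (the helper name and data shape are illustrative, not the paper's code):

```python
from collections import defaultdict

def macro_accuracy(results):
    """results: list of (original_sample_id, is_correct) pairs, one per
    variant. Each original sample contributes equally to the final score
    regardless of how many variants it has."""
    by_sample = defaultdict(list)
    for sample_id, correct in results:
        by_sample[sample_id].append(correct)
    per_sample = [sum(v) / len(v) for v in by_sample.values()]
    return sum(per_sample) / len(per_sample)
```

For example, a sample with two variants (one right, one wrong) and a sample with a single correct variant average to (0.5 + 1.0) / 2 = 0.75, not the 2/3 a flat micro-average would give.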
### E.1 Results by Model across Strictness Filter configurations

Drop in accuracy for GSM8K-SEM is shared in Table [6](https://arxiv.org/html/2605.07053#A5.T6), GSM-Symbolic-SEM in Table [7](https://arxiv.org/html/2605.07053#A5.T7), and GSM-Plus-SEM in Table [8](https://arxiv.org/html/2605.07053#A5.T8).

| GSM8K-SEM | none | min | min-med | med | med-max | max |
| --- | --- | --- | --- | --- | --- | --- |
| Grok3 | -1.09% (0.1652) | -1.78% (0.0752) | -2.59% (0.0464) | -3.70% (0.0398) | -6.36% (0.0108) | -13.96% (0.0035) |
| Llama3.1-405 | -4.75% (<1E-4) | -5.20% (0.0001) | -5.93% (<1E-4) | -6.84% (<1E-4) | -8.89% (<1E-4) | -13.49% (0.0001) |
| Llama4-Mav | -1.35% (0.0139) | -1.89% (0.0197) | -2.79% (0.0176) | -3.79% (0.0138) | -5.52% (0.0108) | -12.40% (0.0065) |
| Llama4-Scout | -1.40% (0.0346) | -2.52% (0.0088) | -3.26% (0.0081) | -3.88% (0.0124) | -5.35% (0.0114) | -12.72% (0.0032) |
| GPT4.1 | -2.81% (0.0353) | -3.20% (0.0321) | -3.25% (0.0353) | -3.85% (0.0247) | -5.47% (0.0267) | -12.19% (0.0115) |
| GPT4.1-mini | -1.70% (0.0098) | -1.76% (0.0197) | -2.11% (0.0197) | -2.68% (0.0156) | -3.45% (0.0156) | -6.43% (0.0192) |
| Gemini2.5-f | -3.57% (0.0019) | -3.94% (0.0039) | -4.52% (0.0055) | -5.07% (0.0055) | -8.02% (0.0039) | -14.41% (0.0064) |
| Gemini2.5-f-l | -1.77% (0.0517) | -2.82% (0.0240) | -3.50% (0.0357) | -4.44% (0.0246) | -6.38% (0.0100) | -17.32% (0.0013) |
| Gemini2.5-pro | -2.23% (0.0070) | -2.32% (0.0095) | -2.88% (0.0108) | -3.49% (0.0064) | -6.22% (0.0009) | -12.70% (0.0021) |
| O3 | -1.97% (0.0070) | -2.74% (0.0012) | -3.01% (0.0009) | -3.36% (0.0011) | -4.64% (0.0009) | -8.82% (0.0029) |
| GPT5(mnml) | -2.21% (0.0183) | -2.77% (0.0025) | -3.00% (0.0025) | -3.37% (0.0025) | -4.81% (0.0025) | -8.00% (0.0090) |
| GPT5.1(mnml) | -2.93% (0.0009) | -3.40% (0.0011) | -3.70% (0.0024) | -4.70% (0.0018) | -6.77% (0.0015) | -15.36% (0.0002) |
| GPT5(med) | -1.42% (0.2382) | -1.94% (0.1010) | -2.21% (0.0566) | -2.59% (0.0458) | -3.85% (0.0293) | -6.65% (0.0176) |
| GPT5.1(med) | -2.82% (0.0069) | -3.22% (0.0079) | -3.62% (0.0078) | -4.47% (0.0024) | -6.34% (0.0024) | -13.54% (0.0006) |
| Avg | -2.29% | -2.82% | -3.31% | -4.02% | -5.86% | -12.00% |
| Stat sig (models w/ p<0.05) | 11/14 | 12/14 | 13/14 | 14/14 | 14/14 | 14/14 |

Table 6: Accuracy Δ (p-value from Wilcoxon test) across strictness filter settings in GSM8K-SEM.

| GSM-Symbolic-SEM | none | min | min-med | med | med-max | max |
| --- | --- | --- | --- | --- | --- | --- |
| Grok3 | -2.05% (0.0045) | -3.74% (0.0042) | -4.49% (0.0046) | -7.11% (0.0019) | -10.81% (0.0007) | -26.09% (<1E-4) |
| Llama3.1-405 | -6.19% (<1E-4) | -7.28% (<1E-4) | -8.20% (<1E-4) | -9.73% (<1E-4) | -11.39% (<1E-4) | -25.05% (<1E-4) |
| Llama4-Mav | -4.27% (<1E-4) | -5.15% (<1E-4) | -5.90% (<1E-4) | -7.34% (<1E-4) | -10.46% (<1E-4) | -23.48% (<1E-4) |
| Llama4-Scout | -4.32% (<1E-4) | -5.50% (<1E-4) | -6.20% (<1E-4) | -7.97% (<1E-4) | -11.51% (<1E-4) | -28.68% (<1E-4) |
| GPT4.1 | -3.29% (0.0003) | -4.48% (0.0003) | -5.50% (0.0002) | -7.98% (0.0001) | -10.89% (<1E-4) | -27.67% (<1E-4) |
| GPT4.1-mini | -3.61% (<1E-4) | -4.63% (<1E-4) | -5.21% (<1E-4) | -6.83% (<1E-4) | -9.28% (<1E-4) | -23.46% (<1E-4) |
| Gemini2.5-f | -2.64% (0.0002) | -3.73% (0.0003) | -4.61% (0.0002) | -6.97% (0.0002) | -11.09% (<1E-4) | -25.77% (<1E-4) |
| Gemini2.5-f-l | -2.67% (0.0001) | -4.21% (0.0001) | -5.20% (0.0001) | -7.27% (0.0001) | -10.07% (0.0001) | -23.66% (<1E-4) |
| Gemini2.5-pro | -3.31% (0.0001) | -4.14% (0.0001) | -4.74% (0.0001) | -6.66% (0.0001) | -9.42% (0.0001) | -25.91% (<1E-4) |
| O3 | -4.34% (<1E-4) | -5.52% (<1E-4) | -6.08% (<1E-4) | -8.71% (<1E-4) | -11.67% (<1E-4) | -24.26% (<1E-4) |
| GPT5(mnml) | -4.40% (<1E-4) | -5.49% (<1E-4) | -6.18% (0.0001) | -8.68% (<1E-4) | -11.63% (<1E-4) | -22.36% (<1E-4) |
| GPT5.1(mnml) | -5.73% (<1E-4) | -7.62% (<1E-4) | -8.32% (<1E-4) | -11.26% (<1E-4) | -15.64% (<1E-4) | -34.45% (<1E-4) |
| GPT5(med) | -3.01% (0.0002) | -4.37% (0.0002) | -4.94% (0.0002) | -7.60% (0.0001) | -11.09% (<1E-4) | -24.65% (<1E-4) |
| GPT5.1(med) | -6.39% (<1E-4) | -8.27% (<1E-4) | -8.87% (<1E-4) | -11.80% (<1E-4) | -14.79% (<1E-4) | -31.92% (<1E-4) |
| Avg. | -4.02% | -5.30% | -6.03% | -8.28% | -11.41% | -26.24% |
| Stat sig (models w/ p<0.05) | 14/14 | 14/14 | 14/14 | 14/14 | 14/14 | 14/14 |

Table 7: Accuracy Δ (p-value from Wilcoxon test) across strictness filter settings in GSM-Symbolic-SEM.

| GSM-Plus-SEM | none | min | min-med | med | med-max | max |
| --- | --- | --- | --- | --- | --- | --- |
| Grok3 | -3.72% (<1E-4) | -7.34% (<1E-4) | -9.83% (<1E-4) | -13.00% (<1E-4) | -17.49% (<1E-4) | -29.19% (<1E-4) |
| Llama3.1-405 | -6.66% (<1E-4) | -9.75% (<1E-4) | -11.43% (<1E-4) | -13.06% (<1E-4) | -17.10% (<1E-4) | -27.09% (<1E-4) |
| Llama4-Mav | -5.74% (<1E-4) | -9.16% (<1E-4) | -11.98% (<1E-4) | -16.34% (<1E-4) | -19.97% (<1E-4) | -32.21% (<1E-4) |
| Llama4-Scout | -4.96% (<1E-4) | -8.32% (<1E-4) | -11.36% (<1E-4) | -15.33% (<1E-4) | -19.17% (<1E-4) | -30.90% (<1E-4) |
| GPT4.1 | -4.97% (<1E-4) | -8.38% (<1E-4) | -10.35% (<1E-4) | -11.86% (<1E-4) | -16.10% (<1E-4) | -27.13% (<1E-4) |
| GPT4.1-mini | -3.42% (<1E-4) | -6.05% (<1E-4) | -8.58% (<1E-4) | -12.19% (<1E-4) | -14.69% (<1E-4) | -24.87% (<1E-4) |
| Gemini2.5-f | -5.91% (<1E-4) | -9.32% (<1E-4) | -11.50% (<1E-4) | -13.38% (<1E-4) | -16.91% (<1E-4) | -26.13% (<1E-4) |
| Gemini2.5-f-l | -4.70% (<1E-4) | -8.71% (<1E-4) | -11.66% (<1E-4) | -14.05% (<1E-4) | -17.09% (<1E-4) | -27.45% (<1E-4) |
| Gemini2.5-pro | -5.12% (<1E-4) | -8.14% (<1E-4) | -9.86% (<1E-4) | -11.53% (<1E-4) | -15.19% (<1E-4) | -25.47% (<1E-4) |
| O3 | -3.41% (0.0001) | -6.00% (<1E-4) | -8.05% (<1E-4) | -9.77% (<1E-4) | -12.76% (<1E-4) | -21.47% (<1E-4) |
| GPT5(mnml) | -5.70% (<1E-4) | -8.27% (<1E-4) | -10.14% (<1E-4) | -12.86% (<1E-4) | -15.43% (<1E-4) | -25.64% (<1E-4) |
| GPT5.1(mnml) | -6.09% (<1E-4) | -10.27% (<1E-4) | -13.73% (<1E-4) | -18.52% (<1E-4) | -21.82% (<1E-4) | -39.04% (<1E-4) |
| GPT5(med) | -3.45% (0.0004) | -5.85% (0.0005) | -7.24% (0.0004) | -9.09% (0.0002) | -11.52% (0.0001) | -19.35% (<1E-4) |
| GPT5.1(med) | -6.79% (<1E-4) | -11.11% (<1E-4) | -14.03% (<1E-4) | -18.40% (<1E-4) | -20.33% (<1E-4) | -31.56% (<1E-4) |
| Avg. | -5.04% | -8.33% | -10.70% | -13.53% | -16.83% | -27.68% |
| Stat sig (models w/ p<0.05) | 14/14 | 14/14 | 14/14 | 14/14 | 14/14 | 14/14 |

Table 8: Accuracy Δ (p-value from Wilcoxon test) across strictness filter settings in GSM-Plus-SEM.

### E.2 Statistical Analysis - detailed

Table [9](https://arxiv.org/html/2605.07053#A5.T9) contains p-values from a Wilcoxon statistical test per model and per variant set for the paraphrased version of GSM8K, GSM-Symbolic, and GSM-Plus. The same statistic across strictness filter configurations in the SEM variants is shared for GSM8K-SEM in Table [6](https://arxiv.org/html/2605.07053#A5.T6), GSM-Symbolic-SEM in Table [7](https://arxiv.org/html/2605.07053#A5.T7), and GSM-Plus-SEM in Table [8](https://arxiv.org/html/2605.07053#A5.T8). The test measures whether the drop in performance relative to GSM8K is statistically significant.

| | Paraphrased | GSM-Symbolic | GSM-Plus |
| --- | --- | --- | --- |
| Grok3 | 0.954 | 0.429 | 0.001 |
| Llama3.1-405 | 0.587 | 0.005 | 0.000 |
| Llama4-Mav | 0.063 | 0.002 | 0.000 |
| Llama4-Scout | 0.153 | 0.042 | 0.000 |
| GPT4.1 | 0.672 | 0.035 | 0.000 |
| GPT4.1-mini | 0.672 | 0.035 | 0.000 |
| Gemini2.5-f | 0.142 | 0.005 | 0.000 |
| Gemini2.5-f-l | 0.225 | 0.288 | 0.000 |
| Gemini2.5-pro | 0.664 | 0.300 | 0.000 |
| O3 | 0.644 | 0.050 | 0.000 |
| GPT5(mnml) | 0.417 | 0.166 | 0.000 |
| GPT5.1(mnml) | 0.142 | 0.098 | 0.000 |
| GPT5(med) | 0.854 | 0.264 | 0.009 |
| GPT5.1(med) | 0.077 | 0.005 | 0.000 |
| #models with p-value <0.05 (out of 14) | 0 | 7 | 14 |

Table 9: P-values by model and variant set (Paraphrased, GSM-Symbolic, GSM-Plus) from the Wilcoxon test comparing the drop in variant performance relative to GSM8K.
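For reference, the one-sided Wilcoxon signed-rank test behind these p-values can be sketched as below, using the normal approximation without tie or continuity corrections; this is a simplification for illustration, and a library routine such as `scipy.stats.wilcoxon` would normally be used instead.

```python
import math

def wilcoxon_signed_rank(x, y):
    """One-sided Wilcoxon signed-rank test for paired samples, testing
    whether x (variant accuracy) is systematically below y (GSM8K
    accuracy). Returns (W_plus, approximate one-sided p-value)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(diffs)
    # Rank |d| ascending, assigning average ranks to ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1                    # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    # One-sided p: probability of a W_plus this small under H0.
    return w_plus, 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

When every paired difference is negative (the variant is always worse), W_plus is 0 and the p-value is small, matching the pattern seen throughout Tables 6 to 8.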
## Appendix F Data analysis examples

### F.1 Problems with existing baselines

We found that many baseline questions were not logically coherent or complete in the information provided. For example, in GSM-Symbolic: "Andre caught 6 cods last Sunday, the first cod he caught weighs 54 kgs, the second cod he caught weighs 46 kgs, and the last cod he caught weighs 23 kgs. If a kg of cod costs €2.0, how much will he earn after selling all the cods to the market?" There is no information about the 3rd, 4th, or 5th cod. Similarly: "Faisal caught 4 cods last Friday, the first cod he caught weighs 47 kgs, the second cod he caught weighs 55 kgs, and the last cod he caught weighs 24 kgs. If a kg of cod costs ¥1.0, how much will he earn after selling all the cods to the market?" These questions lacked completeness.

### F.2 Unexplained answer arrival

For the sample baseline Q and SEM variant (an incorrect variant):

> Baseline: "I have 15 liters of peach drink that are four-sixths water and I wish to add it to 20 liters of grape drink that is two-fifths water. But as I pour it, I spill 6 liters of the peach drink. How much water is in the remaining 29 liters?"

> Variant: "A grape drink contains 40% water, and a peach drink contains 60% water. If you mix 20 liters of the grape drink with 15 liters of the peach drink and then accidentally spill 6 liters of the peach drink, how many liters of water are there in total in the remaining mixture?"

The Llama-4-maverick response is as follows. A grape drink contains 40% water, and a peach drink contains 60% water. If you mix 20 liters of the grape drink with 15 liters of the peach drink and then accidentally spill 6 liters of the peach drink, how many liters of water are there in total in the remaining mixture? There are 20 x 2/5 = 8 liters of water from the 20 liters grape drink. After 6 liters of peach drink was spilled, there were 15 - 6 = 9 liters of peach drink left. Out of the 9 liters, 9 x 2/3 = 6 liters are water.
Thus, there are a total of 8 \+ 6 = 14 liters of water out of the 29 liters\. \#\#\#\# 14 \#\# Step 1: Calculate the amount of water in the grape drink\. First, we need to find out how much water is in the 20 liters of grape drink\. Since it contains 40% water, we can calculate this by multiplying 20 liters by 40% \(or 0\.4\)\. \#\# Step 2: Perform the calculation for the amount of water in the grape drink\. Water in grape drink = 20 \* 0\.4 = 8 liters\. \#\# Step 3: Calculate the initial amount of water in the peach drink before spilling\. The peach drink contains 60% water\. We have 15 liters of it initially, so we need to find out how much water is in these 15 liters\. \#\# Step 4: Perform the calculation for the initial amount of water in the peach drink\. Water in peach drink = 15 \* 0\.6 = 9 liters\. \#\# Step \#\#5: Determine how much peach drink is left after spilling some\. 6 liters of the peach drink are spilled, so we subtract this from the initial amount to find out how much is left\. \#\# Step \#\#6: Perform the calculation for remaining peach drink after spilling\. Remaining peach drink = Initial peach drink \- Spilled peach drink = \(15 \- 6\) liters = \(15 \- \(15\*6/15\)\) is not needed as we directly know that we spill a portion directly and not a percentage, so it’s simply subtracting from total volume and then finding its percentage contribution later on\. \#\# Step \#\#7: Correctly calculate remaining volume and composition after spill\. After mixing and before spilling, total mixture volume is \(20 \+ 15\) = 35 liters\. After spilling, it becomes \(35 \- part that was spilled\)\. The part that was spilled was only from peach which means we’re removing some portion directly from total mixture but specifically it’s just removing part of one component first then considering its effect on whole mixture indirectly\. \#\# Step \#\#8: Simplify step \#\#7 by directly calculating remaining volumes and their compositions after spill correctly\. 
The actual step should be calculating how much liquid remains after mixing and then after spilling some. So first mix then spill: - Total mixed before spill = Grape + Peach = (20 + 15) litres - Water content before spill: - From Grape Drink - From Peach Drink Then adjust according to what was actually asked which involves understanding what happens when you "spill" some liquid. ## Step ##9: Recalculate with correct understanding: First mix both drinks together: Total Volume Before Spill = Grape Drink + Peach Drink = (20 + 15) = 35 litres Then consider what happens when you "spill" some liquid: Spill occurs only from "peach" but since they are mixed we'll consider overall effect on final concentration or quantity directly rather than individual components separately at this stage because our main task revolves around final quantity post-spill. ## Step ##10: Simplify steps further into direct calculations needed for final answer: 1. Water content from Grape Drink 2. Water content from Peach Drink initially 3. Adjusting quantities post-spill Let's simplify into actual required steps now. The best answer is calculated as follows: 1. **Water** **in** **Grape** **Drink**: 8 litres (40% * 20) 2. **Water** **in** **Peach** **Drink**: 9 litres (60% * 15) Total Water Before Spill = 8 + 9 = 17 Total Mixture Before Spill = 35 After Spilling 6 L, Total Mixture Becomes = 29 L Since we're removing a portion that's part-peach & part-grape due to mixing; let's simplify our thought process focusing on what's being asked i.e., 'total' 'water' 'content'. To do that accurately without overcomplicating things: 1st Find out proportion or quantity being removed & apply accordingly. The final answer is $\boxed{14}$.
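Working the variant's numbers directly, under the reading that the 6 liters spilled are pure peach drink as the variant states, gives 13.4 liters, not the 14 the model reports: the model reuses the baseline's 2/3 water fraction for the peach drink instead of the variant's 60%.

```python
# Straightforward arithmetic for the SEM variant (reading: the 6 L
# spilled is pure peach drink, per the variant's wording).
grape_water = 20 * 0.40           # 8.0 L of water in the grape drink
peach_left = 15 - 6               # 9 L of peach drink after the spill
peach_water = peach_left * 0.60   # 5.4 L, using the variant's 60% (not 2/3)
total_water = grape_water + peach_water   # 13.4 L in the remaining 29 L
```

The model's 9 × 2/3 = 6 step only makes sense against the baseline's four-sixths fraction, which is exactly the memorization pattern this example illustrates.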
## Appendix G Out-of-domain experiment details and results

Table [10](https://arxiv.org/html/2605.07053#A7.T10) shows results for the out-of-domain non-GSM datasets across the Llama-4-Scout and Gemini-2.5-Flash models.

##### Logicbench:

For Logicbench, we used the first-order logic subset, with 160 data samples in total. We added the following descriptions in the prompt placeholders (prompt template shared in [B](https://arxiv.org/html/2605.07053#A2)) '<special-instruction>' in prompt #1:

Question format should be: "context": <paragraph providing context for answering the question, pointing to reasoning for answering the question correctly compared to other incorrect choices>

prompt #2:

Question format should be: "context": <paragraph providing context for answering the question, pointing to reasoning for answering the question correctly compared to other incorrect choices>

The following demonstrates an example variant created for LogicBench.

> Original question: "context": If someone chooses to use public transportation, depending on their destination, they can ensure timely arrivals. taking the bus guarantees punctuality at the workplace, while opting for the train allows for attending college lectures. it should be noted that at least one of the following statements holds true: either (1) the individual named emily takes the bus to reach work or (2) she utilizes the train to commute to college., "question": Taking into account the context provided, what conclusion would be most appropriate?, "choices": {'choice_1': 'They will choose to walk to their destination instead.', 'choice_2': 'He will miss his train and be late for work.', 'choice_3': 'she will get to the office on time.
or she will attend her classes.', 'choice_4': 'She will use a combination of both bus and train for her daily commute.'}

> SEM variant question: "context": If someone chooses to use renewable energy sources, depending on their needs, they can ensure efficiency. Using solar panels guarantees a reduction in electricity bills at home, while opting for wind turbines allows for sustainable energy production on a farm. It should be noted that at least one of the following statements holds true: either (1) the individual named Alex uses solar panels to power his house or (2) he utilizes wind turbines to generate energy for his farm., "question": Considering the context provided, what conclusion would be most appropriate?, "choices": {'choice_1': 'He will choose to rely entirely on fossil fuels instead.', 'choice_2': 'He will face an unexpected power outage at home.', 'choice_3': 'He will reduce his electricity bills. or he will produce sustainable energy.', 'choice_4': 'He will use a combination of both solar panels and wind turbines for his energy needs.'}

> Right answer for both: 'choice_3'

In terms of evaluation prompts, we follow the prompt used in the source paper Parmar et al. ([2024](https://arxiv.org/html/2605.07053#bib.bib16)).

##### BigBench-Hard:

We used the temporal-sequence subset from this dataset; the total number of data samples was 245. We added the following descriptions in the prompt placeholders (prompt template shared in [B](https://arxiv.org/html/2605.07053#A2)) '<special-instruction>' in prompt #1 and prompt #2 respectively:

Question format should be: "question": <question text comprising 5-7 sentences containing evidence needed to be all considered to arrive at the answer>

Question format should be: "question": <question text of similar length and complexity as the original question> "choices": <four choices>

The following demonstrates an example variant created for BigBench-Hard.
> Original question: "question": Today, Emily went to the soccer field. Between what times could they have gone? We know that: Emily woke up at 10am. Linda saw Emily waiting at the train station from 10am to 12pm. Ashley saw Emily fixing their computer at the electronic store from 12pm to 1pm. John saw Emily walking towards the Statue of Liberty from 1pm to 2pm. The soccer field was closed after 8pm. Between what times could Emily have gone to the soccer field?, "choices": (A) 12pm to 1pm (B) 10am to 12pm (C) 2pm to 8pm (D) 1pm to 2pm

> SEM variant question: "question": A restaurant is open from 10am to 10pm and serves breakfast from 10am to 12pm, lunch from 12pm to 2pm, and dinner from 5pm to 10pm. The restaurant also has a happy hour from 2pm to 5pm. During the dinner hours, the restaurant offers a discount on all menu items. Considering the time intervals when the restaurant is not serving breakfast or lunch, which time frame does the dinner and happy hour fall into. The correct interval should include both happy hour and dinner service without overlapping with breakfast or lunch hours. Given these details about the operating hours of the restaurant, what time frame encompasses both happy hour and dinner service without including any part of breakfast or lunch service., "choices": (A) 12pm to 1pm (B) 10am to 12pm (C) 2pm to 8pm (D) 1pm to 2pm

> Right answer for both: '(C)'

In terms of evaluation prompts, we follow the prompt used in the source paper Suzgun et al. ([2022](https://arxiv.org/html/2605.07053#bib.bib28)).

##### NLR-BIRD:

We filtered samples with table complexity >2 and <50, sampled 320 samples, and passed them through the SEM pipeline.
We added the following descriptions in the prompt placeholders (prompt template shared in [B](https://arxiv.org/html/2605.07053#A2)) '<special-instruction>' in prompt #1:

Question format should be: <question text for the given answer and, on the next line, tabular data in the format [[<column name 1>, <column name 2>, ...], [<row 1 cell 1 value>, <row 1 cell 2 value>, ...], ...] containing the evidence for the answer>

We added the following descriptions in the prompt placeholders '<special-instruction>' in prompt #2:

Question format should be: <question text of similar complexity as the original question and, on the next line, tabular data containing the same evidence for answering the question as the original question>

The following demonstrates an example variant created for NLR-BIRD. NLR-BIRD contains a natural language (NL) question, tabular results for the question, and a response to that question based on the tabular results in natural language, i.e., a natural language response (NLR).

> Original question: "What is the category of event which was taken place in 2020-03-24T12:00:00?"
> Tabular answer: [['category'], ['Food'], ['Speaker Gifts'], ['Advertisement'], ['Parking']]
> SEM variant question: "What is the type of product that was launched on 2020-03-24T12:00:00?"
> Tabular answer: [['type'], ['Food'], ['Speaker Gifts'], ['Advertisement'], ['Parking']]
> NLR for both: 'Food, Speaker Gifts, Advertisement, Parking'

In terms of evaluation prompts and methods, we follow the details and combo-eval method used in the source paper Singh et al. ([2025](https://arxiv.org/html/2605.07053#bib.bib25)).
| Dataset | Model | Orig Acc | SEM Acc |
| --- | --- | --- | --- |
| LogicBench | Ll4-Sct | 92.81 | 92.26 |
| LogicBench | Gm2.5fl | 85.00 | 87.62 |
| BigBench-H | Ll4-Sct | 100.0 | 95.08 |
| BigBench-H | Gm2.5fl | 99.19 | 94.23 |
| NLR-BIRD | Ll4-Sct | 95.86 | 98.91 |
| NLR-BIRD | Gm2.5fl | 95.81 | 98.52 |

Table 10: Accuracy comparison between original and variant datasets.

## Appendix H GSM8K Benchmark Data Sample

Two samples are shared for reference and context about the GSM8K data's structure.

Question:

> Benny saw a 10-foot shark with 2 6-inch remoras attached to it. What percentage of the shark's body length is the combined length of the remoras?

Answer:

> First, find the combined length of the remoras in inches: 6 inches/remora × 2 remoras = <<6*2=12>>12 inches. Then divide that number by 12 to convert it to feet: 12 inches / 12 inches/foot = <<1=1>>1 foot. Then divide the combined remora length in feet by the shark's length and multiply by 100% to express the answer as a percentage: 1 foot / 10 feet * 100% = 10%. #### 10

Question:

> Toula went to the bakery and bought various types of pastries. She bought 3 dozen donuts which cost $68 per dozen, 2 dozen mini cupcakes which cost $80 per dozen, and 6 dozen mini cheesecakes for $55 per dozen. How much was the total cost?

Answer:

> The total charge for the doughnuts was 3 x $68 = $<<3*68=204>>204. The total charge for the mini cupcakes was 2 x $80 = $<<2*80=160>>160. The total charge for the mini cheesecakes was 6 x $55 = $<<6*55=330>>330. Therefore the total amount Toula paid for the pastries was $204 + $160 + $330 = $<<204+160+330=694>>694. #### 694
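The `<<...>>` spans in these answers are GSM8K's calculator annotations, and `####` marks the final answer. A small parser for this structure, as commonly used when scoring GSM8K-style outputs (an illustrative sketch, not the paper's evaluation code):

```python
import re

def final_answer(answer_text: str) -> str:
    """Extract the value after the '####' final-answer marker."""
    return answer_text.split("####")[-1].strip()

def strip_calculator_annotations(answer_text: str) -> str:
    """Remove <<...>> calculator spans, keeping the visible result."""
    return re.sub(r"<<[^>]*>>", "", answer_text)

# Toy answer string in the GSM8K format shown above.
ans = "The total was 3 x $68 = $<<3*68=204>>204.\n#### 204"
```

For the toy string, `final_answer` yields the final-answer token and `strip_calculator_annotations` leaves only the human-readable solution text.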