SPS: Steering Probability Squeezing for Better Exploration in Reinforcement Learning for Large Language Models

arXiv cs.CL Papers

Summary

Researchers propose SPS (Steering Probability Squeezing), a training paradigm combining reinforcement learning with inverse reinforcement learning to address probability squeezing in LLM reasoning training, where probability mass concentrates too narrowly on high-reward trajectories, limiting exploration and multi-sample performance (Pass@k). Experiments on five reasoning benchmarks demonstrate improved exploration and Pass@k metrics.


# SPS: Steering Probability Squeezing for Better Exploration in Reinforcement Learning for Large Language Models
Source: [https://arxiv.org/html/2604.16995](https://arxiv.org/html/2604.16995)
Yifu Huo1, Chenglong Wang1, Ziming Zhu1, Shunjie Xing1, Peinan Feng1, Tongran Liu2, Qiaozhi He1, Tianhua Zhou3, Xiaojia Chang3, Jingbo Zhu1, Zhengtao Yu4, and Tong Xiao1. 1Northeastern University, Shenyang, China; 2CAS Key Laboratory of Behavioral Science, Beijing, China; 3Independent Researcher, Beijing, China; 4Kunming University of Science and Technology, Kunming, China. ifnoct@gmail.com, xiaotong@mail.neu.edu.cn

###### Abstract

Reinforcement learning (RL) has emerged as a promising paradigm for training reasoning-oriented models by leveraging rule-based reward signals. However, RL training typically tends to improve single-sample success rates (i.e., Pass@1) while offering limited exploration of diverse reasoning trajectories, which is crucial for multi-sample performance (i.e., Pass@k). Our preliminary analysis reveals that this limitation stems from a fundamental squeezing effect, whereby probability mass is excessively concentrated on a narrow subset of high-reward trajectories, restricting genuine exploration and constraining attainable performance under RL training. To address this issue, in this work, we propose Steering Probability Squeezing (SPS), a training paradigm that interleaves conventional RL with inverse reinforcement learning (IRL). SPS treats on-policy rollouts as demonstrations and employs IRL to explicitly reshape the induced trajectory distribution, thereby enhancing exploration without introducing external supervision. Experiments on five commonly used reasoning benchmarks demonstrate that SPS can enable better exploration and improve Pass@k. Beyond algorithmic contributions, we provide an analysis of RL learning dynamics and identify an empirical upper bound on Pass@k, shedding light on intrinsic exploration limits in RL-based reasoning models. Our findings suggest that alternating between RL and IRL offers an effective pathway toward extending the exploration capacity of reasoning-oriented large language models.


## 1 Introduction

In recent years, large language models (LLMs) have demonstrated impressive performance across a broad spectrum of foundational natural language processing (NLP) tasks, including text summarization, dialogue systems, and machine translation (Stiennon et al., [2020](https://arxiv.org/html/2604.16995#bib.bib26); Wang et al., [2024a](https://arxiv.org/html/2604.16995#bib.bib35); Luo et al., [2025](https://arxiv.org/html/2604.16995#bib.bib19)). Building on these advances, the research community has increasingly shifted its focus toward more challenging research frontiers, especially reasoning and code generation (Lightman et al., [2024](https://arxiv.org/html/2604.16995#bib.bib16); Li et al., [2025](https://arxiv.org/html/2604.16995#bib.bib15)), and has even begun exploring the use of LLMs in the discovery of novel scientific theorems (Georgiev et al., [2025](https://arxiv.org/html/2604.16995#bib.bib8)). As a result, exploration has emerged as a key capability of LLMs for future progress in these domains.

Motivated by the growing importance of exploration in reasoning-centric applications, contemporary LLM alignment methods have begun to explicitly incorporate exploration into the training pipeline. A simple and widely adopted strategy is to draw multiple samples per prompt to obtain a diverse set of candidate responses, where the model's exploration capability is essential for ensuring output diversity (Liu et al., [2024](https://arxiv.org/html/2604.16995#bib.bib18); Wang et al., [2024b](https://arxiv.org/html/2604.16995#bib.bib36)). However, such multi-sample strategies merely increase surface-level diversity by repeatedly sampling from an unchanged policy, without fundamentally enhancing the entropy of the underlying distribution, resulting in highly inefficient exploration (Cui et al., [2025](https://arxiv.org/html/2604.16995#bib.bib4)).

This limitation has been further substantiated by recent empirical studies. For example, Yue et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib43)) demonstrate that although RL training substantially improves Pass@1 under large-scale sampling, the corresponding gains in Pass@k grow much more slowly, reflecting insufficient exploration of alternative reasoning trajectories. In essence, RL primarily improves sampling efficiency to boost single-sample success rates, rather than uncovering diverse trajectories that would meaningfully enhance multi-sample performance. To mitigate this sharpening effect and promote exploration, recent work has extended vanilla RL methods primarily along a common direction: explicitly counteracting entropy collapse to encourage broader exploration during RL training (Liu et al., [2025](https://arxiv.org/html/2604.16995#bib.bib17); Cui et al., [2025](https://arxiv.org/html/2604.16995#bib.bib4)).

In this work, we advance this line of research by investigating a fundamental squeezing effect in RL training (Ren and Sutherland, [2024](https://arxiv.org/html/2604.16995#bib.bib23)). This effect characterizes a systematic bias in probability mass redistribution. Specifically, negative gradients applied to low-probability responses fail to reallocate probability mass toward positively reinforced alternatives; instead, the removed mass is disproportionately absorbed by the greedy (i.e., already dominant) response. As a consequence, the output distribution becomes increasingly concentrated, exacerbating distributional sharpening rather than promoting exploration. Our preliminary analysis reveals that this squeezing effect constitutes an intrinsic limitation of exploration in RL-based training. Moreover, we provide a theoretical justification supporting this insight, formalizing how probability mass redistribution under standard RL objectives leads to progressive concentration (see Appendix [A](https://arxiv.org/html/2604.16995#A1)).

Motivated by this analysis, we aim to explicitly enhance exploration by mitigating the squeezing effect. To this end, we propose Steering Probability Squeezing (SPS), an RL training approach that extends conventional RL by interleaving inverse reinforcement learning (IRL) stages. Our basic idea is that, following standard RL training, we employ IRL to explicitly reshape the induced trajectory distribution, reallocating probability mass away from overly dominant responses toward under-explored but potentially valuable alternatives. Specifically, compared to vanilla RL, SPS periodically incorporates forward IRL updates (Sun and van der Schaar, [2024](https://arxiv.org/html/2604.16995#bib.bib29)), using only on-policy rollouts as demonstrations to avoid introducing external supervision or prior knowledge. Additionally, to further enhance exploration, we design an iterative SPS training strategy that repeatedly alternates between RL and IRL updates, enabling progressive redistribution of probability mass and preventing premature concentration of the policy.

Our core contributions are threefold:

- We conduct a preliminary analysis of the training dynamics in RL and identify an empirical upper bound on Pass@k. Our analysis reveals the presence of a squeezing effect in RL, which constrains exploration.
- Building on this analysis, we propose the SPS approach, which employs IRL to explicitly reshape the induced trajectory distribution, thereby facilitating enhanced exploration. We further introduce an iterative SPS training strategy to sustain exploration over training.
- We evaluate SPS on five Olympiad-level mathematical benchmarks. The experimental results demonstrate consistent and substantial improvements in Pass@k, indicating that SPS effectively broadens exploration and facilitates the discovery of diverse reasoning trajectories. Notably, on the Qwen2.5-Math-1.5B model, SPS achieves a Pass@128 score of 63.33 on the BrUMO benchmark, an improvement of +10.00 points over vanilla GRPO (Shao et al., [2024](https://arxiv.org/html/2604.16995#bib.bib25)).

## 2 Preliminaries

### 2.1 Task Formulation

#### Enhancing LLM Reasoning via Constrained Data.

Given a finite set of reasoning questions $x$ with corresponding ground-truth labels $l$, the objective of enhancing LLM reasoning is to learn a policy that produces correct reasoning trajectories through on-policy rollouts. During training, the policy iteratively samples multiple trajectories and receives outcome-level feedback from a validator. The validator can be written as

$$R(y, l) = \mathbb{I}[v(y) = l] \qquad (1)$$

where $v(\cdot)$ denotes an extraction function that extracts the answer from response $y$. In mathematical reasoning, the validator is commonly formulated as an indicator function, assigning a value of 1 when the extracted answer exactly matches the ground truth $l$, and 0 otherwise.
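To make Equation (1) concrete, here is a minimal Python sketch of such a validator. The `extract_answer` helper and its boxed-answer regex are illustrative assumptions, not the Math-Verify implementation the paper actually uses.

```python
import re

def extract_answer(response: str) -> str:
    """Hypothetical v(.): pull the final \\boxed{...} answer from a response."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", response)
    return matches[-1].strip() if matches else ""

def reward(response: str, label: str) -> float:
    """Equation (1): indicator reward, 1.0 iff the extracted answer matches l."""
    return 1.0 if extract_answer(response) == label else 0.0
```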

#### Exploration on Reasoning Tasks.

In LLM training, exploration refers to the ability of a learning process to expand the set of correct reasoning trajectories rather than simply reweighting existing patterns. Formally, given a base policy $\pi_{\mathrm{base}}(\cdot)$ and a training policy $\pi_{\theta}(\cdot)$, exploration occurs if $\pi_{\theta}(\cdot)$ raises the probability of correct reasoning trajectories that lie outside the high-likelihood region, thereby enlarging the boundary of the set of solvable problems.

#### Measurement of Exploration.

Under our definition, effective exploration corresponds to expanding the set of problems that the model can successfully solve. To operationalize this notion, we adopt Pass@k as an estimate of exploration. Pass@k is commonly defined as the expected maximum reward obtained from $k$ independently sampled responses for a given problem (Chen et al., [2025](https://arxiv.org/html/2604.16995#bib.bib3)). Formally, it is computed as

$$\kappa_{k} = \mathbb{E}_{(x,l) \sim D,\ \{\hat{y}_{i}\}_{i=1}^{k} \sim \pi_{\theta}(\cdot \mid x)}\left[\max\left(R(\hat{y}_{1}, l), R(\hat{y}_{2}, l), \cdots, R(\hat{y}_{k}, l)\right)\right] \qquad (2)$$

where $k$ is typically set to a relatively large value to reflect the model's exploration capability. Following prior studies (Ji et al., [2025](https://arxiv.org/html/2604.16995#bib.bib12)), we set $k = 128$ throughout our experiments.
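In practice, Pass@k is typically estimated from $n \ge k$ samples per problem using the standard combinatorial estimator rather than by direct enumeration of $k$-subsets. A minimal sketch (our own, not the paper's evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate from n sampled responses, c of them correct.

    Probability that at least one of k responses drawn without replacement
    from the n samples is correct: 1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:  # every size-k subset must contain a correct response
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: estimate Pass@8 from 128 samples of which 3 were correct.
print(pass_at_k(n=128, c=3, k=8))  # ~0.177
```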

### 2.2 Group Relative Policy Optimization

GRPO has emerged as one of the most widely adopted RL algorithms for training LLMs. Compared to standard PPO (Schulman et al., [2017](https://arxiv.org/html/2604.16995#bib.bib24)), GRPO estimates advantages using a group of $G$ rollouts rather than relying on a separate value network. Despite this multi-sample formulation, the reward signal in the RLVR setting is binary (i.e., correct or incorrect), which allows the learning objective to be reformulated in a contrastive learning framework. Building on this observation, [Wu et al.](https://arxiv.org/html/2604.16995#bib.bib38) further decompose the original objective into the following contrastive form:

$$\mathcal{J}_{\text{GRPO}}(\theta) = \sqrt{\text{Var}(x)}\left(\mathbb{E}_{y^{+} \sim \pi_{\theta}^{+}(\cdot \mid x)}\frac{\pi_{\theta}(y^{+} \mid x)}{|y^{+}|} - \mathbb{E}_{y^{-} \sim \pi_{\theta}^{-}(\cdot \mid x)}\frac{\pi_{\theta}(y^{-} \mid x)}{|y^{-}|}\right) \qquad (3)$$

where $\mathrm{Var}(\cdot)$ denotes the variance of the Bernoulli reward scores estimated from grouped samples, $y^{+}$ and $y^{-}$ denote positively and negatively rewarded samples, and $\pi_{\theta}^{+}(\cdot)$ and $\pi_{\theta}^{-}(\cdot)$ denote the positive and negative policies, respectively.
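A didactic sketch of Equation (3) for a single prompt's group of rollouts follows; the function name and tensor layout are our own, and a practical implementation would stay in log space (exponentiating sequence log-probabilities underflows for long responses).

```python
import torch

def grpo_contrastive(logps: torch.Tensor, lengths: torch.Tensor,
                     rewards: torch.Tensor) -> torch.Tensor:
    """Equation (3) for one prompt: length-normalized likelihoods of the
    positively rewarded rollouts minus those of the negatively rewarded
    ones, scaled by the std (sqrt of the variance) of the Bernoulli rewards.

    logps:   (G,) sequence log-probabilities log pi_theta(y_i | x)
    lengths: (G,) response lengths |y_i|
    rewards: (G,) binary rewards from Equation (1), as floats
    """
    probs = logps.exp() / lengths              # pi_theta(y_i | x) / |y_i|
    pos, neg = rewards > 0.5, rewards <= 0.5
    scale = rewards.std(unbiased=False)        # sqrt(Var) of the rewards
    if pos.any() and neg.any():
        return scale * (probs[pos].mean() - probs[neg].mean())
    return torch.zeros(())                     # degenerate group: zero variance
```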

![Refer to caption](https://arxiv.org/html/2604.16995v1/x1.png) (a) Influence of gradients on a balanced distribution.
![Refer to caption](https://arxiv.org/html/2604.16995v1/x2.png) (b) Influence of gradients on a peaky distribution.

Figure 1: Illustration of the squeezing effect. $y_{n}^{*}$ denotes the sequence that dominates the output distribution (i.e., the sequence consistently sampled by greedy decoding). Subfigure (a) shows the normal RL case, where probability mass shifts along the gradient direction. Subfigure (b) shows that when the distribution is already imbalanced, the updates further concentrate probability mass into the dominant peak, a phenomenon referred to as the squeezing effect.

## 3 Preliminary Analysis

Motivated by learning dynamics analyses (Ren and Sutherland, [2024](https://arxiv.org/html/2604.16995#bib.bib23)), we hypothesize that the under-exploration issue in RL arises from an inherent squeezing effect induced by contrastive reward optimization. To validate this hypothesis, we conduct a two-stage analysis. First, we characterize how the squeezing effect emerges during RL training. Second, we explore how this effect restricts genuine exploration in reasoning tasks.

### 3.1 Emergence of the Squeezing Effect in Reinforcement Learning

The squeezing effect describes a phenomenon in which applying negative gradient updates to low-probability tokens paradoxically causes the model's output distribution to concentrate further on the most likely token. As illustrated in Figure [1(a)](https://arxiv.org/html/2604.16995#S2.F1.sf1), when a policy model is trained with RL, its updates are jointly influenced by two opposing gradient components arising from the objective. Intuitively, the positive gradient increases the likelihood of positively rewarded samples, while the negative gradient suppresses the likelihood of negatively rewarded ones. However, this intuition breaks down under highly imbalanced output distributions, as shown in Figure [1(b)](https://arxiv.org/html/2604.16995#S2.F1.sf2). When a small number of tokens already dominate the distribution, the probability mass removed from low-probability tokens is not redistributed evenly; instead, it is effectively squeezed toward the dominant tokens, further amplifying their probabilities.

In fact, this counterintuitive behavior arises from the normalization property of the softmax function used in the model (Ren and Sutherland, [2024](https://arxiv.org/html/2604.16995#bib.bib23)). Specifically, when a negative update is applied to a token with negligible probability, the token itself loses little absolute mass, but the softmax normalization constant shrinks, so the normalized probabilities of all remaining tokens increase by the same relative factor. In absolute terms, the freed probability mass is therefore absorbed predominantly by the tokens that already dominate the distribution, whose probabilities grow the most. As a result, probability mass progressively concentrates on the most likely token, leading to systematic sharpening of the output distribution and reduced diversity. A detailed theoretical proof of the squeezing effect is provided in Appendix [A](https://arxiv.org/html/2604.16995#A1).
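The effect is easy to reproduce numerically. The following self-contained snippet (with arbitrary logits of our choosing) penalizes a negligible-probability token and shows that the freed mass flows overwhelmingly to the dominant token:

```python
import torch

logits = torch.tensor([4.0, 1.0, -2.0])   # a peaky three-token distribution
p = torch.softmax(logits, dim=0)          # ~[0.9503, 0.0473, 0.0024]
logits[2] -= 2.0                          # negative update on the rare token
p_new = torch.softmax(logits, dim=0)
print(p_new - p)  # ~[+0.0019, +0.0001, -0.0020]: the dominant token gains most
```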

[Figure 2 panels: (a) histogram of question counts per accuracy interval before GRPO; (b) the same histogram after GRPO; (c) Pass@128 on AIME-25 versus training step for 10k, 5k, and 3k data.]

Figure 2: Partial results of the preliminary study. Subfigures (a) and (b) show the effect of GRPO on average question accuracy over the combined dataset. Subfigure (c) presents the dynamics of the Pass@128 metric during training, revealing an empirical boundary on exploration. More results can be found in Figure [5](https://arxiv.org/html/2604.16995#A3.F5).
### 3.2 Impact of the Squeezing Effect on Exploration

In this subsection, we analyze the impact of the squeezing effect on RL performance from an exploration perspective. Inspired by recent studies highlighting the importance of entropy and distributional sharpness in RL (Cui et al., [2025](https://arxiv.org/html/2604.16995#bib.bib4); Yue et al., [2025](https://arxiv.org/html/2604.16995#bib.bib43)), we argue that as the squeezing effect progressively reallocates probability mass toward already dominant tokens, the model's output distribution becomes increasingly concentrated. A closely related phenomenon is reported by [Tang et al.](https://arxiv.org/html/2604.16995#bib.bib30), who observe that penalizing low-probability tokens suppresses unlikely outputs, thereby narrowing the distribution and reducing response diversity. This gradual loss of diversity directly constrains exploratory behavior during training, limiting the model's ability to discover alternative and potentially superior reasoning trajectories.

Based on this insight, we conduct a preliminary study focusing on the evolution of solvable questions during RL training. Specifically, we fine-tune Qwen2.5-Math-7B on 10k questions sampled from Openr1-Math-46k-8192 using GRPO, and evaluate intermediate checkpoints on a combined benchmark consisting of the five Olympiad-level datasets. For each question, we compute the average pass rate across multiple sampled responses and discretize these values into accuracy buckets, enabling us to examine how performance is distributed throughout the course of GRPO training. Figures [2](https://arxiv.org/html/2604.16995#S3.F2)(a) and (b) present the histograms of the average Pass@1 accuracy distributions for the base model (denoted as Before GRPO) and the best GRPO checkpoint (denoted as After GRPO), respectively. As the results show, although GRPO introduces explicit exploration during training, the model does not consistently discover better trajectories for all questions. To further substantiate this observation, we also report Pass@128 results on AIME-25, where model checkpoints are evaluated every 100 training steps under different training data scales (3k, 5k, and 10k questions). Across all settings, we observe that increasing training steps does not lead to a monotonic improvement in Pass@128, indicating that higher-quality trajectories are not continuously uncovered during training. Recent studies often attribute this phenomenon to entropy collapse in RL (Cui et al., [2025](https://arxiv.org/html/2604.16995#bib.bib4)). However, rather than stopping at this surface-level explanation, we probe a deeper underlying cause: the probability squeezing effect, which naturally acts as a key mechanism that can trigger entropy collapse.

## 4 Steering Probability Squeezing

![Refer to caption](https://arxiv.org/html/2604.16995v1/x3.png)

Figure 3: An overview of our SPS approach. Our training pipeline follows an iterative loop consisting of two complementary phases: (a) we first perform standard RL to explore the dataset and generate rollouts, from which a subset is sampled as demonstrations; (b) in the subsequent IRL phase, these demonstrations are leveraged to steer probability squeezing by reshaping the policy distribution. In practice, the two phases are interleaved to form a continual and unified training process.

From our preliminary analysis, we have established two key findings: 1) the squeezing effect occurs in RL, and 2) this squeezing effect limits exploration. These findings suggest that if we can steer this probability squeezing in a way that favors exploration, we could achieve improved RL performance. To this end, we propose the SPS approach, which explicitly steers the probability squeezing phenomenon by interleaving on-policy RL with inverse RL. The basic idea of SPS is to "redirect" the misallocated probability mass during squeezing: instead of allowing it to converge to dominant greedy trajectories, we guide it toward under-explored regions that may contain better correct trajectories. An overview of SPS is shown in Figure [3](https://arxiv.org/html/2604.16995#S4.F3). We present the details of SPS in the following sections.

### 4.1 Inverse Reinforcement Learning for Probability Redistribution

Standard RL typically leads to probability squeezing, where excessive mass concentrates on a narrow set of high-reward trajectories. In this work, we adopt IRL as a principled mechanism to steer probability redistribution by matching desired occupancy patterns, rather than relying on ad hoc entropy regularization or reward reweighting. In the IRL phase, we employ a forward-KL objective to reshape the policy's output distribution. The loss function is defined as:

$$\mathcal{L}_{\text{IRL}} = -\mathbb{E}_{x \sim D,\ y' \sim Y'_{x}}\,\text{KL}\left(\pi_{\text{rollout}}(y' \mid x) \,\|\, \pi(y' \mid x)\right) \qquad (4)$$

where $x$ is sampled from the training dataset $D$, and $y'$ is sampled from a rollout set $Y'_{x}$, which is obtained by uniformly sampling from the responses generated during the vanilla RL phase. Here, $\pi_{\text{rollout}}$ denotes the empirical distribution over rollout completions, while $\pi$ represents the current policy. This objective encourages the policy to align with rollout-supported solution trajectories and thereby promotes broader exploration.
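Because $\pi_{\text{rollout}}$ is an empirical distribution over sampled rollouts, matching it under the forward-KL term of Equation (4) reduces, up to a constant entropy term, to a cross-entropy loss on the demonstration tokens. A minimal sketch, assuming a Hugging Face-style causal LM interface:

```python
import torch.nn.functional as F

def irl_step_loss(model, input_ids, labels):
    """Forward-KL IRL update on rollout demonstrations (sketch).

    input_ids: (B, T) prompt + rollout tokens
    labels:    (B, T) copy of input_ids with prompt positions set to -100,
               so only rollout tokens contribute to the loss
    """
    logits = model(input_ids).logits                    # (B, T, V)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),    # predict next token
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```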

Crucially, during the IRL phases, we treat the rollouts generated by the current policy as the sole source of "expert trajectories". Note that in this process, no external supervision, annotations, or domain knowledge is introduced. As a result, SPS preserves the mass-covering nature of RL, while encouraging the model to explore beyond the narrow high-reward modes reinforced by standard RL.

#### Why Inverse Reinforcement Learning?

IRL is a natural tool for learning target distributions from demonstrations (Sun and van der Schaar, [2024](https://arxiv.org/html/2604.16995#bib.bib29); Sun, [2024](https://arxiv.org/html/2604.16995#bib.bib27)). From a theoretical perspective, IRL enjoys an advantage that is particularly well-suited to our setting: it enables learning directly from example trajectories without explicitly specifying or constraining the policy through divergence-based regularization (e.g., KL-based approaches). In our scenario, the goal is to explicitly control the probability squeezing phenomenon, specifically, to encourage probability mass to be redistributed toward under-explored yet potentially correct trajectories. Intuitively, this corresponds to steering the squeezing behavior in Figure [2](https://arxiv.org/html/2604.16995#S3.F2)(b) to operate more like Figure [2](https://arxiv.org/html/2604.16995#S3.F2)(a), where probability mass is concentrated around diverse high-quality trajectories rather than collapsing onto a few dominant ones. By incorporating IRL, we can directly leverage sampled rollouts as demonstrations to reshape the policy distribution, explicitly counteracting misallocated probability mass during squeezing. This allows the model to preserve exploration while still benefiting from reinforcement signals.

#### Low-Likelihood Trajectory Emphasis.

Based on the analysis in Section [3.1](https://arxiv.org/html/2604.16995#S3.SS1), we observe that the squeezing effect primarily arises when optimization is dominated by negative samples with extremely low model likelihood. This observation suggests that explicitly increasing the influence of such low-likelihood solutions may help alleviate the squeezing phenomenon. Motivated by this insight, we propose Low-Likelihood Trajectory Emphasis (L2TE), a strategy that preferentially samples rollouts from trajectories with relatively low model likelihood. By amplifying the learning signal from these under-explored solutions, L2TE encourages broader exploration and counteracts excessive probability concentration. To ensure stable IRL training, we further augment each sampled batch with positive trajectories whenever the number of available negative samples is insufficient.
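One plausible reading of this selection rule is sketched below; the function, its inputs, and the tie-breaking details are our assumptions rather than the paper's implementation.

```python
def l2te_select(rollouts, seq_logps, rewards, m=3):
    """Low-Likelihood Trajectory Emphasis (sketch): keep the m negative
    rollouts the current policy assigns the lowest likelihood, backfilling
    with positive rollouts when fewer than m negatives are available.

    rollouts:  list of responses for one prompt
    seq_logps: list of sequence log-probs under the current policy
    rewards:   list of binary rewards from Equation (1)
    """
    order = sorted(range(len(rollouts)), key=lambda i: seq_logps[i])
    negatives = [i for i in order if rewards[i] == 0]
    positives = [i for i in order if rewards[i] == 1]
    selected = negatives[:m]
    if len(selected) < m:  # stabilize IRL training with positive trajectories
        selected += positives[: m - len(selected)]
    return [rollouts[i] for i in selected]
```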

### 4.2 Iterative Reinforcement Learning

Since the IRL phase explicitly reshapes the model's output distribution, it alleviates excessive distributional sharpening and thereby re-enables exploration within a fixed dataset. To further promote sustained exploration, we design a continually looped training strategy, as illustrated in Algorithm [1](https://arxiv.org/html/2604.16995#algorithm1). Specifically, we first fine-tune the base model using vanilla RL and collect the resulting rollouts. From these rollouts, we sample a small subset that balances exploration diversity and computational efficiency. The selected rollouts are then used to perform IRL on the reinforced policy, which reshapes the output distribution by redistributing probability mass away from overly dominant trajectories. This updated policy is subsequently fed back into the next RL phase. By iterating this RL-IRL loop, the model can continue to explore alternative solution trajectories even under constrained data conditions, progressively expanding the boundary of solvable problems rather than prematurely converging to a narrow set of greedy behaviors.
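Algorithm 1 is not reproduced in this cached copy, but the loop it describes can be sketched as follows; `run_grpo`, `sample_demos`, and `run_irl` are hypothetical stand-ins for the RL phase, the rollout-subset selection, and the Equation (4) update, passed in as callables so the sketch stays self-contained.

```python
def sps_loop(policy, dataset, run_grpo, sample_demos, run_irl, iterations=4):
    """Iterative SPS training (sketch of the RL-IRL alternation)."""
    for _ in range(iterations):
        # (a) RL phase: explore the dataset and collect on-policy rollouts.
        policy, rollouts = run_grpo(policy, dataset)
        # Keep a small rollout subset (e.g., 3 of 8 per prompt) as demos.
        demos = sample_demos(rollouts)
        # (b) IRL phase: reshape the output distribution toward the demos.
        policy = run_irl(policy, demos)
    return policy
```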

Table 1: Performance comparison of RL methods across a set of reasoning benchmarks. Results are highlighted in bold when SPS outperforms vanilla GRPO, indicating enhanced exploration.

## 5 Experiments

### 5.1 Experimental Setups

#### Dataset and Models.

Our experiments were conducted on Openr1-Math-46k-8192 (Yan et al., [2025](https://arxiv.org/html/2604.16995#bib.bib40)), a curated subset of OpenR1-Math-220k (Face, [2025](https://arxiv.org/html/2604.16995#bib.bib7)). This subset removes excessively long or erroneous generations, ensuring that all questions are solvable. From this dataset, we constructed subsets of different scales (3k, 5k, and 10k) via uniform random sampling. For the base models, we conducted experiments using pretrained checkpoints from the Qwen2.5-Math series, including 1.5B and 7B (Yang et al., [2024](https://arxiv.org/html/2604.16995#bib.bib41)).

#### Training Details.

We implemented our method on top of SWIFT, using vLLM as the inference backend (Kwon et al., [2023](https://arxiv.org/html/2604.16995#bib.bib14)). During the RL stages, we adopted a completion-level batch size of 128 and employed a reduced learning rate of 5e-7 to stabilize long-horizon exploration. Rollout generation was performed with a sampling temperature of 1.0, and we sampled 8 responses per prompt. Math-Verify (https://github.com/huggingface/Math-Verify) was used as the reward function without any additional format- or length-based rewards. After the RL phase, we collected the generated rollouts and sampled three responses out of the eight completions for the IRL stages. To mitigate overfitting during IRL, we used a batch size of 512 and a learning rate of 5e-10. We performed four training steps per iteration to support extended exploration. All experiments were conducted on a cluster of 4×8 NVIDIA H100 GPUs. More experimental details can be found in Appendix [C](https://arxiv.org/html/2604.16995#A3).
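For reference, the reported hyperparameters can be gathered into one configuration object. The key names below are illustrative groupings of the values stated above, not actual SWIFT option names.

```python
# Hyperparameters as reported in this section (key names are illustrative).
SPS_CONFIG = {
    "rl": {
        "completion_batch_size": 128,
        "learning_rate": 5e-7,
        "rollout_temperature": 1.0,
        "responses_per_prompt": 8,
        "reward": "math_verify",     # rule-based verifier, no extra shaping
    },
    "irl": {
        "demos_per_prompt": 3,       # sampled from the 8 RL completions
        "batch_size": 512,
        "learning_rate": 5e-10,
        "steps_per_iteration": 4,
    },
}
```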

#### Evaluation.

We implemented our SPS method on top of the GRPO algorithm, making GRPO (Shao et al., [2024](https://arxiv.org/html/2604.16995#bib.bib25)) our primary baseline. Additionally, we compared our method against several representative RL approaches, including DAPO (Yu et al., [2025](https://arxiv.org/html/2604.16995#bib.bib42)) and GSPO (Zheng et al., [2025](https://arxiv.org/html/2604.16995#bib.bib47)). Each baseline was implemented following the recommended configurations reported in the corresponding papers. We evaluated our models across three challenging Olympiad-level mathematical benchmark families, five datasets in total, to examine the boundary of solvable questions: AIME ([MAA](https://arxiv.org/html/2604.16995#bib.bib20)), BrUMO ([BRUMO](https://arxiv.org/html/2604.16995#bib.bib2)), and HMMT ([HMMT](https://arxiv.org/html/2604.16995#bib.bib10)). For AIME, we considered both the 2024 and 2025 editions ([2024](https://arxiv.org/html/2604.16995#bib.bib45); [2025](https://arxiv.org/html/2604.16995#bib.bib46)), and for HMMT, we evaluated both HMMT-Feb and HMMT-Nov. The evaluation was performed using EvalScope (https://github.com/modelscope/evalscope; Team, [2024](https://arxiv.org/html/2604.16995#bib.bib31)), together with the benchmark data released by [Balunović et al.](https://arxiv.org/html/2604.16995#bib.bib1). We reported Pass@128 and the average of Pass@1 (Avg@128) for all benchmarks, generating model outputs with a sampling temperature of 0.7.

### 5.2 Main Results

We report Pass@128 and Avg@128 for the best checkpoints within 700 training steps, as shown in Table [1](https://arxiv.org/html/2604.16995#S4.T1). The best checkpoint is selected according to Avg@128, as this metric reflects the convergence quality of RL training. Our results demonstrate that SPS consistently outperforms all RL baselines on Pass@128, while maintaining comparable Avg@128 performance. This indicates that SPS improves multi-sample performance without sacrificing single-sample performance. Notably, SPS substantially increases Pass@128, implying that it effectively expands the exploration boundary. Remarkably, Qwen2.5-Math-1.5B achieves a Pass@128 score of 63.33 using only 3k training samples, highlighting the effectiveness of SPS in data-constrained settings.

The results also reveal an interesting pattern: the impact of GRPO varies with model scale, and this trend is consistent across different data regimes. GRPO reduces Pass@128 for the 1.5B model, while improving it for the 7B model. We hypothesize that this phenomenon is closely related to the base model's initial output distribution. Smaller models (e.g., 1.5B) tend to overfit the training corpus, leading to a sharper distribution. GRPO aggravates this squeezing effect, thereby suppressing exploration. In contrast, larger models benefit from GRPO, which appears to enhance exploration by leveraging their richer internal knowledge.

### 5.3 Ablation on Sampling Size

[Figure 4 plots Pass@128 on BrUMO-25, HMMT-Feb-25, and HMMT-Nov-25 against sampling sizes 1 through 5.]

Figure 4: Impact of the sampling size on SPS performance. The experiments are conducted on the Qwen2.5-Math-1.5B model.

We further conducted an ablation study to investigate the effect of the sampling-size hyperparameter on training performance. Specifically, we applied SPS with varying sampling sizes on the 3k-sample dataset. Training was carried out for two epochs, and the results are summarized in Figure [4](https://arxiv.org/html/2604.16995#S5.F4). The results demonstrate that model performance increased monotonically with larger sampling sizes. In practice, however, we balanced batch diversity against computational overhead and thus set the sampling size to three in all main experiments reported in this work.

## 6 Related Works

#### Reinforcement Learning for Large Reasoning Models.

In the mainstream of current research, reasoning tasks are far from low-hanging fruit. Unlike conventional NLP tasks, these logic-intensive problems require multi-step inference and strict logical consistency, making them substantially more difficult to solve. Interestingly, despite their inherent complexity, the correctness of final answers can often be easily validated through rule-based procedures, such as exact matching or program execution (Jiang et al., [2025](https://arxiv.org/html/2604.16995#bib.bib13); Xie et al., [2025](https://arxiv.org/html/2604.16995#bib.bib39); Huo et al., [2025](https://arxiv.org/html/2604.16995#bib.bib11); Wang et al., [2026](https://arxiv.org/html/2604.16995#bib.bib34)). This property makes it feasible to train LLMs directly from outcome-level supervision, rather than relying on costly external annotations. Based on this observation, RL has emerged as an effective and explainable training paradigm for LLMs. Compared with RL from human feedback (RLHF), which relies on learned reward models to provide learning signals (Ouyang et al., [2022](https://arxiv.org/html/2604.16995#bib.bib21); Zhou et al., [2024](https://arxiv.org/html/2604.16995#bib.bib48); Wang et al., [2025a](https://arxiv.org/html/2604.16995#bib.bib32), [b](https://arxiv.org/html/2604.16995#bib.bib33)), RLVR replaces human preference annotations with deterministic validators, enabling scalable and low-cost reward generation. However, recent studies have indicated that RLVR suffers from degraded exploration, as the learning process tends to concentrate probability mass on a narrow set of high-reward solutions, leading to a sharpened output distribution and limited discovery of novel reasoning patterns (Yue et al., [2025](https://arxiv.org/html/2604.16995#bib.bib43)).

#### Inverse Reinforcement Learning.

IRL traditionally sought to infer an implicit reward function from expert demonstrations, framing learning as the recovery of objectives that rationalize observed behaviors (Sun and van der Schaar, [2024](https://arxiv.org/html/2604.16995#bib.bib29); Sun et al., [2024](https://arxiv.org/html/2604.16995#bib.bib28); Deng et al., [2024](https://arxiv.org/html/2604.16995#bib.bib6)). In contrast to this classical setting, recent IRL-inspired approaches relax the reliance on external experts and instead operate on on-policy rollouts generated by the model itself (Wang et al., [2023](https://arxiv.org/html/2604.16995#bib.bib37)). From a self-supervision perspective, the model's own trajectories serve as a proxy for demonstrations, allowing implicit reward functions to be extracted from its current behavior distribution (Zhang et al., [2021](https://arxiv.org/html/2604.16995#bib.bib44)). Under this formulation, IRL no longer aims to exactly imitate an expert policy, but rather reshapes the reward or training signal to reweight model-generated trajectories, encouraging desirable solution patterns while preserving diversity. This perspective is particularly relevant for LLMs, where explicit rewards are often sparse or binary, and direct training tends to concentrate probability mass on a narrow set of high-reward outcomes. By leveraging on-policy rollouts as implicit supervision, IRL-style objectives provide a mechanism to smooth and redistribute the output distribution, complementing standard RL updates.

## 7 Conclusion

In this work, we have proposed SPS, an RL framework that interleaves on-policy RL with IRL to enhance exploration. By learning from rollouts generated during the on-policy training phase, SPS effectively mitigates the squeezing effect and significantly improves exploration compared with strong baselines across multiple Olympiad-level reasoning benchmarks. These results underscore the critical role of IRL, which is often overlooked as current research primarily emphasizes purely RL-based training. Additionally, this work highlights the importance of analyzing RL from the perspective of learning dynamics, providing a clearer explanation of the behavior and limitations of existing training paradigms.

## Limitations

While the proposed SPS approach provides a principled mechanism for steering probability mass to enhance exploration, several limitations warrant discussion:

- Although our experiments demonstrate practical effectiveness on reasoning tasks, the empirical validation is restricted to a relatively small set of models, with Qwen2.5-Math serving as the primary testbed due to its consistently strong performance.
- Although probability steering proves effective in mitigating the squeezing effect, future work may explore more sophisticated mechanisms that more fully characterize and exploit the dynamics of reinforcement learning.
- Our current study does not analyze the inner states of policy models during training, leaving open questions regarding their interaction and relation to convergence behavior.

We acknowledge that we have not yet evaluated the method on larger-scale models. Due to computational constraints, our experiments focus on the 7B scale, which already allows us to study distributional concentration and exploration dynamics in a controlled setting.

## Ethics Statement

This work raises no specific ethical concerns. All training inputs come from open-source data, and all outputs are generated by open-source or commercial models.

## Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Nos. U24A20334 and 62276056), the Yunnan Fundamental Research Projects (No. 202401BC070021), the Yunnan Science and Technology Major Project (No. 202502AD080014), the Fundamental Research Funds for the Central Universities (Nos. N25BSS054 and N25BSS094), and the Program of Introducing Talents of Discipline to Universities, Plan 111 (No. B16009). We would like to thank the anonymous reviewers and SPC for their valuable comments, which helped improve this paper.

## References

- Mislav Balunović, Jasper Dekoninck, Ivo Petrov, Nikola Jovanović, and Martin Vechev. 2025. MathArena: Evaluating LLMs on uncontaminated math competitions.
- BRUMO. 2025. Brown University Math Olympiad 2025 (BrUMO).
- Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. 2025. Pass@k training for adaptively balancing exploration and exploitation of large reasoning models. *ArXiv preprint*, abs/2508.10751.
- Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Hao-Si Li, Yuchen Fan, Huayu Chen, Weize Chen, Zhiyuan Liu, Hao Peng, Lei Bai, Wanli Ouyang, Yu Cheng, Bowen Zhou, and Ning Ding. 2025. The entropy mechanism of reinforcement learning for reasoning language models. *ArXiv preprint*, abs/2505.22617.
- DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Jun-Mei Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiaoling Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 179 others. 2025. [DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning](https://api.semanticscholar.org/CorpusID:275789950). *Nature*, 645:633–638.
- Zhirui Deng, Zhicheng Dou, Yutao Zhu, Ji-Rong Wen, Ruibin Xiong, Mang Wang, and Weipeng Chen. 2024. From novice to expert: LLM agent policy optimization via step-wise reinforcement learning. *ArXiv preprint*, abs/2411.03817.
- Hugging Face. 2025. Open R1: A fully open reproduction of DeepSeek-R1.
- Bogdan Georgiev, Javier Gómez-Serrano, Terence Tao, and Adam Zsolt Wagner. 2025. Mathematical exploration and discovery at scale. *ArXiv preprint*, abs/2511.02864.
- Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. 2020. [Measuring massive multitask language understanding](https://api.semanticscholar.org/CorpusID:221516475). *ArXiv preprint*, abs/2009.03300.
- HMMT. 2025. Harvard-MIT Mathematics Tournaments (HMMT).
- Yifu Huo, Chenglong Wang, Qiren Zhu, Shunjie Xing, Tong Xiao, Chunliang Zhang, Tongran Liu, and Jingbo Zhu. 2025. HEAL: A hypothesis-based preference-aware analysis framework. In *Findings of the Association for Computational Linguistics: EMNLP 2025*, pages 8901–8919.
- Xingguang Ji, Yahui Liu, Qi Wang, Jingyuan Zhang, Yang Yue, Rui Shi, Chenxi Sun, Fuzheng Zhang, Guorui Zhou, and Kun Gai. 2025. Leanabell-Prover-V2: Verifier-integrated reasoning for formal theorem proving via reinforcement learning. *ArXiv preprint*, abs/2507.08649.
- Xue Jiang, Yihong Dong, Mengyang Liu, Hongyi Deng, Tian Wang, Yongding Tao, Rongyu Cao, Binhua Li, Zhi Jin, Wenpin Jiao, Fei Huang, Yongbin Li, and Ge Li. 2025. CodeRL+: Improving code generation via reinforcement with execution semantics alignment. *ArXiv preprint*, abs/2510.18471.
- Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Haotong Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In *Proceedings of the 29th Symposium on Operating Systems Principles*.
- Dacheng Li, Shiyi Cao, Chengkun Cao, Xiuyu Li, Shangyin Tan, Kurt Keutzer, Jiarong Xing, Joseph Gonzalez, and Ion Stoica. 2025. S*: Test time scaling for code generation. *ArXiv preprint*, abs/2502.14382.
- Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let's verify step by step. In *The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024*. OpenReview.net.
- Mingjie Liu, Shizhe Diao, Ximing Lu, Jian Hu, Xin Dong, Yejin Choi, Jan Kautz, and Yi Dong. 2025. ProRL: Prolonged reinforcement learning expands reasoning boundaries in large language models. *ArXiv preprint*, abs/2505.24864.
- Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. 2024. Statistical rejection sampling improves preference optimization. In *The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024*. OpenReview.net.
- Yingfeng Luo, Tong Zheng, Yongyu Mu, Bei Li, Qinghong Zhang, Yongqi Gao, Ziqiang Xu, Peinan Feng, Xiaoqian Liu, Tong Xiao, and Jingbo Zhu. 2025. Beyond decoder-only: Large language models can be good encoders for machine translation. *ArXiv preprint*, abs/2503.06594.
- MAA. 2025. American Invitational Mathematics Examination (AIME). Mathematics Competition Series.
- Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In *Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022*.
- David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2023. [GPQA: A graduate-level Google-proof Q&A benchmark](https://api.semanticscholar.org/CorpusID:265295009). *ArXiv preprint*, abs/2311.12022.
- Yi Ren and Danica J. Sutherland. 2024. Learning dynamics of LLM finetuning. *ArXiv preprint*, abs/2407.10490.
- John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. *ArXiv preprint*, abs/1707.06347.
- Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Jun-Mei Song, Mingchuan Zhang, Y. K. Li, Yu Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. *ArXiv preprint*, abs/2402.03300.
- Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan J. Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback. *ArXiv preprint*, abs/2009.01325.
- Hao Sun. 2024. Supervised fine-tuning as inverse reinforcement learning. *ArXiv preprint*, abs/2403.12017.
- Hao Sun, Thomas Pouplin, Nicolás Astorga, Tennison Liu, and Mihaela van der Schaar. 2024. Improving LLM generation with inverse and forward alignment: Reward modeling, prompting, fine-tuning, and inference-time optimization. In *The First Workshop on System-2 Reasoning at Scale, NeurIPS'24*.
- Hao Sun and Mihaela van der Schaar. 2024. Inverse-RLignment: Inverse reinforcement learning from demonstrations for LLM alignment. *ArXiv preprint*, abs/2405.15624.
- Xinyu Tang, Yuliang Zhan, Zhixun Li, Wayne Xin Zhao, Zhenduo Zhang, Zujie Wen, Zhiqiang Zhang, and Jun Zhou. 2025. Rethinking sample polarity in reinforcement learning with verifiable rewards.
- ModelScope Team. 2024. EvalScope: Evaluation framework for large models.
- Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Qiaozhi He, Murun Yang, Bei Li, Tong Xiao, Chunliang Zhang, Tongran Liu, and 1 others. 2025a. GRAM: A generative foundation reward model for reward generalization. *ArXiv preprint*, abs/2506.14175.
- Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Murun Yang, Qiaozhi He, Tong Xiao, Chunliang Zhang, Tongran Liu, and Jingbo Zhu. 2025b. RoVRM: A robust visual reward model optimized via auxiliary textual preference data. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 39, pages 25336–25344.
- Chenglong Wang, Yifu Huo, Yang Gan, Qiaozhi He, Qi Meng, Bei Li, Yan Wang, Junfu Liu, Tianhua Zhou, Jingbo Zhu, and 1 others. 2026. MSRL: Scaling generative multimodal reward modeling via multi-stage reinforcement learning. *ArXiv preprint*, abs/2603.25108.
- Chenglong Wang, Hang Zhou, Kaiyan Chang, Bei Li, Yongyu Mu, Tong Xiao, Tongran Liu, and Jingbo Zhu. 2024a. Hybrid alignment training for large language models. In *Findings of the Association for Computational Linguistics: ACL 2024*, pages 11389–11403, Bangkok, Thailand. Association for Computational Linguistics.
- Chenglong Wang, Hang Zhou, Yimin Hu, Yifu Huo, Bei Li, Tongran Liu, Tong Xiao, and Jingbo Zhu. 2024b. ESRL: Efficient sampling-based reinforcement learning for sequence generation. In *Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, February 20-27, 2024, Vancouver, Canada*, pages 19107–19115. AAAI Press.
- Guojian Wang, Faguo Wu, Xiao Zhang, and Jianxiang Liu. 2023. Learning diverse policies with soft self-generated guidance. *International Journal of Intelligent Systems*, 2023(1):4705291.
- Yihong Wu, Liheng Ma, Lei Ding, Muzhi Li, Xinyu Wang, Kejia Chen, Zhan Su, Zhanguang Zhang, Chenyang Huang, Yingxue Zhang, Mark Coates, and Jian-Yun Nie. 2025. It takes two: Your GRPO is secretly DPO. *ArXiv preprint*, abs/2510.00977.
- Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. 2025. Logic-RL: Unleashing LLM reasoning with rule-based reinforcement learning. *ArXiv preprint*, abs/2502.14768.
- Jianhao Yan, Yafu Li, Zican Hu, Zhi Wang, Ganqu Cui, Xiaoye Qu, Yu Cheng, and Yue Zhang. 2025. Learning to reason under off-policy guidance. *ArXiv preprint*, abs/2504.14945.
- An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. 2024. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. *ArXiv preprint*, abs/2409.12122.
- Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, and 16 others. 2025. DAPO: An open-source LLM reinforcement learning system at scale. *ArXiv preprint*, abs/2503.14476.
- Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. 2025. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? *ArXiv preprint*, abs/2504.13837.
- Linfeng Zhang, Chenglong Bao, and Kaisheng Ma. 2021. Self-distillation: Towards efficient and compact neural networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(8):4388–4403.
- Yifan Zhang and Team Math-AI. 2024. American Invitational Mathematics Examination (AIME) 2024.
- Yifan Zhang and Team Math-AI. 2025. American Invitational Mathematics Examination (AIME) 2025.
- Chujie Zheng, Shixuan Liu, Mingze Li, Xiong-Hui Chen, Bowen Yu, Chang Gao, Kai Dang, Yuqiong Liu, Rui Men, An Yang, Jingren Zhou, and Junyang Lin. 2025. Group sequence policy optimization. *ArXiv preprint*, abs/2507.18071.
- Hang Zhou, Chenglong Wang, Yimin Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2024. Prior constraints-based reward model training for aligning large language models. In *Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)*, pages 1395–1407.

Supplementary Materials for SPS

## Appendix A Proofs for Theoretical Results

### A.1 Derivation of the Squeezing Effect

Property 1. The squeezing effect arises when negative gradient updates are applied to low-probability tokens, leading to a systematic sharpening of the model's output distribution.

Proof: This behavior is inherent to the normalization structure of the softmax function. Let the model output distribution over the vocabulary be given by

$$p(i) = \frac{e^{z_i}}{Z}, \qquad Z = \sum_{j} e^{z_j} \qquad (5)$$

where $z_i$ denotes the logit associated with token $i$. Consider a token $m$ that receives a negative logit update during training,

$$z_m \leftarrow z_m + \eta, \qquad \eta < 0 \qquad (6)$$

which yields the updated distribution

$$p'(i) = \frac{e^{z_i}}{Z'}, \qquad Z' = e^{z_m + \eta} + \sum_{j \neq m} e^{z_j} \qquad (7)$$

For any token $j \neq m$, we may express the updated probability in terms of the original distribution as

$$p'(j) = \frac{p(j)}{1 + p(m)\left(e^{\eta} - 1\right)} \qquad (8)$$

The squeezing effect typically arises when the distribution satisfies $p(m) \ll 1$. Performing a first-order expansion then gives

$$p'(j) \approx p(j)\left[1 - p(m)\left(e^{\eta} - 1\right)\right] \qquad (9)$$

Since $\eta < 0$ implies $e^{\eta} - 1 < 0$, the denominator of Equation (8) is strictly smaller than one, and it follows that

$$p'(j) > p(j), \qquad \forall j \neq m \qquad (10)$$

That is, the probability mass removed from token $m$ is redistributed across all remaining tokens in proportion to their current probabilities. Because every remaining token is rescaled by the same relative factor, the already dominant token absorbs the largest absolute share of the freed mass, implying

$$\max_{i} p'(i) > \max_{i} p(i) \qquad (11)$$

Thus, probability mass is progressively concentrated toward the most likely token, and the output distribution becomes increasingly peaked. This phenomenon is referred to as the squeezing effect (Ren and Sutherland, [2024](https://arxiv.org/html/2604.16995#bib.bib23)).
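As a concrete numerical check of Equations (8) through (11), with illustrative numbers of our own, consider a three-token distribution in which token 3 is penalized:

$$p = (0.90,\ 0.09,\ 0.01), \qquad m = 3, \qquad \eta = -2$$

$$1 + p(m)\left(e^{\eta} - 1\right) = 1 + 0.01 \times (-0.8647) \approx 0.9914$$

$$p' \approx \left(\frac{0.90}{0.9914},\ \frac{0.09}{0.9914},\ \frac{0.01\, e^{-2}}{0.9914}\right) \approx (0.9079,\ 0.0908,\ 0.0014)$$

Token 3 loses roughly 0.0086 of probability mass; about 0.0079 of it (over 90%) flows to the dominant token, so the maximum probability rises even though all surviving tokens are scaled by the same relative factor.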

### A.2 Squeezing Effect at the Sequence Level

The previous analysis establishes that penalizing low-probability tokens induces probability mass to concentrate toward the modal token due to the normalization structure of the softmax function. We now generalize this reasoning to sequence-level probability distributions, which are central to policy optimization in language model training.

Property 2\.The squeezing effect arises when negative gradient updates are applied to low\-probability sequences, leading to a systematic sharpening of the model’s output distribution\.

Proof: Let a sequence be denoted by

$$y = \{y_1, \dots, y_T\} \tag{12}$$

and let the model define the joint probability

$$p(y) = \prod_{t=1}^{T} p(y_t \mid y_{<t}) \tag{13}$$

where each conditional distribution is parameterized by a softmax over logits $z_t$. Suppose that a particular sequence $y^{-}$ receives a negative gradient update under the training objective, effectively reducing its log-probability. This corresponds to a logit-space update of the form

$$\log p(y^{-}) \leftarrow \log p(y^{-}) + \eta, \qquad \eta < 0 \tag{14}$$

At the sequence level, the normalized model distribution over all candidate sequences $\mathcal{Y}$ may be represented as

$$P(y) = \frac{\exp\left(\log p(y)\right)}{\sum_{y' \in \mathcal{Y}} \exp\left(\log p(y')\right)} \tag{15}$$

After the update to $y^{-}$, the new distribution becomes

$$P'(y) = \frac{\exp\left(\log p(y)\right)}{\exp\left(\log p(y^{-}) + \eta\right) + \sum_{y' \neq y^{-}} \exp\left(\log p(y')\right)} \tag{16}$$

For any $y \neq y^{-}$, we obtain

$$P'(y) = \frac{P(y)}{1 + P(y^{-})\left(e^{\eta} - 1\right)} \tag{17}$$

If the penalized sequence is already extremely unlikely, i.e.,

$$P(y^{-}) \ll 1 \tag{18}$$

then a first-order expansion yields

$$P'(y) \approx P(y)\left[1 - P(y^{-})\left(e^{\eta} - 1\right)\right] \tag{19}$$
Since $\eta < 0$ implies $e^{\eta} - 1 < 0$, the denominator in Equation (17) is smaller than one, and it follows that

$$P'(y) > P(y), \qquad \forall y \neq y^{-} \tag{20}$$

Thus, the mass removed from the penalized sequence is redistributed over nearly every other sequence.

Let

$$y^{\star} = \arg\max_{y} P(y) \tag{21}$$

denote the most probable sequence. Because every sequence other than $y^{-}$ is rescaled by the same multiplicative factor, the dominant sequence $y^{\star}$ absorbs the largest absolute share of the redistributed mass. Consequently,

$$\max_{y} P'(y) > \max_{y} P(y), \tag{22}$$

implying that probability mass becomes increasingly concentrated on $y^{\star}$.

## Appendix B RLVR Algorithms

In this section, we enumerate the RLVR algorithms referred to in this paper.

### B.1 Group Relative Policy Optimization (GRPO)

In RLVR, GRPO has become one of the most widely used RL algorithms for LLM training. GRPO maximizes expected reward by increasing the likelihood of higher-reward samples within a group, normalizing each sample's advantage by the mean and standard deviation of the group's rewards. It removes the critic network and instead computes a relative advantage inside each sampled group, then applies a PPO-style clipped objective to stabilize updates. The loss function of GRPO can be written as

$$\mathcal{J}_{\text{GRPO}}(\theta) = \frac{1}{G}\sum_{i=1}^{G}\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}\min\left(w_i(\theta)\,\widehat{A}_i,\; \mathrm{clip}\left(w_i(\theta),\, 1-\varepsilon,\, 1+\varepsilon\right)\widehat{A}_i\right) \tag{23}$$
where $w_i(\theta)$ denotes an importance ratio, which can be computed as

$$w_i(\theta) = \frac{\pi_{\theta}(y_i \mid x)}{\pi_{\theta_{\text{old}}}(y_i \mid x)} \tag{24}$$

Specifically, GRPO computes the advantages $\widehat{A}_i$ by normalizing rewards within a group of responses. In RLVR, we use the outcome-level feedback given in Equation [1](https://arxiv.org/html/2604.16995#S2.E1) as the reward; therefore the advantages are computed as:

$$\widehat{A}_i = \frac{R_i - \text{mean}\left(\{R_1, \dots, R_G\}\right)}{\text{std}\left(\{R_1, \dots, R_G\}\right)} \tag{25}$$
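For reference, a minimal sketch of how Equations (23)-(25) might be implemented is given below. The tensor names and the assumption that sequence-level log-probabilities are precomputed are ours for illustration, not the authors' code.

```python
# Hedged sketch of a GRPO-style group update (Eqs. 23-25). Because w_i and A_hat_i are
# sequence-level quantities, the inner token average in Eq. (23) collapses to a single
# per-response term, so the objective can be computed per response.
import torch

def grpo_loss(seq_logp, seq_logp_old, rewards, eps=0.2):
    """seq_logp, seq_logp_old: (G,) summed log-probs of each response under the current
    and old policies; rewards: (G,) outcome-level rewards for one prompt group."""
    # Group-relative advantage: normalize by the group mean and std (Eq. 25).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Sequence-level importance ratio w_i(theta) (Eq. 24).
    w = torch.exp(seq_logp - seq_logp_old)
    # PPO-style clipped surrogate averaged over the group (Eq. 23);
    # the sign is flipped because optimizers minimize.
    surrogate = torch.minimum(w * adv, torch.clamp(w, 1 - eps, 1 + eps) * adv)
    return -surrogate.mean()
```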

### B.2 Dynamic Sampling Policy Optimization (DAPO)

To stabilize RL training, Yu et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib42)) propose DAPO. In DAPO, the clipping range is asymmetric: the lower bound remains restrictive to control instability, while the upper bound is relaxed to encourage exploration of low-probability tokens. Unlike GRPO, gradients are computed at the token level and averaged across all tokens in all sampled responses. Prompts for which all sampled responses are correct or all are incorrect are filtered out so that every retained prompt contributes a non-zero learning signal. The loss function of DAPO can be written as

$$\mathcal{J}_{\text{DAPO}}(\theta) = \frac{1}{\sum_{i=1}^{G}|y_i|}\sum_{i=1}^{G}\sum_{t=1}^{|y_i|}\min\Big(r_{i,t}(\theta)\,\widehat{A}_i,\; \mathrm{clip}\big(r_{i,t}(\theta),\, 1-\varepsilon_{\text{low}},\, 1+\varepsilon_{\text{high}}\big)\widehat{A}_i\Big), \tag{26}$$

$$\text{s.t.}\quad 0 < \big|\{y_i \mid R(y_i, l) = 1\}\big| < G$$
where $r_{i,t}(\theta)$ denotes a token-level importance ratio, as follows:

$$r_{i,t}(\theta) = \frac{\pi_{\theta}(y_{i,t} \mid x, y_{i,<t})}{\pi_{\theta_{\text{old}}}(y_{i,t} \mid x, y_{i,<t})} \tag{27}$$
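The sketch below illustrates the two ingredients just described: the dynamic-sampling constraint of Equation (26) and the token-level loss with an asymmetric clipping range. Tensor names, shapes, and the lower clip value are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of DAPO's prompt filtering and asymmetrically clipped token-level loss.
import torch

def keep_prompt(rewards):
    """Dynamic sampling: keep a prompt only if its group of rewards is mixed,
    i.e. 0 < #correct < G (the constraint in Eq. 26)."""
    n_correct = int((rewards == 1).sum())
    return 0 < n_correct < rewards.numel()

def dapo_loss(tok_logp, tok_logp_old, adv, mask, eps_low=0.2, eps_high=0.28):
    """tok_logp, tok_logp_old: (G, T) per-token log-probs; adv: (G,) sequence advantages
    broadcast to every token; mask: (G, T) with 1 for real tokens, 0 for padding."""
    r = torch.exp(tok_logp - tok_logp_old)                # token-level ratio r_{i,t} (Eq. 27)
    a = adv.unsqueeze(-1)                                  # broadcast A_hat_i over tokens
    # Asymmetric clip range: restrictive lower bound, relaxed upper bound.
    surrogate = torch.minimum(r * a, torch.clamp(r, 1 - eps_low, 1 + eps_high) * a)
    # Token-level averaging over all tokens of all responses (Eq. 26).
    return -(surrogate * mask).sum() / mask.sum()
```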

### B.3 Group Sequence Policy Optimization (GSPO)

GSPO optimizes a sequence\-level clipped objective, where each response’s normalized reward \(advantage\) is weighted by its sequence\-likelihood ratio between the current and old policy\. In essence, it performs PPO\-style clipping at the whole\-sequence level, aligning off\-policy correction and optimization with the sequence\-level reward\. The loss function of GSPO can be written as

$$\mathcal{J}_{\text{GSPO}}(\theta) = \frac{1}{G}\sum_{i=1}^{G}\min\left(s_i(\theta)\,\widehat{A}_i,\; \mathrm{clip}\left(s_i(\theta),\, 1-\varepsilon,\, 1+\varepsilon\right)\widehat{A}_i\right) \tag{28}$$
while its importance ratio $s_i(\theta)$ is computed differently as

$$s_i(\theta) = \left(\frac{\pi_{\theta}(y_i \mid x)}{\pi_{\theta_{\text{old}}}(y_i \mid x)}\right)^{\frac{1}{|y_i|}} = \exp\left(\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}\log\frac{\pi_{\theta}(y_{i,t} \mid x, y_{i,<t})}{\pi_{\theta_{\text{old}}}(y_{i,t} \mid x, y_{i,<t})}\right) \tag{29}$$
Therefore, GSPO applies clipping to entire responses instead of individual tokens to exclude overly "off-policy" samples from gradient estimation, which matches both the sequence-level rewarding and optimization Zheng et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib47)).
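A minimal sketch of the length-normalized sequence-level ratio of Equation (29) and the clipped objective of Equation (28) follows; tensor names are illustrative assumptions.

```python
# Hedged sketch of GSPO: average the token log-ratios over the response length before
# exponentiation, so that clipping acts on whole responses rather than individual tokens.
import torch

def gspo_ratio(tok_logp, tok_logp_old, mask):
    """tok_logp, tok_logp_old: (G, T) per-token log-probs; mask: (G, T) valid-token mask."""
    log_ratio = (tok_logp - tok_logp_old) * mask
    mean_log_ratio = log_ratio.sum(dim=-1) / mask.sum(dim=-1)   # (1/|y_i|) * sum_t log ratio
    return torch.exp(mean_log_ratio)                             # s_i(theta), one value per response

def gspo_loss(tok_logp, tok_logp_old, adv, mask, eps=3e-4):
    s = gspo_ratio(tok_logp, tok_logp_old, mask)
    surrogate = torch.minimum(s * adv, torch.clamp(s, 1 - eps, 1 + eps) * adv)
    return -surrogate.mean()                                     # Eq. (28), averaged over the group
```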

## Appendix C Implementation Details

### C.1 Iterative SPS

We present the pseudocode of iterative SPS in Algorithm [1](https://arxiv.org/html/2604.16995#algorithm1). The algorithm iteratively enhances a base policy by alternating exploration and distribution reshaping: starting from the initial policy, it updates the policy on the dataset using vanilla RL to encourage exploration, collects grouped rollouts, samples a subset emphasizing low-likelihood or under-explored trajectories, and then applies IRL on this subset to reshape the policy distribution and mitigate probability squeezing, producing a more balanced and robust enhanced policy for exploration.

Input: base policy $\pi_{\theta_0}(\cdot)$, dataset $D$, group size $n$, sampling size $k$
Output: enhanced policy $\pi_{\theta}(\cdot)$

1. Initialize the policy $\pi_{\theta}(\cdot) \leftarrow \pi_{\theta_0}(\cdot)$.
2. While not converged:
   - Stage 1 (Vanilla RL): update $\pi_{\theta}(\cdot)$ on $D$ using vanilla RL to encourage exploration, then collect grouped rollouts $Y = \{\, y_x^1, \dots, y_x^n \mid y_x^i \sim \pi_{\theta}(\cdot \mid x),\; x \in D \,\}$.
   - PL2TE: sample a subset $Y' \subset Y$, emphasizing low-likelihood or under-explored trajectories.
   - Stage 2 (IRL): update $\pi_{\theta}(\cdot)$ via IRL on $Y'$ by minimizing $\mathcal{L}_{\mathrm{IRL}}$, reshaping the policy distribution to mitigate probability squeezing.
3. Return $\pi_{\theta}$.

Algorithm 1: Iterative SPS

In implementation, the rollout distribution $\pi_{\text{rollout}}$ is instantiated as a degenerate discrete distribution over the sampled responses. Under this construction, the forward-KL objective can be reformulated in a manner that closely resembles a cross-entropy-style penalty, as previously discussed in related literature (Sun, [2024](https://arxiv.org/html/2604.16995#bib.bib27)). Interestingly, despite the apparent simplicity of this surrogate, empirical evidence suggests that it nevertheless facilitates an effective enlargement of the exploration region, even in the absence of explicit external guidance.

Furthermore, while any single low\-probability response is unlikely to be sampled, the total number of such responses is exceedingly large\. Consequently, the probability of obtaining at least one low\-likelihood trajectory within a batch remains high\. To better emphasize these low\-probability trajectories, we sample from the lower quantile of trajectories in each batch\.
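The sketch below illustrates one way the lower-quantile selection and the cross-entropy-style IRL surrogate could be implemented. The quantile threshold, the Hugging Face-style model interface, and all function names are assumptions for illustration rather than the released implementation.

```python
# Hedged sketch of PL2TE-style selection plus a conservative IRL update: keep the rollouts
# whose likelihood falls in the lower quantile, then, because the rollout distribution is a
# degenerate (empirical) distribution, the forward-KL objective reduces to a cross-entropy
# style loss on those selected rollouts.
import torch

def select_low_likelihood(rollouts, seq_logps, quantile=0.25):
    """rollouts: list of tokenized responses; seq_logps: (N,) sequence log-probabilities
    under the current policy. Keep rollouts below the chosen (assumed) quantile."""
    threshold = torch.quantile(seq_logps, quantile)
    return [r for r, lp in zip(rollouts, seq_logps) if lp <= threshold]

def irl_step(model, optimizer, batch_input_ids, batch_labels):
    """One conservative IRL update on the selected rollouts, assuming an HF-style
    causal LM that returns a token-level cross-entropy loss when given labels."""
    out = model(input_ids=batch_input_ids, labels=batch_labels)
    loss = out.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```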

### C.2 Hyperparameter Setting

We conduct our main experiments using several RL algorithms, including GRPO Shao et al. ([2024](https://arxiv.org/html/2604.16995#bib.bib25)), DAPO Yu et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib42)), and GSPO Zheng et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib47)). The algorithm-specific hyperparameters are summarized in Table [2](https://arxiv.org/html/2604.16995#A3.T2). For GRPO, we adopt a fixed group size across all baseline methods in order to balance exploration capacity and computational cost. For DAPO and GSPO, we employ the recommended default configurations provided in SWIFT, which have been previously validated in practical deployments.

| Method | Parameter Name | Value |
| --- | --- | --- |
| GRPO | beta | 0.01 |
| GRPO | group_size | 8 |
| DAPO | epsilon_high | 0.28 |
| DAPO | max_resample_times | 3 |
| DAPO | soft_cache_length | 2048 |
| GSPO | beta | 0.0 |
| GSPO | epsilon | 3e-4 |
| GSPO | epsilon_high | 4e-4 |
| GSPO | steps_per_generation | 4 |

Table 2: Hyperparameter settings of the applied RL methods.

Figure 5: Accuracy distribution variation of Qwen2.5-Math-1.5B and Qwen2.5-Math-7B. (a) Qwen2.5-Math-1.5B, from left to right: base model, GRPO, DAPO, and GSPO. (b) Qwen2.5-Math-7B, from left to right: base model, GRPO, DAPO, and GSPO. Each panel plots response counts over accuracy intervals from 0 to 1.

## Appendix D Results on Other Backbone Models and Benchmarks

To further validate the generalizability of our method, we extend the main experiment to incorporate DeepSeek-R1 DeepSeek-AI et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib5)) as an additional backbone model. Furthermore, we extend our evaluation to non-mathematical benchmarks, such as MMLU Hendrycks et al. ([2020](https://arxiv.org/html/2604.16995#bib.bib9)) and GPQA-Diamond Rein et al. ([2023](https://arxiv.org/html/2604.16995#bib.bib22)). The extended results are listed in Table [3](https://arxiv.org/html/2604.16995#A4.T3).

Table 3: Additional experimental results on DeepSeek-R1. The best results for each group are in bold; the second-best results are underlined. The extended results consistently show that SPS maintains its effectiveness across different model families and non-mathematical domains, supporting the general applicability of our method beyond competition-level mathematics.

## Appendix E Further Analysis

### E.1 Analysis of Diversity

Pass@K (e.g., Pass@128) is only an indirect proxy for exploration; improvements in Pass@K may stem from multiple factors, including enhanced trajectory diversity. Nevertheless, prior work Yue et al. ([2025](https://arxiv.org/html/2604.16995#bib.bib43)) suggests an intrinsic relationship between exploration dynamics and Pass@K metrics, as broader policy support generally increases the probability of sampling correct reasoning paths within a finite budget. In this sense, while Pass@K is not a direct diversity measure, it remains behaviorally correlated with exploration capacity.

Explicit diversity metrics nevertheless provide more direct evidence. Accordingly, we compute trajectory-level similarity (lower indicates higher diversity), with results shown in Table [4](https://arxiv.org/html/2604.16995#A5.T4).

| Method | BASE | GRPO | SPS |
| --- | --- | --- | --- |
| Similarity | 88.34 | 88.18 | 86.82 |

Table 4: Diversity comparison across different methods. The results show that SPS yields lower trajectory similarity, indicating increased reasoning diversity compared to both the base model and GRPO. This empirical evidence complements the Pass@K improvements and provides more direct support for our exploration claim.
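The exact similarity metric is not spelled out here; as one plausible instantiation (an assumption on our part, not the paper's definition), the sketch below averages pairwise string similarity over sampled trajectories, where lower values indicate higher diversity.

```python
# Hedged sketch of a trajectory-level similarity score: average pairwise string
# similarity over generated reasoning trajectories (lower = more diverse).
from difflib import SequenceMatcher
from itertools import combinations

def avg_pairwise_similarity(trajectories):
    """trajectories: list of generated reasoning strings sampled for the same prompts."""
    pairs = list(combinations(trajectories, 2))
    if not pairs:
        return 0.0
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    # Scaled to a 0-100 range to match the scale used in Table 4.
    return 100.0 * sum(sims) / len(sims)
```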

### E.2 A Cost-Benefit Analysis of SPS

As a multi-stage training method, SPS inevitably adds engineering complexity compared to vanilla RLVR methods. In practice, however, the overall runtime is dominated by the RL rollout and policy optimization stage, whereas the IRL update contributes only a negligible fraction of the total training time. To quantify this, we measure the training time of each stage separately when training a 1.5B-parameter LLM on 3k prompts. The measured training time is summarized in Table [5](https://arxiv.org/html/2604.16995#A5.T5).

| Stage | RL | IRL |
| --- | --- | --- |
| Time (min) | 68.10 | 2.25 |

Table 5: Time cost comparison across different stages. As shown, the IRL stage accounts for only about 3% of the total time per iteration, which is minor compared to the RL stage. Therefore, although the pipeline is conceptually multi-stage, the additional computational cost introduced by IRL is marginal and does not constitute a practical bottleneck.

### E.3 Diagnostics for the Squeezing Effect

We examine probability dynamics by collecting responses generated via greedy decoding and computing the average log\-probability of the generated trajectories under the current policy\. Intuitively, excessive probability squeezing manifests as over\-concentration of mass on a narrow subset of trajectories, typically reflected in inflated log\-probability magnitudes relative to the base model\. We evaluate several algorithms on AIME 2024 and 2025, with results shown in Table[6](https://arxiv.org/html/2604.16995#A5.T6)\.

Table 6: The average log-probability of generated trajectories under different optimization methods. Compared to GRPO and GSPO, SPS maintains log-probability levels much closer to the base model, indicating that it avoids aggressively concentrating probability mass on a small subset of trajectories. In contrast, GRPO and GSPO exhibit noticeably higher (less negative) log-probabilities, suggesting stronger probability squeezing.

These results provide direct empirical evidence that SPS mitigates probability squeezing while preserving performance gains, clarifying the mechanism underlying SPS.
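One way to compute this diagnostic is sketched below; the greedy-decoding setup, the Hugging Face-style model interface, and the variable names are assumptions for illustration.

```python
# Hedged sketch of the squeezing diagnostic: greedily decode a response and score the
# generated tokens under the current policy, then average their log-probabilities.
import torch

@torch.no_grad()
def avg_trajectory_logprob(model, tokenizer, prompt, max_new_tokens=512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[1]
    # Greedy decoding of the trajectory.
    seq = model.generate(**inputs, do_sample=False, max_new_tokens=max_new_tokens)
    # Score every next-token prediction under the current policy.
    logits = model(seq).logits[:, :-1, :]
    logps = torch.log_softmax(logits, dim=-1)
    targets = seq[:, 1:]
    tok_logp = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the generated part of the sequence.
    gen_logp = tok_logp[:, prompt_len - 1:]
    # Higher (less negative) averages indicate stronger probability squeezing.
    return gen_logp.mean().item()
```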

## Appendix F Discussion

### F.1 How does SPS influence reasoning?

SPS does not merely flatten the output distribution at the logit level\. If its effect were equivalent to temperature scaling or entropy regularization, we would expect uniform entropy increases without meaningful changes in internal representations\. However, SPS operates on trajectory\-level objectives and reweights complete reasoning paths, which propagates gradients through intermediate transformer layers rather than only adjusting the final projection head\. Empirically, the gains in high\-K metrics exceed what would be predicted from Pass@1 improvements under an independent sampling assumption, suggesting reduced inter\-sample redundancy rather than simple probability smoothing\. Conceptually, post\-hoc logit flattening cannot induce new reasoning modes, whereas SPS reshapes probability mass across semantically distinct trajectories\.

### F.2 Why does the degenerate discrete distribution work well?

The degenerate discrete distribution is simply the empirical distribution over RL rollouts, i.e., a Monte Carlo estimator of the improved policy. Since the IRL stage only needs to match the relative structure within the sampled support, rather than reconstruct a continuous density, this empirical approximation is sufficient in practice. The target distribution is rollout-induced, so the empirical measure is a consistent surrogate. While a very small batch may increase variance, performance does not collapse in practice. This is partly due to the rare-but-many effect: although individual low-probability trajectories are hard to sample, their combinatorial cardinality is large, so typical batches still contain diverse underrepresented modes. Moreover, the IRL update is conservative (small learning rate), which prevents overfitting to sampling noise.
