# Don’t Retrain—Align: Adapting Autoregressive LMs to Diffusion LMs via Representation Alignment
Source: [https://arxiv.org/html/2605.06885](https://arxiv.org/html/2605.06885)
Fred Zhangzhi Peng (Duke University), Alexis Fox (Duke University), Anru R. Zhang (Duke University), Alexander Tong (AITHYRA)

###### Abstract

Diffusion language models (DLMs) have recently demonstrated capabilities that complement standard autoregressive (AR) models, particularly in non-sequential generation and bidirectional editing. Although recent work has shown that pretrained autoregressive checkpoints can be converted into diffusion language models, existing recipes primarily transfer parameters through continued denoising training with objective- and attention-level modifications. We instead ask whether the internal representation geometry learned by next-token prediction can be explicitly preserved during AR→DLM conversion. We hypothesize that much of the semantic structure learned by AR pretraining can transfer across generation orders, and thus DLM training should be viewed as relearning the decoding path rather than relearning language representations. To investigate this, we introduce Repr-Align, a *Representation Alignment* objective that adapts a bidirectional masked diffusion model to reuse representations from a pretrained AR model of identical architecture. Concretely, we align the hidden states of the DLM to the frozen AR model at every layer using cosine similarity, while optimizing the standard masked denoising objective. This simple alignment—with no adapters and no architectural changes beyond the attention mask—yields up to 4× training acceleration in our setting and is particularly effective in low-data regimes. Our results suggest that linguistic representations are universal regardless of generation order, and representation alignment could be a new go-to technique for training diffusion language models. Code is available at [https://github.com/pengzhangzhi/Open-dLLM](https://github.com/pengzhangzhi/Open-dLLM).

![Refer to caption](https://arxiv.org/html/2605.06885v1/x1.png)
(a) Adaptation speed.

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/modelscompare.png)
(b) Public DLM frontier.

Figure 1: Don’t retrain—align. Left: Repr-Align consistently accelerates AR→DLM adaptation on HumanEval pass@10, outperforming both AR fine-tuning and scratch training throughout early conversion. Right: The resulting oDLM achieves a favorable HumanEval pass@10 versus training-data trade-off among public DLMs.

## 1 Introduction

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/method_fig.png)
Figure 2: Overview of our method Repr-Align: we adapt a pretrained autoregressive (AR) transformer into a masked diffusion language model (DLM) by switching to bidirectional attention and training with a masked denoising objective, while anchoring layer-wise hidden states to a frozen AR backbone.

The dominant paradigm in large-scale language modeling has long been autoregressive (AR) sequence modeling. By factoring the joint probability distribution as a product of conditional probabilities, models such as GPT and Qwen have demonstrated strong general-purpose generation capabilities (Radford et al., [2019](https://arxiv.org/html/2605.06885#bib.bib39); Achiam et al., [2023](https://arxiv.org/html/2605.06885#bib.bib75); Touvron et al., [2023](https://arxiv.org/html/2605.06885#bib.bib38)). Recently, diffusion language models (DLMs) have emerged as an alternative formulation for text generation, spanning continuous diffusion over embeddings, discrete absorbing-state diffusion, likelihood-based diffusion LMs, masked diffusion LMs, and large-scale DLMs (Li et al., [2022](https://arxiv.org/html/2605.06885#bib.bib88); Austin et al., [2021a](https://arxiv.org/html/2605.06885#bib.bib53); Gulrajani and Hashimoto, [2023](https://arxiv.org/html/2605.06885#bib.bib33); Sahoo et al., [2024](https://arxiv.org/html/2605.06885#bib.bib21); Nie et al., [2025b](https://arxiv.org/html/2605.06885#bib.bib87); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82)). By framing generation as any-order decoding (Sohl-Dickstein et al., [2015](https://arxiv.org/html/2605.06885#bib.bib78); Ho et al., [2020](https://arxiv.org/html/2605.06885#bib.bib74); Yang et al., [2019](https://arxiv.org/html/2605.06885#bib.bib89); Ghazvininejad et al., [2019](https://arxiv.org/html/2605.06885#bib.bib90)), DLMs naturally support non-left-to-right behaviors such as infilling and iterative refinement (Sahoo et al., [2024](https://arxiv.org/html/2605.06885#bib.bib21); Gulrajani and Hashimoto, [2023](https://arxiv.org/html/2605.06885#bib.bib33); Chang et al., [2022](https://arxiv.org/html/2605.06885#bib.bib14)). Despite these advantages, scaling DLMs remains expensive: in theory, DLMs learn $L!$ paths to generate a length-$L$ sequence versus the single left-to-right path of AR models, and thus require $L$ times more compute. While several recent methods reduce this cost by initializing from pretrained AR checkpoints or converting AR models into DLMs (Gong et al., [2025a](https://arxiv.org/html/2605.06885#bib.bib2); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82); Fu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib84)), existing conversion recipes largely reuse AR parameters through continued denoising training, attention-pattern modifications, or sampling conventions. They do not explicitly constrain the converted DLM to preserve the internal representation geometry of the AR model.

In this paper, we question the need to treat ARs and DLMs as two disjoint paradigms. We start from a simple view: the hard part of language generation is learning language representations—the semantic and syntactic structure of the data—not committing to a particular generation order. Autoregressive pretraining has already learned strong internal features that organize this structure. If so, training a diffusion language model should not require relearning language representations from scratch. Instead, the remaining work is mainly mechanical: adapt these existing features to an iterative any-order decoder. This reframes DLM training from representation learning to an alignment problem, where we reuse the AR backbone for the representations and train the diffusion mechanism to operate in the same feature space but with any-order generation.

To test this hypothesis, we apply *Representation Alignment* (Yu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib3); Singh et al., [2025](https://arxiv.org/html/2605.06885#bib.bib4); Wu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib5); Jiang et al., [2025](https://arxiv.org/html/2605.06885#bib.bib6)) for the first time to a minimalist adaptation of a pretrained AR transformer into a masked diffusion language model ([Figure 2](https://arxiv.org/html/2605.06885#S1.F2)). Our setup uses two models with identical architecture: (i) a pretrained AR model with causal attention, and (ii) the same architecture initialized from the AR weights but with bidirectional attention. During training, we randomly mask a sequence and optimize the DLM to predict the masked tokens. In parallel, we feed the clean sequence into the frozen AR model (teacher-forced under causal attention) and extract its hidden states at each layer. Because the two networks share the same layer structure and hidden sizes, we can directly align their intermediate representations via a layer-wise cosine-similarity loss, without introducing adapters or additional parameters. Intuitively, the AR model provides a stable representational anchor, and diffusion training is reduced to learning an any-order decoding mechanism that operates in that anchored feature space. The method is designed to change as little as possible, so that any gains can be attributed to the reuse of AR features rather than to architectural modifications or heavy fine-tuning.

Our experiments support the reuse-through-alignment view: representation alignment turns AR→DLM conversion into a largely mechanical adaptation problem rather than one of relearning linguistic representations. On HumanEval, alignment improves conversion quality and the gains grow with model size ([Figure 3](https://arxiv.org/html/2605.06885#S4.F3)), increasing pass@10 from 24.9 to 31.0 at 0.6B and from 31.1 to 40.5 at 1.7B. Beyond quality, alignment enables substantially cheaper conversion through selective training ([Figure 4](https://arxiv.org/html/2605.06885#S4.F4)) and remains effective under a 0.8B-token *tiny* subset ([Figure 5](https://arxiv.org/html/2605.06885#S4.F5)). As a scale-up validation, we train a 4B oDLM using the same representation-preserving conversion recipe. Against Dream-7B, a strong public DLM that also builds on AR initialization, oDLM achieves a better HumanEval-family pass@10 trade-off ([Figure 1(b)](https://arxiv.org/html/2605.06885#S0.F1.sf2) and [Table 1](https://arxiv.org/html/2605.06885#S4.T1)): it improves HumanEval and HumanEval+ pass@10 by 2.39 and 2.40 points, respectively, despite using fewer parameters and a substantially lighter data-and-compute budget.

#### Contributions.

Our contributions are summarized as follows:

- We identify representation preservation as a missing ingredient in AR→DLM conversion. Instead of merely initializing from an AR checkpoint, we explicitly anchor the DLM student to the frozen AR model’s layer-wise hidden-state geometry during masked denoising training.
- We introduce a simple representation-preserving conversion recipe that requires no adapters or architectural changes beyond switching from causal to bidirectional attention. Across model scales, representation alignment improves conversion quality and sample efficiency, with larger gains at larger model sizes.
- We show that AR→DLM conversion is not inherently data- or parameter-update hungry. With representation alignment, training on a 0.8B-token subset can outperform training on the full 50B-token stream under the same step budget, and freezing embeddings and MLP blocks improves throughput by up to ~2× without degrading quality.
- As a scale-up validation, we train a 4B oDLM using the same recipe. Compared with Dream-7B, a strong public DLM that also leverages AR initialization, oDLM improves HumanEval and HumanEval+ pass@10 by 2.39 and 2.40 points, respectively, while using a smaller backbone and a substantially lighter data-and-compute budget.

## 2 Related Work

#### Diffusion language models.

Diffusion language models span continuous diffusion over embeddings, discrete categorical diffusion, likelihood-based diffusion LMs, and masked diffusion LMs (Li et al., [2022](https://arxiv.org/html/2605.06885#bib.bib88); Austin et al., [2021a](https://arxiv.org/html/2605.06885#bib.bib53); Gulrajani and Hashimoto, [2023](https://arxiv.org/html/2605.06885#bib.bib33); Sahoo et al., [2024](https://arxiv.org/html/2605.06885#bib.bib21); Nie et al., [2025a](https://arxiv.org/html/2605.06885#bib.bib8)). Recent large-scale systems such as LLaDA and Dream show that masked diffusion can support instruction following, reasoning, and code generation at billion-parameter scale (Nie et al., [2025b](https://arxiv.org/html/2605.06885#bib.bib87); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82)). These advances make DLMs a serious alternative to autoregressive generation, but competitive DLMs still require substantial diffusion-specific optimization. We provide a more detailed discussion of DLM formulations in [Section A.1](https://arxiv.org/html/2605.06885#A1.SS1).

#### Adapting autoregressive models to diffusion language models.

A growing line of work avoids training DLMs from scratch by converting pretrained AR checkpoints into denoising models (Gong et al., [2025a](https://arxiv.org/html/2605.06885#bib.bib2); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82); Xie et al., [2025](https://arxiv.org/html/2605.06885#bib.bib83); Fu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib84); Xue et al., [2025](https://arxiv.org/html/2605.06885#bib.bib85)). These methods establish that AR checkpoints are strong initializations, but they primarily adapt the objective, masking process, attention pattern, or sampling convention. Our work instead explicitly preserves the AR model’s internal representation geometry by aligning a bidirectional DLM student to a frozen same-architecture AR teacher, as formalized in [Section 3.3](https://arxiv.org/html/2605.06885#S3.SS3). [Sections A.2](https://arxiv.org/html/2605.06885#A1.SS2) and [A.3](https://arxiv.org/html/2605.06885#A1.SS3) give a fuller comparison with AR→DLM conversion, any-order generation, iterative masked decoding, and path-planning methods.

#### Representation Alignment for Generative Models.

Representation alignment has recently accelerated diffusion training by matching generative-model hidden states to representations from strong pretrained encoders (Yu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib3); Singh et al., [2025](https://arxiv.org/html/2605.06885#bib.bib4); Wu et al., [2025](https://arxiv.org/html/2605.06885#bib.bib5); Jiang et al., [2025](https://arxiv.org/html/2605.06885#bib.bib6)). Our setting differs in both teacher and purpose: the teacher is not an external encoder, but the exact AR model being converted, with the same tokenizer, architecture, hidden size, and initialization as the DLM student. Alignment therefore acts as representation preservation during mechanism adaptation, rather than feature import from another model; [Equation 2](https://arxiv.org/html/2605.06885#S3.E2) gives the exact objective. [Section A.4](https://arxiv.org/html/2605.06885#A1.SS4) expands on this distinction.

## 3 Method

We study the question: *are representations learned by next-token prediction sufficient for masked denoising generation?*

To isolate this factor, we keep the architecture fixed and change only what is necessary for diffusion-style denoising. Our method instantiates two transformers with identical parameterization and dimensionality: a pretrained autoregressive (AR) model with causal attention, and a masked diffusion model with bidirectional attention. We then train the diffusion model with the standard masked prediction objective, while adding a single layer-wise representation alignment loss to reuse the AR model’s internal features. Algorithm [1](https://arxiv.org/html/2605.06885#algorithm1) summarizes the overall conversion procedure. No adapters or auxiliary modules are introduced. The four design choices below map directly to the experimental setup in [Section 4.1](https://arxiv.org/html/2605.06885#S4.SS1) and the ablations in [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3).

Algorithm 1: Repr-Align AR→DLM conversion with layer-wise representation alignment.

```python
# Student f_D is initialized from the AR checkpoint; teacher f_AR stays frozen.
theta = theta_AR
freeze(f_AR); f_AR.eval()

for x in data_stream:
    r = Uniform(0, 1)                        # per-sequence mask ratio
    M = sample_positions(x, r)               # mask set M
    x_tilde = x.clone(); x_tilde[M] = mask_id

    # Student: corrupted input under bidirectional attention.
    logits, H_D = f_D(x_tilde, bidir=True, output_hidden_states=True)
    # Teacher: clean input under causal attention, no gradients.
    with no_grad():
        _, H_AR = f_AR(x, causal=True, output_hidden_states=True)

    loss_diff = CE(shift_logits_1(logits)[M], x[M])   # Eq. (1), shift convention
    loss_align = (1 - cos(H_D, H_AR)).mean()          # Eq. (2), all layers
    loss = loss_diff + lambda_align * loss_align      # Eq. (3)
    step_optimizer(theta, loss)
```

### 3.1 Two models, same architecture

Let $x=(x_1,\dots,x_n)\in\mathcal{V}^n$ be a token sequence, and let $\mathcal{V}$ include a special mask token $\langle\mathrm{M}\rangle$. We define: (i) an autoregressive transformer $f_{\mathrm{AR}}(\cdot;\theta_{\mathrm{AR}})$ with a *causal* attention mask, pretrained by next-token prediction; and (ii) a diffusion transformer $f_{\mathrm{D}}(\cdot;\theta)$ with *bidirectional* attention. Crucially, $f_{\mathrm{AR}}$ and $f_{\mathrm{D}}$ share the same layer structure and hidden size $d$; the only architectural difference is the attention mask. We keep $\theta_{\mathrm{AR}}$ frozen throughout training. In practice, we initialize $\theta$ from $\theta_{\mathrm{AR}}$ and switch the attention mask from causal to bidirectional, so that any gains can be attributed to mechanism adaptation rather than new capacity.

Let $h^{(\ell)}_{\mathrm{AR}}(x)\in\mathbb{R}^{n\times d}$ and $h^{(\ell)}_{\mathrm{D}}(x)\in\mathbb{R}^{n\times d}$ denote the hidden states at layer $\ell\in\{1,\dots,L\}$ for the two models. This matched-conversion design is evaluated under the fixed-budget protocol in [Sections 4.1](https://arxiv.org/html/2605.06885#S4.SS1) and [4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px1).
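As a concrete illustration of this matched design, the following minimal PyTorch sketch builds the one piece that differs between the two models: the attention mask. The helper name `attention_mask` is ours, not from the released code.

```python
import torch

def attention_mask(seq_len: int, causal: bool) -> torch.Tensor:
    """Boolean mask where True means 'may attend'. The AR teacher uses the
    causal (lower-triangular) variant; the DLM student uses the full matrix.
    All other weights are shared at initialization."""
    if causal:
        return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return torch.ones(seq_len, seq_len, dtype=torch.bool)

teacher_mask = attention_mask(8, causal=True)   # f_AR
student_mask = attention_mask(8, causal=False)  # f_D, initialized from f_AR
```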

### 3.2 Masked diffusion objective

We train $f_{\mathrm{D}}$ as a masked denoiser. We sample a mask set $M\subseteq\{1,\dots,n\}$ and construct a corrupted input $\tilde{x}$ by replacing $x_i$ with $\langle\mathrm{M}\rangle$ for $i\in M$. The diffusion model predicts the masked tokens conditioned on $\tilde{x}$, and we optimize cross-entropy only on masked positions:

$$\mathcal{L}_{\mathrm{diff}}(\theta)=\mathbb{E}_{x,M}\left[\sum_{i\in M}\mathrm{CE}\!\left(p_{\theta}(\cdot\mid\tilde{x})_{i},\,x_{i}\right)\right],\qquad(1)$$

where $p_{\theta}(\cdot\mid\tilde{x})_{i}$ is the diffusion model’s predicted distribution at position $i$. This denoising objective is the shared training loss used across the baseline and aligned models in [Section 4.1](https://arxiv.org/html/2605.06885#S4.SS1).

#### Shift convention.

To match the next-token indexing convention inherited from the AR checkpoint, we apply the standard shift operation used in AR→DLM adaptation: the student produces logits at position $i$, which are shifted by one position before computing the denoising cross-entropy and during sampling. We use this same shift convention for both the baseline and aligned models; implementation details are provided in [Section B.1](https://arxiv.org/html/2605.06885#A2.SS1).
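A minimal PyTorch sketch of Equation (1) together with the shift convention, assuming a HuggingFace-style model whose forward pass returns `.logits`; the function name and masking details are illustrative rather than the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def masked_denoising_loss(model, x, mask_id):
    """Eq. (1) with the AR shift: cross-entropy on masked positions only."""
    B, n = x.shape
    r = torch.rand(B, 1, device=x.device)            # per-sequence mask ratio
    M = torch.rand(B, n, device=x.device) < r        # True = masked
    x_tilde = x.masked_fill(M, mask_id)              # corrupted input

    logits = model(x_tilde).logits                   # (B, n, |V|)
    # Shift convention: logits at position i score the token at i+1,
    # matching the next-token indexing of the AR checkpoint.
    ce = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        x[:, 1:].reshape(-1),
        reduction="none",
    ).view(B, n - 1)
    mask = M[:, 1:].float()                          # loss only where masked
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```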

### 3.3 Layer-wise representation alignment

To reuse AR features, we run a frozen autoregressive teacher on the *clean* sequence $x$ under causal attention, and anchor the diffusion student—which consumes the corrupted $\tilde{x}$ under bidirectional attention—to the teacher’s hidden states. Because the two networks share identical architecture and hidden dimensionality, we align their layer-wise representations directly without adapters.

Let $h^{(\ell)}_{\mathrm{AR}}(x)\in\mathbb{R}^{n\times d}$ denote the teacher hidden states at layer $\ell$ given clean $x$, and $h^{(\ell)}_{\mathrm{D}}(\tilde{x})\in\mathbb{R}^{n\times d}$ denote the student hidden states given corrupted $\tilde{x}$. We minimize a cosine distance loss:

$$\mathcal{L}_{\mathrm{align}}(\theta)=\frac{1}{L}\sum_{\ell=1}^{L}\mathbb{E}_{x,M}\left[\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}\left(1-\cos\!\Big(h^{(\ell)}_{\mathrm{D}}(\tilde{x})_{i},\,\mathrm{stopgrad}\big(h^{(\ell)}_{\mathrm{AR}}(x)_{i}\big)\Big)\right)\right],\qquad(2)$$

where $\mathcal{I}$ is the aligned position set (default: all positions), and the teacher is run in evaluation mode with stop-gradient. *Intuitively, the teacher provides a stable representational coordinate system induced by next-token pretraining, and the student learns a denoising mechanism that operates within this coordinate system.* The choice of cosine distance, aligned layers, and alignment strength is ablated in [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3), while the exact hidden-state tuple and masking positions used for alignment are specified in [Section B.4](https://arxiv.org/html/2605.06885#A2.SS4).
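A minimal PyTorch sketch of Equation (2), assuming both models expose per-layer hidden states as sequences of `(B, n, d)` tensors (as HuggingFace's `output_hidden_states=True` does); the function name is ours:

```python
import torch
import torch.nn.functional as F

def alignment_loss(h_student, h_teacher):
    """Eq. (2) over all positions: layer-wise cosine distance between the
    student's hidden states (corrupted input, bidirectional attention) and
    the frozen teacher's hidden states (clean input, causal attention)."""
    per_layer = []
    for h_d, h_ar in zip(h_student, h_teacher):
        # detach = stop-gradient on the teacher side
        cos = F.cosine_similarity(h_d, h_ar.detach(), dim=-1)  # (B, n)
        per_layer.append((1.0 - cos).mean())                   # mean over batch, positions
    return torch.stack(per_layer).mean()                       # mean over layers
```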

### 3.4 Training objective

The full objective is a weighted sum:

$$\mathcal{L}(\theta)=\mathcal{L}_{\mathrm{diff}}(\theta)+\lambda\,\mathcal{L}_{\mathrm{align}}(\theta),\qquad(3)$$

with a scalar $\lambda$ (set to 10 by default) controlling the strength of the anchor. We optimize $\theta$ while keeping $\theta_{\mathrm{AR}}$ fixed. Overall, this procedure changes as little as possible: the architecture is shared, the pretrained AR representations are preserved, and diffusion training is reduced to learning an any-order denoising mechanism that operates within an already-formed feature space. In addition to the standard masked diffusion model loss, we include the PAPL loss from prior work (Peng et al., [2026](https://arxiv.org/html/2605.06885#bib.bib80)) as part of the default DLM training recipe, with weight 1. [Sections 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px1), [4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px2) and [4.3](https://arxiv.org/html/2605.06885#S4.SS3) test whether the alignment term in [Equation 3](https://arxiv.org/html/2605.06885#S3.E3) improves conversion quality and how sensitive it is to $\lambda$.
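A sketch of how the terms combine in one training step, building on the two loss sketches above; `papl_loss` is a placeholder for the PAPL term of Peng et al. (2026), not its reference implementation:

```python
lam_align = 10.0   # default anchor strength (ablated in Section 4.3)
w_papl = 1.0       # PAPL weight, enabled in all runs

loss = (
    loss_diff                    # Eq. (1): masked denoising cross-entropy
    + lam_align * loss_align     # Eq. (2): layer-wise cosine anchor
    + w_papl * papl_loss         # PAPL term from prior work (placeholder)
)
loss.backward()                  # only the student's theta receives gradients
```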

## 4 Experiments

### 4.1 Experimental Setup

#### AR-to-DLM conversion setting.

We study efficient adaptation from an autoregressive (AR) causal language model to a masked diffusion language model (DLM) using Qwen3 checkpoints (Yang et al., [2025](https://arxiv.org/html/2605.06885#bib.bib91)) at three scales (0.6B, 1.7B, and 4B parameters). For each scale, we treat the original causal model as a frozen teacher and initialize the DLM student from the same checkpoint, changing only the attention mask from causal to bidirectional. This setup instantiates the same-architecture construction in [Section 3.1](https://arxiv.org/html/2605.06885#S3.SS1).

#### Training data.

We train on the Nemotron-SFT-Code dataset (NVIDIA et al., [2025](https://arxiv.org/html/2605.06885#bib.bib81)), a large synthetically generated and curated SFT-style corpus covering STEM, academic, reasoning, and multilingual instruction data, with code-focused instruction data in the Nemotron-SFT-Code subset. The resulting training stream contains approximately 70M sequences and ~50B tokens. To study data efficiency, we also consider a *tiny* subset constructed by uniformly subsampling to ~0.8B tokens. In both the full and tiny settings, training examples are sampled with replacement, so the optimization budget is controlled by steps (and thus total tokens processed) rather than epochs; for fixed-compute comparisons, the number of optimization steps is matched. The tiny-data comparison in [Section 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px5) uses this matched-step construction.
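A sketch of this matched-step construction in PyTorch; `make_stream` and `tiny_frac` are our names, and `tiny_frac` only approximates the ~0.8B / 50B token ratio for illustration:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, Subset

def make_stream(dataset, tiny: bool, batch_size: int, num_steps: int,
                tiny_frac: float = 0.016):
    """Build a matched-step training stream. Both settings sample with
    replacement, so the budget is fixed by num_steps rather than epochs."""
    if tiny:
        keep = torch.randperm(len(dataset))[: int(len(dataset) * tiny_frac)]
        dataset = Subset(dataset, keep.tolist())
    sampler = RandomSampler(dataset, replacement=True,
                            num_samples=num_steps * batch_size)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```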

#### Representation alignment.

In addition to the denoising loss, we apply layer-wise representation alignment to anchor the student to the frozen AR teacher. The teacher consumes the *clean* sequence $x$ (causal attention), while the student consumes the corrupted $\tilde{x}$ (bidirectional attention). We align the output of every transformer block at every token position using a cosine distance loss, with teacher features stop-gradiented and the teacher run in evaluation mode. We use $\lambda=10$ as the default alignment weight, selected from the ablation in [Table 2](https://arxiv.org/html/2605.06885#S4.T2). We additionally enable the PAPL loss (Peng et al., [2026](https://arxiv.org/html/2605.06885#bib.bib80)) in *all* runs with weight 1. Unless stated otherwise, we treat PAPL as part of the default DLM training recipe and include it in *all* methods (baseline and aligned) so comparisons isolate the effect of representation alignment. [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3) varies the metric, layer set, and $\lambda$ in this alignment term.

#### Optimization, batching, and evaluation.

We optimize using AdamW with learning rate $3\times 10^{-4}$, cosine decay, warmup ratio 0.001, weight decay 0.01, gradient clipping (max norm 1.0), and mixed-precision training. The maximum sequence length is 4096, and the global batch size is 96 sequences per optimization step. We evaluate code generation on HumanEval (Chen et al., [2021](https://arxiv.org/html/2605.06885#bib.bib92)), MBPP (Austin et al., [2021b](https://arxiv.org/html/2605.06885#bib.bib93)), and their EvalPlus variants (Liu et al., [2023](https://arxiv.org/html/2605.06885#bib.bib94)). HumanEval uses the canonical docstring prompts, and all evaluations are zero-shot. For decoding, we use P2-self sampling (Peng et al., [2025](https://arxiv.org/html/2605.06885#bib.bib28)) with 128 sampling steps, `max_new_tokens=128`, temperature 0.8, and top-$p$ 0.95. Full data preprocessing, optimization, decoding, and evaluation configurations are reported in [Sections B.2](https://arxiv.org/html/2605.06885#A2.SS2), [B.6](https://arxiv.org/html/2605.06885#A2.SS6) and [B.7](https://arxiv.org/html/2605.06885#A2.SS7).
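For reference, the pass@k numbers reported below can be computed with the standard unbiased estimator introduced with HumanEval (Chen et al., 2021); a direct transcription:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k (Chen et al., 2021): n samples drawn, c of them correct.
    Returns 1 - C(n-c, k) / C(n, k), the probability that at least one of k
    randomly chosen samples passes."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. 128 samples with 19 correct: pass_at_k(128, 19, 10) ≈ 0.81
```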

### 4.2 Results

The experiments are organized around four questions induced by [Section 3](https://arxiv.org/html/2605.06885#S3). First, does the alignment term in [Equation 2](https://arxiv.org/html/2605.06885#S3.E2) improve AR→DLM adaptation under a matched training budget? Second, do the gains scale with model size? Third, does representation preservation reduce the need to update all parameters? Fourth, does conversion remain effective when the adaptation data is sharply reduced? The following paragraphs answer these questions in turn.

#### Repr-Align improves adaptation efficiency and quality.

[Figure 3](https://arxiv.org/html/2605.06885#S4.F3) evaluates autoregressive-to-diffusion conversion on HumanEval under a fixed 200k-step budget. Starting from the same pretrained Qwen3 checkpoint, the baseline performs masked diffusion training (MDLM-style masking with the shift convention and PAPL), while our method additionally anchors the student to the frozen AR teacher via layer-wise cosine alignment. Repr-Align consistently improves pass@10 throughout training, indicating substantially better sample efficiency during adaptation. At the 200k-step cutoff, the aligned model achieves higher pass@10 than the baseline (a gain of 6.1 points at 0.6B and 9.4 points at 1.7B), demonstrating that anchoring pretrained representations is beneficial even when the student is trained with a bidirectional denoising objective. This validates the alignment hypothesis encoded by [Equation 2](https://arxiv.org/html/2605.06885#S3.E2) under the matched conversion setup in [Section 3.1](https://arxiv.org/html/2605.06885#S3.SS1).

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/repralign.png)

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/repralign_scales.png)

Figure 3: Repr-Align improves both adaptation speed and final quality. Left: HumanEval pass@10 vs. training steps for Qwen3-0.6B during AR→DLM conversion; adding representation alignment to the frozen AR teacher improves sample efficiency throughout training. Right: pass@10 results for 0.6B and 1.7B models; representation alignment provides larger gains at 1.7B than at 0.6B.
#### Alignment gains grow with model size.

[Figure 3](https://arxiv.org/html/2605.06885#S4.F3) summarizes the same conversion procedure across model scales. For each size, we compare the final checkpoint after 200k steps for the baseline and aligned variants under identical evaluation. We observe that alignment benefits increase with model capacity: the absolute improvement from alignment is larger for the 1.7B model than for the 0.6B model. This scaling trend supports the view that AR pretraining learns a strong representational geometry that is increasingly valuable to preserve as model capacity grows, while the remaining adaptation primarily concerns the generation mechanism induced by the attention mask and denoising dynamics. This directly tests whether the preservation term in [Equation 3](https://arxiv.org/html/2605.06885#S3.E3) becomes more useful as the AR teacher’s representation capacity increases.

#### oDLM is competitive with public diffusion language models.

[Figure 1(b)](https://arxiv.org/html/2605.06885#S0.F1.sf2) gives the high-level comparison that motivates the paper: after conversion, oDLM reaches the public frontier of diffusion language models on code-generation pass@10 while using pretrained AR representations rather than retraining a diffusion LM from scratch. [Table 1](https://arxiv.org/html/2605.06885#S4.T1) reports the full benchmark breakdown across HumanEval, HumanEval+, MBPP, and MBPP+ against recent public DLM systems, with code-focused diffusion LMs such as DiffuCoder and Dream-Coder providing closely related context for this evaluation setting (Nie et al., [2025b](https://arxiv.org/html/2605.06885#bib.bib87); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82); Gong et al., [2025b](https://arxiv.org/html/2605.06885#bib.bib86); Xie et al., [2025](https://arxiv.org/html/2605.06885#bib.bib83)). The strongest oDLM checkpoint is especially competitive on pass@10, which is the metric most directly tied to iterative diffusion sampling quality. This comparison evaluates the full objective in [Equation 3](https://arxiv.org/html/2605.06885#S3.E3) rather than the alignment term in isolation.

Table 1: Code generation results on HumanEval, HumanEval+, MBPP, and MBPP+ benchmarks, compared with recent public diffusion language models (Nie et al., [2025b](https://arxiv.org/html/2605.06885#bib.bib87); Ye et al., [2025](https://arxiv.org/html/2605.06885#bib.bib82)).
#### Freezing large blocks improves throughput with a mild quality gain.

If AR pretraining already provides strong representations, and representation alignment preserves the teacher’s internal feature space during conversion, then AR→DLM adaptation should not require updating the entire network. We test this by freezing large parameter blocks in the aligned student while keeping the rest of the training protocol fixed. [Figure 4](https://arxiv.org/html/2605.06885#S4.F4) shows a favorable performance–efficiency trade-off at the 1.7B scale: freezing token embeddings and, more aggressively, freezing both embeddings and MLP blocks yields a substantial increase in training throughput (up to ~2×), while maintaining and even slightly improving HumanEval pass@10. This supports the view that, once representations are anchored, the remaining learning signal is concentrated in adapting the denoising mechanism under bidirectional attention, and freezing provides a practical knob to reduce conversion cost without sacrificing quality. This result tests the mechanism-adaptation interpretation of [Section 3.3](https://arxiv.org/html/2605.06885#S3.SS3); the freezing protocol is specified in [Section B.9](https://arxiv.org/html/2605.06885#A2.SS9), and a sketch of the freezing knob follows the figure below.

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/freezing.png)
Figure 4: Freezing improves training efficiency with a mild performance gain (1.7B). All runs use representation alignment and share the same training protocol and budget; we vary only which parameter blocks are frozen. Freezing embeddings and MLP blocks increases throughput by up to ~2× while slightly improving HumanEval pass@10.
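A minimal sketch of this selective-training knob, assuming common HuggingFace-style parameter naming (e.g. `embed_tokens`, `mlp`); the substring matches may need adjusting for a particular checkpoint:

```python
def freeze_blocks(model, freeze_embeddings: bool = True, freeze_mlp: bool = True):
    """Disable gradients for embeddings and/or MLP blocks, leaving
    attention and normalization parameters trainable."""
    for name, param in model.named_parameters():
        if freeze_embeddings and "embed" in name:
            param.requires_grad = False
        if freeze_mlp and ".mlp." in name:
            param.requires_grad = False

# Pass only trainable parameters to the optimizer afterwards, e.g.:
# opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=3e-4)
```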
#### Alignment is especially data-efficient: tiny data can improve conversion.

To test whether AR-to-DLM conversion is inherently data-hungry, we reduce the training corpus from the full 50B-token Nemotron-SFT-Code stream to a *tiny* 0.8B-token random subsample, while keeping the optimization budget fixed (same batch size, sequence length, and number of steps; tiny data is sampled with replacement). [Figure 5](https://arxiv.org/html/2605.06885#S4.F5) shows a clear and somewhat surprising outcome: with representation alignment, training on the tiny subset yields *higher* HumanEval pass@10 than training on the full data stream at the same training steps. This indicates that, once pretrained AR representations are preserved, the remaining learning problem is primarily *mechanism adaptation* (adapting bidirectional attention and denoising dynamics), which can be accomplished with remarkably little data under a fixed-step budget. This validates the low-data implication of anchoring the student to the teacher representation in [Equation 2](https://arxiv.org/html/2605.06885#S3.E2).

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/tiny_pass1.png)
(a) HumanEval pass@1.

![Refer to caption](https://arxiv.org/html/2605.06885v1/figs/tiny.png)
(b) HumanEval pass@10.

Figure 5: Alignment is not data-hungry: a tiny subset can improve conversion. The *tiny* run trains on a 0.8B-token random subsample instead of the full 50B-token stream. With representation alignment, the tiny subset yields consistently higher pass@1 (left) and pass@10 (right), supporting the view that AR→DLM conversion is primarily mechanism adaptation that does not require massive training data.
#### Practical takeaway.

Across scales and data regimes, our experiments support a simple recipe for training diffusion language models efficiently. Given a pretrained AR transformer, we can convert it into a competitive masked diffusion model by switching to bidirectional attention, training with an MDLM-style denoising objective under the shift convention, and anchoring intermediate representations to the frozen AR teacher with a layer-wise cosine loss. When compute is constrained, freezing token embeddings and even MLP blocks further improves efficiency with little or no degradation in quality. When data is constrained, alignment enables effective adaptation even on a tiny subset, suggesting that large diffusion pretraining costs are not intrinsic, but largely reflect relearning representations that AR pretraining already provides.

### 4.3 Ablation Studies

We ablate the main design choices in [Equations 2](https://arxiv.org/html/2605.06885#S3.E2) and [3](https://arxiv.org/html/2605.06885#S3.E3): the distance used to match hidden states, the strength $\lambda$ of the alignment term, and the set of layers included in the alignment loss. All ablations use the same AR→DLM conversion protocol unless otherwise stated: the student is initialized from the AR checkpoint, trained with bidirectional masked denoising, evaluated on HumanEval, and decoded with the same sampling configuration. This setup isolates whether the gain comes from preserving the pretrained representation geometry rather than from changes in architecture or decoding. [Table 2](https://arxiv.org/html/2605.06885#S4.T2) summarizes the main ablation outcomes. Cosine alignment is the best default metric; $\lambda=10$ gives the strongest overall anchor; and all-layer alignment provides the best pass@1 while maintaining competitive pass@10.

Table 2: Ablations of representation alignment on HumanEval.

#### Cosine alignment is better than matching hidden states by L2.

We first compare two natural choices for the representation loss: an L2 loss on hidden states and a cosine-distance loss. This directly tests the cosine-distance choice in [Equation 2](https://arxiv.org/html/2605.06885#S3.E2). Cosine alignment improves HumanEval pass@1 from 12.0 to 18.0 and pass@10 from 25.0 to 31.0, suggesting that the useful signal in the AR teacher is primarily geometric rather than metric. Because transformer hidden-state norms vary across layers, tokens, and training states, an L2 objective can be dominated by scale; cosine alignment instead preserves representation direction. We therefore use cosine alignment as the default representation loss.
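A toy illustration of why L2 can be dominated by scale while cosine is not; the tensors are random stand-ins, not real hidden states:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
h_teacher = torch.randn(4, 16, 256)    # (B, n, d) toy hidden states
h_student = 5.0 * h_teacher            # same direction, 5x the norm

l2 = (h_student - h_teacher).pow(2).mean()
cos_dist = (1.0 - F.cosine_similarity(h_student, h_teacher, dim=-1)).mean()

print(l2)        # large, driven entirely by the norm mismatch
print(cos_dist)  # ~0: directions agree, which is the signal alignment keeps
```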

#### The alignment weight controls a trade-off between anchoring and adaptation.

We next vary the alignment weight $\lambda$ in [Equation 3](https://arxiv.org/html/2605.06885#S3.E3), which controls the trade-off between anchoring to the AR teacher and adapting to the denoising objective. The sweep shows a clear trade-off: weak alignment underuses the AR teacher, while excessive anchoring constrains the diffusion student. Performance rises from $\lambda=1$ to $\lambda=10$, reaching 18.0 pass@1 and 31.0 pass@10, but drops at $\lambda=20$. This supports treating representation alignment as an auxiliary constraint rather than exact teacher imitation. We therefore set $\lambda=10$ by default.

#### Where to align matters.

Finally, we partition hidden states into lower, middle, and upper thirds to test where the AR teacher is most useful. Middle- and upper-layer anchors improve pass@10, but only all-layer alignment gives the strongest pass@1 while preserving competitive pass@10. The transferable signal therefore appears distributed across depth, and we adopt all-layer cosine alignment as the default.

## 5 Conclusion

In this work, we challenge the prevailing dichotomy between autoregressive and diffusion language models. Our empirical investigation substantiates that the rich semantic representations acquired during standard autoregressive pretraining are not specific to sequential decoding, but are broadly applicable to non-autoregressive generation. By introducing Repr-Align, we demonstrate that it is possible to inherit this semantic topology directly, achieving state-of-the-art performance at a fraction of the standard pretraining cost.

Our findings have broader implications for discrete generation. Alternative frameworks such as uniform, latent, and simplex diffusion models have historically struggled to scale, often due to optimization difficulties. By anchoring new generation mechanisms to a robust AR prior, Repr-Align offers a recipe for adapting pretrained representations rather than training each paradigm ab initio. We hope this encourages a shift toward mechanism alignment as a practical way to unlock new forms of generation.

## Acknowledgments and Disclosure of Funding

We thank Jarrid Rector-Brooks for helpful discussions and assistance. F.Z.P. and A.R.Z. are partially supported by NIH R01HL169347.

## References

- J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. (2023). GPT-4 technical report. arXiv.
- J. Austin, D. D. Johnson, J. Ho, D. Tarlow, and R. van den Berg (2021a). Structured denoising diffusion models in discrete state-spaces. arXiv.
- J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, and C. Sutton (2021b). Program synthesis with large language models. [arXiv:2108.07732](https://arxiv.org/abs/2108.07732).
- A. Campbell, J. Benton, V. D. Bortoli, T. Rainforth, G. Deligiannidis, and A. Doucet (2022). A continuous time framework for discrete denoising models.
- H. Chang, H. Zhang, L. Jiang, C. Liu, and W. T. Freeman (2022). MaskGIT: masked generative image transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11315–11325.
- M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, et al. (2021). Evaluating large language models trained on code. [arXiv:2107.03374](https://arxiv.org/abs/2107.03374).
- Y. Fu, L. Whalen, Z. Ye, X. Dong, S. Diao, J. Liu, C. Wu, H. Zhang, E. Xie, S. Han, M. Khadkevich, J. Kautz, Y. C. Lin, and P. Molchanov (2025). Efficient-DLM: from autoregressive to diffusion language models, and beyond in speed. [arXiv:2512.14067](https://arxiv.org/abs/2512.14067).
- M. Ghazvininejad, O. Levy, Y. Liu, and L. Zettlemoyer (2019). Mask-Predict: parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 6112–6121. [Link](https://aclanthology.org/D19-1633/).
- S. Gong, S. Agarwal, Y. Zhang, J. Ye, L. Zheng, M. Li, C. An, P. Zhao, W. Bi, J. Han, H. Peng, and L. Kong (2025a). Scaling diffusion language models via adaptation from autoregressive models. In International Conference on Learning Representations.
- S. Gong, R. Zhang, H. Zheng, J. Gu, N. Jaitly, L. Kong, and Y. Zhang (2025b). DiffuCoder: understanding and improving masked diffusion models for code generation. [arXiv:2506.20639](https://arxiv.org/abs/2506.20639).
- I. Gulrajani and T. Hashimoto (2023). Likelihood-based diffusion language models. Neural Information Processing Systems.
- J. Ho, A. Jain, and P. Abbeel (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, pp. 6840–6851.
- D. Jiang, M. Wang, L. Li, L. Zhang, H. Wang, W. Wei, G. Dai, Y. Zhang, and J. Wang (2025). No other representation component is needed: diffusion transformers can provide representation guidance by themselves. arXiv preprint arXiv:2505.02831.
- X. L. Li, J. Thickstun, I. Gulrajani, P. Liang, and T. B. Hashimoto (2022). Diffusion-LM improves controllable text generation. In Advances in Neural Information Processing Systems.
- J. Liu, C. S. Xia, Y. Wang, and L. Zhang (2023). Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. In Advances in Neural Information Processing Systems. [Link](https://openreview.net/forum?id=1qvx610Cu7).
- A. Lou, C. Meng, and S. Ermon (2023). Discrete diffusion modeling by estimating the ratios of the data distribution. In International Conference on Machine Learning.
- S. Nie, F. Zhu, C. Du, T. Pang, Q. Liu, G. Zeng, M. Lin, and C. Li (2025a). Scaling up masked diffusion models on text. International Conference on Learning Representations.
- S. Nie, F. Zhu, Z. You, X. Zhang, J. Ou, J. Hu, J. Zhou, Y. Lin, J. Wen, and C. Li (2025b). Large language diffusion models. [arXiv:2502.09992](https://arxiv.org/abs/2502.09992).
- NVIDIA et al. (2025). NVIDIA Nemotron Nano 2: an accurate and efficient hybrid Mamba-Transformer reasoning model. [arXiv:2508.14444](https://arxiv.org/abs/2508.14444).
- F. Z. Peng, Z. Bezemek, S. Patel, J. Rector-Brooks, S. Yao, A. J. Bose, A. Tong, and P. Chatterjee (2025). Path planning for masked diffusion model sampling. [arXiv:2502.03540](https://arxiv.org/abs/2502.03540).
- F. Z. Peng, Z. Bezemek, J. Rector-Brooks, S. Zhang, M. M. Bronstein, A. Zhang, J. Bose, and A. Tong (2026). Planner aware path learning in diffusion language models training. In The Fourteenth International Conference on Learning Representations. [Link](https://openreview.net/forum?id=lAlI5FuIf7).
- A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019). Language models are unsupervised multitask learners. Preprint.
- S. S. Sahoo, M. Arriola, A. Gokaslan, E. M. Marroquin, A. M. Rush, Y. Schiff, J. T. Chiu, and V. Kuleshov (2024). Simple and effective masked diffusion language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
- A. Shih, D. Sadigh, and S. Ermon (2022). Training and inference on any-order autoregressive models the right way. Neural Information Processing Systems.
- J. Singh, X. Leng, Z. Wu, L. Zheng, R. Zhang, E. Shechtman, and S. Xie (2025). What matters for representation alignment: global information or spatial structure? arXiv preprint arXiv:2512.10794.
- J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, PMLR 37, pp. 2256–2265. [Link](https://proceedings.mlr.press/v37/sohl-dickstein15.html).
- H. Touvron, L. Martin, K. R. Stone, P. Albert, A. Almahairi, Y. Babaei, et al. (2023). Llama 2: open foundation and fine-tuned chat models. arXiv.
- G. Wu, S. Zhang, R. Shi, S. Gao, Z. Chen, L. Wang, Z. Chen, H. Gao, Y. Tang, J. Yang, M. Cheng, and X. Li (2025). Representation entanglement for generation: training diffusion transformers is much easier than you think. arXiv preprint arXiv:2507.01467.
- Z. Xie, J. Ye, L. Zheng, J. Gao, J. Dong, Z. Wu, X. Zhao, S. Gong, X. Jiang, Z. Li, and L. Kong (2025). Dream-Coder 7B: an open diffusion language model for code. [arXiv:2509.01142](https://arxiv.org/abs/2509.01142).
- S. Xue, T. Xie, T. Hu, Z. Feng, J. Sun, K. Kawaguchi, Z. Li, and Z. Ma (2025). Any-order GPT as masked diffusion model: decoupling formulation and architecture. [arXiv:2506.19935](https://arxiv.org/abs/2506.19935).
- A. Yang, A. Li, B. Yang, B. Zhang, B. Hui, B. Zheng, et al. (2025). Qwen3 technical report. [arXiv:2505.09388](https://arxiv.org/abs/2505.09388).
- Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019). XLNet: generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems.
- J. Ye, Z. Xie, L. Zheng, J. Gao, Z. Wu, X. Jiang, Z. Li, and L. Kong (2025). Dream 7B: diffusion large language models. [arXiv:2508.15487](https://arxiv.org/abs/2508.15487).
- S. Yu, S. Kwak, H. Jang, J. Jeong, J. Huang, J. Shin, and S. Xie (2025). Representation alignment for generation: training diffusion transformers is easier than you think. In International Conference on Learning Representations.


## Appendix A Extended Related Work

This appendix provides a more detailed comparison with the closest lines of work. The purpose is not to survey all diffusion or language-generation methods exhaustively, but to clarify the specific position of our method: we study AR→DLM conversion through *representation preservation*. Existing conversion methods show that pretrained autoregressive checkpoints are useful initializations for diffusion language models, but they primarily adapt the objective, attention pattern, masking convention, or sampling procedure. Our method instead explicitly preserves the hidden-state geometry of the pretrained AR model by anchoring the DLM student to a frozen same-architecture AR teacher. This appendix supports the positioning claims made in [Section 2](https://arxiv.org/html/2605.06885#S2) and the methodological distinction formalized in [Section 3.3](https://arxiv.org/html/2605.06885#S3.SS3).

### A.1 Diffusion language models

Diffusion language models have been developed through several distinct formulations. Early approaches introduced diffusion into language generation by applying continuous diffusion over word embeddings or latent representations [Li et al., 2022](https://arxiv.org/html/2605.06885#bib.bib88). In parallel, discrete diffusion models define corruption and denoising processes directly over categorical state spaces, including structured discrete transition kernels, continuous-time Markov formulations, ratio-estimation objectives, and likelihood-based language-modeling objectives [Austin et al., 2021a](https://arxiv.org/html/2605.06885#bib.bib53); [Campbell et al., 2022](https://arxiv.org/html/2605.06885#bib.bib54); [Lou et al., 2023](https://arxiv.org/html/2605.06885#bib.bib25); [Gulrajani and Hashimoto, 2023](https://arxiv.org/html/2605.06885#bib.bib33). More recent masked diffusion language models simplify the discrete diffusion process by using absorbing mask corruption and masked-token denoising objectives, yielding strong likelihood and generation performance with a training objective closely related to masked language modeling [Sahoo et al., 2024](https://arxiv.org/html/2605.06885#bib.bib21); [Nie et al., 2025a](https://arxiv.org/html/2605.06885#bib.bib8). Large-scale systems such as LLaDA and Dream further demonstrate that diffusion language models can support instruction following, reasoning, and general-purpose language generation at billion-parameter scale [Nie et al., 2025b](https://arxiv.org/html/2605.06885#bib.bib87); [Ye et al., 2025](https://arxiv.org/html/2605.06885#bib.bib82).

Our work is orthogonal to the choice of diffusion parameterization. We use a masked denoising objective as the target conversion objective, but we do not propose a new discrete diffusion process, transition kernel, or likelihood estimator. Instead, we study how a pretrained AR model can be converted into a DLM without relearning its internal language representations from scratch. This changes the optimization problem from full diffusion pretraining to representation-preserving mechanism adaptation: the DLM must learn to denoise under bidirectional attention and any-order generation, while remaining anchored to the semantic coordinate system already formed by AR pretraining. The masked-denoising objective used in the main method is given in [Section 3.2](https://arxiv.org/html/2605.06885#S3.SS2).

### A.2 Autoregressive-to-diffusion language model conversion

The closest line of work studies how pretrained autoregressive language models can be adapted into diffusion language models. Gong et al. propose one of the first systematic AR→DLM adaptation recipes, showing that GPT- and LLaMA-style checkpoints can be converted into DiffuGPT and DiffuLLaMA through continued diffusion training [Gong et al., 2025a](https://arxiv.org/html/2605.06885#bib.bib2). A key ingredient in this conversion is reconciling the next-token indexing convention of AR models with the position-wise denoising convention of masked diffusion, for example through shift operations during training and sampling. Subsequent large-scale DLMs further exploit AR initialization. Dream uses AR-based initialization together with diffusion-specific training strategies for large-scale diffusion language modeling [Ye et al., 2025](https://arxiv.org/html/2605.06885#bib.bib82), while Dream-Coder adapts pretrained AR checkpoints to masked diffusion for code generation [Xie et al., 2025](https://arxiv.org/html/2605.06885#bib.bib83). Efficient-DLM studies the conversion problem through attention and objective design, emphasizing that preserving favorable properties of the AR checkpoint is important for both quality and inference efficiency [Fu et al., 2025](https://arxiv.org/html/2605.06885#bib.bib84). Relatedly, Any-Order GPT connects masked diffusion with any-order autoregressive generation in a decoder-only framework, further highlighting that the distinction between AR and DLMs is partly a distinction between generation mechanisms rather than model families [Xue et al., 2025](https://arxiv.org/html/2605.06885#bib.bib85). Code-oriented diffusion models such as DiffuCoder provide additional evidence that masked diffusion can be competitive in execution-based code-generation settings [Gong et al., 2025b](https://arxiv.org/html/2605.06885#bib.bib86).

These works establish that AR checkpoints are valuable starting points for DLM training. However, existing conversion methods primarily reuse AR parameters and then adapt the model through objective-level, attention-level, masking-level, or sampling-level changes. They do not explicitly constrain the converted DLM to preserve the internal representation geometry of the AR model throughout training. Our method adds this missing constraint. We keep a frozen AR teacher, initialize the DLM student from the same checkpoint, switch the student from causal to bidirectional attention, and align the student's hidden states to the teacher's layer-wise representations during denoising training. Thus, our method does not merely initialize from an AR model; it uses the AR model as a persistent representational anchor. The corresponding conversion procedure in our work is defined in [Section 3.1](https://arxiv.org/html/2605.06885#S3.SS1) and evaluated under matched conditions in [Sections 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px1) and [4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px2).

Table 3: Comparison with closely related AR-to-DLM conversion and representation-alignment methods. Existing AR-to-DLM methods reuse pretrained AR parameters, but do not explicitly preserve the AR model's hidden-state geometry during denoising training. REPA-style methods use representation alignment, but typically align diffusion models to external encoders rather than to the same AR model being converted.
### A.3 Any-order generation, iterative decoding, and path planning

Our work is also connected to prior studies of generation order. Autoregressive language models are usually trained with a fixed left-to-right factorization, but this factorization is only one possible ordering of the joint distribution. XLNet introduced permutation language modeling, showing that autoregressive pretraining can incorporate bidirectional context by optimizing over multiple factorization orders [Yang et al., 2019](https://arxiv.org/html/2605.06885#bib.bib89). More generally, any-order autoregressive models study training and inference when the generation order is allowed to vary rather than being fixed in advance [Shih et al., 2022](https://arxiv.org/html/2605.06885#bib.bib62). These works support the view that generation order is a modeling mechanism rather than an intrinsic property of the data distribution.

Iterative masked decoding provides another precursor to masked diffusion generation. Mask-Predict generates sequences by repeatedly predicting masked positions and remasking low-confidence tokens, offering an early non-autoregressive sequence-generation procedure based on masked language models [Ghazvininejad et al., 2019](https://arxiv.org/html/2605.06885#bib.bib90). MaskGIT later demonstrated the effectiveness of confidence-based iterative masked generation in visual token models [Chang et al., 2022](https://arxiv.org/html/2605.06885#bib.bib14). Modern masked diffusion language models can be viewed as probabilistic successors to these iterative refinement methods: they define an explicit corruption process, train a denoiser over partially masked sequences, and sample by progressively resolving uncertainty across positions.

Recent work further shows that the choice of denoising path and token order can substantially affect DLM sampling and training. P2 studies path planning for masked diffusion sampling and shows that non-uniform remasking and update schedules can improve generation quality without changing the denoising model [Peng et al., 2025](https://arxiv.org/html/2605.06885#bib.bib28). PAPL extends this view to training by incorporating planner-aware path learning into diffusion language model optimization [Peng et al., 2026](https://arxiv.org/html/2605.06885#bib.bib80). In this paper, we use PAPL as part of the default DLM training recipe for both the baseline and representation-aligned models. Therefore, the comparisons isolate the contribution of representation alignment rather than path-planning improvements. Conceptually, this line of work reinforces our central premise: if generation order and denoising paths are mechanisms that can be changed after pretraining, then the key question is whether the underlying language representations can be preserved while the mechanism changes. The role of PAPL in our experimental protocol is specified in [Sections 4.1](https://arxiv.org/html/2605.06885#S4.SS1) and [B.5](https://arxiv.org/html/2605.06885#A2.SS5); because PAPL is shared across baselines and aligned models, it does not define the comparison axis.

### A.4 Representation alignment and feature preservation

Representation alignment has recently emerged as an effective way to accelerate generative model training. REPA aligns the hidden states of diffusion or flow-based transformers to representations from strong pretrained visual encoders, showing that generative training can be bottlenecked by slow representation learning rather than by denoising alone [Yu et al., 2025](https://arxiv.org/html/2605.06885#bib.bib3). Follow-up work studies which components of the target representation are most useful, whether global information or spatial structure matters most, and whether diffusion transformers can provide representation guidance without external encoders [Singh et al., 2025](https://arxiv.org/html/2605.06885#bib.bib4); [Wu et al., 2025](https://arxiv.org/html/2605.06885#bib.bib5); [Jiang et al., 2025](https://arxiv.org/html/2605.06885#bib.bib6). Together, these methods suggest that explicitly constraining intermediate representations can improve the efficiency and stability of generative training.

Our setting differs in both the teacher and the purpose of alignment. Prior representation-alignment methods typically align a generative model to an external encoder, often in the vision domain. The teacher and student may have different architectures, modalities, or training objectives, so alignment acts as a form of semantic feature distillation. In contrast, our teacher is the exact AR model being converted: it has the same tokenizer, transformer architecture, hidden dimensionality, and pretrained weights as the DLM student before conversion. The student differs only in the generation mechanism induced by bidirectional attention and masked denoising. Thus, alignment is not used to import features from an external model; it is used to preserve the internal coordinate system of the pretrained AR model while learning a new denoising mechanism.

This distinction is central to our interpretation. If AR pretraining has already learned useful semantic and syntactic representations, then AR→DLM conversion should not require relearning those representations from scratch. The remaining problem is to adapt the model so that these representations can support any-order denoising rather than left-to-right next-token prediction. Layer-wise representation alignment directly implements this view: it anchors the DLM student to the frozen AR teacher while the diffusion loss trains the student to operate under masked bidirectional generation. The resulting method combines the practical benefits of AR initialization with an explicit constraint that preserves the AR model's hidden-state geometry throughout conversion. The same-architecture AR-teacher version of this idea is formalized in [Section 3.3](https://arxiv.org/html/2605.06885#S3.SS3) and ablated in [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3).

## Appendix B Experimental Details

This appendix gives the implementation details for the experimental protocol summarized in [Section 4.1](https://arxiv.org/html/2605.06885#S4.SS1) and used in [Sections 4.2](https://arxiv.org/html/2605.06885#S4.SS2) and [4.3](https://arxiv.org/html/2605.06885#S4.SS3). Unless otherwise specified, all experiments use the same model initialization, data preprocessing, masked denoising objective, path-planning loss, optimization configuration, and decoding protocol. Ablations vary only the factor explicitly stated.

### B.1 Model Conversion

We convert a pretrained autoregressive language model into a masked diffusion language model by changing the attention pattern from causal to bidirectional during masked diffusion training. The transformer architecture is otherwise unchanged: the model remains a decoder-only Qwen-style transformer with rotary position embeddings, RMSNorm, MLP blocks, and a tied input embedding/output language-modeling head. The original AR model is used as a frozen teacher, and the DLM student is initialized from the same checkpoint. This implements the two-model construction in [Section 3.1](https://arxiv.org/html/2605.06885#S3.SS1).

In the implementation, the bidirectional attention mode is activated only for masked diffusion training. When the forward pass receives a mask ratio, the model sets is_causal=False; otherwise, it preserves the standard causal mode. Diffusion generation uses full-sequence non-causal forward passes and does not use KV caching.

Listing 1: Causal-to-bidirectional conversion.

```python
# Bidirectional attention is enabled only when a mask ratio is supplied,
# i.e., during masked diffusion training; otherwise the model keeps its
# standard causal mode.
if mask_ratio is not None:
    is_causal = False
else:
    is_causal = True

outputs = self.model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    is_causal=is_causal,
    output_hidden_states=output_hidden_states,
)
```

Following prior AR-to-DLM adaptation recipes, we use a one-token shift convention to reconcile next-token prediction with position-wise masked denoising. During training, labels are shifted left and hidden states are sliced accordingly. During sampling, logits are shifted so that diffusion predictions remain consistent with the inherited AR indexing convention. The same shift convention is used for both the baseline and representation-aligned models.
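As a rough illustration, the following sketch shows one way the training-time shift and the sampling-time logit shift could look; tensor names and the treatment of the first position are assumptions, not the paper's exact code.

```python
import torch

def shift_for_training(hidden_states, labels):
    """One-token shift: the hidden state at slot i is paired with token
    i+1, matching the next-token convention inherited from AR pretraining."""
    return hidden_states[:, :-1, :], labels[:, 1:]

def shift_logits_for_sampling(logits):
    """Shift logits right by one slot so that position i receives the
    distribution predicted for it; the first slot is a placeholder copy."""
    return torch.cat([logits[:, :1], logits[:, :-1]], dim=1)
```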

### B.2 Data and Preprocessing

Training uses the Nemotron-SFT-Code corpus stored as local Parquet files. The training stream is loaded in streaming mode. Each example is read from the text field, appended with an EOS token, and tokenized with add_special_tokens=False. Examples longer than the maximum sequence length are split into chunks of length 4096, while shorter examples are kept and packed. When packing is enabled, multiple examples are concatenated into packed rows with reset position ids, so that sequence boundaries are preserved. No additional filtering is applied. This preprocessing supports the data regimes described in [Section 4.1](https://arxiv.org/html/2605.06885#S4.SS1), including the tiny-data comparison in [Section 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px5).
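A minimal sketch of this tokenize-append-chunk path, assuming a Hugging Face tokenizer and a text field per record (the checkpoint name is illustrative, and packing with reset position ids is omitted):

```python
from transformers import AutoTokenizer

MAX_LEN = 4096  # maximum sequence length used in the experiments

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # illustrative checkpoint

def chunk_example(example):
    """Tokenize one record's text field, append EOS, and split into
    chunks of at most MAX_LEN tokens; short tails are kept for packing."""
    ids = tokenizer(example["text"], add_special_tokens=False)["input_ids"]
    ids.append(tokenizer.eos_token_id)
    return [ids[i : i + MAX_LEN] for i in range(0, len(ids), MAX_LEN)]
```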

Table 4: Data preprocessing configuration.

The main assets used in the experiments are licensed or governed as follows: the Qwen3 checkpoints are released under Apache 2.0; the Nemotron-SFT-Code training corpus is governed by NVIDIA's data-access terms for model training; HumanEval is released under the original MIT license from OpenAI; MBPP is released under CC BY 4.0 via the Google Research dataset release; and the EvalPlus HumanEval+/MBPP+ releases are Apache 2.0.

### B.3 Masked Diffusion Objective

This section implements the main denoising loss in [Equation 1](https://arxiv.org/html/2605.06885#S3.E1). The DLM student is trained with a masked denoising objective. For each original sequence, we sample a mask ratio

$$
r \sim \mathrm{Uniform}\!\left(\tfrac{1}{500},\; 1 - \tfrac{1}{500}\right). \tag{4}
$$

Conditioned on this ratio, each token position is independently masked with probability $r$. Thus, the implementation uses Bernoulli masking rather than selecting exactly $\lfloor rn \rceil$ masked positions. The cross-entropy loss is computed only on masked positions, while unmasked positions and packed-sequence boundary positions are assigned IGNORE_INDEX.

Listing 2: Masked diffusion corruption.

```python
# Sample one mask ratio per sequence, clamped away from 0 and 1.
mask_ratio = torch.rand(1).clamp(1 / 500, 1 - 1 / 500)

# Bernoulli masking: each position is masked independently with
# probability mask_ratio (Equation 4).
mask = torch.rand_like(input_ids.float()) < mask_ratio

labels = input_ids.clone()
input_ids[mask] = mask_token_id

# The loss is computed only on masked positions.
labels[input_ids != mask_token_id] = IGNORE_INDEX
```

Let $M$ denote the set of masked and shift-valid positions. The masked denoising loss is

$$
\mathcal{L}_{\mathrm{mdm}} = \frac{1}{|M|} \sum_{i \in M} \mathrm{CE}\!\left( p_{\theta}(\cdot \mid \tilde{x})_{i},\, x_{i} \right), \tag{5}
$$

where $\tilde{x}$ is the corrupted input sequence. In implementation, token losses are summed over valid masked positions and normalized by the number of valid masked tokens.
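In code, Equation 5 reduces to a standard ignore-index cross-entropy; a minimal sketch, with names assumed:

```python
import torch.nn.functional as F

IGNORE_INDEX = -100  # assigned to unmasked and boundary positions

def masked_denoising_loss(logits, labels):
    """Cross-entropy over masked, shift-valid positions only (Equation 5).
    ignore_index drops all other positions; 'mean' normalizes by |M|."""
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=IGNORE_INDEX,
        reduction="mean",
    )
```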

### B.4 Representation Alignment

For representation alignment, we construct a frozen teacher by deep-copying the initialized AR model. The teacher is placed in evaluation mode and all teacher parameters are frozen. During training, the teacher consumes the clean sequence under causal attention, while the student consumes the masked sequence under bidirectional attention. Teacher features are computed under torch.no_grad(). This section implements the alignment loss in [Equation 2](https://arxiv.org/html/2605.06885#S3.E2), including which hidden states and token positions are included.

We align the hidden-state tuple returned by the model, including the embedding hidden state, block outputs, and final post-normalization hidden state. By default, all returned hidden states are aligned. Layer ablations select contiguous thirds of this hidden-state tuple. Alignment is applied only on masked and shift-valid positions, matching the positions that contribute to the denoising loss.

For cosine alignment, hidden states are normalized along the feature dimension and the loss is computed as one minus the mean cosine similarity:

$$
\mathcal{L}_{\mathrm{align}} = 1 - \frac{1}{|\mathcal{H}|} \sum_{h \in \mathcal{H}} \frac{1}{|M|} \sum_{i \in M} \left\langle \frac{h_{\mathrm{D},i}}{\|h_{\mathrm{D},i}\|_{2}},\; \frac{\mathrm{sg}(h_{\mathrm{AR},i})}{\|\mathrm{sg}(h_{\mathrm{AR},i})\|_{2}} \right\rangle, \tag{6}
$$

where $\mathcal{H}$ is the selected set of hidden states, $M$ is the masked and shift-valid position set, and $\mathrm{sg}(\cdot)$ denotes stop-gradient.

Listing 3: Layer-wise representation alignment.

```python
# The frozen teacher sees the clean sequence under causal attention;
# no gradients flow through it.
with torch.no_grad():
    teacher_outputs = teacher(
        input_ids=clean_input_ids,
        is_causal=True,
        output_hidden_states=True,
    )

# The student sees the masked sequence under bidirectional attention.
student_outputs = student(
    input_ids=masked_input_ids,
    is_causal=False,
    output_hidden_states=True,
)

# Align only masked, shift-valid positions (the same positions that
# contribute to the denoising loss).
loss_mask = labels != IGNORE_INDEX
repr_loss = cosine_distance(
    student_outputs.hidden_states,
    teacher_outputs.hidden_states,
    loss_mask=loss_mask,
)

loss = mdm_loss + path_loss + lambda_repr * repr_loss
```
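The cosine_distance helper called in Listing 3 is not shown in the extracted text; a minimal implementation consistent with Equation 6 might look as follows (tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def cosine_distance(student_hiddens, teacher_hiddens, loss_mask):
    """One minus the mean cosine similarity over selected hidden states
    (Equation 6).

    student_hiddens / teacher_hiddens: tuples of (batch, seq_len, dim)
    tensors; loss_mask: (batch, seq_len) bool mask of masked positions.
    """
    sims = []
    for h_s, h_t in zip(student_hiddens, teacher_hiddens):
        # Normalize along the feature dimension; the teacher side is
        # detached, implementing the stop-gradient sg(.).
        h_s = F.normalize(h_s, dim=-1)
        h_t = F.normalize(h_t.detach(), dim=-1)
        cos = (h_s * h_t).sum(dim=-1)       # (batch, seq_len)
        sims.append(cos[loss_mask].mean())  # average over |M|
    return 1.0 - torch.stack(sims).mean()   # average over |H|, then 1 - sim
```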

The default alignment weight is $\lambda_{\mathrm{repr}} = 10$. Unless otherwise specified, all aligned models use cosine alignment over all selected hidden states.

### B.5 Path-Planning Loss

All DLM runs, including both the baseline and representation-aligned models, include the same path-planning auxiliary loss. We therefore treat this loss as part of the default DLM training recipe rather than as a separate method-specific component. Because the term is included in both baseline and aligned models ([Section 4.1](https://arxiv.org/html/2605.06885#S4.SS1)), the main comparisons isolate the effect of Repr-Align, i.e., the contribution of $\mathcal{L}_{\mathrm{align}}$.

For Qwen3-based runs, the path-planning loss reweights the masked-token cross-entropy by the detached model confidence:

$$
\mathcal{L}_{\mathrm{path}} = \frac{1}{|M|} \sum_{i \in M} \mathrm{sg}\!\left(\exp(-\ell_{i})\right) \ell_{i}, \tag{7}
$$

where $\ell_{i}$ is the token-level cross-entropy loss at masked position $i$, and the confidence weight is stop-gradiented. The total training loss is

$$
\mathcal{L} = \mathcal{L}_{\mathrm{mdm}} + \mathcal{L}_{\mathrm{path}} + \lambda_{\mathrm{repr}}\, \mathcal{L}_{\mathrm{align}}. \tag{8}
$$

For baseline DLM conversion, the representation-alignment term is omitted:

$$
\mathcal{L}_{\mathrm{baseline}} = \mathcal{L}_{\mathrm{mdm}} + \mathcal{L}_{\mathrm{path}}. \tag{9}
$$
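A minimal sketch of Equation 7, with the confidence weight detached so that gradients flow only through the raw token losses (function and variable names are assumptions):

```python
import torch
import torch.nn.functional as F

def path_planning_loss(logits, labels, ignore_index=-100):
    """Confidence-weighted masked cross-entropy (Equation 7).

    Token losses at masked positions are reweighted by the detached
    confidence exp(-loss), so the weight itself receives no gradient.
    """
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).view(labels.shape)                      # (batch, seq_len)

    valid = labels != ignore_index
    weights = torch.exp(-per_token).detach()  # stop-gradient confidence
    return (weights * per_token)[valid].mean()
```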

### B.6 Optimization

All models are optimized with AdamW. Unless otherwise specified, we use the same optimization configuration across baseline and representation-aligned runs. This optimization protocol is the fixed-budget setting used in [Sections 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px1) and [4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px2).

Table 5: Default optimization configuration.

### B.7 Decoding and Evaluation

We evaluate code generation on HumanEval [Chen et al., 2021](https://arxiv.org/html/2605.06885#bib.bib92), MBPP [Austin et al., 2021b](https://arxiv.org/html/2605.06885#bib.bib93), and the HumanEval+/MBPP+ variants from EvalPlus [Liu et al., 2023](https://arxiv.org/html/2605.06885#bib.bib94). HumanEval uses the canonical prompt from the benchmark. MBPP uses the task description together with the first three tests and ends with a Python code block. All evaluations are zero-shot. This evaluation protocol underlies the main comparisons in [Section 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px3) and the ablations in [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3).

For each problem, we generate 10 samples and compute execution-based pass@1 and pass@10 using the standard code-evaluation pipeline. The same decoding configuration is used for all methods being compared.
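For reference, execution-based pass@k from n samples with c passes is typically computed with the unbiased estimator of Chen et al. (2021); a minimal sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator of Chen et al. (2021).

    n: total samples per problem, c: samples that pass, k: budget.
    pass@k = 1 - C(n - c, k) / C(n, k), computed stably as a product.
    """
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```

For example, pass_at_k(10, 3, 1) returns 0.3, i.e., the fraction of passing samples when k = 1.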

Table 6: Default decoding and evaluation configuration.

The P2 sampler keeps prompt tokens fixed and iteratively remasks low-confidence variable positions. At each step, the model predicts the full sequence, samples variable positions, and then remasks a fraction of the lowest-confidence generated tokens according to the sampling schedule, as sketched below.
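A highly simplified sketch of one such refinement step, assuming the forward interface from Listing 1 (a mask_ratio argument toggling the non-causal mode) and a .logits output; the remasking schedule and per-sequence bookkeeping are omitted:

```python
import torch

@torch.no_grad()
def p2_step(model, x, prompt_mask, mask_token_id, remask_frac):
    """One illustrative P2-style refinement step.

    x: (batch, seq_len) current token ids; prompt_mask marks fixed
    prompt positions that are never overwritten or remasked.
    """
    # Full-sequence, non-causal forward pass (interface assumed).
    logits = model(input_ids=x, mask_ratio=1.0).logits
    probs = torch.softmax(logits, dim=-1)

    # Sample a candidate token at every position and record its confidence.
    sampled = torch.multinomial(probs.view(-1, probs.size(-1)), 1).view(x.shape)
    conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

    # Keep the prompt fixed; fill variable positions with the samples.
    x = torch.where(prompt_mask, x, sampled)
    conf = conf.masked_fill(prompt_mask, float("inf"))

    # Remask the lowest-confidence fraction of generated tokens
    # (a single batch-wide k here; real schedules vary per step).
    k = int(remask_frac * int((~prompt_mask).sum(dim=-1).min()))
    if k > 0:
        idx = conf.topk(k, dim=-1, largest=False).indices
        x = x.scatter(1, idx, mask_token_id)
    return x
```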

Listing 4: Evaluation command template.

```bash
accelerate launch --num_processes 4 eval.py \
  --model custom_coder \
  --model_args "pretrained=<checkpoint>,max_new_tokens=128,steps=128,temperature=0.8,alg=p2" \
  --tasks humaneval --num_fewshot 0 --batch_size 10 \
  --output_path evals_results/humaneval-ns0 --log_samples \
  --confirm_run_unsafe_code
```

### B.8 Ablation Protocols

All ablations use the same AR→DLM conversion pipeline unless otherwise stated. In particular, the model is initialized from the same AR checkpoint, trained with the same masked denoising objective and path-planning loss, and evaluated with the same decoding configuration. Each ablation varies only the listed factor. These protocols correspond to the ablation results in [Section 4.3](https://arxiv.org/html/2605.06885#S4.SS3).

Table 7: Ablation protocols.

For the alignment-metric ablation, L2 alignment uses mean-squared error between student and teacher hidden states, while cosine alignment normalizes hidden states along the feature dimension and matches their directions. For the layer ablation, we partition the hidden-state tuple into three contiguous groups and apply the representation loss only to one group at a time. The default setting aligns all hidden states with cosine distance and $\lambda_{\mathrm{repr}} = 10$.
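Concretely, the two metrics differ only in whether hidden states are normalized before comparison; a minimal sketch of the L2 variant (the cosine variant matches the helper sketched in Section B.4, and the function name here is an assumption):

```python
import torch.nn.functional as F

def l2_align(student_h, teacher_h, loss_mask):
    """L2 ablation: mean-squared error between raw (unnormalized)
    student and detached teacher hidden states at masked positions."""
    return F.mse_loss(student_h[loss_mask], teacher_h.detach()[loss_mask])
```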

### B.9 Freezing Protocol

Table 8: Freezing configuration.

To test whether AR→DLM conversion requires updating the full network, we freeze selected parameter blocks during representation-aligned training. Freezing is implemented by case-insensitive substring matching on parameter names. In the main freezing variant, parameters whose names contain embed_tokens, lm_head, or mlp are frozen. Attention layers and normalization layers remain trainable. This protocol supports the efficiency result in [Section 4.2](https://arxiv.org/html/2605.06885#S4.SS2.SSS0.Px4).

This freezing protocol is designed to preserve most of the pretrained representational and feed-forward computation while allowing the attention mechanism and normalization statistics to adapt to bidirectional denoising.
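A minimal sketch of this name-based freezing; the function name is an assumption, while the substring set is the one listed above:

```python
FROZEN_SUBSTRINGS = ("embed_tokens", "lm_head", "mlp")

def freeze_by_name(model, substrings=FROZEN_SUBSTRINGS):
    """Freeze parameters whose names contain any listed substring
    (case-insensitive); attention and norm parameters stay trainable."""
    for name, param in model.named_parameters():
        if any(s in name.lower() for s in substrings):
            param.requires_grad = False
```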

## Appendix C Limitations

Our study is limited to same-architecture AR→DLM conversion on Qwen3 decoder-only checkpoints and code-generation benchmarks. The gains we report may not transfer unchanged to other model families, modalities, or downstream tasks, and the method still depends on access to a strong pretrained AR teacher and substantial training compute.

## Appendix D Broader Impacts

This work can lower the compute barrier for training competitive diffusion language models, which may make efficient generation research more accessible. Cheaper conversion of pretrained language models can also make code synthesis and other generative capabilities easier to scale for harmful or deceptive uses, so deployment should follow ordinary model-stewardship practices. Because our contribution is a training method rather than a released model or dataset, we do not propose separate release restrictions.

## NeurIPS Paper Checklist

1. Claims

    Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

    Answer: [Yes]

    Justification: The abstract and introduction accurately describe the paper's scope: same-architecture AR→DLM conversion with layer-wise representation alignment, evaluated on Qwen3 checkpoints and code-generation benchmarks. The claims are consistent with the experiments and ablations reported in [Sections 4.2](https://arxiv.org/html/2605.06885#S4.SS2) and [4.3](https://arxiv.org/html/2605.06885#S4.SS3).

    Guidelines:
    - The answer [N/A] means that the abstract and introduction do not include the claims made in the paper.
    - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A [No] or [N/A] answer to this question will not be perceived well by the reviewers.
    - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
    - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
2. Limitations

    Question: Does the paper discuss the limitations of the work performed by the authors?

    Answer: [Yes]

    Justification: The revised paper now includes a dedicated [Appendix C](https://arxiv.org/html/2605.06885#A3) section. It states the main scope limits clearly: same-architecture Qwen3 conversion, code-generation benchmarks, dependence on a strong pretrained AR teacher, and substantial training compute.

    Guidelines:
    - The answer [N/A] means that the paper has no limitation, while the answer [No] means that the paper has limitations, but those are not discussed in the paper.
    - The authors are encouraged to create a separate "Limitations" section in their paper.
    - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
    - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
    - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
    - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
    - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
    - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
3. Theory assumptions and proofs

    Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

    Answer: [N/A]

    Justification: The paper does not contain new theorems, lemmas, or formal proofs; it presents an empirical method with explicit equations and implementation details instead.

    Guidelines:
    - The answer [N/A] means that the paper does not include theoretical results.
    - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
    - All assumptions should be clearly stated or referenced in the statement of any theorems.
    - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
    - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
    - Theorems and Lemmas that the proof relies upon should be properly referenced.
4. Experimental result reproducibility

    Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

    Answer: [Yes]

    Justification: The paper specifies the model family, checkpoint source, training data, preprocessing, optimization hyperparameters, decoding setup, and ablation protocol in [Sections 4.1](https://arxiv.org/html/2605.06885#S4.SS1) and [B](https://arxiv.org/html/2605.06885#A2). Those details are sufficient for another group to reproduce the main experiments in principle.

    Guidelines:
    - The answer [N/A] means that the paper does not include experiments.
    - If the paper includes experiments, a [No] answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
    - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
    - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
    - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
      - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
      - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
      - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
      - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
5. Open access to data and code

    Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

    Answer: [No]

    Justification: The manuscript gives detailed reproduction instructions, but it does not yet provide a public release of the training data or converted model checkpoints from this paper. Reproducibility is therefore documented, but open access to all code/data assets is not provided in the paper itself.

    Guidelines:
    - The answer [N/A] means that the paper does not include experiments requiring code.
    - While we encourage the release of code and data, we understand that this might not be possible, so [No] is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
    - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ([https://neurips.cc/public/guides/CodeSubmissionPolicy](https://neurips.cc/public/guides/CodeSubmissionPolicy)) for more details.
    - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
    - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
    - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
    - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
6. Experimental setting/details

    Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer) necessary to understand the results?

    Answer: [Yes]

    Justification: The core paper and appendix specify the model families, training corpus, preprocessing, optimization hyperparameters, decoding configuration, and evaluation protocol. The ablation appendix also records the factors changed in each study.

    Guidelines:
    - The answer [N/A] means that the paper does not include experiments.
    - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
    - The full details can be provided either with the code, in appendix, or as supplemental material.
7. Experiment statistical significance

    Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

    Answer: [No]

    Justification: The reported benchmark tables and plots give point estimates only; the manuscript does not include error bars, confidence intervals, or multi-seed significance tests for the main results.

    Guidelines:
    - The answer [N/A] means that the paper does not include experiments.
    - The authors should answer [Yes] if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
    - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
    - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
    - The assumptions made should be given (e.g., Normally distributed errors).
    - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
    - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
    - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
    - If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
8. Experiments compute resources

    Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

    Answer: [No]

    Justification: The appendix reports that training used 16 GPUs with DDP and gives the batch/optimizer setup, but it does not yet specify the accelerator model, memory footprint, or wall-clock time for the main runs.

    Guidelines:
    - The answer [N/A] means that the paper does not include experiments.
    - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
    - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
    - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
9. Code of ethics

    Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics?

    Answer: [Yes]

    Justification: The research uses public or properly licensed model and benchmark assets, involves no human subjects or sensitive data collection, and does not identify any deviation from the NeurIPS Code of Ethics. The paper's conduct is consistent with the ethics guidelines as written.

    Guidelines:
    - The answer [N/A] means that the authors have not reviewed the NeurIPS Code of Ethics.
    - If the authors answer [No], they should explain the special circumstances that require a deviation from the Code of Ethics.
    - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
10. Broader impacts

    Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

    Answer: [Yes]

    Justification: The revised paper now includes a short broader-impacts section that discusses both benefits from lower compute cost and risks from cheaper scaling of generative code capabilities. It also states that the paper does not propose gated release or usage restrictions because it is a training method rather than a released model.

    Guidelines:
    - The answer [N/A] means that there is no societal impact of the work performed.
    - If the authors answer [N/A] or [No], they should explain why their work has no societal impact or why the paper does not address societal impact.
    - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
    - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate Deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
    - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
    - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
11. Safeguards

    Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pre-trained language models, image generators, or scraped datasets)?

    Answer: [N/A]

    Justification: The paper does not release a new model or dataset with elevated misuse risk, so there is no separate release mechanism to safeguard here. If a future checkpoint release is added, it should be accompanied by a dedicated safeguards statement.

    Guidelines:
    - The answer [N/A] means that the paper poses no such risks.
    - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
    - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
    - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
12. Licenses for existing assets

    Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

    Answer: [Yes]

    Justification: The paper now states the relevant licenses/terms for the main assets: Qwen3 checkpoints (Apache 2.0), Nemotron-SFT-Code (NVIDIA model-training data agreement), HumanEval (MIT), MBPP (CC BY 4.0), and EvalPlus HumanEval+/MBPP+ (Apache 2.0). Each asset is also cited in the bibliography.

    Guidelines:
    - The answer [N/A] means that the paper does not use existing assets.
    - The authors should cite the original paper that produced the code package or dataset.
    - The authors should state which version of the asset is used and, if possible, include a URL.
    - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
    - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
    - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, [paperswithcode.com/datasets](https://paperswithcode.com/datasets) has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
    - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
    - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
13. New assets

    Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

    Answer: [N/A]

    Justification: The manuscript does not introduce a newly released dataset, benchmark, or model package in its current form.

    Guidelines:
    - The answer [N/A] means that the paper does not release new assets.
    - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
    - The paper should discuss whether and how consent was obtained from people whose asset is used.
    - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and research with human subjects

    Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

    Answer: [N/A]

    Justification: The paper does not involve crowdsourcing or human-subject experiments.

    Guidelines:
    - The answer [N/A] means that the paper does not involve crowdsourcing nor research with human subjects.
    - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
    - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
15. Institutional review board (IRB) approvals or equivalent for research with human subjects

    Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

    Answer: [N/A]

    Justification: The paper does not involve human subjects, so IRB approval or equivalent review is not applicable.

    Guidelines:
    - The answer [N/A] means that the paper does not involve crowdsourcing nor research with human subjects.
    - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
    - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
    - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
16. Declaration of LLM usage

    Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does *not* impact the core methodology, scientific rigor, or originality of the research, declaration is not required.

    Answer: [Yes]

    Justification: The core method and evaluation are explicitly built around LLM checkpoints and benchmarks, especially Qwen3-based AR/DLM conversion. The paper documents that usage throughout the method and experimental sections.

    Guidelines:
    - The answer [N/A] means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
    - Please refer to our LLM policy in the NeurIPS handbook for what should or should not be described.
