LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling
Summary
LangFlow presents the first continuous diffusion language model that rivals discrete diffusion approaches, challenging the long-held belief that continuous diffusion is inferior for language modeling. The work introduces key ingredients like optimal Gumbel-based noise scheduling and demonstrates competitive perplexity and transfer learning performance compared to discrete diffusion baselines.
Source: https://huggingface.co/papers/2604.11748
Continuous diffusion has dominated image and video generation, but in language modeling it has long been considered inferior to discrete diffusion.
We challenge this belief with LangFlow: the first continuous diffusion language model that rivals, and even surpasses, discrete diffusion.
Blog: https://caradryanl.github.io/blog/2026/langflow/
GitHub: https://github.com/nealchen2003/LangFlow
HuggingFace: https://huggingface.co/Continuous-Rivals-Discrete
Arxiv: https://arxiv.org/abs/2604.11748
LangFlow shows for the first time that continuous diffusion can rival its discrete counterparts on language modeling:
- On LM1B and OpenWebText, both our perplexity (PPL) and generative perplexity (Gen. PPL) match or surpass the best discrete diffusion models.
- On zero-shot transfer, LangFlow outperforms the best discrete diffusion model on 3 of 7 benchmarks and autoregressive baselines on 4 of 7.
LangFlow connects embedding-space diffusion language models (DLMs) to Flow Matching. It predicts clean token probabilities from noisy token embeddings and then derives the embedding-space flow in closed form. Backed by a Bregman divergence argument, the model is trained with the standard cross-entropy loss.
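A minimal sketch of this parameterization, under stated assumptions rather than the paper's exact implementation: assume a linear interpolation path x_t = (1 - t) x_0 + t e(y) from Gaussian noise x_0 to the clean token embedding e(y), and a network that outputs vocabulary logits from the noisy embedding. The expected clean embedding is then the probability-weighted mean of the embedding table, and a velocity estimate follows in closed form from the path; training reduces to cross-entropy on the predicted token distribution.

```python
# Hedged sketch: closed-form velocity from predicted clean-token probabilities,
# assuming the linear path x_t = (1 - t) * x0 + t * e(y). Names and the exact
# path are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def predicted_velocity(logits, x_t, t, embedding_table, eps=1e-4):
    """logits: (B, S, V), x_t: (B, S, D), t: (B, 1, 1) in [0, 1), embedding_table: (V, D)."""
    probs = logits.softmax(dim=-1)            # p(y | x_t): predicted clean-token distribution
    x1_hat = probs @ embedding_table          # expected clean embedding under that distribution
    # For x_t = (1 - t) * x0 + t * x1, the conditional velocity is (x1 - x_t) / (1 - t);
    # plugging in the expected clean embedding gives the marginal velocity estimate.
    return (x1_hat - x_t) / (1.0 - t).clamp_min(eps)

def training_loss(logits, targets):
    """Cross-entropy on the clean-token prediction (the Bregman-divergence view of the loss)."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```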
However, training such continuous diffusion models on language has long been a struggle. We identify several crucial ingredients:
- The noise schedule should make the information gain per unit time uniform. Under this principle, the optimal noise schedule for language follows a Gumbel distribution, markedly different from the schedules used for images (see the schedule sketch after this list).
- Self-conditioning significantly improves likelihood and sample quality, with effects substantially different from those in discrete diffusion. Disabling it when comparing against discrete DLMs is unfair; the protocol for training and evaluating continuous DLMs should be rectified.
- With ODE sampling, LangFlow naturally admits a novel ODE-based likelihood bound derived from Flow Matching, enabling principled evaluation by perplexity, the core metric of language modeling (see the identity sketched below).
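As a concrete illustration of the Gumbel-shaped schedule mentioned above, here is one hedged way to warp sampling times through a Gumbel CDF so more steps land where information gain would otherwise be concentrated. The logit reparameterization and the loc/scale values are illustrative assumptions, not the paper's derived schedule.

```python
# Illustrative sketch (assumed form): warp uniform times through a Gumbel CDF.
import torch

def gumbel_time_warp(u, loc=0.0, scale=1.0):
    """Map u ~ Uniform(0, 1) to a warped time t in (0, 1) via the Gumbel CDF
    F(z) = exp(-exp(-(z - loc) / scale)), applied to a logit-spread version of u."""
    z = torch.logit(u.clamp(1e-6, 1 - 1e-6))            # spread (0, 1) onto the real line
    return torch.exp(-torch.exp(-(z - loc) / scale))     # Gumbel CDF, back into (0, 1)

# Example: warp 8 evenly spaced times for a training or sampling schedule.
u = torch.linspace(0.05, 0.95, 8)
print(gumbel_time_warp(u))
```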
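For the ODE-based likelihood evaluation, the underlying identity is the standard instantaneous change of variables from continuous normalizing flows; how LangFlow tightens this into a bound on token-level perplexity is specific to the paper.

```latex
% Instantaneous change of variables along the learned flow v_\theta:
% integrating the divergence of the velocity along the ODE trajectory
% relates the prior likelihood at t = 0 to the model likelihood at t = 1.
\frac{\mathrm{d}x_t}{\mathrm{d}t} = v_\theta(x_t, t), \qquad
\log p_1(x_1) = \log p_0(x_0) - \int_0^1 \nabla \cdot v_\theta(x_t, t)\,\mathrm{d}t
```

In practice the divergence term is typically estimated with the Hutchinson trace estimator, and the embedding-space likelihood still has to be converted into a token-level perplexity, which is where the paper's bound comes in.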
The potential of continuous DLMs extends far beyond raw performance. They open the door for the full toolbox of continuous diffusion techniques to be brought into language modeling:
- One-step generation, such as Consistency Models
- Guided generation, such as classifier-free guidance (CFG)
- Unified multimodal generation, such as protein structure-sequence co-design
LangFlow suggests: continuous diffusion is a viable and promising paradigm for language modeling!
I'm grateful to be part of such an amazing team pushing this forward. A huge thank you to our great advisors Ge Liu and Jiaxuan You, and to our amazing collaborators Chumeng Liang, Yuxin (Neal) Chen, Ruihan Guo, and Chaoran Cheng.
Similar Articles
CRoCoDiL: Continuous and Robust Conditioned Diffusion for Language
CRoCoDiL proposes a continuous and robust conditioned diffusion approach for language that shifts masked diffusion models into a continuous semantic space, achieving superior generation quality and 10x faster sampling speeds compared to discrete methods like LLaDA.
Continuous Latent Diffusion Language Model
Cola DLM is a hierarchical latent diffusion language model that uses text-to-latent mapping and conditional decoding to achieve efficient, non-autoregressive text generation.
Diffusion Model as a Generalist Segmentation Learner
This paper introduces DiGSeg, a framework that repurposes pretrained diffusion models for state-of-the-art semantic and open-vocabulary segmentation by leveraging latent space conditioning and text-guided alignment.
Conditional Diffusion Under Linear Constraints: Langevin Mixing and Information-Theoretic Guarantees
This paper analyzes zero-shot conditional sampling with pretrained diffusion models for linear inverse problems, providing information-theoretic guarantees and proposing a projected-Langevin initialization method.
$R^2$-dLLM: Accelerating Diffusion Large Language Models via Spatio-Temporal Redundancy Reduction
R²-dLLM introduces spatio-temporal redundancy reduction techniques that cut diffusion LLM decoding steps by up to 75% while preserving generation quality, addressing a key deployment bottleneck.