Simplifying, stabilizing, and scaling continuous-time consistency models
Summary
OpenAI presents sCM (simplified continuous-time consistency models), a new approach that scales consistency models to 1.5B parameters and achieves a ~50x speedup over diffusion models by generating high-quality samples in just two sampling steps. The method reaches sample quality comparable to state-of-the-art diffusion models while using less than 10% of the effective sampling compute.
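The speedup comes from replacing a long iterative denoising loop with one or two evaluations of a consistency model. Below is a minimal sketch of that few-step sampling idea, assuming a trained consistency model `f(x, t)` that maps an input noised to level `t` directly to a clean-data estimate; the function, the noise levels, and `sigma_min` are illustrative placeholders, not details from the article.

```python
import torch

def multistep_consistency_sample(f, timesteps, shape, sigma_min=0.002, device="cpu"):
    """Sketch of few-step consistency sampling.

    f(x, t) is assumed to map a sample noised to level t directly to an
    estimate of clean data; timesteps is a decreasing list of noise levels,
    e.g. [80.0, 0.8] for two-step sampling.
    """
    # Start from pure Gaussian noise at the largest noise level.
    x = torch.randn(shape, device=device) * timesteps[0]
    # First step: map noise directly to a clean-data estimate.
    x = f(x, timesteps[0])
    # Optional refinement: re-noise to a smaller level, then map back to data.
    for t in timesteps[1:]:
        z = torch.randn_like(x)
        x_t = x + (t**2 - sigma_min**2) ** 0.5 * z
        x = f(x_t, t)
    return x
```

With `timesteps=[80.0, 0.8]` this corresponds to the two-step procedure the summary refers to; with a single noise level it collapses to one-step generation.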
Similar Articles
Consistency Models
OpenAI introduces Consistency Models, a new family of generative models that enable fast one-step image generation by directly mapping noise to data, while supporting multi-step sampling and zero-shot editing tasks like inpainting and super-resolution. The approach achieves state-of-the-art FID scores on CIFAR-10 and ImageNet 64×64 for one-step generation.
Improved Techniques for Training Consistency Models
OpenAI presents improved techniques for training consistency models that enable high-quality single-step image generation without distillation, achieving significant FID improvements on CIFAR-10 and ImageNet 64×64 through novel loss functions and training strategies.
D-OPSD: On-Policy Self-Distillation for Continuously Tuning Step-Distilled Diffusion Models
This paper introduces D-OPSD, a novel training paradigm for step-distilled diffusion models that enables on-policy self-distillation during supervised fine-tuning. It allows models to learn new concepts or styles without compromising their efficient few-step inference capabilities.
LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling
LangFlow presents the first continuous diffusion language model that rivals discrete diffusion approaches, challenging the long-held belief that continuous diffusion is inferior for language modeling. The work introduces key ingredients like optimal Gumbel-based noise scheduling and demonstrates competitive perplexity and transfer learning performance compared to discrete diffusion baselines.
New technique makes AI models leaner and faster while they’re still learning
Researchers from MIT CSAIL and other institutions introduced CompreSSM, a technique that compresses state-space AI models during training by removing unnecessary components early, resulting in faster training and smaller models without sacrificing performance.