Consistency Models
Summary
OpenAI introduces Consistency Models, a new family of generative models that enable fast one-step image generation by directly mapping noise to data, while supporting multi-step sampling and zero-shot editing tasks like inpainting and super-resolution. The approach achieves state-of-the-art FID scores on CIFAR-10 and ImageNet 64×64 for one-step generation.
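The core idea, sketched below under simplifying assumptions: a trained consistency function `f(x, t)` maps a noisy sample at noise level `t` directly to an estimate of clean data, so one network evaluation suffices; quality can be traded for compute by alternating denoising with re-noising at decreasing noise levels. The toy `f` here (a simple shrinkage) is a stand-in for the paper's neural network, and the noise schedule values are illustrative, not the paper's.

```python
import numpy as np

def f(x, t):
    # Placeholder consistency function: a real model is a neural net
    # trained so that f(x, eps) ≈ x at the smallest noise level and
    # all points on one diffusion trajectory map to the same output.
    return x / (1.0 + t)

def one_step_sample(shape, t_max=80.0, rng=None):
    """One-step generation: map pure noise straight to data."""
    rng = rng or np.random.default_rng(0)
    x_T = rng.standard_normal(shape) * t_max
    return f(x_T, t_max)

def multistep_sample(shape, ts=(80.0, 20.0, 5.0), rng=None):
    """Multi-step sampling: re-noise to successively lower noise
    levels and denoise again, trading compute for sample quality."""
    rng = rng or np.random.default_rng(0)
    x = f(rng.standard_normal(shape) * ts[0], ts[0])
    for t in ts[1:]:
        x = x + t * rng.standard_normal(shape)  # re-noise to level t
        x = f(x, t)                             # denoise again
    return x

sample = multistep_sample((4, 4))
print(sample.shape)  # (4, 4)
```

The same `f` serves both modes, which is what distinguishes consistency models from diffusion samplers that need many fixed solver steps.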
Similar Articles
Improved Techniques for Training Consistency Models
OpenAI presents improved techniques for training consistency models that enable high-quality single-step image generation without distillation, achieving significant FID improvements on CIFAR-10 and ImageNet 64×64 through novel loss functions and training strategies.
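A minimal sketch of the distillation-free consistency-training objective described above, under stated assumptions: the model evaluated at two adjacent noise levels of the *same* perturbed sample should agree, with the less-noisy branch treated as a stop-gradient target. The tiny weighted-identity "model," the noise levels, and the Pseudo-Huber constant `c` are all illustrative choices, not the paper's exact settings.

```python
import numpy as np

def f(x, t, w):
    # Toy consistency model: scalar-weighted shrinkage. A real model
    # is a neural net parametrized so that f(x, eps, w) = x exactly.
    return w * x / (1.0 + t)

def consistency_training_loss(x0, w, t_n, t_np1, rng):
    """Loss between model outputs at adjacent noise levels t_n < t_np1,
    using a shared noise draw (no teacher network needed)."""
    z = rng.standard_normal(x0.shape)   # shared Gaussian noise
    x_hi = x0 + t_np1 * z               # noisier perturbation
    x_lo = x0 + t_n * z                 # less-noisy perturbation
    target = f(x_lo, t_n, w)            # stop-gradient target branch
    pred = f(x_hi, t_np1, w)
    d = pred - target
    c = 0.03                            # Pseudo-Huber constant (assumed)
    return np.mean(np.sqrt(d * d + c * c) - c)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))
loss = consistency_training_loss(x0, w=1.0, t_n=1.0, t_np1=2.0, rng=rng)
print(loss >= 0.0)  # True
```

The Pseudo-Huber form behaves like an L2 loss near zero and an L1 loss for large residuals, which is one of the robustified loss choices this line of work explores in place of plain squared error.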
Simplifying, stabilizing, and scaling continuous-time consistency models
OpenAI presents sCM (simplified continuous-time consistency models), a new approach that scales consistency models to 1.5B parameters and achieves ~50x speedup over diffusion models by generating high-quality samples in just 2 steps. The method demonstrates comparable sample quality to state-of-the-art diffusion models while using less than 10% of the effective sampling compute.
Generative models
OpenAI publishes an overview of generative models as an approach to developing machine understanding of the world, explaining how these models work by learning to generate data similar to their training sets and their potential applications across various domains.
OpenAI cooked with the new Images 2 Model: characters stay extremely consistent, and text is clear and stable
OpenAI released an upgraded image model that keeps character appearance highly consistent across frames and renders crisp, stable text.
The new ChatGPT images model is the new standard in photorealistic image generation
OpenAI has released a new ChatGPT image model that sets a new benchmark for photorealistic image generation.