Hierarchical text-conditional image generation with CLIP latents

OpenAI Blog Papers

Summary

OpenAI proposes a hierarchical two-stage model for text-conditional image generation using CLIP latents: a prior that generates CLIP image embeddings from text captions, and a diffusion-based decoder that generates images from embeddings. The approach improves image diversity and enables zero-shot language-guided image manipulations.
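A minimal sketch of that two-stage interface is below; the `clip_text_encoder`, `prior`, and `decoder` objects are hypothetical stand-ins for the paper's components, not OpenAI's released code.

```python
import torch

# Hypothetical stand-ins for the paper's components:
#   clip_text_encoder : caption         -> CLIP text embedding
#   prior             : text embedding  -> CLIP image embedding (stage 1)
#   decoder           : image embedding -> image via diffusion  (stage 2)
def generate(caption: str, clip_text_encoder, prior, decoder) -> torch.Tensor:
    z_text = clip_text_encoder(caption)   # frozen CLIP text encoder
    z_image = prior.sample(z_text)        # stage 1: sample a CLIP image embedding
    return decoder.sample(z_image)        # stage 2: diffusion decoder renders pixels
```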



# Hierarchical text-conditional image generation with CLIP latents

Source: [https://openai.com/index/hierarchical-text-conditional-image-generation-with-clip-latents/](https://openai.com/index/hierarchical-text-conditional-image-generation-with-clip-latents/)

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
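As a rough illustration of the variations and language-guided manipulation ideas in the abstract: because the decoder is stochastic, re-decoding the same image embedding yields variations, and edits can be made by moving the image embedding along a text-difference direction. The `clip_model` and `decoder` handles are hypothetical, and the text-diff edit below is a simplified linear approximation of the embedding-space interpolation the paper describes.

```python
import torch.nn.functional as F

# Hypothetical handles: clip_model.encode_image / encode_text return CLIP
# embeddings; decoder.sample renders an image from a CLIP image embedding.
def variations(image, clip_model, decoder, n=4):
    z_i = clip_model.encode_image(image)
    # The diffusion decoder is stochastic, so re-sampling from the same
    # embedding gives images that share semantics and style but differ
    # in non-essential details.
    return [decoder.sample(z_i) for _ in range(n)]

def text_diff_edit(image, caption_from, caption_to, clip_model, decoder, alpha=0.5):
    z_i = clip_model.encode_image(image)
    diff = clip_model.encode_text(caption_to) - clip_model.encode_text(caption_from)
    diff = F.normalize(diff, dim=-1)
    # Nudge the image embedding along the caption difference, then decode.
    z_edit = F.normalize(z_i + alpha * diff, dim=-1)
    return decoder.sample(z_edit)
```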

Similar Articles

CLIP: Connecting text and images

OpenAI Blog

CLIP is OpenAI's vision-language model, trained on text-image pairs collected from the internet, that enables zero-shot visual classification without task-specific training data. It addresses major limitations in traditional computer vision by reducing dependence on expensive labeled datasets and improving real-world generalization.
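A minimal zero-shot classification sketch using the open-source `clip` package; the image path and candidate captions are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder image
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # Scaled cosine similarities between the image and each caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```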

krthr/clip-embeddings

Replicate Explore

A CLIP-based embedding model hosted on Replicate that generates 768-dimensional embeddings for both images and text using the clip-vit-large-patch14 architecture, costing ~$0.00022 per run.
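For a local equivalent of what such a model exposes, the same 768-dimensional embeddings can be produced from the clip-vit-large-patch14 checkpoint via Hugging Face `transformers`; this is a generic sketch rather than the Replicate API, and the image path and caption are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("photo.jpg")  # placeholder image
inputs = processor(text=["a photo of a cat"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

print(out.image_embeds.shape)  # torch.Size([1, 768])
print(out.text_embeds.shape)   # torch.Size([1, 768])
```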

Alien Dreams: An Emerging Art Scene

ML at Berkeley

The article highlights the emerging scene of AI-generated art using OpenAI's CLIP model as a steering mechanism for generative models, showcasing various examples of text-to-image outputs.
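One common form of CLIP steering, sketched here under the assumption of some differentiable `generator` (a hypothetical stand-in for a GAN or VQGAN decoder with a `latent_dim` attribute), is to optimize the generator's latent so the rendered image matches a text prompt under CLIP; proper CLIP pixel normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import clip

def clip_steer(generator, prompt, steps=200, lr=0.05, device="cpu"):
    # CLIP acts only as the steering signal; the generator supplies the pixels.
    model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        target = F.normalize(model.encode_text(clip.tokenize([prompt]).to(device)), dim=-1)

    latent = torch.randn(1, generator.latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        image = generator(latent)                                 # (1, 3, H, W) in [0, 1]
        image = F.interpolate(image, size=224, mode="bilinear")   # CLIP input resolution
        z_img = F.normalize(model.encode_image(image), dim=-1)
        loss = -(z_img * target).sum()                            # maximize cosine similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(latent).detach()
```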

Representations Before Pixels: Semantics-Guided Hierarchical Video Prediction

Hugging Face Daily Papers

Re2Pix is a hierarchical video prediction framework that improves future video generation by first predicting semantic representations using frozen vision foundation models, then conditioning a latent diffusion model on these predictions to generate photorealistic frames. The approach addresses train-test mismatches through nested dropout and mixed supervision strategies, achieving improved temporal semantic consistency and perceptual quality on autonomous driving benchmarks.
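As a generic illustration of the nested-dropout idea mentioned above (not Re2Pix's actual implementation), one can mask a random suffix of representation dimensions during training so that earlier dimensions carry the most information and the representation degrades gracefully when truncated.

```python
import torch

def nested_dropout(z: torch.Tensor, p_keep_all: float = 0.1) -> torch.Tensor:
    """Zero out a random suffix of feature dimensions during training.

    Generic sketch: truncating at a random index pushes the most useful
    information into the earliest dimensions of the representation.
    """
    if torch.rand(()) < p_keep_all:
        return z  # occasionally keep the full representation
    d = z.shape[-1]
    cut = int(torch.randint(1, d + 1, ()))   # keep dims [0, cut)
    mask = torch.zeros(d, device=z.device)
    mask[:cut] = 1.0
    return z * mask
```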