DALL·E 2 research preview update


Summary

OpenAI announces an expansion of DALL·E 2 research preview access, sharing safety metrics and learnings from 3 million images created by early users. The company plans to onboard up to 1,000 new users weekly while continuing to refine content policy enforcement and address training data biases.

Early users have created over 3 million images to date and helped us improve our safety processes. We're excited to begin adding up to 1,000 new users from our waitlist each week.

Source: [https://openai.com/index/dall-e-2-update/](https://openai.com/index/dall-e-2-update/)

Last month, we started previewing DALL·E 2 to a limited number of trusted users to learn about the technology's capabilities and limitations. Since then, we've been working with our users to actively incorporate the lessons we learn. As of today:

- Our users have collectively created over 3 million images with DALL·E.
- We've enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations.
- Less than 0.05% of downloaded or publicly shared images were flagged as potentially violating our content policy. About 30% of those flagged images were confirmed by human reviewers to be policy violations, leading to an account deactivation.
- As we work to understand and address the biases that DALL·E has inherited from its training data, we've asked early users not to share photorealistic generations that include faces and to flag problematic generations. We believe this has been effective in limiting potential harm, and we plan to continue the practice in the current phase.

Learning from real-world use is an [important part](https://openai.com/index/language-model-safety-and-misuse/) of our commitment to develop and deploy AI responsibly, so we're starting to widen access to users who joined our waitlist, slowly but steadily. We intend to onboard up to 1,000 people every week as we iterate on our safety system, and we require all users to abide by our [content policy](https://labs.openai.com/policies/content-policy). We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system. We're inspired by what our users have created with DALL·E so far, and excited to see what new users will create.

In the meantime, you can get a preview of these creations on our Instagram account: [@openaidalle](https://www.instagram.com/openaidalle/).

Similar Articles

Reducing bias and improving safety in DALL·E 2

OpenAI Blog

OpenAI announces improvements to DALL·E 2's safety systems and bias mitigation based on research preview feedback, including measures to prevent deceptive content creation and enhanced content filtering.

DALL·E now available without waitlist

OpenAI Blog

OpenAI removes the waitlist for DALL·E beta, making the text-to-image generation tool immediately available to all users. The announcement reveals 1.5M+ active users creating 2M+ images daily, with plans to expand DALL·E API access to developers.

DALL·E 2 pre-training mitigations

OpenAI Blog

OpenAI describes the pre-training data filtering and active learning techniques used to reduce harmful content in DALL·E 2, while also addressing unintended bias amplification caused by data filtering—particularly demographic biases in generated images.

DALL·E now available in beta

OpenAI Blog

OpenAI's DALL·E image generation system is now available in public beta, inviting 1 million users from the waitlist with free monthly credits and optional paid plans. Users gain full commercial rights to generated images and can use features like editing, variations, and collections.

DALL·E API now available in public beta

OpenAI Blog

OpenAI announces DALL·E API is now available in public beta, allowing developers to integrate image generation capabilities directly into their applications. Early adopters include Microsoft, CALA, and Mixtiles, with built-in safety features and content moderation.