New and improved content moderation tooling
Summary
OpenAI has launched an improved Moderation API endpoint that uses GPT-based classifiers to detect sexual, hateful, violent, and self-harm content, free for developers to use. OpenAI also released a technical paper and an evaluation dataset alongside the tool.
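To make the endpoint's shape concrete, here is a minimal offline sketch of a moderation request body and response parse. The request/response field names follow the Moderation API's documented JSON; the response values below are illustrative placeholders, not real model output.

```python
import json

def build_request(text: str, model: str = "text-moderation-latest") -> str:
    """Serialize a moderation request body for POST /v1/moderations."""
    return json.dumps({"model": model, "input": text})

def flagged_categories(response: dict) -> list[str]:
    """Return the names of the categories the classifier flagged."""
    result = response["results"][0]
    return [name for name, hit in result["categories"].items() if hit]

# Hypothetical response, shaped like the endpoint's JSON.
sample_response = {
    "id": "modr-123",
    "model": "text-moderation-007",
    "results": [{
        "flagged": True,
        "categories": {"hate": False, "violence": True, "self-harm": False},
        "category_scores": {"hate": 0.01, "violence": 0.91, "self-harm": 0.0},
    }],
}

print(flagged_categories(sample_response))  # ['violence']
```

In practice the request is sent with an API key via an OpenAI SDK or plain HTTPS; the parsing step is the same either way.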
Similar Articles
Upgrading the Moderation API with our new multimodal moderation model
OpenAI is launching `omni-moderation-latest`, a new multimodal moderation model built on GPT-4o that supports both text and image inputs, adds new harm categories, and significantly improves accuracy across 40 languages. The updated model is free to use via the Moderation API for all developers.
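Because `omni-moderation-latest` accepts mixed text and image inputs, the request's `input` field becomes a list of typed parts. A small sketch of that payload, with a placeholder image URL:

```python
# Sketch of a multimodal moderation request for omni-moderation-latest.
# The part shapes ("text" / "image_url") follow the API's multi-part
# input format; the URL is a placeholder, not a real image.

def build_multimodal_input(text: str, image_url: str) -> list[dict]:
    """Combine one text part and one image part into a moderation input."""
    return [
        {"type": "text", "text": text},
        {"type": "image_url", "image_url": {"url": image_url}},
    ]

payload = {
    "model": "omni-moderation-latest",
    "input": build_multimodal_input(
        "Caption to check alongside the image",
        "https://example.com/photo.png",
    ),
}
print(payload["input"][1]["type"])  # image_url
```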
A Holistic Approach to Undesired Content Detection in the Real World
OpenAI presents a comprehensive framework for building robust content moderation systems through careful taxonomy design, data quality control, active learning pipelines, and techniques to prevent overfitting. The approach detects multiple categories of undesired content including sexual content, hate speech, violence, and self-harm, achieving performance superior to existing off-the-shelf models.
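The active-learning step in that pipeline can be illustrated with a minimal uncertainty-sampling sketch: route the unlabeled examples whose classifier scores sit closest to the decision boundary to human labelers. The scores and boundary below are invented for illustration, not taken from the paper.

```python
def select_for_labeling(scored: list[tuple[str, float]], k: int = 2,
                        boundary: float = 0.5) -> list[str]:
    """Pick the k examples whose scores are nearest the decision boundary."""
    ranked = sorted(scored, key=lambda item: abs(item[1] - boundary))
    return [text for text, _ in ranked[:k]]

# Invented (example_id, classifier_score) pairs for illustration.
pool = [("ex1", 0.97), ("ex2", 0.52), ("ex3", 0.04), ("ex4", 0.48)]
print(select_for_labeling(pool))  # ['ex2', 'ex4']
```

Confident predictions (scores near 0 or 1) are skipped, so labeling effort concentrates on the cases most likely to improve the model.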
Using GPT-4 for content moderation
OpenAI describes using GPT-4 for content moderation by enabling policy experts to develop and refine content policies in hours rather than months through an iterative process of comparing GPT-4 judgments against human labels. The approach reduces manual moderation burden while keeping humans in the loop for complex cases and bias monitoring.
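The iterative loop described above hinges on measuring where GPT-4's judgments diverge from human labels, since disagreements point to policy clauses that need refinement. A minimal sketch of that comparison, with labels invented for illustration:

```python
def disagreement_rate(model_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of examples where the model's label differs from the human's."""
    assert len(model_labels) == len(human_labels), "label lists must align"
    diffs = sum(m != h for m, h in zip(model_labels, human_labels))
    return diffs / len(model_labels)

# Invented per-example decisions for illustration.
model_judgments = ["allow", "flag", "flag", "allow"]
human_judgments = ["allow", "flag", "allow", "allow"]
print(disagreement_rate(model_judgments, human_judgments))  # 0.25
```

Examples where the two disagree would then be reviewed to decide whether the policy wording or the human label should change.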
Helping developers build safer AI experiences for teens
OpenAI releases prompt-based safety policies and the open-weight gpt-oss-safeguard model to help developers build age-appropriate AI experiences for teens, covering risks like graphic content, harmful behaviors, and dangerous activities.
OpenAI API
OpenAI announces the release of an API for accessing its AI models with a general-purpose text interface, launching in private beta with strict safety measures including mandatory production reviews and content restrictions to prevent harmful use cases.