Disrupting malicious uses of AI: October 2025

OpenAI Blog News

Summary

OpenAI released its October 2025 report on disrupting malicious uses of AI, detailing more than 40 networks disrupted since February 2024 for violating its usage policies, including state-affiliated threat actors, scams, and covert influence operations.

Discover how OpenAI is detecting and disrupting malicious uses of AI in our October 2025 report. Learn how we’re countering misuse, enforcing policies, and protecting users from real-world harms.

# Disrupting malicious uses of AI: October 2025

Source: [https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/](https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/)

Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying innovations that help people solve difficult problems and by building democratic AI grounded in common-sense rules that protect people from real harms.

Since we began our public threat reporting in [February 2024](https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/), we’ve disrupted and reported over 40 networks that violated our usage policies. This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.

In this update, we share case studies from the past quarter and explain how we’re detecting and disrupting malicious use of our models. We continue to see threat actors bolt AI onto old playbooks to move faster, rather than gain novel offensive capability from our models. When activity violates our policies, we ban accounts and, where appropriate, share insights with partners. Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users.

Similar Articles

Disrupting malicious uses of AI | February 2026

OpenAI Blog

OpenAI released a February 2026 threat report detailing case studies on detecting and preventing malicious uses of AI, highlighting how threat actors combine AI models with traditional tools and abuse multiple platforms and models in coordinated campaigns.

Disrupting malicious uses of AI

OpenAI Blog

OpenAI publishes an annual report on disrupting malicious uses of AI, detailing its efforts to prevent state-affiliated and other malicious actors from misusing AI tools for purposes including authoritarian control, child exploitation, influence operations, and cyberattacks.

An update on disrupting deceptive uses of AI

OpenAI Blog

OpenAI publishes a threat intelligence report detailing efforts to disrupt over 20 deceptive AI operations globally, with a focus on state-linked actors and influence campaigns that are particularly concerning given elections around the world.

Disrupting malicious uses of AI by state-affiliated threat actors

OpenAI Blog

OpenAI and Microsoft disrupted five state-affiliated threat actors (from China, Iran, North Korea, and Russia) who were misusing AI services for phishing campaigns, code analysis, and information gathering. The actors were identified and their accounts terminated, with findings showing that GPT-4 offered only limited incremental capability for malicious cybersecurity tasks beyond existing tools.

Disrupting deceptive uses of AI by covert influence operations

OpenAI Blog

OpenAI reports disrupting five covert influence operations attempting to misuse its AI models for deceptive campaigns, with findings showing that models designed with safety in mind refused to generate the content threat actors sought. The company is publishing trend analysis and collaborating with industry, civil society, and government to combat AI-enabled information manipulation.