An update on disrupting deceptive uses of AI

OpenAI Blog News

Summary

OpenAI publishes a threat intelligence report detailing its efforts to disrupt more than 20 deceptive AI operations worldwide, with a focus on state-linked actors and covert influence campaigns, which are particularly concerning in a year of global elections.

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We are dedicated to identifying, preventing, and disrupting attempts to abuse our models for harmful ends.

# An update on disrupting deceptive uses of AI

Source: [https://openai.com/global-affairs/an-update-on-disrupting-deceptive-uses-of-ai/](https://openai.com/global-affairs/an-update-on-disrupting-deceptive-uses-of-ai/)

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We are dedicated to identifying, preventing, and disrupting attempts to abuse our models for harmful ends. In this year of global elections, we know it is particularly important to build robust, multi-layered defenses against [state-linked cyber actors](https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/) and [covert influence operations](https://openai.com/index/disrupting-deceptive-uses-of-ai-by-covert-influence-operations/) that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms.

Since the beginning of the year, we’ve disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models. To understand the ways in which threat actors attempt to use AI, we’ve analyzed the activity we’ve disrupted, identifying an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape.

Today, we are publishing OpenAI’s latest threat intelligence report, which represents a snapshot of our understanding as of October 2024. As we look to the future, we will continue to work across our intelligence, investigations, security, safety, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately. We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security.

Similar Articles

Disrupting deceptive uses of AI by covert influence operations

OpenAI Blog

OpenAI reports disrupting five covert influence operations that attempted to misuse its AI models for deceptive campaigns; its findings show that the models' built-in safety features prevented threat actors from generating the content they sought. The company is publishing a trend analysis and collaborating with industry, civil society, and government to combat AI-enabled information manipulation.

Disrupting malicious uses of AI

OpenAI Blog

OpenAI publishes an annual report on disrupting malicious uses of AI, detailing its efforts to prevent state-affiliated actors and other bad actors from misusing AI tools for purposes including authoritarian control, child exploitation, influence operations, and cyber attacks.

Disrupting malicious uses of AI: October 2025

OpenAI Blog

OpenAI released its October 2025 report on disrupting malicious uses of AI, detailing the disruption of more than 40 networks that have violated its usage policies since February 2024, including state-affiliated threat actors, scams, and influence operations.

Disrupting malicious uses of AI | February 2026

OpenAI Blog

OpenAI released a February 2026 threat report detailing case studies on detecting and preventing malicious uses of AI, highlighting how threat actors combine AI models with traditional tools and abuse multiple platforms and models in coordinated campaigns.

Disrupting malicious uses of AI by state-affiliated threat actors

OpenAI Blog

OpenAI and Microsoft disrupted five state-affiliated threat actors (from China, Iran, North Korea, and Russia) that were misusing AI services for phishing campaigns, code analysis, and information gathering. The actors were identified and their accounts terminated; findings showed that GPT-4 offered only limited incremental capability for malicious cybersecurity tasks beyond what existing tools provide.