An update on disrupting deceptive uses of AI
Summary
OpenAI published a threat intelligence report detailing its efforts to disrupt more than 20 deceptive AI operations worldwide, focusing on state-linked actors and influence campaigns of particular concern given ongoing global elections.
Similar Articles
Disrupting deceptive uses of AI by covert influence operations
OpenAI reports disrupting five covert influence operations that attempted to misuse its AI models for deceptive campaigns, finding that safety measures built into its models prevented threat actors from generating the content they sought. The company is publishing trend analysis and collaborating with industry, civil society, and government to combat AI-enabled information manipulation.
Disrupting malicious uses of AI
OpenAI publishes an annual report on disrupting malicious uses of AI, detailing its efforts to prevent state-affiliated actors and other bad actors from misusing AI tools for purposes including authoritarian control, child exploitation, influence operations, and cyber attacks.
Disrupting malicious uses of AI: October 2025
OpenAI released its October 2025 report on disrupting malicious uses of AI, detailing more than 40 networks disrupted since February 2024 for violating its usage policies, including state-affiliated threat actors, scams, and influence operations.
Disrupting malicious uses of AI | February 2026
OpenAI released a February 2026 threat report detailing case studies on detecting and preventing malicious uses of AI, highlighting how threat actors combine AI models with traditional tools and abuse multiple platforms and models in coordinated campaigns.
Disrupting malicious uses of AI by state-affiliated threat actors
OpenAI and Microsoft disrupted five state-affiliated threat actors (from China, Iran, North Korea, and Russia) that were misusing AI services for phishing campaigns, code analysis, and information gathering. The actors were identified and their accounts terminated, with findings showing that GPT-4 offered only limited incremental capability for malicious cybersecurity tasks beyond what existing tools already provide.