OpenAI released a February 2026 threat report detailing case studies on detecting and preventing malicious uses of AI, highlighting how threat actors combine AI models with traditional tools and abuse multiple platforms and models in coordinated campaigns.
Doppel launches an AI defense system powered by OpenAI's GPT-5 and o4-mini models that autonomously detects and stops deepfakes and online impersonation attacks at scale, reducing analyst workload by 80% and response times from hours to minutes.
OpenAI released its October 2025 report on disrupting malicious uses of AI, detailing more than 40 networks disrupted since February 2024 for violating its usage policies, including state-affiliated threat actors, scam operations, and covert influence campaigns.
Outtake, an AI-powered cybersecurity platform built with GPT-4o and OpenAI o3, resolves digital threats 100x faster by deploying always-on AI agents that scan millions of digital surfaces per minute to detect and investigate threats, cutting takedown timelines from 60 days to hours.
OpenAI outlines comprehensive security measures on the path to AGI, including AI-powered cyber defense, continuous adversarial red teaming with SpecterOps, and security frameworks for emerging AI agents like Operator. The company emphasizes proactive threat detection, industry collaboration, and security integration into infrastructure and models.
OpenAI publishes an annual report on disrupting malicious uses of AI, detailing its efforts to prevent state-affiliated actors and other bad actors from misusing AI tools for purposes including authoritarian control, child exploitation, influence operations, and cyber attacks.
OpenAI disclosed the disruption of a covert Iranian influence operation (Storm-2035) that used ChatGPT accounts to generate political content targeting the 2024 U.S. election, among other topics, for distribution across social media and fake news websites. The operation achieved minimal audience engagement and was identified through collaboration with Microsoft's threat intelligence team.
OpenAI reports disrupting five covert influence operations that attempted to misuse its AI models for deceptive campaigns, with findings showing that safety measures built into the models prevented threat actors from generating the content they sought. The company is publishing trend analysis and collaborating with industry, civil society, and government to combat AI-enabled information manipulation.