AI is breaking two vulnerability cultures

Hacker News Top News

Summary

AI is disrupting traditional vulnerability disclosure cultures (coordinated disclosure vs. bugs-are-bugs) by accelerating the detection and exploitation of security flaws, making long embargoes less effective and forcing faster, AI-assisted responses.


# AI is Breaking Two Vulnerability Cultures

Source: [https://www.jefftk.com/p/ai-is-breaking-two-vulnerability-cultures](https://www.jefftk.com/p/ai-is-breaking-two-vulnerability-cultures)

A week ago the [Copy Fail](https://copy.fail/) vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the [same day](https://github.com/V4bel/dirtyfrag/blob/master/assets/write-up.md#disclosure-timeline-1). In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open. His goal was that with only the raw fix public, the knowledge that a serious vulnerability existed could be "embargoed": the people in a position to address it know, but they've agreed not to say anything for a few days. Someone else [noticed](https://www.openwall.com/lists/oss-security/2026/05/07/12) the change, however, realized the security implications, and [shared it publicly](https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boogaloo). Since it was now out, the embargo was deemed over, and we can now see the [full details](https://github.com/V4bel/dirtyfrag/blob/master/assets/write-up.md).

It's interesting to see the tension here between two different approaches to vulnerabilities, and to think about how this is likely to change with AI acceleration.

On one side you have "coordinated disclosure" culture. This is probably the most common approach in computer security. When you discover a security bug you tell the maintainers privately and give them some amount of time (often 90 days) to fix it. The goal is that a fix is out before anyone learns about the hole.

On the other side you have "bugs are bugs" culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn't, then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. Often people won't notice, with so many changes going past, and there's still time to get machines patched.

This approach never worked perfectly, but with AI getting good at finding vulnerabilities it's a much bigger problem. So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher. Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective. [1]

Long embargoes, however, aren't doing well either. The historical pace of detection was slow: if you found something and reported it to the vendor with a 90-day disclosure window, there was a very good chance no one else would notice during that time. But now, with so many AI-assisted groups scanning software for vulnerabilities, that no longer holds. In this case, just nine hours after Kim reported the ESP vulnerability, Kuan-Ting Chen also [independently reported it](https://github.com/V4bel/dirtyfrag/blob/master/assets/write-up.md#disclosure-timeline). Embargoes can also increase risk: they create a false sense of non-urgency and limit which actors can work on fixing a flaw.

I don't know how to resolve this, but personally I think very short embargoes are a good approach, and they'd need to get even shorter over time. Luckily AI can speed up defenders as well as attackers here, allowing embargoes that would previously have been uselessly short.
[1] I tested on Gemini 3.1 Pro, ChatGPT-Thinking 5.5, and Claude Opus 4.7. All three got it right away when given [f4c50a403](https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=f4c50a4034e62ab75f1d5cdd191dd5f9c77fdff4). When I gave them just the diff, imagining a hypothetical future where diffs are still public right away but with less context, Gemini was sure it was a security fix, GPT thought it probably was, and Claude thought it probably wasn't. This is just a very quick test to illustrate what's possible: one run of each with the prompt "Without searching, does this look like a security patch?" Don't put much stock in the cross-model comparison!
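
To make the footnote's test concrete, here is a minimal sketch of what "having AI evaluate each commit as it passes" could look like: walk the most recent commits of a local checkout and ask a model the article's prompt about each diff. The OpenAI Python client, the model name, and the function names are my assumptions for illustration, not what the author used; the article's tests were one manual run each against Gemini, GPT, and Claude.

```python
#!/usr/bin/env python3
"""Sketch: flag recent commits in a git checkout that look like security
fixes, using the prompt from the article's footnote. Assumes the `openai`
package and an OPENAI_API_KEY; any chat-completion API would work the same."""

import subprocess

from openai import OpenAI  # pip install openai

MODEL = "gpt-4o-mini"  # placeholder model name, not the author's choice
PROMPT = "Without searching, does this look like a security patch?"

client = OpenAI()


def recent_commits(repo: str, n: int = 20) -> list[str]:
    """Return the hashes of the n most recent commits on the current branch."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"-{n}", "--format=%H"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def commit_patch(repo: str, sha: str) -> str:
    """Return the commit message plus diff for one commit."""
    out = subprocess.run(
        ["git", "-C", repo, "show", sha],
        capture_output=True, text=True, check=True,
    )
    return out.stdout[:100_000]  # crude truncation to stay within context limits


def verdict(patch: str) -> str:
    """Ask the model for a judgment on a single patch."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{patch}"}],
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    repo = "."  # e.g. a clone of netdev/net.git
    for sha in recent_commits(repo):
        print(f"{sha[:12]}  {verdict(commit_patch(repo, sha)).splitlines()[0]}")
```

A real scanner would presumably poll for new commits continuously and route the "probably yes" answers to a human or a deeper analysis pass, which is what makes the per-commit evaluation the article describes so cheap to run at scale.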

Similar Articles

Scaling security with responsible disclosure

OpenAI Blog

OpenAI publishes an Outbound Coordinated Vulnerability Disclosure Policy outlining how it responsibly reports security vulnerabilities discovered in third-party software, anticipating increased vulnerability detection as AI systems become more capable of finding and patching security issues.

Outbound coordinated vulnerability disclosure policy

OpenAI Blog

OpenAI has published its outbound coordinated vulnerability disclosure policy, outlining how it responsibly reports security vulnerabilities discovered in third-party software to vendors and open-source maintainers, including through AI-powered security analysis. The policy covers detection methods, peer review processes, and disclosure procedures under its Security Research team branded 'Aardvark'.

AI has another security problem

Lobsters Hottest

Article argues that AI-generated code and closed-source software are inherently less secure, and that LLMs like Anthropic’s Mythos will exacerbate vulnerabilities, making open-source projects the only trustworthy option.

AI and the Future of Cybersecurity: Why Openness Matters

Hugging Face Blog

Hugging Face analyzes the implications of Anthropic's Mythos model on cybersecurity, arguing that open tools and semi-autonomous agents offer a structural advantage in defending against AI-driven threats.