@Dan_Jeffries1: We are less safe as a society by keeping Mythos (or any other smart model) tightly gated so only a few companies get it…

X AI KOLs Following News

Summary

The article argues that gating smart models like Mythos reduces societal safety, advocating for wider distribution of AI technology to secure the vast ecosystem of open-source and closed-source software projects.

We are less safe as a society by keeping Mythos (or any other smart model) tightly gated so only a few companies get it. Protecting 100 companies is not enough. There are 96 million open source projects on GitHub alone. What about securing all of those projects? What about the other $820 billion worth of closed source software that has hidden cracks too? It's like patching 100 buildings in a city of 10 million buildings and saying we just saved the city. You did not.

Open source alone has an estimated economic value of $8.8 trillion, to say nothing of its societal value. It is embedded in almost every other piece of software, closed or open, on the planet. Society becomes stronger through wider distribution of technology, not by adding gatekeepers.

When we tried to gatekeep encryption, the gates were so high that most Americans didn't even bother getting the 128-bit encrypted browser. They just used the easier-to-get 40-bit one, which was totally unsafe. When we finally took the restrictions away, the era of ecommerce took off like a rocket because now it was feasible.

The world did not become smarter in the era of monks scribbling every text by hand in caves in the dark ages. It became smarter when we scaled reading, and as a byproduct intelligence, with the printing press. Wide distribution raises the bar for everyone and makes society safer and more secure. Simple as that. It's counterintuitive but also true.
Similar Articles

AI has another security problem

Lobsters Hottest

Article argues that AI-generated code and closed-source software are inherently less secure, and that LLMs like Anthropic’s Mythos will exacerbate vulnerabilities, making open-source projects the only trustworthy option.

Claude Mythos Opens the Cybersecurity Pandora's Box

Reddit r/artificial

Anthropic has unveiled Claude Mythos, a highly capable AI model designed to automatically discover security vulnerabilities in operating systems, browsers, and software libraries. Initially restricted to select enterprise and open-source partners under Project Glasswing due to dual-use risks, the release has sparked industry debate over AI security capabilities and corporate marketing tactics.

AI and the Future of Cybersecurity: Why Openness Matters

Hugging Face Blog

Hugging Face analyzes the implications of Anthropic's Mythos model on cybersecurity, arguing that open tools and semi-autonomous agents offer a structural advantage in defending against AI-driven threats.

MythosWatch: Tracking who has access to Anthropic's Mythos AI

Hacker News Top

MythosWatch reveals that early access to Anthropic’s powerful Mythos AI is concentrated among US-aligned infrastructure, finance, and government bodies, while the Pentagon, EU, and China are blocked or excluded, prompting global regulatory responses.