
Anthropic is limiting access to its latest AI model, Mythos. The real risks may already be out there

10.04.2026

Anthropic’s new AI model, Mythos, is causing a stir among cybersecurity experts and policymakers. The company says its new model is so skilled at finding and exploiting software vulnerabilities that it’s too dangerous to release. Instead, it is limiting access to a small group of major technology companies whose software is the foundation for many other digital services, hoping to give defenders time to strengthen their systems.

Anthropic is not the only AI lab producing models with these kinds of capabilities, or considering similar release strategies to try to ensure cyber defenders have access to these systems before hackers do. OpenAI is reportedly preparing a new model—internally known as “Spud”—that could match Mythos in cybersecurity capabilities. According to a report from Axios, the company is also working on an advanced cybersecurity-focused system that it plans to release in a phased rollout to a small group of partners, again to try to give defenders a head start.

Some analysts have dismissed these cautious, limited releases as more about marketing and creating hype around new models, rather than purely safety-driven decisions. But most agree that AI-driven cyber capabilities have reached a dangerous tipping point. Even without the powerful new model, they say existing, publicly available AI models can already carry out sophisticated cyberattacks—sometimes in minutes.

Researchers are concerned about both the scale and accessibility of AI‑enabled attacks. Tasks that once required advanced expertise—like scanning code for vulnerabilities or running attacks that require chaining multiple exploits together—are increasingly being automated or semi‑automated by AI systems. Attackers, even those without advanced technical skills, can now launch highly automated attacks across thousands of systems at once in a massive, coordinated assault. In practical terms, that raises questions for both enterprises and policymakers about how to protect critical infrastructure in a world where these advanced AI capabilities will soon be in the hands of bad actors and hostile nation-states. Unless government and industry harden defenses, the world could see a wave of devastating cyberattacks taking down banking systems, power grids, hospitals, or water systems. It is exactly such a nightmare scenario that Anthropic says it is hoping to head off by limiting Mythos’ release.

Some researchers say it is not clear, however, how much the new models increase the chances of this kind of cyber-Armageddon. But the reason for their skepticism is not reassuring: they say that much........

© Fortune