AI Daily
Security • April 9, 2026

The AI That Found Thousands of Zero-Days: Why Anthropic Won't Let Anyone Else Use It

By AI Daily Editorial • April 9, 2026

Anthropic has released a preview of a new frontier model called Mythos with cybersecurity capabilities so advanced that the company has decided it cannot be made publicly available. Instead, it is being distributed to a curated group of 52 organisations through an initiative called Project Glasswing, with the explicit goal of giving defenders a head start before an equivalent capability reaches the open market. The announcement, published this week, marks the first time a major AI lab has withheld a model not because it falls short, but because it works too well.

The capabilities Anthropic documented are striking. Over the preceding weeks, Mythos Preview identified thousands of zero-day vulnerabilities across widely used software, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg, a library with hundreds of millions of installations. The model can chain multiple vulnerabilities together autonomously: in one documented case it constructed a web browser exploit by chaining four separate vulnerabilities, writing what Anthropic describes as "a complex JIT heap spray that escaped both renderer and OS sandboxes." Anthropic says Mythos is capable of identifying and exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed to do so.

Project Glasswing is Anthropic's attempt to put that capability to defensive use before the window closes. The named launch partners are Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself, with over 40 additional organisations also granted access. Anthropic is committing $100 million in model usage credits across these efforts and $4 million in direct donations to open-source security organisations. The model will be used to scan first-party and open-source software for vulnerabilities, with the goal of identifying and patching critical issues before Mythos-class capabilities become broadly accessible.

The framing is defensive, and the goal is genuinely public-spirited: securing the Linux kernel, widely used open-source libraries, and critical infrastructure software benefits everyone who relies on that software, which is essentially everyone. The Linux Foundation's inclusion in the partner list is notable precisely because it represents public interest rather than commercial interest. The argument Anthropic is making is that defenders need AI assistance more urgently than attackers do, because defenders are responsible for all software while attackers need only find one gap.

What sets Mythos apart from previous AI security tools is the level of autonomy involved. Earlier AI-assisted security work accelerated human analysts: better pattern recognition, faster triage, smarter search. Mythos does not accelerate a human analyst; it conducts the analysis independently. It identifies the vulnerability, constructs the exploit chain, tests it, and reports back. The human is not in the loop during the discovery process, only before it starts and after it ends. That is a meaningful shift in what "AI-assisted security" means.

Anthropic's decision to withhold the model from general release is explicit about the reason. The company says Mythos-class capability in the hands of adversaries would tip the balance toward attackers in a way that cannot be corrected by better defence alone. The controlled rollout is an attempt to establish what Anthropic calls "a new equilibrium" before that happens. The theory is that if defenders patch the most critical vulnerabilities first, the attackers who eventually gain access to equivalent capability will find less to exploit.

The open question is whether that equilibrium is achievable, or merely a temporary condition that will erode as equivalent capability becomes available through other means. The compute required to run Mythos-class models is decreasing on the same trajectory as every other AI capability. The architectural innovations that make such models possible are documented in public research. The window in which Glasswing partners have a meaningful head start may be shorter than the project's ambition assumes.
