AI Daily
Security • 2 May 2026

OpenAI Called It Fear-Based Marketing. Then It Did the Same Thing.

By AI Daily Editorial • 2 May 2026

When Anthropic restricted public access to Mythos, its frontier cybersecurity model, Sam Altman had a ready verdict. The move, he said on X, amounted to "fear-based marketing." It was a sharp criticism of a rival's decision to gate a powerful tool behind a selective vetting process, framing caution as a PR calculation rather than a genuine safety judgment. Within days, OpenAI announced that its own competing cybersecurity AI, GPT-5.5 Cyber, would also be released only to a select group of vetted "critical cyber defenders." The language was different. The structure was identical.

The two tools are designed for similar purposes: penetration testing, vulnerability identification and exploitation, malware reverse engineering. In the hands of security professionals, these capabilities are genuinely useful for finding and patching weaknesses before attackers do. The concern that drove both companies to the same restrictive conclusion is what those capabilities look like outside that context. Anthropic's own description of Mythos, according to reporting from Bloomberg, is frank: the model can identify software vulnerabilities and exploit them autonomously, build its own hacking tools, and target systems including Linux. Testing by researchers found Mythos could "break into digital infrastructure easily" and act on its own rather than simply assisting human operators. That is not a model you hand to anonymous requesters through a public API.

The US government appears to have reached the same conclusion through a different route. The Trump administration raised concerns about Anthropic's proposal to expand Mythos access from its initial controlled group to roughly 70 organisations. Officials worried about potential misuse and, in a more logistical register, about whether a broader rollout would strain computing capacity the government also relies on. A White House official characterised the challenge as balancing public safety with technological progress. The caution is notable because it came not from AI critics but from an administration that has generally pushed for faster AI adoption inside federal agencies.

The restricted-release strategy has an obvious limitation, which the Mythos situation illustrated within its first weeks. An unauthorised group reportedly gained access to the model despite Anthropic's controls. This is not unusual for high-profile restricted releases; determined actors find ways around access barriers. It raises a question about what restricted release actually achieves: whether it meaningfully limits harm, or whether it functions primarily as a due-diligence step that satisfies regulators while providing incomplete protection in practice. The two outcomes are not mutually exclusive; a vetting barrier that stops casual misuse while remaining permeable to sophisticated actors still does something, just less than it implies.

OpenAI's version of the vetting approach is its Trusted Access for Cyber programme, which the company says has scaled to thousands of verified defenders and hundreds of teams responsible for protecting critical infrastructure. The programme operates in tiers: general users can apply for access to AI tools with standard cybersecurity capabilities, while more capable and less restricted models, including GPT-5.5 Cyber, require application and approval against stricter criteria. The company says it is working to expand access over time by consulting with the US government and broadening the pool of vetted users. The Anthropic process follows similar logic, though neither company has published detailed criteria for how decisions are made.

What the sequence of events reveals, underneath the competitive friction, is that both companies arrived at roughly the same position when confronted with the same problem. The capabilities of frontier cybersecurity AI are genuinely dual-use in a way that raises real concerns, not hypothetical ones. Altman's "fear-based marketing" comment may have been accurate in one sense: Anthropic's framing of Mythos probably did serve competitive and attention-seeking purposes alongside whatever safety reasoning motivated it. But the framing now looks less like a critique and more like a preview of where OpenAI itself would land when it faced the same trade-offs. Restricting access to tools that can autonomously exploit critical infrastructure is not fear-based marketing. It is the obvious response to having built something dangerous.
