On April 7, 2026, Anthropic announced something that had not happened before in the public history of commercial AI: a new model the company would not release. Claude Mythos, the announcement made clear, was capable of identifying cybersecurity vulnerabilities at a scale beyond what human experts could detect, and that capability was precisely the reason it would remain private. The company described it as a "dangerous capabilities" finding: a system too useful to potential adversaries to put into general circulation.
Five weeks later, the consequences are rippling outward in ways the company itself may not have fully anticipated.
The International Monetary Fund has weighed in with a formal warning. In a recent blog post, the IMF described systems like Mythos as potential sources of "systemic risk" to global financial infrastructure. The concern is specific rather than abstract: an AI capable of identifying thousands of software vulnerabilities simultaneously could enable attacks to proceed at what the IMF terms "machine speed," giving adversaries a time advantage over defenders that human-staffed security teams could not close. The more unsettling concept in the warning is what the IMF calls "correlated failures": if an attacker uses AI to find a vulnerability in one widely used banking platform and then replicates that exploit across similar systems, the result is not one institution in trouble but a coordinated disruption of payment infrastructure across multiple countries at once. The comparison to financial contagion is implicit but deliberate. When the IMF uses the phrase "systemic risk," it is invoking the language of 2008.
In Washington, the Mythos announcement has apparently done something that years of AI safety advocacy failed to accomplish: it has moved the Trump administration toward regulation. President Trump signed an executive order in December 2025 that explicitly targeted state-level AI laws, framing regulation as a threat to innovation rather than a safeguard against its risks. That framing is now under internal revision. The administration is reportedly weighing an executive order that would create a formal review process for the most advanced AI models, analogous to the FDA's drug approval system. National Economic Council Director Kevin Hassett used that comparison explicitly on Fox Business this week. National Cyber Director Sean Cairncross is coordinating the government's response. Treasury Secretary Scott Bessent has reportedly warned banking executives directly about the threat the IMF has now echoed in writing.
The internal dissent is real. David Sacks, a White House AI adviser with a long record of skepticism toward AI safety arguments, has publicly pushed back on his own podcast, arguing the threat is overblown. Prediction markets reflect the uncertainty: the "Trump orders federal review of AI model releases by May 31" contract on Polymarket sits around 19%, suggesting traders are not yet confident the order materializes on the stated timeline. The debate within the administration mirrors a wider argument about whether Mythos represents a genuine inflection point or a well-timed piece of competitive positioning by Anthropic, which stands to benefit from a regulatory environment that emphasizes safety evaluation.
What makes the moment genuinely new is not the policy debate, which will resolve the way Washington policy debates usually do. It is the category shift that Mythos represents. Until this announcement, AI safety debates were largely hypothetical: arguments about what systems might eventually be capable of, contested by people who disagreed about timelines and magnitudes. Mythos introduces a different structure: a specific system, with specific capabilities its creators judged too dangerous to publish, sitting in the hands of a private company, with no established process for government oversight. The question of who decides what happens with it, on what basis, and with what accountability, has no existing answer.
That vacuum is what Washington is now trying to fill. Whether it succeeds before the next Mythos-level capability arrives at a different lab, with a different approach to disclosure, is the more interesting question behind the current news cycle.