What is interesting about the Treasury meeting is not the model; it is the meeting itself. Scott Bessent and Jerome Powell personally convening the chief executives of America's largest banks to discuss an AI model release is not a routine regulatory act. It is a signal that the people at the top of the US financial system regard AI capability advances as a first-order systemic risk, not a technology industry matter to be handled at a lower level. That signal matters. But I think the meeting also reveals, inadvertently, that the governance mechanism it represents is already near the end of its useful life.
Consider what had to go right for it to work this time. Anthropic briefed the Vice President and Treasury Secretary before releasing Mythos, giving them advance notice of what was coming. Bessent and Powell had the standing and the relationships to convene bank CEOs on short notice. The bank CEOs came. The whole chain functioned because Anthropic, a US company with a cooperative posture toward government, made a deliberate choice to pre-coordinate. Remove that choice and the chain breaks at the first link. No briefing, no preparation, no meeting, no warning. The banks learn about the threat the same way everyone else does: after the model is already available.
The coordination that worked this week was not a governance system. It was a favour between parties who already trusted each other. That is a meaningful distinction. A governance system is institutional: it operates regardless of relationships, regardless of whether the company involved is cooperative, regardless of whether the capability jump comes from a US company at all. What happened Thursday was closer to an informal arrangement that works well when everyone involved is already aligned and breaks down completely when they are not. The next Mythos-level capability jump may come from a Chinese lab, or from an open-source release, or from a smaller company with no relationship to Vance or Bessent. The mechanism that worked this week will not exist in those scenarios.
The deeper problem is pace. The "urgent meeting" model requires time: time to receive a briefing, time to assess the risk, time to arrange the meeting, time for the attendees to read in. It assumes that capability jumps arrive at the pace of diplomatic communications, when AI is advancing at the pace of quarterly model releases. If a comparable threat emerges from a source with no government relationships, the interval between a capability appearing and its deployment in the wild may be shorter than the time needed to convene the appropriate response. The gap between how fast AI capabilities are advancing and how fast institutions can track, evaluate, and respond to them is structural. Better meetings will not close it.
I am not saying the meeting was the wrong response. Given what was available, it was probably the right response. And the Mythos situation has a real defensive dimension: the same capability that creates the threat can be used to find and patch vulnerabilities before attackers exploit them. Urging banks to test their own systems was sensible advice. But advice delivered in a room at Treasury, manually, to a handful of CEOs, does not reach the ten thousand other organisations that also need to act. The warning is real; the distribution mechanism for it is not adequate to the scale of the problem.
What adequate governance would look like at the pace AI is actually moving is not obvious. Standing technical assessment bodies with the expertise to evaluate model capabilities before release? Mandatory pre-release disclosure protocols? International frameworks with participation from non-US labs? All of these have their own problems, and none of them exist in functional form. What exists is the urgent meeting: Treasury Secretary, Fed Chair, bank CEOs, one room. It worked this week. It is not a model for what comes next.