AI Daily
Opinion

The Week Source Code Stopped Being a Moat

By Peter Harrison • April 2, 2026

Three big code stories broke this week and nobody has connected them. Anthropic accidentally published 163,000 lines of its own proprietary TypeScript, panicked, and sent DMCA notices to thousands of GitHub repositories. Anthropic's own Frontier Red Team published research showing that Claude can autonomously find novel, high-severity security vulnerabilities in well-audited open source codebases, including bugs that had survived decades and millions of CPU-hours of fuzzing, at a scale no human team can match. And the chardet Python library dispute, which broke in early March, revealed that AI-assisted clean room reimplementation can produce a functionally complete, relicensed version of a GPL library with less than 1.3% textual similarity to the original. Three stories. One problem. And the irony is almost too neat: the same company sits at the centre of all three.

The problem is this: the legal and practical architecture that the software industry built around source code as ownable property has just had three of its load-bearing walls knocked out at once.

Think about what those walls were. The first was proprietary secrecy: if nobody can read your source, they cannot copy it, cannot find its weaknesses, cannot understand your architecture. The second was the GPL friction barrier: rewriting someone else's copyleft code cleanly enough to escape the licence has historically been so expensive that most companies simply comply or avoid the code entirely. The third was security through obscurity: even if your code leaks, understanding it well enough to find exploitable vulnerabilities requires skilled human analysts working for weeks or months. These were not perfect protections. But they were real ones, and the software industry priced them in everywhere, from venture capital to enterprise procurement to open source licensing strategy.

AI has dissolved all three simultaneously. Anthropic's red team did not just find vulnerabilities in open source projects: it found over five hundred of them across multiple codebases, including novel high-severity bugs that professional security researchers had missed for years. Trend Micro's ÆSIR platform has 21 CVEs to its name across NVIDIA, Tencent and MLflow. A startup called AISLE apparently swept the entire January 2026 OpenSSL patch batch before public disclosure. When code is exposed, and all code eventually is, the time between exposure and exploitation has collapsed. The chardet case is the same dynamic applied to licensing: Dan Blanchard used Claude to rewrite the library from scratch, achieved a 48x speed improvement, relicensed from LGPL to MIT, and posted plagiarism detection results showing 1.29% textual similarity to the original. Bruce Perens called it the death of software licensing economics, which is too dramatic but not entirely wrong. The friction that made clean room reimplementation prohibitively expensive is gone.

Now consider Anthropic's position. The company built the tool that can do all of this, and built it very well, which is genuinely impressive. It then shipped a build artifact that made its own source code public, and scrambled to contain the leak with a DMCA campaign that went further than intended and had to be walked back. I am not making a point about competence: packaging mistakes happen, and the DMCA overcorrection is understandable panic. The point is the shape of the trap. Anthropic has built a product that renders source code secrecy much harder to maintain and security through obscurity largely ineffective. It then demonstrated both conclusions with its own code in the same news cycle.

The legal question the chardet case raises is genuinely unresolved and I do not think it will be resolved cleanly. The traditional clean room defence depends on separating the people who read the original code from the people who write the new code. An LLM trained on GPL software carries that code in its weights, with no separation possible at all. Whether a court will treat that as equivalent to a human developer who studied the original source and then sat down to write a new implementation is a question for lawyers and judges, and I would not bet on a consistent answer across jurisdictions. What I am confident about is that the chardet case is not the last of its kind. It is the first one that got reported.

I have been a software developer long enough to remember when the argument for open source was that it would make software better by exposing it to more eyes. The argument was right, and it won. What nobody fully worked through is what happens when the eyes doing the reading are AI systems that can read millions of lines in hours, find every vulnerability, understand every architectural decision, and produce a clean reimplementation faster than a human team could read the original. The same capability that makes AI a remarkable developer tool is the capability that makes source code a much weaker form of intellectual property than we have been treating it as.

None of this makes Anthropic uniquely culpable. Every company building frontier AI is building the same capability, and they are all sitting inside the same blast radius. What this week illustrated is that the blast radius is not theoretical. The protections that software companies and the open source movement have relied on were always partly practical and partly legal. The practical component is eroding fast. The legal component has not caught up and may not be able to. If you are thinking about your software strategy on a five-year horizon and this is not on your list of things to reckon with, I would revisit that list.