The Supreme Court declined this month to hear a case about whether AI-generated art can be copyrighted, leaving in place a lower court ruling that it cannot — because copyright requires a human creator. That might sound like a clean decision, but it resolves almost nothing about the larger and more economically significant question hanging over the AI industry: whether training models on copyrighted material without a licence constitutes infringement. On that question, the courts are still in the early innings of what looks like a very long game.
The volume of litigation is striking. The Copyright Alliance counts over 70 infringement cases filed against AI companies to date, spanning publishers, authors, newspapers, musicians, visual artists, and — most recently — YouTube creators. A group of YouTubers including the h3h3 channel filed suit against Snap in January, alleging their video content was used to train Snap's AI systems without permission. The same group had previously sued Nvidia, Meta, and ByteDance on similar grounds. The pattern has hardened into a playbook: identify a major platform with AI products, allege training data infringement, file suit.
The most financially consequential case currently in motion is the music publishers' lawsuit against Anthropic. Concord Music Group and Universal Music Publishing Group are seeking $3 billion in damages, alleging Anthropic illegally downloaded more than 20,000 copyrighted songs — including sheet music, lyrics, and compositions — to train Claude. Anthropic disputes the characterisation of the training process as copyright infringement; the case has not yet gone to trial. Three billion dollars is large enough to be existentially significant for a company still burning through capital, and the outcome will likely influence how courts treat training data claims across the industry.
Against this backdrop, the music industry's settlements with AI music startups Suno and Udio — both negotiated by Warner Music Group — offer an instructive contrast. Rather than pursuing damages to judgment, Warner struck licensing deals that allow the AI platforms to operate commercially in exchange for ongoing revenue sharing. It's a pragmatic outcome: the labels get money, the AI companies get legal certainty, and both sides avoid years of litigation. The question is whether this model — negotiating a licensing framework rather than litigating the training data question to a conclusion — will become the norm, or whether plaintiffs with more at stake will push the copyright question all the way through the courts.
The fair use argument, which AI companies have largely relied on in their training data defences, remains untested at the Supreme Court level. Fair use is a highly fact-specific analysis, and different training datasets, different use cases, and different ways of reproducing training material in outputs could all produce different results. That uncertainty is itself a problem — for creators trying to understand their rights, for AI companies trying to plan their legal exposure, and for investors trying to value companies whose core assets may or may not be legally defensible. The Supreme Court's refusal to take the AI copyright ownership case doesn't accelerate resolution of the training data question. It just leaves the uncertainty to compound.