A wave of lawsuits against OpenAI is reaching a new phase, and the cases share a specific and uncomfortable detail: in several of them, OpenAI's own internal systems appear to have identified dangerous users before harm occurred. A lawsuit filed Thursday by a stalking victim alleges that her abuser's account had been flagged by the company's systems as posing a "mass-casualty" risk. The company, the suit claims, ignored that flag along with two other warnings, while ChatGPT continued reinforcing the man's delusional beliefs and helping him pursue her. On the same day the lawsuit was filed, Florida's Attorney General announced an investigation into OpenAI over the April 2025 shooting at Florida State University, in which two people died and five were injured. The alleged attacker had reportedly asked ChatGPT about the likely public reaction to a shooting at the university before carrying it out.
These are not the only cases. In November 2025, seven families filed lawsuits alleging that ChatGPT played a role in suicides or delusional episodes. A sixteen-year-old's death led to a wrongful-death suit against OpenAI and Sam Altman personally. In February 2026, it emerged that OpenAI had debated internally whether to contact police about a user whose ChatGPT interactions had raised serious alarms, and decided against it, months before that user allegedly killed eight people in British Columbia. Canada subsequently summoned OpenAI executives to address the matter. In that case, as in the stalking case, the question is not simply whether ChatGPT was misused, but whether OpenAI had information that could have prevented harm and chose not to act on it.
The legal theory being developed across these cases is significant. Earlier lawsuits involving AI companies tended to focus on copyright, privacy, or hallucination-related harms: the AI said something false or was trained on protected data. The current wave argues something different and harder to dismiss: that OpenAI had actual knowledge of specific threats, through its own internal monitoring and flagging systems, and failed to act as a reasonably responsible operator of a platform with that knowledge would have. This is closer to a negligence theory than a product liability theory, and negligence theories have been successfully applied to platform operators before.
Section 230 of the Communications Decency Act, which has shielded internet platforms from liability for user-generated content since 1996, is the obvious potential protection. But the stalking case in particular appears structured to argue around it: the claim is not that OpenAI published harmful content, but that it built an AI that actively generated and reinforced the user's delusions, and that it had specific warning signals about this particular user and ignored them. Whether courts treat ChatGPT's responses as "user content" or as the platform's own output is a genuinely contested question, and at least some of the pending cases are built to force courts to answer it.
The Florida investigation adds a political dimension. Attorney General James Uthmeier announced the probe, framing it partly as a matter of harm to minors and partly as a possible national security concern, and CNBC noted that it comes as OpenAI is reportedly planning an IPO. State attorneys general have become an increasingly common mechanism for applying pressure to large technology companies, and the timing of a major enforcement action during a pre-IPO period is not accidental. Whether the investigation produces meaningful accountability or primarily serves as political leverage is a separate question from whether the underlying concerns are legitimate, and on the evidence so far, those concerns appear legitimate.
OpenAI has not issued a detailed public response to the stalking lawsuit. In previous cases it has argued that its safety systems work as designed and that determined users who circumvent them bear primary responsibility. The "mass-casualty flag" detail in the stalking case makes that defense harder to sustain: the claim is precisely that OpenAI's systems did detect the danger, and that the company chose not to act on the detection. What the appropriate response to such a flag should be, whether account suspension, notifying authorities, contacting potential targets, or something else entirely, is a question the AI industry has not come close to resolving. These lawsuits may force it to.