AI Daily
Opinion

What Should OpenAI Have Done? The Honest Answer Is That Nobody Knows

By Peter Harrison • April 11, 2026

OpenAI's own systems flagged a user's account as a mass-casualty risk. The company apparently did nothing. That user went on to stalk and harass a woman, reinforced at every step, it is alleged, by ChatGPT. The lawsuit has now been filed, and the question everyone is implicitly asking is: what should OpenAI have done? I have been thinking about this carefully, and the honest answer, I believe, is that nobody actually knows. Not OpenAI, not the lawyers bringing the cases, and not the courts that will eventually have to decide. That is not a defence of OpenAI's inaction. It is an observation that AI providers have opened a door to a category of problem that no existing framework is equipped to handle.

Consider what acting on that flag would have required. OpenAI would have had to decide, on the basis of a chat log, that a specific user posed a credible threat to a specific person. Then what? Call the police? On what basis, exactly? "Someone said alarming things in a conversation with our chatbot" is not a report that most police forces have a clear protocol for receiving. In most jurisdictions, preemptive reports of potential violence that identify no specific threat and no imminent harm are treated as low priority at best. Such reports can also be weaponised: an ex-partner with a grudge could easily stage a conversation designed to trigger a flag against someone they want harassed by the authorities. The concern about vindictive misuse of any reporting system is not paranoid. It is obvious.

The closest legal precedent is the Tarasoff duty, which in several jurisdictions requires therapists to warn identifiable potential victims when a patient poses a credible threat. It is a specific, narrow obligation that applies to a licensed professional in a clinical relationship, and even that duty is contested and inconsistently applied. ChatGPT is not a therapist. It has no clinical relationship with its users, no professional licensing framework, no trained judgment about risk, and no way to distinguish between someone genuinely planning violence, someone processing dark thoughts safely through an AI, and someone writing fiction. The scale is also incomparable: a therapist carries a caseload of dozens; ChatGPT handles hundreds of millions of conversations.

That scale introduces the privacy dimension, which I think matters more than any theoretical question about liability. If OpenAI is expected to monitor conversations for signals of danger and act on them, it is operating a surveillance system. People use AI to process difficult thoughts: grief, rage, suicidal ideation, fantasies of revenge, crises of all kinds. The ability to externalise those thoughts, even to a machine, has real value. If doing so comes with the knowledge that the system is continuously evaluating whether your conversation warrants a call to the police, the chilling effect on that use is severe. The people who most need to process dark thoughts safely will be the ones least likely to do so. The harm from that chilling effect is real, even if it is invisible and unmeasurable in court.

The lawsuits frame OpenAI's failure as negligence, and maybe it was. But negligence requires a standard of care, and no such standard has been defined for AI providers in this situation. What is a reasonable response to an internal danger flag? Suspend the account? Then OpenAI becomes an arbiter of who gets to use its service based on algorithmic assessments of risk, which creates its own harms and misuse vectors. Contact potential victims? In most cases the AI has no idea who they are. Escalate to a human reviewer? At what volume, with what training, with what authority? None of these questions has an established answer; the courts hearing this dispute are effectively being asked to write one.

What the OpenAI cases are actually doing, underneath the legal arguments, is forcing a public reckoning with a question the industry has avoided. AI providers now have visibility into the private thoughts of hundreds of millions of people at a scale no institution in human history has had before. What are the obligations that come with that visibility? The answer is not obvious, it is not established in law, and the people most affected by getting it wrong include both the victims of violence and the people whose private conversations become the raw material for surveillance. Both harms are real. Both deserve to be in the frame. That is the conversation the lawsuits are forcing, and it is long overdue.