When ChatGPT Knew Someone Was Dangerous and Said Nothing
A stalking victim's lawsuit alleges that OpenAI's systems flagged her abuser's account as a mass-casualty risk and that the company did nothing. It joins investigations into the Florida State shooting and a Canadian mass killing, in which OpenAI reportedly debated calling police and decided not to. The cases share a specific claim: OpenAI had advance knowledge and failed to act.
Read full story →

What Should OpenAI Have Done? The Honest Answer Is That Nobody Knows
OpenAI's own systems flagged a user as a mass-casualty risk, and the company did nothing. The obvious question is what it should have done instead, and no good answer exists. Calling police over a chat log is legally murky, suspending accounts creates new misuse vectors, and monitoring users for danger turns an AI provider into a surveillance apparatus. The lawsuits are forcing a question the industry has avoided and nobody is ready to answer.
Read opinion →