Victims' families sue OpenAI over school shooting that ChatGPT failed to flag
OpenAI and its chief executive, Sam Altman, have been sued by the families of five children and a teacher who were killed in a mass shooting at a Canadian elementary school earlier this year, and by a seventh victim, a youth who was wounded. In seven separate lawsuits filed Wednesday, lawyers said the artificial intelligence giant encouraged the shooter's violent tendencies, then decided to protect its profits by remaining silent.
After months of interactions between the now-deceased killer, a young woman, and OpenAI's widely used ChatGPT chatbot, Altman and his colleagues "knew the shooter was planning the attack (and) made the conscious decision not to warn authorities," lawyers for one of the victims, a 12-year-old boy, said in one of the lawsuits, filed in federal court in San Francisco, where OpenAI is headquartered.
They said ChatGPT "deepened (her) violent fixation and pushed (her) toward the attack." Last June, the suit said, OpenAI's automated system "flagged the Shooter's ChatGPT account for gun violence activity and planning."
But Altman knew that disclosure of OpenAI's assessment of the dangers and its contacts with the young woman "could end his tenure … and wipe out the company's valuation," attorney Ali Moghaddas wrote in the suit. Altman and his associates "did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk," Moghaddas wrote.
In February, the 18-year-old woman fatally shot her mother and 11-year-old brother in their home in Tumbler Ridge, a small mining town in British Columbia. She then drove to the elementary school with a rifle, killed five children and a teacher and wounded 27 others before killing herself.
Altman publicly apologized to the community last week and said his company should have spoken up.
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote April 23 in a letter to the community that was published by the local news outlet Tumbler RidgeLines. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."
But one lawsuit said Altman was misleadingly claiming that the shooter's account with OpenAI had been permanently "banned." In fact, attorney Moghaddas said, the company had only "deactivated" her account, an action that allowed the shooter to quickly reinstate it, which she did.
The lawsuits have potential legal significance, said Jay Edelson, lead attorney at the firm that filed the cases. He said the suits, and others filed against OpenAI after murders and suicides, ask courts to find that a company whose product detects a customer's violent tendencies and encourages them, without disclosing the danger, can be held responsible for the user's violent actions.
"The way they've designed their chatbot is fundamentally dangerous," Edelson told the Chronicle. "It amplifies what people are feeling. And when people are not feeling good about themselves, especially if mentally ill, it can amplify their paranoia," with deadly results.
The lawsuits were filed amid a separate, high-profile trial in federal court in Oakland over a lawsuit against Altman and OpenAI by Elon Musk, who provided $38 million in funding when the company was founded. Musk, the world's wealthiest person, contends the company betrayed its proclaimed benefit-to-humanity mission by pursuing profits. He is seeking damages and Altman's removal from OpenAI's governing board.
Altman and his company did not immediately respond to a request for comment on the lawsuits over the shootings in Canada.
Copyright 2026 Tribune Content Agency. All Rights Reserved.