Tumbler Ridge families sue OpenAI for not alerting police to the suspect’s ChatGPT activity

## Families of Tumbler Ridge Shooting Victims File Federal Lawsuits Against OpenAI and Sam Altman

Seven families of victims injured or killed in the February 10, 2026 Tumbler Ridge school shooting filed federal lawsuits on April 29, 2026, against OpenAI and CEO Sam Altman in the Northern District of California in San Francisco. The suits allege that OpenAI was aware of the shooter's violent intentions as early as June 2025 — eight months before the attack — and chose not to alert law enforcement, a decision the families say contributed directly to the deaths and injuries of their children. The lawsuits make claims of negligence, product liability, and violation of California's Business and Professions Code.

The February 10 attack on Tumbler Ridge Secondary School in British Columbia, Canada, carried out by 18-year-old Jesse Van Rootselaar, left eight people dead — five students, one educational assistant, Van Rootselaar's mother, and her 11-year-old half-brother — and 27 others injured, including two with serious injuries. Van Rootselaar died of a self-inflicted gunshot wound at the school. The attack is the deadliest school shooting in Canada since the École Polytechnique massacre in 1989, and the deadliest mass shooting in Canada since the Nova Scotia attacks in 2020. Tumbler Ridge Secondary School had 191 students enrolled for the 2025–26 school year. The town itself, a small remote mining community in northeastern British Columbia, had a population of just 2,399 according to the 2021 census.

## What OpenAI Knew — and When It Knew It

At the center of the lawsuits is a damaging sequence of events that began in June 2025. According to CNN and NPR, OpenAI's automated systems flagged Van Rootselaar's ChatGPT account for what was described internally as "gun violence activity and planning." An internal safety team reviewed the content and, according to the suits, recommended that OpenAI notify Canadian law enforcement. OpenAI's leadership overruled that recommendation, concluding that the conversations did not meet the threshold of "credible and imminent" risk of physical harm. The company instead chose to deactivate the account.
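The suits describe a common two-stage moderation architecture: an automated classifier flags content, and humans decide whether and how to escalate. As a rough illustration of that general pattern, the sketch below screens a message with OpenAI's public Moderation API and queues high-scoring results for human review. The threshold, queue logic, and function names are hypothetical, not a description of OpenAI's internal tooling.

```python
# A minimal sketch of a two-stage flag-and-review pipeline, assuming OpenAI's
# public Moderation API as the automated classifier. The threshold and
# function names are illustrative; nothing here describes OpenAI's internal
# tooling. Requires the `openai` package and an OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# Hypothetical score above which a conversation is queued for human review.
ESCALATION_THRESHOLD = 0.85


def screen_message(text: str) -> dict:
    """Classify one message and decide whether a human should look at it."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    violence_score = result.category_scores.violence
    return {
        "flagged": result.flagged,
        "violence_score": violence_score,
        # The automated stage only queues content. Whether anyone outside
        # the company is notified is a separate human decision, which is
        # exactly the decision point the lawsuits put at issue.
        "escalate_to_human_review": violence_score >= ESCALATION_THRESHOLD,
    }


if __name__ == "__main__":
    print(screen_message("Example message to screen."))
```

Note that the hard question sits outside any such code: once a reviewer sees the flag, who decides whether law enforcement is told, and against what threshold.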
OpenAI has acknowledged deactivating the original account for violating its violent activities policy. However, according to CNN and NPR, Van Rootselaar subsequently created a second account and continued using ChatGPT — and OpenAI failed to act on that activity as well. The lawsuits allege that the AI model involved in these conversations was GPT-4o, and that ChatGPT deepened the shooter's violent fixation rather than directing her toward real-world help.

According to CNN, multiple team members within OpenAI explicitly recommended contacting Canadian law enforcement but were overruled by the company's leadership. The suits allege that OpenAI "made the conscious decision not to warn authorities" — a decision the plaintiffs claim was motivated at least in part by a desire to protect the company's business prospects and its path toward a nearly US$1-trillion initial public offering. Alerting police, the suits contend, would have exposed the volume of violence-related conversations occurring on ChatGPT and potentially jeopardized the IPO.

The lawsuits also raise questions about age verification. According to CBC, Van Rootselaar was under 18 when she first began using ChatGPT. Despite OpenAI's stated policy requiring parental consent for users between the ages of 13 and 18, the suits allege the company took no meaningful steps to implement age verification or consent procedures.

## The Scope of the Legal Action and What the Families Are Seeking

The seven lawsuits filed on April 29, 2026, represent five murder victims and two people who were injured in the shooting, according to Global News. The cross-border legal team pursuing the action is led by Chicago lawyer Jay Edelson of Edelson PC and Vancouver lawyer John Rice of Rice Parsons Leoni & Elliott LLP. According to Futurism, lawyers say the seven families represent only the first wave of dozens of expected plaintiffs.

Beyond financial damages — which remain unspecified — the suits seek a court order requiring OpenAI to take a series of structural and operational steps: preventing deactivated users from simply creating new accounts, notifying law enforcement when internal systems flag a credible risk of real-world harm, submitting to independent monitoring, and implementing broader design and safety changes to ChatGPT.
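The first of those demands, blocking banned users from simply re-registering, is concrete enough to sketch. The minimal illustration below checks new signups against identifiers tied to previously banned accounts; the storage layer, fields, and function names are hypothetical, not drawn from the lawsuits or from OpenAI's systems.

```python
# A hypothetical re-registration check of the kind the plaintiffs want
# mandated: new signups are compared against identifiers tied to previously
# banned accounts. This is an illustration, not OpenAI's design.
import hashlib

# Stand-in for a persistent store of fingerprints from banned accounts.
banned_fingerprints: set[str] = set()


def _fingerprint(value: str) -> str:
    """Hash identifiers so the store never holds raw emails or phone numbers."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()


def record_ban(email: str, phone: str) -> None:
    """Called when an account is deactivated for a policy violation."""
    banned_fingerprints.add(_fingerprint(email))
    banned_fingerprints.add(_fingerprint(phone))


def signup_allowed(email: str, phone: str) -> bool:
    """Reject signups that reuse any identifier from a banned account."""
    return banned_fingerprints.isdisjoint(
        {_fingerprint(email), _fingerprint(phone)}
    )


# Example: once an account is banned, the same email cannot re-register.
record_ban("user@example.com", "+1-555-0100")
assert not signup_allowed("user@example.com", "+1-555-0199")
assert signup_allowed("other@example.com", "+1-555-0123")
```

Hashing the identifiers is one plausible design choice here: it lets a platform block re-registration without retaining raw emails or phone numbers from banned accounts.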
"These families from the Canadian north have come together and they've decided to pursue litigation in the United States on a scale that can hold these companies to account," said John Rice, the Vancouver-based lawyer representing the plaintiffs.

The decision to file in federal court in San Francisco — where OpenAI is headquartered — reflects a deliberate legal strategy to pursue accountability in the jurisdiction where the company operates and where California's Business and Professions Code applies.

## OpenAI's Response and Altman's Apology

In the days before the lawsuits were filed, OpenAI CEO Sam Altman issued a public apology to the Tumbler Ridge community. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote in a letter to the community.

Altman also met virtually with British Columbia Premier David Eby before the lawsuits were announced. Following that meeting, Eby told reporters that Altman had agreed to apologize to the people of Tumbler Ridge and to work with the provincial government to develop recommendations around AI regulation.

The day before the lawsuits were filed, OpenAI published a blog post addressing its policies around dangerous content. "When conversations indicate an imminent and credible risk of harm to others, we notify law enforcement," the company stated. OpenAI has also previously said: "We have a zero-tolerance policy for using our tools to assist in committing violence."

The blog post's timing drew attention, given that the core allegation in the suits is precisely that OpenAI did not follow through on that stated policy when its own systems flagged Van Rootselaar's account eight months before the attack.

## Why This Case Has Broader Implications for AI Safety

The Tumbler Ridge lawsuits arrive at a moment of mounting scrutiny over the real-world risks of large language models and the responsibilities of the companies that deploy them. The case raises questions that extend well beyond this specific tragedy: When an AI platform's own internal systems identify a user who may pose a danger to others, what legal and ethical obligations does the company have? Is deactivating an account sufficient? And who bears responsibility when a user simply creates a new account and continues?

The Tumbler Ridge case is not the only instance in which ChatGPT has been connected to real-world violence. According to the lawsuits as reported by Global News, a separate mass shooting at Florida State University in April 2025 involved a 20-year-old gunman who used ChatGPT "extensively" in the lead-up to and during the attack. Florida Attorney General James Uthmeier announced a criminal investigation in April 2026 into ChatGPT's role in that shooting, according to the Globe and Mail.

Together, these cases are beginning to form a pattern that is drawing the attention of legislators, regulators, and law enforcement across North America. According to CBC, a coroner's inquest has been announced in British Columbia to examine the factors that contributed to the Tumbler Ridge shooting, including the specific role of artificial intelligence — though no date has yet been set for that inquiry.

The lawsuits also highlight an accountability gap in how AI companies handle internal safety flags. In this case, OpenAI's own automated system identified a potential threat. A safety team escalated it. And still, no external notification was made. The suits frame this not as a system failure but as a conscious leadership decision — one allegedly shaped by commercial considerations tied to a landmark IPO.

RCMP Deputy Commissioner Dwayne McDonald, commander of the RCMP in British Columbia, stated at the time of the investigation: "We believe the suspect acted alone." The RCMP also confirmed in an official release on February 13, 2026, that the main firearm believed to have been used in the school shooting had never been seized by the RCMP and that its origin remained unknown.

For AI companies operating consumer-facing products at scale, the Tumbler Ridge case may represent a turning point — a moment when the question of whether a platform "should have known" becomes a matter of federal litigation, not just internal policy review.

## What Comes Next

As of April 29, 2026, the lawsuits are in their earliest stages. The plaintiffs are seeking both financial damages and structural changes to how OpenAI operates ChatGPT. Lawyers for the families have indicated that dozens of additional plaintiffs are expected to join future waves of litigation.

On the regulatory front, British Columbia Premier David Eby has signaled that the provincial government intends to work with OpenAI on AI safety recommendations. The coroner's inquest, once scheduled, will examine the role of AI in the shooting, which could produce findings that inform both Canadian regulatory policy and the ongoing U.S. litigation.
In Florida, the attorney general's criminal investigation into ChatGPT's role in the FSU shooting adds another legal dimension to the broader accountability picture.

OpenAI, for its part, is in a difficult position: its CEO has publicly apologized for not alerting law enforcement, its own internal records appear to confirm that its systems flagged the shooter's account months before the attack, and the company now faces litigation in federal court on multiple fronts. How it responds — both legally and in terms of product and policy changes — will likely shape industry norms around AI safety obligations for years to come.
