
Sam Altman Apologizes Over OpenAI's Failure to Report Tumbler Ridge Shooter
OpenAI CEO Sam Altman issued a formal letter of apology on April 23, 2026, to the community of Tumbler Ridge, British Columbia, acknowledging that the company failed to alert the Royal Canadian Mounted Police about the ChatGPT account of Jesse Van Rootselaar — the 18-year-old who carried out a mass shooting on February 10, 2026, that killed eight people, injured 27 others, and left one of Canada's small northern communities shattered. The apology, first published by local outlet Tumbler RidgeLines and confirmed authentic by an OpenAI spokesperson, arrives more than two months after Altman agreed to write it following a March 2026 meeting with British Columbia Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka.
What Happened: The Tumbler Ridge Shooting and OpenAI's Prior Knowledge
On February 10, 2026, Van Rootselaar killed eight people in Tumbler Ridge, British Columbia, including six at Tumbler Ridge Secondary School, and injured 27 others before dying of a self-inflicted gunshot wound. The attack has been described as Canada's deadliest school shooting since 1989.
What emerged in the aftermath was deeply troubling: OpenAI's automated abuse detection tools and human investigators had flagged Van Rootselaar's ChatGPT account in June 2025, roughly eight months before the shooting, for activity in furtherance of violence. The account was banned. But the RCMP were never notified.
According to reporting first published by the Wall Street Journal and subsequently confirmed by multiple outlets including The Next Web and Castanet, approximately a dozen OpenAI employees reviewed the flagged account in June 2025. Some of those employees recommended reporting the account to law enforcement. Company leadership overruled them, determining that the conversations did not meet a higher internal threshold — specifically, that the account did not pose an imminent and credible risk of serious physical harm to others.
After being banned, Van Rootselaar created a second ChatGPT account. That account was not detected until after the RCMP released the shooter's name following the attack.
The Apology: What Altman Said and How It Was Received
Altman's letter, dated April 23, 2026, was direct in accepting responsibility for the failure to notify authorities. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."
The letter also gestured toward a commitment to future action. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman wrote.
The apology was acknowledged at the municipal level. "We are committed to supporting those impacted and ensuring that care, respect, and accountability remain at the forefront as we move forward," said Darryl Krakowka, Mayor of Tumbler Ridge, in a statement released by the District.
British Columbia Premier David Eby shared the letter on social media but made clear that an apology, however warranted, does not begin to address the scale of harm. "The apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge," Eby wrote in a post on X.
OpenAI's Policy Changes — and Their Limits
In the months since the shooting, OpenAI has taken steps to update its internal safety protocols. Ann O'Leary, OpenAI's head of global policy, described the changes in a prior letter, stating that mental health and behavioral experts now help the company assess difficult cases. "We have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence," O'Leary wrote. She further stated that under the enhanced protocols, the June 2025 account "would be referred to law enforcement if it were discovered today."
OpenAI has also established direct points of contact with the RCMP and proactively shared information with the force after the shooting to support its investigation. The company lowered its reporting threshold as part of its revised framework.
However, a critical structural gap remains: Canada has no law requiring AI companies to report threats identified through their platforms. Every change OpenAI has made is voluntary. There is no regulatory mandate compelling the company — or any other AI platform — to alert authorities when a user's activity suggests potential violence. That absence of legal obligation sits at the center of the ongoing civil and criminal scrutiny now surrounding OpenAI.
Legal and Regulatory Fallout
The Tumbler Ridge shooting has generated significant legal exposure for OpenAI. At least one Tumbler Ridge family has filed a civil lawsuit against the company in BC Supreme Court. The lawsuit alleges that OpenAI "had specific knowledge of the shooter's long-range planning of a mass casualty event" but "took no steps to act upon this knowledge."
The scrutiny is not limited to Canada. Florida Attorney General James Uthmeier announced a separate criminal investigation into OpenAI following a review of messages between ChatGPT and a Florida State University student accused in an April 2025 campus shooting that killed two people and wounded several others.
The Tumbler Ridge case also sits alongside a broader pattern of litigation involving ChatGPT and user safety. Seven families have separately sued OpenAI over what their attorneys describe as ChatGPT acting as a "suicide coach," with deaths documented in Texas, Georgia, Florida, and Oregon.
The scale of AI safety concerns is growing. According to data cited by The Next Web, the number of reported AI safety incidents rose from 149 in 2023 to 233 in 2024 — a 56% increase — underscoring that the Tumbler Ridge case is not an isolated failure but part of an accelerating trend.
Why This Matters Beyond One Apology
The Tumbler Ridge case forces a direct confrontation with a question that AI companies, regulators, and the public have largely avoided: when an AI platform identifies what may be a credible threat of violence, does it have an obligation to act — and if so, to whom and by what standard?
OpenAI's internal review process in June 2025 involved approximately a dozen employees, some of whom apparently believed the threshold for reporting had been met. Leadership disagreed. Eight months later, eight people were dead. The gap between those two outcomes — the internal deliberation and the real-world consequence — is what makes this case so consequential for the AI industry as a whole.
OpenAI's voluntary changes to its reporting threshold and its new direct channels with the RCMP represent meaningful internal reform. But voluntary protocols are inherently unenforceable. They apply only to OpenAI, only for as long as the company chooses to maintain them, and only in jurisdictions where the company elects to cooperate. Canada's current regulatory vacuum means there is no mechanism to audit compliance, no penalty for future failures, and no guarantee that other AI platforms operating in the country have any comparable process in place.
The civil lawsuits, the Florida criminal investigation, and the pressure from provincial and federal officials in Canada suggest that the era of purely self-regulated AI safety thresholds may be nearing its end — but as of today, no binding legal framework exists to replace it.
For users and for the communities affected by these events, the practical question is not abstract: what happens the next time an AI system flags a credible threat, and the company decides — again — that it does not meet the bar?
What to Watch Next
The immediate legal proceedings — including the civil lawsuit filed in BC Supreme Court and the Florida criminal investigation — will develop over the coming months. Altman's letter committed to continued engagement with governments at all levels, though the specifics of what that engagement will produce remain unclear. British Columbia Premier David Eby and other Canadian officials have signaled that a formal apology is not sufficient; whether that pressure translates into legislative action or new regulatory requirements for AI platforms operating in Canada remains to be seen. The broader pattern of litigation against OpenAI across multiple jurisdictions suggests that courts, rather than regulators, may be shaping the company's obligations in the near term.
The Tumbler Ridge community, meanwhile, continues to grieve. The gap between a CEO's letter and accountability that the families of eight victims might recognize as meaningful is significant — and it is a gap that no policy update, however sincere, can fully close.
AI platforms now occupy a unique position in millions of daily lives — including how people manage stress, process difficult emotions, and make decisions about their health and wellbeing. Understanding how these tools work, where they fall short, and how to use them responsibly has never been more important. At Moccet, we're building a platform that puts your health and productivity first, with transparency at its core. Join the Moccet waitlist to stay ahead of the curve.