OpenAI has pledged to strengthen its safety measures after Canadian officials criticised the company for not alerting police about a ChatGPT account linked to the suspect in the Tumbler Ridge shooting, despite the account being flagged internally months before the attack.
In an open letter addressed to Canadian officials, the company said the suspect created a second account after the first was banned, and that the new account was not detected by OpenAI’s internal systems at the time. OpenAI said it has since changed how it handles referrals to law enforcement and that the suspect’s activity would be reported under its updated approach if flagged today.
The company said an account linked to Jesse Van Rootselaar, 18, was banned in June 2025 for violating usage terms. OpenAI did not report that account to police at the time because, according to the company, it did not meet its threshold then for “credible and imminent planning” of serious violence.
Canadian officials have argued that earlier intervention may have changed the outcome. OpenAI’s letter described internal changes introduced in recent months and said the company would have handled the case differently under the revised criteria.
Attack Timeline And Official Scrutiny In Ottawa
The shooting occurred on 10 February in Tumbler Ridge, a small town in British Columbia, Canada. Eight people were killed in the attack, which took place at a residence and at the local secondary school. The victims included the suspect’s mother and 11-year-old stepbrother, as well as five children and an educator. Police said Van Rootselaar died at the scene.
The incident has become one of the deadliest shootings in Canadian history and has prompted government scrutiny of how technology platforms identify and escalate high-risk behaviour. Canadian officials met OpenAI senior staff earlier this week in Ottawa, after the company disclosed it had shut down a ChatGPT account used by the suspect in June 2025, roughly seven months before the attack.
Officials have criticised OpenAI for not sharing information with law enforcement earlier, focusing on whether the company’s internal thresholds for reporting were too narrow and whether account enforcement mechanisms were effective in preventing a banned user from returning.
OpenAI’s New Measures To Reduce Evasion Risk
In its letter, OpenAI said the suspect created a second account after the first was banned and was able to “slip past” internal detection controls. The company said the second account was provided to police after the shooting, and it committed to improving its systems to reduce attempts to evade safeguards and to prioritise the identification of the highest risk offenders.
OpenAI also said it has enlisted the help of “mental health and behavioural experts” to assess cases and has made its criteria for referral to police “more flexible.” The company said these changes mean it would have reported the suspect’s account under the updated guidelines.
The letter did not specify when the revised protocols took effect. Still, OpenAI framed the changes as a response to perceived gaps in how it evaluated and escalated troubling behaviour, and as part of an effort to make its safety processes more responsive when warning signs fall short of a strict definition of imminent harm.
Separately, the company said it will establish a direct point of contact with Canadian law enforcement to allow faster escalation of cases with “potential for real-world violence.” That direct communication channel was among the requests made by Canadian officials following their meeting with OpenAI staff earlier in the week.
Political Reactions And Potential Regulatory Pressure
Canada’s AI minister, Evan Solomon, described the situation as a “failure.” He said he was “disappointed” following the meeting with OpenAI representatives and indicated he did not hear “any substantial new safety protocols” during those discussions.
Solomon also signalled that legislative options remain possible if the company does not implement changes quickly, saying “all options” were under consideration to address public safety expectations.
In British Columbia, Premier David Eby said he believes the shooting could have been prevented if police had been alerted to the suspect’s account months earlier. He told reporters that the company “tragically missed the mark” by not bringing the information forward and said the consequences would be borne by the victims’ families for years.
Eby also said Sam Altman, OpenAI’s chief executive, has agreed to meet to discuss the company’s safety policies. The premier said he wants Altman to hear directly how the decision not to share information earlier contributed to the scale of the tragedy.
For investors and market readers, the episode underscores intensifying scrutiny of AI platforms and the governance decisions surrounding safety escalation, law enforcement coordination, and identity or account controls. The company’s commitments point to a more intervention-oriented posture, but officials’ comments suggest political pressure could continue if regulators view voluntary changes as insufficient.