
OpenAI says its recently enhanced safety criteria would have flagged the Tumbler Ridge mass shooter’s online behaviour to police if discovered today, but is committing to making further improvements after a meeting with federal ministers.
That includes improving its repeat violator detection system after a second ChatGPT account linked to the shooter was discovered following the tragedy, the company revealed in a letter to ministers Thursday.
OpenAI has faced criticism and calls for regulation after it was revealed that the company flagged and banned an account belonging to the shooter in June 2025, more than seven months before she killed eight people in Tumbler Ridge, B.C. However, the account wasn’t referred to the RCMP until after the shooting because the company did not identify “credible or imminent planning” for real-world violence last summer.
In the letter, OpenAI’s vice-president of global policy Ann O’Leary said the company had already taken steps to improve its criteria for warning authorities “several months ago” based on guidance from mental health, behavioural and law enforcement experts.
The changes made the threshold for a police referral “more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” she wrote.
“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today.”
Global News has asked OpenAI exactly when those changes were put in place. It did not answer similar questions Wednesday after multiple requests for comment.
The company said Thursday it is committing to working with the federal government and experts to continue strengthening its police referral criteria “based on the Tumbler Ridge tragedy and the Canadian context.”
“This will include continuing to analyze how imminent and credible risk is assessed and transparency regarding our reporting to law enforcement,” O’Leary said.
The letter comes after OpenAI representatives met with Artificial Intelligence Minister Evan Solomon in Ottawa on Tuesday at his request. Justice Minister Sean Fraser, Public Safety Minister Gary Anandasangaree and Culture and Identity Minister Marc Miller were also present at the meeting.
Ministers said afterward they were “disappointed” with what they heard in the meeting and had made clear they expected to hear about “concrete actions” the company would take in the coming days.
A spokesperson for Solomon said his office was “reviewing OpenAI’s letter carefully and will have more to say in the coming days.”
OpenAI said it would also enhance its system for detecting repeat policy violators, after a second account linked to 18-year-old Jesse VanRootselaar was discovered once police identified her as the Tumbler Ridge shooter.
The system is meant to catch “those who have had their ChatGPT accounts shut down for violating our violent activities policy, and then seek to create a new account,” O’Leary wrote.
“Despite this detection system, after the name of the Tumbler Ridge perpetrator was released publicly, we discovered that the perpetrator had used a second ChatGPT account. We shared the second account with law enforcement upon its discovery.”
The letter continued: “We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders. We further commit to periodically assessing the thresholds used by our automated systems for detecting potential violent activities.”
The company added it will also establish direct points of contact with Canadian law enforcement authorities “per the request of the ministers,” and improve how its AI chatbot platforms direct users exhibiting troubling behaviour to local supports in their communities.
“These immediate commitments are only the first step in the work we must do in partnership with the Canadian government to improve AI safety,” O’Leary said, promising more engagement in the months ahead.
“We seek continued dialogue and we would welcome working with the Canadian government to convene local stakeholders and industry to develop best practices for law enforcement referrals and AI model behaviour in cases involving potential violence, including unique considerations for youth.”
OpenAI does not currently have a Canadian office, which Canada’s privacy commissioner has said makes it difficult to investigate foreign tech companies.
Company officials met with a B.C. government representative the day after the Tumbler Ridge shooting for a previously-scheduled meeting to discuss opening an office in Canada, B.C. Premier David Eby’s office said last week.
The mass shooting in Tumbler Ridge, among the deadliest in Canadian history, and OpenAI’s handling of the shooter’s online behaviour in the months prior have sparked renewed questions about AI regulation.
Eby on Thursday called for a national standard with a minimum reporting threshold in light of OpenAI’s stated commitments, which he called “cold comfort for the people of Tumbler Ridge.”
He said he will meet with OpenAI CEO Sam Altman to discuss the issue directly.
“Clearly they tragically missed the mark in not bringing this information forward,” he told reporters in Victoria.
“These are not small stakes, and it illustrates why these companies cannot be trusted to set their own reporting thresholds, and especially to set their own thresholds where there are no apparent consequences in not meeting them. … We need all companies operating at the same threshold across the country, and that will be our message to the federal government.”
Solomon said Wednesday he would give the company a chance to update him on its actions before he and other ministers address the issue through legislation, though he noted a series of bills addressing AI safety are in the works.
He specifically mentioned legislation that would update Canada’s privacy law — which doesn’t require private companies to escalate illegal or troubling behaviour to law enforcement — but did not say when it will be tabled or offer further details.
Experts in the field and opposition MPs have also questioned why the federal government has been slow to regulate AI safety practices and harm prevention in the three years since ChatGPT emerged, and say the Tumbler Ridge case shows the AI industry cannot be left to regulate itself.
Eby said the revelation of a second account linked to the shooter raises even more questions that he’s hopeful an investigation will answer.
“I think the part that is just devastating for me, for the families, for the people of British Columbia and Canada, is that this could have been prevented,” he said.
© 2026 Global News, a division of Corus Entertainment Inc.

