
Federal ministers who met with representatives of OpenAI expressed disappointment Wednesday that the company did not present steps it will take to improve its safety measures — including when police are warned of a user’s online behaviour.
Experts in the field, however, are questioning why the federal government did not move to regulate artificial intelligence before concerns were raised this month following the Tumbler Ridge, B.C., mass shooting.
Artificial Intelligence Minister Evan Solomon said he is giving the company a chance to update him in the coming days on “concrete” actions before he and other ministers address the issue through legislation, though he noted a series of bills addressing AI safety and privacy are in the works.
“Look, we told this company we want to see some hard proposals, some concrete action,” Solomon told reporters in Ottawa while heading into a Liberal caucus meeting.
“We’re disappointed that by the time they came here, they did not have something more concrete to offer, but we’ll see very shortly what they have,” he added, noting that “all options” were on the table for how the government might act.
Solomon summoned representatives of the company behind ChatGPT to Ottawa after it emerged that the shooter who killed eight people in Tumbler Ridge on Feb. 10 was flagged internally last June for her activity on the AI chatbot.
OpenAI did not alert the RCMP until after the mass shooting occurred, saying the “violent” activity did not meet the internal threshold of an “imminent” threat when the account was flagged and banned over seven months prior.
Justice Minister Sean Fraser, Public Safety Minister Gary Anandasangaree and Culture and Identity Minister Marc Miller — whose ministry is working on new online harms legislation — were also present at the meeting.
Prime Minister Mark Carney told reporters Wednesday he had not yet been briefed on the OpenAI meeting, but suggested he would be open to changes.
“I sat with the families of Tumbler Ridge, met with the first responders, saw the horror that — what happened and the pain that’s been caused,” he said.
“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies must be done. We will fully explore it to the full lengths of the law and we’ll be very transparent about that process.”
Solomon and other ministers who were at the meeting said any action the government takes would focus on the threshold used to escalate concerning behaviour to law enforcement.
“There are issues around the assessment of the credibility of a threat and the imminence of a threat that, in my view, if properly administered, could prevent tragedies on a go-forward basis,” Fraser said.
“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes.”
OpenAI told Global News Tuesday evening that the company appreciated the “frank discussion on how to prevent tragedies like this in the future.”
“Over the past several months, we have taken steps to strengthen our safeguards and made changes to our law enforcement referral protocol for cases involving violent activities, but the ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear,” a spokesperson said.
“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians.”
OpenAI did not detail exactly what changes have been made in recent months, and did not immediately respond to Global News’ request for comment Wednesday.
Researchers who study online harms and AI say the Tumbler Ridge incident shows the AI industry shouldn’t be left to regulate itself, and that the government needs to be more proactive.
“The ministers ought to be looking at themselves as the ones who are responsible for undertaking regulation seriously when it comes to ChatGPT and other similar tools,” said Jennifer Raso, an assistant professor in law at McGill University.
“Pulling people up to Ottawa after one of the most horrible mass shootings in Canada to have them account for themselves after the harm’s been done seems to be too little, too late.”
Efforts to regulate the AI industry and address online harms through legislation died in Parliament last year ahead of the federal election.
The Artificial Intelligence and Data Act would have required AI companies to ensure their platforms are monitored for safety concerns and misuse, while enacting “proactive” measures to prevent real-world harm.
Solomon has promised to unveil a new federal AI strategy in the first quarter of this year, delaying its launch from late 2025.
In a speech last year, he said Ottawa would avoid “over-indexing on warnings and regulation,” reflecting the Carney government’s emphasis on AI’s economic benefits and speedy adoption of the technology.
A summary of public comments submitted during consultation on the forthcoming strategy showed Canadians are deeply skeptical of AI and want to see government regulation, particularly addressing online harms and mental health concerns.
While allies like the United Kingdom and European Union have moved to strengthen AI regulation, attempts to do so in the U.S. have been sporadic. U.S. President Donald Trump has ordered states not to pass regulations before a national strategy is in place, but that federal standard has yet to emerge.
Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.
Any further decision-making is up to the company itself, leading to internal thresholds like OpenAI’s “imminent” threat identification.
Solomon said Wednesday that work is underway to update the Personal Information Protection and Electronic Documents Act, but did not say when it will be tabled or offer further details.
Anandasangaree expressed confidence that the investigation into the shooting will yield answers, including from OpenAI.
“The number of issues arising around Tumbler Ridge concern me,” he told reporters after Wednesday’s caucus meeting.
“Yesterday’s meeting was a critical first step with OpenAI. There’s still a lot of unanswered questions, and there’s certainly a sense of frustration and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold.”
Solomon emphasized that the government wants to make sure what happened in Tumbler Ridge “does not happen again.”
“Of course a failure occurred here,” he said. “I mean, look what happened.”
—with files from Global’s Touria Izri






