
A pair of Ontario watchdogs has released a new document to guide the responsible use of artificial intelligence in the province, leapfrogging the Ford government’s years-long effort to create an official AI framework.
Ontario’s Information and Privacy Commissioner (IPC) and Human Rights Commissioner (OHRC) issued a joint set of principles designed to help the Ontario government, broader public sector and private sector determine how to deploy AI and when to pull the plug.
Information and Privacy Commissioner Patricia Kosseim said her office has already received a number of complaints and carried out investigations into the burgeoning use of AI in the province and the concerns that come with it.
Students at one university, for example, raised concerns about AI-enabled online proctoring software being used to monitor them while they were writing an exam.
The complaint triggered an investigation and guidance from the privacy commissioner about using AI “appropriately and responsibly” in a way that balanced the rights of student privacy and ensured the information was accurate.
Similarly, the human rights commissioner said her office was concerned that, without guardrails, biases in AI-driven research could lead to “unintended consequences” and impact “historically marginalized individuals or groups.”
“We want the people of Ontario to benefit from AI,” said Patricia DeGuire, the Ontario human rights chief commissioner.
“But as a social justice oversight, we must take the lead in preparing citizens and institutions on the innovation, the monitoring, the implementation of these systems, because an ounce of prevention is better than a pound of cure.”
The report states that the use of AI in both public and private settings must be guided by three principles: that the information derived from it is valid and reliable, that its use is transparent and accountable, and that its application affirms human rights.
The document states that organizations should put the AI program through validity and reliability assessments before it is deployed and that it should be regularly assessed to confirm that the results are accurate.
The guidance also said institutions must ensure that AI systems do not “unduly target” people who participate in public protests or social movements or otherwise violate their Charter rights.
The commissioners also called for security measures to guard against unauthorized use of personal information.
Perhaps most importantly, the watchdogs said AI systems should be “temporarily or permanently turned off or decommissioned” if they become unsafe, and that processes should be in place to review negative impacts on individuals or groups.
The commissioners said their guidance was “urgent and pressing” because the Ford government’s AI regulations are still in progress.
In 2024, the government passed the Enhancing Digital Security and Trust Act, giving the province the power to regulate the use of artificial intelligence in the public sector.
“Artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy,” the legislation said.
Kosseim said that currently, the government only has “high-level principles” as part of an AI use framework, which applies directly to provincial ministries. The new regulations would clarify the rules for Crown agencies, hospitals, schools and the broader public sector.
“When those regulations are eventually adopted, we hope sooner rather than later, then we will have binding parameters to guide not only the provincial institutions, but all public institutions across the province,” Kosseim said.
The OHRC said the rules would also serve as an example to the private sector, which is now legally required to let jobseekers know if artificial intelligence is being used in the hiring process.
“The commission has flagged AI use in employment as a growing risk, citing the potential for indirect discrimination through algorithmic bias,” DeGuire said. “We’re looking for broader safeguards on the use of AI in hiring.”
Ultimately, the commissioners stressed, the goal was to ensure the responsible use of rapidly evolving technologies “so that they benefit individuals and not serve to undermine public trust.”
© 2026 Global News, a division of Corus Entertainment Inc.

