
Storm clouds pass over the Peace Tower and Parliament Hill on Tuesday, August 18, 2020, in Ottawa. Adrian Wyld/The Canadian Press

Matt Malone is an assistant professor in the Faculty of Law at Thompson Rivers University in Kamloops.

Although many Canadians are concerned about the rise of AI, few believe the government will be up to the task of regulating it. Internal documents obtained through an access to information request reveal a stark reality: According to a recent survey conducted for the Privy Council Office, only 24 percent of the public believes the federal government is implementing effective policies to manage AI.

This shouldn’t come as much of a surprise.

Issues of accountability and transparency are at the forefront of the growing calls for AI regulation. But the federal government’s proposed AI law doesn’t even apply to the government itself. And the government’s own experiments with AI in areas like law enforcement, immigration and border control have much to answer for when things go wrong.

As for watchdogs such as the Privacy Commissioner, they are no longer fit for purpose. The commissioner’s recent announcement of an investigation into ChatGPT garnered a lot of attention. But the office’s track record with past investigations makes it clear the ChatGPT probe won’t be completed until next year at the earliest, and probably not until 2025, after OpenAI has released the next iteration of ChatGPT.

Last July, the commissioner announced an investigation into a glitch in the ArriveCAN app that wrongly told more than 10,000 people to quarantine. Where is the report? The commissioner’s office is also still mired in its court battle with Facebook over the Cambridge Analytica scandal, revealed five years ago.

Other watchdogs are just as weak. AI raises concerns about monopolization and misleading advertising, responsibilities that fall squarely within the Competition Bureau’s mandate. But the bureau’s branch responsible for reviewing proposed mergers and investigating monopolistic practices that could harm competition has not issued a single administrative fine since 2015. In the meantime, Big Tech companies like Microsoft and Google have bought up AI competitors. Literally hundreds of them.

The Competition Bureau initiates investigations into less than 1 per cent of the complaints it receives each year. Its strategic vision for 2020-24 was to become “the world’s leading competition agency at the forefront of the digital economy.” This is not happening. Last year, the bureau recovered $3,846,967 in fines, penalties, settlements and investigative costs. Meanwhile, its budget was $59.5-million. The Competition Bureau costs Canadians more than it saves them.

The need for a more vigilant approach is urgent. According to Lina Khan, chair of the United States Federal Trade Commission, monopolistic control is one of the key risks posed by AI. The unmatched resources of Big Tech players like Microsoft, whose market capitalization already exceeds Canada’s GDP, give them outsized power to develop these technologies and stifle competitors.

But monopoly is not the only problem. Precisely because AI technologies command so much of our time, attention and trust, it is troubling that companies are releasing AI products marketed with misleading claims. There is nothing “open” about OpenAI. The company began as a non-profit vehicle pledging to “build value for everyone rather than shareholders,” yet its flagship product has been coaxed into glorifying Hitler, teaching users how to make methamphetamine or bombs, and generating malware and phishing attacks. Tesla still advertises its Autopilot in Canada as having “full self-driving capabilities” – a claim that is patently false.

Cracking down on companies that make unsubstantiated claims about AI should be a key avenue of government oversight and regulation. Technology companies have already eroded our right to privacy by winning a savvy battle over language (“engagement” instead of addiction, “cookies” instead of spyware). We need to scrutinize accuracy and truth in the advertising of these technologies. Where there are health and safety concerns, we must ban the products or label their risks, as we do elsewhere.

But of course none of this is happening. With a government happy to exempt itself from accountability, and watchdogs no longer fit for purpose, it’s little wonder that few Canadians trust Ottawa to implement – let alone enforce – effective policies to regulate AI.
