After Tumbler Ridge, the Carney government needs to lead on AI regulation

A few days ago, the Wall Street Journal reported that the Tumbler Ridge shooter sent messages about gun violence to OpenAI’s ChatGPT over the course of several days last June. The messages were flagged by OpenAI’s automated review system, and roughly a dozen employees debated whether they indicated the possibility of real-world violence.

Ultimately, however, OpenAI leadership decided against contacting Canadian authorities and simply banned her account instead — just months before she went on to commit real-world violence similar to what she had described to the chatbot.

A spokeswoman for OpenAI told the Wall Street Journal that the messages in question didn’t meet the threshold for reporting to law enforcement because they were not considered to constitute a “credible and imminent risk of serious physical harm to others.”

The families of the eight people killed, and the twenty-seven others injured, in the second-worst mass shooting in Canada’s history may disagree with OpenAI’s assessment.

Shortly after the Wall Street Journal’s reporting became public, Minister of Artificial Intelligence and Digital Innovation Evan Solomon put out a statement saying he was “deeply disturbed” by the news that OpenAI did not alert law enforcement in a timely manner. Minister Solomon went on to state, “Canadians expect online platforms, including OpenAI, to have robust safety protocols and escalation practices in place to protect online safety and ensure law enforcement are warned about potential violence.”

The minister is right – Canadians do expect large online platforms to have robust safety protocols on the books and to use them appropriately. In a recent poll conducted by Leger, 77 per cent of respondents stated they supported tougher regulations, including for AI tools. 

The problem, however, is that time and time again, Big Tech has shown itself either unwilling or unable to act in a manner that prioritizes user safety and well-being, opting instead to maximize profits (or, in the case of OpenAI, to stem its eye-popping losses). The other problem is that unlike the EU, the UK or Australia, Canada has neither legislation nor an adequate regulatory framework to deal with the real-world harms that arise from our digital sphere. Canadians, including Canadian children, are less safe online than their counterparts in Europe, the UK or Australia.

Canadians need the Carney government to act and to introduce legislation aimed at holding tech companies — including AI companies with consumer-facing products — to account. While no government keen on being competitive in the global AI race wants to over-regulate the AI industry to the point of stifling innovation, all governments have a duty to protect their citizens from exploitative industries that prioritize profits over user safety.

Minister Solomon has now confirmed that he has summoned OpenAI officials to speak with him in a private, closed-door conversation. An open hearing before a parliamentary committee would be the kind of thing a government would do if it were serious about getting to the root of why OpenAI did not alert authorities, but summoning OpenAI officials is certainly a good start.

As much as we need to know what the shooter said to ChatGPT that triggered OpenAI’s internal flagging system, we also need to know what ChatGPT was saying to the shooter, given that AI chatbots, including ChatGPT, have been known to encourage self-harm and violence against others.

Minister Solomon had previously described any forthcoming regulatory framework for AI as needing to be “light, tight, and right”. Ironically, those comments came the same month OpenAI decided against alerting Canadian authorities. There was no way for Minister Solomon to predict the enormity of the tragedy that would strike Canada less than a year later, but what should have been clear to him even eight months ago was that those words were bound to come back to haunt him, given the litany of real-world harms from AI we were already aware of.

The Carney government often speaks of AI in ways that suggest it has bought into the talking points of the most vapid of tech bros, rather than fully understanding the potential harms and disruption AI can create.

Minister Solomon can’t just be the Minister for AI Ribbon Cuttings. He needs to work alongside Heritage Minister Marc Miller to ensure that AI chatbots like OpenAI’s ChatGPT are folded into any forthcoming online harms legislation, so that consumer-facing AI companies, as well as large online platforms, are subject to a duty to act responsibly. That duty would oblige these companies to minimize the harms of their products instead of continuing the status quo of prioritizing profits over safety.

OpenAI’s refusal to alert Canadian authorities may very well have directly contributed to the deaths of eight innocent people. Nothing the Carney government does can bring those people back. What they can do, however, is ensure that something like this never happens again.


© National Observer