When Should AI Companies Alert Police? What the Tumbler Ridge Tragedy Reveals About Regulating Artificial Intelligence

Subhadarshi Tripathy

3/2/2026 · 4 min read

In the aftermath of the devastating school shooting in Tumbler Ridge, attention has turned beyond firearms and mental health to artificial intelligence.

The tragedy, which left six children and two adults dead, has prompted urgent questions about what responsibility AI companies bear when users interact with chatbots in ways that raise red flags.

At the centre of the debate is OpenAI, the U.S.-based developer of ChatGPT. The company acknowledged that it detected and banned an account belonging to 18-year-old Jesse Van Rootselaar approximately six months before the February 10 shooting.

However, OpenAI said it did not alert law enforcement because the activity at the time did not meet its internal threshold for referral — a standard requiring signs of “imminent and credible risk” of serious physical harm.

Following the attack, the company discovered that the teen had created a second account after being banned. OpenAI says it proactively contacted the RCMP once it learned of the shooting.

What exactly was discussed in those chatbot conversations remains undisclosed.

Political Pressure Mounts

OpenAI’s admission has drawn sharp criticism from political leaders.

B.C. Premier David Eby suggested that earlier notification to police might have prevented the tragedy, fuelling calls for stricter oversight of AI platforms.

At the federal level, Artificial Intelligence Minister Evan Solomon said OpenAI’s recent commitments to update its safety policies do not go far enough. He has indicated that all regulatory options remain on the table and is seeking further clarification from OpenAI CEO Sam Altman in upcoming discussions.

But experts caution that determining when to report a chatbot user is far from simple.

No Clear Legal Obligation

Currently, Canada has no law specifically requiring AI companies to report potentially violent user behaviour to police.

While existing criminal and civil laws apply in certain contexts, there is no dedicated regulatory framework governing AI systems. Unlike the European Union, which passed its sweeping AI Act in 2024, Canada’s attempt at an online harms bill stalled before becoming law due to the 2025 federal election.

Alan Mackworth, professor emeritus of computer science at the University of British Columbia, argues voluntary reporting standards are insufficient.

“We can’t rely solely on companies to police themselves,” Mackworth said. “There needs to be public accountability through a regulatory agency with enforcement powers.”

He has proposed the idea of a “duty to report” for AI firms — similar to mandatory reporting requirements imposed on teachers and doctors when they suspect harm to a child.

The Privacy Dilemma

However, others warn that expanding reporting obligations could create new risks.

Moira Aikenhead, a lecturer at UBC’s Peter A. Allard School of Law, says tragedies often generate calls for immediate action — but policy crafted in moments of crisis can overreach.

“People understandably want answers,” Aikenhead said. “But when designing digital policy, we have to avoid knee-jerk reactions.”

Unlike social media posts, conversations with ChatGPT are private exchanges between a user and a company. Privacy advocates fear that if companies begin routinely reporting troubling queries, the result could be widespread surveillance of Canadians’ thoughts and questions.

“You could have a child typing ‘How could I commit the perfect crime?’ purely out of curiosity,” Aikenhead said. “Would that put them on the RCMP’s radar?”

She argues that if reporting standards are expanded, they must be transparent, narrowly defined, and established through public law — not left to private corporations to interpret alone.

The Limits of AI Detection

Even with regulation, technical limitations remain.

Vered Shwartz, an assistant professor at UBC specializing in artificial intelligence, says distinguishing fantasy, curiosity, research, or fiction writing from genuine violent intent is extraordinarily difficult — especially at scale.

AI systems process millions of interactions daily. Determining which conversations reflect credible threats requires context that automated tools may struggle to interpret accurately.

“The question of reporting someone before something happens is extremely hard,” Shwartz said. “It’s similar to policing in the real world — you can’t arrest someone unless you have reasonable grounds to believe a crime will occur.”

She noted that false positives are inevitable. In one widely reported case, Google disabled a father’s account after its automated scanning flagged a medical photo of his infant son, taken to send to a doctor, as harmful content.
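Shwartz’s point about false positives can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical numbers — the daily volume, the rate of genuine threats, and the classifier’s accuracy are all assumptions, not figures from OpenAI or anyone else — to show why detecting rare events at scale produces far more false alarms than genuine hits:

```python
# Illustrative base-rate arithmetic: even a very accurate classifier,
# applied to millions of conversations, flags mostly innocent users
# when genuine threats are rare. All numbers are hypothetical.

daily_conversations = 10_000_000   # assumed daily interaction volume
true_threat_rate = 1e-6            # assumed: ~10 genuinely dangerous users
sensitivity = 0.99                 # assumed: classifier catches 99% of real threats
false_positive_rate = 0.001        # assumed: 0.1% of innocent chats misflagged

true_threats = daily_conversations * true_threat_rate
caught = true_threats * sensitivity
false_alarms = (daily_conversations - true_threats) * false_positive_rate

precision = caught / (caught + false_alarms)
print(f"Real threats flagged per day: {caught:.0f}")
print(f"Innocent users flagged per day: {false_alarms:.0f}")
print(f"Share of flags that are real threats: {precision:.4%}")
# With these assumptions: ~10 real threats vs ~10,000 false alarms,
# so fewer than 0.1% of flags would point at an actual threat.
```

Under these assumed numbers, roughly a thousand innocent users would be flagged for every genuine threat, which is precisely the surveillance cost Aikenhead warns about.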

Corporate Policy Changes

In response to the Tumbler Ridge shooting, OpenAI has pledged several changes. These include creating a direct point of contact with Canadian law enforcement, upgrading its models to better detect potential misuse, and improving the ability to direct users toward local mental health resources when conversations suggest distress.
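The mechanics of that referral system have not been disclosed. As a rough illustration of the design space only, the sketch below shows one hypothetical way automated risk scores could route conversations toward the outcomes OpenAI describes; every threshold, label, and score in it is an assumption, not OpenAI’s implementation:

```python
# Hypothetical triage logic for an AI safety pipeline. A sketch of one
# possible design, not OpenAI's actual system; all thresholds and
# labels are invented for illustration.
from enum import Enum

class Action(Enum):
    NO_ACTION = "no_action"
    OFFER_MENTAL_HEALTH_RESOURCES = "offer_mental_health_resources"
    HUMAN_REVIEW = "human_review"  # e.g. possible account ban
    LAW_ENFORCEMENT_REFERRAL_REVIEW = "law_enforcement_referral_review"

def triage(distress_score: float, threat_score: float) -> Action:
    """Route a conversation based on assumed classifier scores in [0, 1].

    A human reviewer would still have to judge whether a high-scoring
    case meets an "imminent and credible risk" bar before any referral.
    """
    if threat_score >= 0.95:
        return Action.LAW_ENFORCEMENT_REFERRAL_REVIEW
    if threat_score >= 0.80:
        return Action.HUMAN_REVIEW
    if distress_score >= 0.70:
        return Action.OFFER_MENTAL_HEALTH_RESOURCES
    return Action.NO_ACTION

# Example: a conversation scored as distressed but not threatening.
print(triage(distress_score=0.85, threat_score=0.10).value)
# -> offer_mental_health_resources
```

Where those cutoffs sit is exactly the policy question: lower them and false alarms multiply, raise them and genuine warnings slip through.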

The company says that under its updated referral system, the shooter’s account — if detected today — would likely be referred to authorities.

Whether that assurance satisfies lawmakers remains uncertain.

A Broader Regulatory Reckoning

The tragedy has accelerated a broader debate about how Canada should regulate rapidly advancing AI technologies.

Supporters of stronger oversight argue that AI systems are now deeply embedded in daily life and capable of influencing behaviour in unpredictable ways. Critics counter that overregulation could stifle innovation while undermining privacy rights.

For grieving families in Tumbler Ridge, these policy debates feel painfully immediate. The memorials outside the high school serve as a stark reminder of what is at stake.

The central question now confronting policymakers is not simply whether AI companies should report troubling users — but how to define that threshold without eroding fundamental freedoms.

As Canada weighs new rules, the Tumbler Ridge tragedy may become a pivotal moment in determining where the balance between safety, privacy, and corporate responsibility ultimately lies.