OpenAI CEO to Apologize to Tumbler Ridge After AI Reporting Controversy, Says B.C. Premier

Shraddha Tripathy

3/6/2026 · 2 min read

British Columbia Premier David Eby says the chief executive of OpenAI has agreed to apologize to the community of Tumbler Ridge following criticism over the company’s handling of a mass shooter’s online activity.

According to Eby, Sam Altman made the commitment during a virtual meeting that also included Tumbler Ridge Mayor Darryl Krakowka.

The discussion comes after it emerged that OpenAI internally banned the ChatGPT account of 18-year-old Jesse Van Rootselaar in June 2025 due to posts referencing gun violence — but did not report the activity to law enforcement.

Van Rootselaar is accused of killing eight people, including six children, during the Feb. 10 mass shooting in Tumbler Ridge.

Apology planned for community

Eby said participants in the meeting acknowledged that an apology alone would not address the impact of the tragedy, but that it remains an important step.

“Everybody on the call recognized that an apology is nowhere near sufficient, but also that it is completely necessary,” Eby said.

He added that the mayor of Tumbler Ridge will work with OpenAI to determine how and when the apology should be delivered in a way that is respectful to residents and avoids retraumatizing the community.

Calls for new reporting rules

During the meeting, Eby said Altman also agreed to collaborate with the provincial government on recommendations for the federal government regarding artificial intelligence regulation.

The premier said the province wants clear national rules that would require AI companies to report credible threats of violence to authorities.

Currently, decisions about whether to notify police are typically made internally by technology companies.

“It shouldn’t be up to internal safety committees to determine when potentially violent posts should be flagged,” Eby said. “There should be a national threshold and a duty to report.”

He said the current system “didn’t work” and risks failing again unless stronger safeguards are introduced.

Federal discussions underway

The meeting with Eby took place a day after Altman met in Ottawa with Evan Solomon, the federal minister responsible for artificial intelligence.

Solomon said OpenAI has committed to including Canadian experts in mental health and law within its internal safety operations, where the company evaluates potential threats and determines whether to alert authorities.

The federal minister also asked the company to provide a report outlining new systems designed to identify high-risk users and repeat policy violators. He additionally urged OpenAI to report credible threats directly to the Royal Canadian Mounted Police.

Second account discovered

OpenAI revealed last week that Van Rootselaar had a second ChatGPT account that was discovered only after the suspect’s name became public.

According to the company, that account was subsequently flagged to police.

In a letter released to media, an OpenAI vice-president said the company’s safety systems have since been strengthened. The letter stated that under the company’s updated law-enforcement referral protocol, the first account banned in June 2025 would likely be reported to authorities if discovered today.

OpenAI has maintained that at the time the account was reviewed, the content did not meet the company’s threshold for reporting to law enforcement because it did not show clear or imminent planning of violence.

Next steps

An OpenAI spokesperson said Altman will continue working with provincial and local leaders on how best to support the Tumbler Ridge community.

“OpenAI remains committed to working with provincial and local officials to make meaningful changes that help prevent tragedies like this in the future,” the company said in a statement.

The discussions are expected to inform broader conversations in Canada about how artificial intelligence companies should handle warnings of potential real-world harm.