Canada’s Artificial Intelligence Minister Evan Solomon has summoned OpenAI’s senior safety team to Ottawa for urgent discussions on safety protocols, following revelations that the perpetrator of the Tumbler Ridge, B.C., mass shooting had been banned from the company’s ChatGPT platform months before the attack.
Tumbler Ridge Shooting Details
On February 10, Jesse Van Rootselaar, a teenager, killed his mother and half-brother before proceeding to the local secondary school, where he murdered five students and an educational assistant. He then took his own life.
Authorities later confirmed Van Rootselaar’s ChatGPT account was suspended in June due to flagged content, including scenarios depicting gun violence. OpenAI determined at the time that the activity did not indicate credible or imminent planning, falling short of the threshold for notifying law enforcement.
Minister’s Response and Upcoming Meeting
Solomon expressed deep concern over the matter during a press briefing on Monday. He reached out to the U.S.-based firm over the weekend to demand details and schedule an in-person meeting with its senior safety leaders, set for Tuesday.
“We will have a sit-down meeting to have an explanation of their safety protocols and their thresholds of escalation to police so we have a better understanding of what’s happening and what they do,” Solomon stated.
The minister declined to confirm plans for federal regulation of AI chatbots like ChatGPT but emphasized that all regulatory options remain under consideration.
OpenAI’s Position
An OpenAI spokesperson verified the visit, stating: “Senior leaders from our team are travelling to Ottawa to meet in person with government officials to discuss our overall approach to safety, safeguards we have in place and how we continuously work to strengthen them.”
OpenAI also notified the Royal Canadian Mounted Police (RCMP) after the shooting.
Expert Calls for Stricter Reporting Duties
Alan Mackworth, professor emeritus in the University of British Columbia’s computer science department and an AI safety specialist, advocates for mandatory reporting requirements. “Many professionals, such as teachers and doctors, have a ‘duty to report’ any suspected case of harm to or abuse of a minor. These obligations are enshrined in law and/or professional ethics. Similar obligations should be placed on social media and AI companies,” he said.