OpenAI says it considered alerting the Royal Canadian Mounted Police about a user months before a deadly school attack in British Columbia.
The company flagged the account in June for possible "furtherance of violent activities."
Staff reviewed whether the activity met the threshold for a law-enforcement referral and concluded there was no credible or imminent plan for serious harm at the time.
Under its policy, OpenAI reports cases only when an immediate threat appears likely.
The user, Jesse Van Rootselaar, later carried out one of Canada’s worst school shootings.
Eight people were killed before the attacker died from a self-inflicted gunshot wound.
Police said the victims included a teaching assistant and five students aged 12 to 13.
After the attack, OpenAI contacted the RCMP and shared information about the account.
The company said it will continue to assist the investigation.
Authorities confirmed the suspect had previous mental-health-related contact with police.
The motive for the attack, which took place in the remote community of Tumbler Ridge, remains unclear.
The case raises fresh questions about how tech companies assess risk and when they should alert authorities.
