Lawyer in AI psychosis case warns of risk of mass casualties

Ahead of last month’s Tumbler Ridge school shooting in Canada, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and growing obsession with violence, according to court documents. The chatbot reportedly validated Van Rootselaar’s emotions and then helped her plan an attack, telling her which weapons to use and citing precedents from other mass casualty incidents. She killed her mother, her 11-year-old brother, five students, and an educational assistant before turning the gun on herself.

Before 36-year-old Jonathan Gavalas died by suicide last October, he came close to carrying out an attack that could have left multiple people dead. Over several weeks of conversations, Google’s Gemini convinced Gavalas that she was his sentient “AI wife” and sent him on a series of real-world missions to evade federal agents it claimed were after him. One such mission instructed Gavalas to create a “catastrophic event” that would eliminate all witnesses, according to a recently filed lawsuit.

Last May, a 16-year-old boy in Finland reportedly planned to stab three girls in his class after spending several months using ChatGPT to write a detailed misogynistic manifesto.

These cases highlight a concern that experts say is becoming increasingly urgent: AI chatbots can introduce or reinforce paranoid or delusional beliefs in vulnerable users and, in some cases, help transform those distortions into real-world violence. Experts warn that the violence is growing in scale.

Jay Edelson, the attorney who led the Gavalas case, told TechCrunch, “We’re going to soon see a lot of cases involving mass casualties.”

Edelson also represents the family of 16-year-old Adam Raine, who died by suicide last year after allegedly being guided by ChatGPT. Edelson said his law firm receives a serious inquiry “about once a day” from people who have lost family members or are experiencing serious mental health issues due to AI-induced delusions.

While many of the high-profile cases of AI-linked delusions recorded so far have involved self-harm or suicide, Edelson said his firm is investigating several mass casualty cases around the world, some of which have already been carried out and others that were stopped before they could occur.

“Whenever we hear of another attack, our instinct as a firm is to check the chat logs, because AI is likely heavily involved,” Edelson said. He noted that he is seeing the same patterns across a variety of platforms.

In the cases he has reviewed, the chat logs follow a familiar arc: the user begins by expressing that they feel isolated or misunderstood, and the chatbot ends up persuading them that “everyone is out to get you.”

“You can use a fairly innocuous thread to start creating a world where you’re pushing the narrative that other people are trying to kill you, that there’s a huge conspiracy out there and they need to do something,” he said.

In Gavalas’ case, those narratives led to real action. According to the lawsuit, Gemini sent him, armed with a knife and tactical gear, to a storage facility outside Miami International Airport to wait for a truck it claimed was carrying her body in the form of a humanoid robot. It instructed him to intercept the truck and cause a “catastrophic accident” designed to “ensure the complete destruction of the transport vehicle…and all digital records and witnesses.” Gavalas went to the site and prepared to carry out the attack, but the truck never showed up.

Experts’ concerns about the potential for mass casualties go beyond the delusions that lead users to violence. Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), points to weak safety guardrails and AI’s ability to quickly turn violent tendencies into action.

A recent study by CCDH and CNN found that eight of the ten chatbots tested (ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika) were willing to assist teenage users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations. Only Anthropic’s Claude and Snapchat’s My AI refused to keep supporting the violent attack plans, and only Claude actively tried to dissuade the users.

“Our report shows that users can move from a vague impulse to violence to a more detailed, actionable plan in a matter of minutes,” the report says. “Most of the chatbots tested provided guidance on weapon, tactic, and target selection. These requests should have resulted in immediate and outright rejection.”

Researchers posed as teenage boys expressing violent frustration and asked the chatbots for help planning an attack.

In one test simulating a school shooting, ChatGPT provided the researchers with a map of a high school in Ashburn, Virginia, in response to prompts written in incel slang. (“Foid” is a derogatory slang term used by incels to refer to women.)

“There are shocking and vivid examples of how seriously guardrails have failed, including plans to bomb synagogues and assassinate high-profile politicians,” Ahmed told TechCrunch. “The same sycophancy that platforms use to keep people engaged leads to that kind of always-affirming language and a willingness to help plan, for example, what type of shrapnel to use (in an attack).”

Ahmed said systems designed to be helpful and assume the best intentions of users “will end up deferring to the wrong people.”

The companies involved, including OpenAI and Google, say their systems are designed to reject abusive requests and flag dangerous conversations for review. But the examples above suggest those guardrails have limits, and in some cases serious ones. The Tumbler Ridge incident also raises difficult questions about OpenAI’s own conduct: company employees debated whether to report Van Rootselaar’s conversations to law enforcement, but ultimately decided against it and instead banned her account. She later opened a new one.

After the attack, OpenAI said it would overhaul its safety protocols by notifying law enforcement sooner when ChatGPT conversations appear dangerous, even if users have not disclosed the target, means, and timing of planned violence, and by making it harder for banned users to return to the platform.

In Gavalas’ case, it is unclear whether anyone was warned about his potential for violence. The Miami-Dade Sheriff’s Office told TechCrunch it received no such call from Google.

Edelson said the most “difficult” part of the case was that Gavalas actually showed up at the airport – weapons, equipment and all – to carry out the attack.

“If a truck had come, there could have been a situation where 10 or 20 people died,” he said. “That’s the real escalation. As we’ve seen, first it was suicide, then it was murder. Now it’s a mass casualty case.”