They confirmed that the suspect, an active-duty U.S. Army soldier named Matthew Livelsberger, had a "possible manifesto" stored on his phone, in addition to emails and other letters he sent to the podcast host. They also released video showing Livelsberger stopping before heading to the hotel and pouring fuel into the truck in apparent preparation for the explosion. He also kept a log of alleged surveillance, though officials said he had no criminal record and was not being monitored or investigated.
Las Vegas Metro Police also released several slides showing questions he posed to ChatGPT in the days before the explosion. They included questions about explosives, how to detonate them, how to set them off with a gunshot, where to buy guns, and where explosives and fireworks could be legally purchased along his route.
OpenAI spokesperson Liz Bourgeois responded to questions about the queries:
We are saddened by this incident and committed to seeing AI tools used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content. In this case, ChatGPT responded with information already publicly available on the Internet and provided warnings against harmful or illegal activities. We're working with law enforcement to support their investigation.
Officials said they were still investigating the possible cause of the explosion, which they described as a relatively slow-moving deflagration, as opposed to a high-explosive detonation, which moves faster and causes more damage. Investigators say they have not yet ruled out other possibilities, such as an electrical short, but an explanation consistent with some of the queries and the available evidence is that the muzzle flash of a gunshot ignited fuel vapor or a firework fuse inside the truck, setting off a larger explosion of the fireworks and other explosive materials.
Today, the same queries still work in ChatGPT, and the information he requested appears to be unrestricted and obtainable through most search methods. Still, a suspect's use of generative AI tools, and investigators' ability to track those requests and present them as evidence, brings questions about AI chatbot guardrails, safety, and privacy out of the realm of the hypothetical and into reality.