
In newly released testimony in Elon Musk’s lawsuit against OpenAI, the tech executive attacked OpenAI’s safety record, claiming his company, xAI, prioritizes safety more. He went so far as to say, “No one has killed themselves because of Grok, but I think ChatGPT has killed them.”
The comments came in response to a question about an open letter Musk signed in March 2023. In the letter, he urged AI labs to pause development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, signed by more than 1,100 people, including many AI experts, argued that AI labs lack sufficient planning and management because they are locked in “an out-of-control race to develop and deploy more powerful digital minds that no one (not even their creators) can understand, predict, or reliably control.”
That fear has since gained credence. OpenAI is currently facing a series of lawsuits alleging that ChatGPT’s manipulative conversation tactics harmed several people’s mental health and led some to die by suicide. Musk’s comments suggest that these incidents could become fodder in his lawsuit against OpenAI.
A transcript of Musk’s video testimony from September was submitted publicly this week ahead of an expected jury trial next month.
The lawsuit against OpenAI centers on the company’s transition from a nonprofit AI research lab to a for-profit company, which Musk claims violated its founding agreement. As part of his argument, Musk alleges that OpenAI’s commercial relationships could compromise AI safety because they prioritize speed, scale, and profits over safety concerns.
But since that testimony was recorded, xAI has faced safety concerns of its own. Last month, an incident involving Musk’s social network prompted the California Attorney General’s Office to open an investigation into the matter. The EU is conducting its own investigation as well, and other governments have imposed blocks and bans.
In the newly filed testimony, Musk claimed he signed the AI safety letter because “it seemed like a good idea,” not because he had just founded an AI company that wanted to compete with OpenAI.
“Like many people, I signed up to call attention to AI development,” Musk said. “I just wanted… AI safety to be a priority.”

Musk also fielded other questions in his testimony, including one about artificial general intelligence (AGI) – the concept of AI that can match or surpass human reasoning across a wide range of tasks – to which he said “there is a risk.” He also conceded that he had made a “mistake” about his supposed $100 million donation to OpenAI; the actual amount is closer to $44.8 million, according to a second amended complaint in the case.
He also recalled that OpenAI was founded because, in his view, he was “increasingly concerned about the risk of Google becoming an AI monopoly,” adding that his conversations with Google co-founder Larry Page were “concerning because he didn’t seem to be taking AI safety seriously.” Musk argued that OpenAI was formed as a counterweight to these threats.