Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Elon Musk’s legal effort to dismantle OpenAI may hinge on whether its for-profit subsidiary advances or undermines the frontier lab’s founding mission of ensuring that humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland, California, heard a former employee and a former board member testify that the company’s push to bring AI products to market compromised its commitment to AI safety.

Rosie Campbell joined the company’s AGI readiness team in 2021 and left OpenAI in 2024 after her team was disbanded. Another safety-focused group, the Superalignment team, was shut down around the same time.

“When I joined, it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.”

Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab’s goal of building AGI, but said that creating a superintelligent model without the right safety measures in place would not fit the mission of the organization she originally joined.

Campbell pointed to an incident in which Microsoft deployed a version of OpenAI’s GPT-4 model in India through its Bing search engine before the model had been evaluated by OpenAI’s Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”

OpenAI’s attorneys also had Campbell concede that, in her “speculative opinion,” OpenAI’s safety approach is superior to that of xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year.


OpenAI publishes evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of preparedness, was hired from Anthropic in February; Altman said the hire would let him “sleep better tonight.”

The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. The ouster came after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function.

McCauley also described a widely reported pattern of Altman misleading the board. Notably, Altman lied to another board member about McCauley’s intention to remove Helen Toner, a third board member who had published a white paper that included implied criticism of OpenAI’s safety policy. Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his lack of disclosure of potential conflicts of interest.

“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”

However, the decision to boot Altman came at the same time as a tender offer to the company’s employees. McCauley said that when OpenAI’s staff sided with Altman and Microsoft worked to restore the status quo, the board reversed course, and the members opposed to Altman stepped down.

The apparent failure of the non-profit board to oversee the for-profit organization goes directly to Musk’s case that OpenAI’s transformation from a research organization into one of the largest private companies in the world broke the implicit agreement of the organization’s founders.

David Schizer, a former dean of Columbia Law School who is being paid by Musk’s team to act as an expert witness, echoed McCauley’s concerns.

“OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously: if something needs to be subject to safety review, it needs to happen. What matters is the process issue.”

With AI already deeply embedded in for-profit companies, the issue goes far beyond a single lab. McCauley said the failures of internal governance at OpenAI should be a reason to embrace stronger government regulation of advanced AI — “[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”


