OpenAI Trial: Sworn Testimony Pins Altman Firing on Trustworthiness
Day two of the Musk v. Altman trial produced sworn testimony that closes the loop on Sam Altman's November 2023 ouster from OpenAI. Former CTO Mira Murati testified that the firing "had nothing per se to do with AI safety" and "nothing to do with anything 'Ilya saw'" — it was, she stated, "entirely about Sam's lack of trustworthiness." Greg Brockman separately testified that OpenAI expects to spend $50 billion on compute in 2026, up from $30 million in 2017 — a 1,667× increase in under a decade.
What the Source Actually Says
Murati's sworn account dismantles the "Q*" narrative that spread widely in late 2023 — the claim that Ilya Sutskever had witnessed AGI-like behaviour that alarmed the board into action. The AGI conspiracy gave Altman political cover for his return and enabled him to consolidate power while those who challenged him were pushed out. Gary Marcus, who had speculated publicly that trustworthiness was the real driver, noted that his 2023 analysis "100% stood the test of time" — Sutskever saw his boss's conduct, not a transformative model milestone.
Helen Toner's deposition sharpens the portrait of Murati's own role. Toner testified that Murati was "totally uninterested in telling her team that her conversations with [the board] had been a significant factor" in the firing — and that Murati "was waiting to see which way the wind would blow and she didn't realize that she was the wind." NYT reporter Mike Isaac, whose original ouster story met "intense pushback at all levels of the company," observed that "people speak very differently when under the threat of perjury."
A notable feature of the trial: witnesses have largely agreed on the underlying facts. The contested question is not what happened but whether OpenAI's conduct was acceptable — a framing that shifts the legal and reputational terrain considerably.
Strategic Take
The trial record now shows that a governance crisis can be credibly obscured by attaching it to a safety narrative — a misattribution that, in this case, shaped public perception of AI progress for over two years. For organisations evaluating AI partners, distinguishing between publicly stated safety rationales and internal governance reality has become a practical due-diligence concern, not an abstract one.