Sam Altman posted five new guiding principles for OpenAI on Sunday, April 26, roughly 12 hours before jury selection began in Elon Musk's $134 billion lawsuit against him.
The new post focuses on democratization and universal prosperity. But the 2018 charter OpenAI was built on is still live on the company's website, and three of its core safety commitments did not make the jump to Altman's new version — leaving two conflicting statements of the company's obligations in public view at once.
Here are the three commitments left behind.
1. The promise to step aside for a safer competitor
OpenAI's 2018 charter contained an explicit pledge: if another safety-focused AI lab came meaningfully close to building AGI first, OpenAI would stop competing with it and start assisting it.
The 2026 principles drop this promise entirely. The removal comes just as rival Anthropic, founded by former OpenAI safety researchers, reportedly hits a $1 trillion secondary market valuation and a $30 billion revenue run-rate. Under the old rules, OpenAI would be obligated to support its most credible competitor. Under the new rules, it is free to fight.
2. "Our primary fiduciary duty is to humanity"
That exact phrase anchored the 2018 charter. It created direct obligations for OpenAI using words like "we will" and "we commit."
The 2026 blog post shifts the burden. Altman's new principles argue that governments should build new economic models and that society must navigate AI together. When an $852 billion tech company distributes its fiduciary responsibility across the entire world, it effectively answers to no one.
3. AGI as the central purpose
The 2018 charter referenced artificial general intelligence 12 times. The new principles mention it twice.
Achieving AGI for humanity was the entire stated reason for creating OpenAI. The 2026 post replaces that goal with a broad focus on deploying AI infrastructure today. Responding to a sweeping New Yorker investigation published earlier this month, Altman claimed AGI carries a "ring of power" quality that makes people act erratically. That caution didn't make it into Sunday's principles either.

The $134 Billion New Yorker Fallout
Ronan Farrow and Andrew Marantz’s mid-April New Yorker investigation forms the immediate backdrop to this trial. After 18 months of reporting and interviews with over 100 sources, the piece quoted an anonymous board member saying Altman possesses "a sociopathic lack of concern for the consequences that may come from deceiving someone."
Altman publicly called the article "incendiary" but admitted to being "conflict-averse," a trait he said has caused pain inside OpenAI. Two weeks later, he published his new principles.
Musk’s lawyers are currently arguing in federal court that OpenAI's pivot from nonprofit to a capped-profit entity backed by $13 billion from Microsoft is a "long con." Altman’s Sunday post establishes a counter-narrative on the eve of the trial, framing OpenAI as a champion of decentralized power rather than a monopoly.
Why Musk's Lawsuit Threatens OpenAI's IPO
The trial in Oakland is expected to last about four weeks and features testimony from Musk, Altman, and Microsoft CEO Satya Nadella. Musk is aggressively pursuing a breach-of-charitable-trust claim. He wants $134 billion in damages redirected to OpenAI's charitable arm, the removal of Altman and Greg Brockman, and a full reversal of OpenAI's for-profit structure.
If Musk wins, OpenAI loses its leadership and its corporate structure just months before a projected October 2026 IPO. If Altman wins, he leaves the courthouse with his Sunday principles established as the company's new reality.
