On April 7, 2026, The New Yorker released an investigation into Sam Altman's leadership of OpenAI. Written by Ronan Farrow and Max Chafkin, the piece claims its authors spoke to more than 100 people and viewed previously undisclosed internal documents—including secret memos compiled by OpenAI's former chief scientist, Ilya Sutskever, and extensive notes from Anthropic co-founder Dario Amodei, who used to work at OpenAI.

Hours after the investigation went live, OpenAI announced its new Safety Fellowship programme. Farrow immediately noted on X that his investigation detailed how OpenAI "dissolved its superalignment and AGI-readiness teams and dropped safety from the list of its most significant activities on its IRS filings."

Farrow added that when his team asked to interview researchers working on existential safety, an OpenAI representative responded, "What do you mean by 'existential safety'? That's not, like, a thing."

The investigation traces Altman's career from his early days at Loopt to last month's OpenAI deal with the Pentagon.

Here are eight allegations made against OpenAI CEO Sam Altman in The New Yorker investigation:

1. He Was Called Out for "Lying" in Secret Memos

The report claims Ilya Sutskever spent weeks in fall 2023 compiling roughly 70 pages of Slack messages, HR documents, and analysis about Altman's behaviour. One memo claims, "Sam exhibits a consistent pattern of lying."

The report alleges Altman "misrepresented facts to executives and board members, and deceived them about internal safety protocols." Sutskever sent these as disappearing messages because he was "terrified" someone would find them, according to a board member who received them. The memos became legendary in Silicon Valley circles, simply referred to as "the Ilya Memos."

2. He Was Pushed Out of Y Combinator Despite Public Denials

Paul Graham, the programmer who founded Y Combinator and recruited Altman as his successor, told YC colleagues that "Sam had been lying to us all the time" before his removal.

The magazine claims multiple YC partners and founders said that Altman was effectively forced out in 2019, despite his repeated public claims and sworn depositions that he was never fired.

Several YC partners had complained to Graham about Altman's behaviour by 2018. The blog post announcing his departure was edited multiple times, and as recently as 2021, SEC filings still listed him as YC chairman even though he had supposedly left years earlier.


3. Sam Altman Secretly Betrayed OpenAI's Founding Charter

Dario Amodei was drafting a charter for OpenAI and advocated for what he called the "merge and assist" clause. The radical provision stated that if another "value-aligned, safety-conscious project" came close to building AGI first, OpenAI would "stop competing with and start assisting this project."

When Microsoft's $1 billion investment was closing in June 2019, Amodei ranked preserving this clause as his top safety demand. But he discovered a provision had been added giving Microsoft the power to block OpenAI from any mergers.

When Amodei confronted Altman about it, the report claims Altman denied the provision existed. Amodei had to read it aloud from the contract and get another colleague to confirm it directly to Altman. His notes describe the moment as "80% of the charter was just betrayed."

4. He Broke His Billion-Dollar Safety Promise

OpenAI announced in 2023 that its superalignment team would get "20% of the compute we've secured to date" to solve AI alignment problems that could "lead to the disempowerment of humanity or even human extinction." The company said the commitment was worth more than $1 billion.

But the magazine said four people who worked on or closely with the team claimed the actual resources were "between one and two per cent" of the company's compute.

One researcher said, "Most of the superalignment compute was actually on the oldest cluster with the worst chips," while better hardware went to money-making products. Team leader Jan Leike called it "a pretty effective retention tool."

The team was shut down in 2024 without finishing its mission.

5. He Lied to the Board About Safety Approvals

In a December 2022 meeting, Altman assured board members that controversial GPT-4 features had been approved by a safety panel. Board member Helen Toner asked for documentation. She learnt that the most controversial features, including one letting users "fine-tune" the model and another deploying it as a personal assistant, had never been approved, the report claims.

Across many hours of board briefings, Farrow reports that Altman also never mentioned that Microsoft had released an early ChatGPT version in India without completing a required safety review.

Jan Leike emailed board members that "OpenAI has been going off the rails on its mission" and was "prioritising the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third."

6. He Put Himself First, Financially

Multiple Silicon Valley investors told The New Yorker that Altman was known to "make personal investments, selectively, into the best companies, blocking outside investors." When he worked as a scout for Sequoia Capital, he demanded a bigger cut on his Stripe investment than the programme's standard terms, frustrating the firm's partners.

One person familiar with the deal, according to the report, called it a "policy of 'Sam first.'" Altman estimates he's invested in about 400 companies. The investigation also found he has financial entanglements with "numerous former romantic partners" as a fund co-manager, lead investor, or frequent co-investor.

The magazine says one person close to him described it plainly as “a very, very high dependence, essentially. Oftentimes, it's a lifetime dependence.”

7. He Pursued Billions from Autocrats Despite National Security Warnings

The report claims that Altman continued pursuing Saudi money even after the 2018 murder of journalist Jamal Khashoggi, asking advisers if he could "get away with it." It adds that he developed what he called a "dear personal friend" relationship with Sheikh Tahnoon of the UAE, even visiting his $250 million superyacht.

The Biden administration grew alarmed. One senior official said Altman was "pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us. A lot of people in the administration did not trust him a hundred per cent."

In meetings with U.S. intelligence officials, Altman claimed China had launched an "AGI Manhattan Project" to justify government funding, but when pressed for evidence, he said "I've heard things" and never provided proof.

One official concluded, "It was just being used as a sales pitch." Four days before Trump's inauguration, Sheikh Tahnoon paid $500 million to the Trump family's cryptocurrency company.

8. He Seized a Pentagon Deal After His Rival Got Blacklisted

When competitor Anthropic refused Defence Secretary Pete Hegseth's demand to drop restrictions on autonomous weapons and domestic mass surveillance, Hegseth blacklisted the company.

But the report claims that Altman had been in talks with the Pentagon for at least two days when the deal was announced. A $50 billion partnership announced that Friday morning integrated OpenAI's technology into Amazon Web Services, a key part of Pentagon digital infrastructure.

That same day, Altman posted on social media that the military would now use OpenAI's models. Under-Secretary Emil Michael recalled, "I needed to hurry and find alternatives. I called Sam, and he was willing to jump. I think he's a patriot." The move increased OpenAI's valuation by $110 billion but prompted a wave of ChatGPT app deletions by users and departures by employees. At a staff meeting, Altman told worried employees, "You don't get to weigh in on that."

OpenAI has disputed several allegations in the report. A company representative said the company's "mission did not change" and that it would "continue to invest in and evolve our work on safety."

Lawyers involved in the investigation into Altman's firing and reinstatement defended it as "an independent, careful, comprehensive review that followed the facts wherever they led."
