OpenAI partners with Broadcom to build its own AI chips

By designing custom processors with Broadcom, OpenAI is cutting its reliance on Nvidia and reshaping how the next era of AI power is built.

by Oyinebiladou Omemu

OpenAI is no longer just training AI models. It's now building the machines that will power them. Its new partnership with Broadcom marks a bold step away from dependence on Nvidia and other chipmakers.

Instead of relying on outside suppliers, OpenAI will design its own AI processors, with Broadcom handling manufacturing and deployment. The rollout begins in late 2026 and scales to a full 10 gigawatts of capacity by 2029.

To put that into perspective, building just one gigawatt of AI data-center capacity costs between $50 billion and $60 billion, according to Nvidia’s CEO. OpenAI wants ten of them. That scale puts the project in the same league as the company’s rumored $500 billion Stargate initiative, a long-term plan often described as an Apollo program for artificial intelligence, except this time the destination is compute.
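
For a rough sense of that scale, here is a minimal back-of-envelope sketch in Python, using only the per-gigawatt cost range and the 10-gigawatt target cited above; the numbers are the article’s estimates, not confirmed project budgets.

```python
# Back-of-envelope estimate of the implied cost of OpenAI's planned
# 10-gigawatt build-out, using only the figures cited in the article.
# Illustrative estimates, not confirmed spending commitments.

COST_PER_GW_LOW = 50e9    # ~$50 billion per gigawatt (low end of Nvidia CEO's range)
COST_PER_GW_HIGH = 60e9   # ~$60 billion per gigawatt (high end)
PLANNED_CAPACITY_GW = 10  # capacity OpenAI and Broadcom plan to reach by 2029

low_total = COST_PER_GW_LOW * PLANNED_CAPACITY_GW
high_total = COST_PER_GW_HIGH * PLANNED_CAPACITY_GW

print(f"Implied build-out cost: ${low_total / 1e9:,.0f}B to ${high_total / 1e9:,.0f}B")
# Prints: Implied build-out cost: $500B to $600B
# Roughly the scale of the rumored $500 billion Stargate initiative.
```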


What does OpenAI’s Broadcom deal mean for Nvidia?

The Broadcom collaboration fits neatly into OpenAI’s growing network of chip partnerships. Just weeks before, the company signed a 6-gigawatt supply deal with AMD, while Nvidia itself is reportedly preparing to invest up to $100 billion in OpenAI and provide an additional 10 gigawatts of data-center capacity.

That tells you everything about OpenAI's strategy: it doesn’t want to depend on any single supplier again. Nvidia may dominate AI chips today, but OpenAI wants to make sure the future runs, at least partly, on hardware it controls.

That doesn’t mean Nvidia is losing ground. Building custom chips is notoriously complex, and even tech giants like Microsoft and Meta have struggled to match Nvidia’s performance. But OpenAI’s aim isn’t to win benchmark tests; it’s to guarantee long-term access to compute, the most valuable resource in artificial intelligence.

Investors have clearly taken note: Broadcom’s stock rose more than 10% after the announcement, a sign that Wall Street views the partnership as a pivotal moment in the AI hardware race.

How this move could redefine AI infrastructure

OpenAI’s new systems with Broadcom will rely on Ethernet networking instead of the InfiniBand technology that Nvidia dominates. It’s a technical choice that signals something larger: a move toward open, modular infrastructure rather than one locked inside a single vendor’s ecosystem.

The timing is also telling. With over 800 million weekly active users, OpenAI has already proven global demand for its products. The real bottleneck now is hardware. By building its own, OpenAI isn’t just increasing capacity; it’s shaping how the world’s next wave of AI systems will be powered and connected.

Ultimately, OpenAI’s deal with Broadcom isn’t about competition but control. The company that builds the chips will shape the direction of artificial intelligence, and OpenAI is positioning itself to be that company.
