Google's $93 Billion Gambit to Break Nvidia's Dominance in AI Computing
Google unveils its most powerful custom AI chip yet, joining its fellow tech giants in a battle to control the future of artificial intelligence infrastructure.
Google just announced Ironwood, its seventh-generation Tensor Processing Unit, which the company says is more than four times faster than its predecessor.
The chip, designed entirely in-house, is built to handle everything from training large models to powering real-time chatbots and AI agents. By connecting up to 9,216 chips in a single pod, Google says, the new Ironwood TPUs eliminate the data bottlenecks that constrain the most demanding models.
What makes this particularly compelling is the scale at which customers are already committing. AI startup Anthropic plans to use up to 1 million of the new TPUs to run its Claude model, putting serious weight behind Google's tech.
The business rationale is written plainly into Google's spending plans. To meet soaring demand, Google raised the high end of its capital-spending forecast for this year to $93 billion, from $85 billion. The numbers tell their own story: Google reported third-quarter cloud revenue of $15.15 billion, up 33% from the same period a year earlier, and it signed more billion-dollar cloud deals in the first nine months of 2025 than in the previous two years combined.
But Google isn't just competing against Nvidia. It's fighting a three-way cloud battle in which infrastructure has become the ultimate differentiator. While the majority of AI workloads have relied on Nvidia's graphics processing units, Google's TPUs are custom silicon, purpose-built chips that can offer advantages in price, performance, and efficiency.
This fits into a broader industry pattern. Amazon has been building custom chips through its Annapurna Labs division for years, with its Inferentia and Trainium chips offering AWS customers alternatives to Nvidia's expensive GPUs. Microsoft unveiled its Maia 100 chip in 2023, aiming to compete with both Nvidia's AI GPUs and Intel's processors, while Meta is developing its own silicon, the Meta Training and Inference Accelerator.
The competitive dynamics reveal something crucial about the AI economy. Nvidia isn't losing its dominance because its chips are inferior. It's facing pressure because its biggest customers are also its biggest competitors, and they're all doing the math on what it costs to rely on a single supplier. Nvidia's AI chips cost up to $40,000 each, and tens of thousands may be required for a single data center. When you're operating at the scale of these tech giants, even small efficiency gains translate to billions in savings.
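To make that math concrete, here is a rough back-of-envelope sketch. The $40,000 chip price comes from the figures above; the chip count per data center, the fleet size, and the efficiency gain are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope sketch of the single-supplier cost math described above.
# Only the chip price is from the article; everything else is an assumption.

chip_price = 40_000            # USD per chip (upper end cited above)
chips_per_datacenter = 30_000  # "tens of thousands" -- assumed midpoint
num_datacenters = 25           # assumed fleet size for a hyperscaler
efficiency_gain = 0.05         # assumed 5% cost advantage from custom silicon

fleet_cost = chip_price * chips_per_datacenter * num_datacenters
savings = fleet_cost * efficiency_gain

print(f"Fleet hardware cost: ${fleet_cost / 1e9:.1f}B")   # -> $30.0B
print(f"Savings from a 5% edge: ${savings / 1e9:.2f}B")   # -> $1.50B
```

Under those assumptions, a 5% efficiency edge is worth about $1.5 billion on a $30 billion hardware bill, which is the scale at which designing your own chip starts to pay for itself.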

