
Google's $93 Billion Gambit to Break Nvidia's Dominance in Cloud Computing

Google unveils its most powerful custom AI chip yet, joining tech giants in a battle to control the future of artificial intelligence infrastructure.

by Oyinebiladou Omemu

Google just announced Ironwood, its seventh-generation Tensor Processing Unit, which is more than four times faster than its predecessor.

The chip, built entirely in-house, is designed to handle everything from training large models to powering real-time chatbots and AI agents. By connecting up to 9,216 chips in a single pod, Google says, the new Ironwood TPUs eliminate data bottlenecks for the most demanding models.


What makes this particularly compelling is the scale at which customers are already committing. AI startup Anthropic plans to use up to 1 million of the new TPUs to run its Claude model, putting serious weight behind Google's tech.

Google's financial commitment makes the business rationale clear. To meet soaring demand, the company raised the high end of its capital-spending forecast for this year to $93 billion from $85 billion. The numbers tell their own story: Google reported third-quarter cloud revenue of $15.15 billion, up 33% from the same period a year earlier, and signed more billion-dollar cloud deals in the first nine months of 2025 than in the previous two years combined.
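A quick back-of-envelope check, using only the figures quoted above, shows what those numbers imply (this is an illustrative sketch, not reported data):

```python
# Illustrative check on the reported figures (all inputs from the article).

capex_prev_high = 85e9   # prior high end of this year's capex forecast, USD
capex_new_high = 93e9    # raised high end, USD
increase = capex_new_high - capex_prev_high
print(f"Capex high end raised by ${increase / 1e9:.0f}B "
      f"({increase / capex_prev_high:.1%})")        # $8B (9.4%)

cloud_rev_q3 = 15.15e9   # Q3 cloud revenue, USD
yoy_growth = 0.33        # reported year-over-year growth rate
implied_year_ago = cloud_rev_q3 / (1 + yoy_growth)
print(f"Implied year-ago quarter: ~${implied_year_ago / 1e9:.2f}B")  # ~$11.39B
```

In other words, the forecast bump alone is about $8 billion, and the 33% growth figure implies Google Cloud added nearly $4 billion in quarterly revenue in a single year.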

But Google isn't just competing against Nvidia. It's competing in a three-way cloud battle where infrastructure has become the ultimate differentiator. While the majority of AI workloads have relied on Nvidia's graphics processing units, Google's TPUs fall into the category of custom silicon, which can offer advantages on price, performance, and efficiency.

This fits into a broader industry pattern. Amazon has been building custom chips through its Annapurna Labs division for years, with its Inferentia and Trainium chips offering AWS customers alternatives to Nvidia's expensive GPUs. Microsoft unveiled its Maia 100 chip in 2023, aiming to compete with both Nvidia's AI GPUs and Intel's processors, while Meta is developing its own silicon with its Meta Training and Inference Accelerator chip.


The competitive dynamics reveal something crucial about the AI economy. Nvidia isn't losing its dominance because its chips are inferior. It's facing pressure because its biggest customers are also its biggest competitors, and they're all doing the math on what it costs to rely on a single supplier. Nvidia's AI chips cost up to $40,000 each, and tens of thousands may be required for a single data center. When you're operating at the scale of these tech giants, even small efficiency gains translate to billions in savings.
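To make that math concrete, here is a hedged back-of-envelope sketch. The $40,000 unit price comes from the article; the fleet sizes are assumptions for illustration only (the article says just "tens of thousands" of chips per data center):

```python
# Back-of-envelope GPU fleet cost (assumed figures are marked as such).

gpu_unit_cost = 40_000        # up to $40,000 per chip, per the article
gpus_per_datacenter = 30_000  # ASSUMPTION: "tens of thousands" of chips
datacenters = 10              # ASSUMPTION: a hyperscaler-sized fleet

fleet_cost = gpu_unit_cost * gpus_per_datacenter * datacenters
print(f"Total GPU spend: ${fleet_cost / 1e9:.1f}B")      # $12.0B

# Even modest efficiency gains from custom silicon compound at this scale.
for gain in (0.05, 0.10):
    print(f"{gain:.0%} saving: ${fleet_cost * gain / 1e9:.2f}B")
```

On those assumptions, the fleet alone costs around $12 billion, so even a 10% efficiency edge from custom silicon is worth more than a billion dollars, which is exactly the calculation every hyperscaler is running.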
