Late one night, a developer from a small three-person startup in Mexico opened their cloud dashboard, and what they saw seemed unreal. The company usually spends about $180 a month running AI tools. Over the span of two days, that number had jumped to more than $82,000.
“I am in a state of shock and panic right now,” the developer wrote in a Reddit post describing the situation.
According to the developer, a Google Cloud API key connected to their project was somehow compromised between February 11 and February 12. During that short window, the attackers allegedly used the key to run large volumes of requests against Google’s Gemini 3 Pro Image and Gemini 3 Pro Text models.
The spike was staggering: a bill of $82,314.44 in just 48 hours, against a typical monthly spend of about $180.
The developer says they took action immediately once they noticed the activity: the exposed key was deleted, credentials were rotated, the Gemini APIs were disabled, and two-factor authentication was enabled across their accounts. They also opened a support request with Google, hoping the charges could be reversed.
According to them, a Google representative pointed to the cloud platform’s shared responsibility model, which places the burden of credential security on the user. In other words, the platform protects its infrastructure, while developers are expected to secure their own access keys.
For a small startup already operating on thin margins, that explanation felt terrifying.
“If Google attempts to enforce even a third of this amount, our company goes bankrupt,” the developer wrote.
The incident quickly caught the attention of security researchers. Investigators from Truffle Security scanned millions of websites and found 2,863 live Google API keys publicly exposed online.
Those keys were originally used as project identifiers for services such as Maps or Firebase. With the expansion of AI services like Gemini, many of those same keys now grant access to generative AI APIs capable of running expensive workloads.
“With a valid key, an attacker can access uploaded files, cached data, and charge LLM usage to your account,” Truffle researcher Joe Leon explained in a blog post. The situation highlights an awkward collision between older developer practices and the newer AI ecosystem. Years ago, Google documentation often instructed developers to embed certain API keys directly in websites because they were meant to identify projects, not authenticate sensitive services.
But once AI features were layered on top of the same infrastructure, those old identifiers quietly gained far more powerful permissions. “You created a Maps key three years ago and embedded it in your website’s code exactly as Google instructed,” Leon wrote. “Last month someone enabled the Gemini API. That same key is now a credential.”
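Scans like the one Truffle Security describes work because Google API keys follow a recognizable shape: the prefix `AIza` followed by 35 URL-safe characters. A minimal sketch of that detection step (the function name and the example page are illustrative, and this is not Truffle's actual tooling):

```python
import re

# Google API keys are commonly documented as "AIza" followed by
# 35 characters drawn from [0-9A-Za-z_-]. A scanner can grep public
# HTML and JavaScript for this pattern.
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)

# Illustrative only: a fabricated key embedded in a page's script tag,
# the way old Maps integration guides once encouraged.
fake_key = "AIzaSyB" + "x" * 32  # 4-char prefix + 35 chars total
page = f'<script>var MAPS_KEY = "{fake_key}";</script>'
print(find_candidate_keys(page))
```

Finding a matching string is only the first step; a scanner would still need to test whether the key is live and which APIs it can reach before concluding it is exploitable.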
Stories like this underline a growing challenge for companies experimenting with AI tools. Generative AI services are powerful but expensive, and a single leaked credential can trigger massive usage in minutes. Some developers argue that cloud platforms should introduce stronger safeguards, such as automatic spending caps or alerts that pause services during abnormal spikes.
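The safeguard developers are asking for could be as simple as a ratio check on recent spend against a rolling baseline. A minimal sketch, with an assumed function name and an illustrative threshold, not any platform's actual billing logic:

```python
def is_abnormal_spike(baseline_daily: float, current_daily: float,
                      threshold: float = 10.0) -> bool:
    """Flag spend that exceeds the usual daily rate by `threshold`x.

    baseline_daily: typical daily spend (e.g. $180/month is about $6/day)
    current_daily:  spend observed over the last 24 hours
    """
    if baseline_daily <= 0:
        # No spending history: any charge at all is worth flagging.
        return current_daily > 0
    return current_daily / baseline_daily >= threshold

# The startup's numbers: roughly $6/day baseline versus roughly
# $41,000/day during the 48-hour incident.
print(is_abnormal_spike(6.0, 41_000.0))
```

A real implementation would have to decide what happens on a flag: pausing the API outright risks breaking legitimate traffic bursts, which is presumably why platforms have favored alerts over hard caps.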
“A jump from $180/month to $82k in 48 hours is not normal variability,” the developer wrote. “It is obvious abuse.” Google says it has started implementing systems to detect and block leaked API keys attempting to access Gemini. Researchers say the larger issue will likely keep appearing as AI features get bolted onto older cloud infrastructure.
