Experts have warned against the blind use of AI in coding. One of those concerns is hardcoding secrets: directly embedding sensitive data such as API keys, passwords, or encryption tokens into source code or configuration files. You’d expect AI-powered apps, built on modern cloud stacks and marketed as cutting-edge, to know better. New research suggests many still don’t.
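The distinction is simple to show in code. Below is a minimal sketch contrasting the anti-pattern with the usual fix; the key value and the environment-variable name are illustrative, not from the report.

```python
import os

# Anti-pattern: a secret embedded directly in source. Anyone who can read
# the repository, its history, or a decompiled app binary can read the key.
API_KEY = "sk_live_EXAMPLE_DO_NOT_USE"  # hypothetical placeholder value

# Safer pattern: read the secret from the environment at runtime, so it is
# injected at deploy time and never ships inside the built artifact.
def get_api_key() -> str:
    key = os.environ.get("PAYMENTS_API_KEY")  # illustrative variable name
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key
```

A secrets manager or an untracked config file serves the same purpose; the point is that the credential lives outside anything that gets compiled, committed, or published.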
Cybernews analysed AI apps on the Google Play Store and found that "many are leaking hardcoded secrets and cloud endpoints, putting users at risk or, in some cases, even potentially allowing attackers to empty their digital wallets."
According to the report: "72% of the analyzed apps contained at least one hardcoded secret. On average, an AI app leaks 5.1 secrets, and 81.14% of the detected secrets were related to Google Cloud Project identifiers, endpoints, and API keys."
Cybernews found that hundreds of AI apps had already been breached. The researchers identified 285 Firebase instances with no authentication at all, meaning anyone could access them. Those databases alone leaked 1.1GB of user data.
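This kind of exposure is easy to probe because the Firebase Realtime Database also serves data over plain REST: an unauthenticated GET against the database root succeeds whenever the security rules allow public reads. A minimal sketch, assuming the older-style `firebaseio.com` URL scheme (the project name and helper names are illustrative):

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def firebase_root_url(project: str) -> str:
    # Realtime Database exposes its contents over REST; appending ".json"
    # to any path (here, the root) requests that subtree as JSON.
    return f"https://{project}.firebaseio.com/.json"

def classify(status: int) -> str:
    # A 200 on a request that carries no credentials means the database
    # rules permit public reads; 401/403 means auth is being enforced.
    if status == 200:
        return "publicly readable"
    if status in (401, 403):
        return "auth required"
    return "unknown"

def check(project: str) -> str:
    try:
        with urlopen(firebase_root_url(project), timeout=10) as resp:
            return classify(resp.status)
    except HTTPError as err:
        return classify(err.code)
```

Nothing here is an exploit; it is the same request any client of the app would make, which is exactly why misconfigured rules are so dangerous.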
In 42% of those exposed databases, researchers found tables explicitly labelled “poc”, shorthand for proof of concept. Some databases contained admin accounts with emails like attacker@evil.com. “Detected indicators of compromise show a widespread issue of automated exploits against misconfigured Firebase databases,” the Cybernews team said, adding that many of these systems appeared “largely unmonitored.”
Cloud storage exposure was even larger in scale. Misconfigured Google Cloud Storage buckets linked to AI apps exposed over 200 million files, totalling nearly 730TB of data. On average, each exposed bucket contained 1.55 million files and 5.5TB of data.
Not all leaked secrets pose the same level of danger, but some are clearly serious. Researchers found credentials tied to messaging and engagement platforms like Twitter, Intercom, and Braze, which could allow attackers to impersonate apps or interact directly with users. Analytics and monitoring APIs exposed internal logs and performance data.
The most serious cases involved financial infrastructure. Exposed keys linked to payment and rewards systems could be abused to manipulate transactions or loyalty balances. At the highest risk level were Stripe live secret keys, which provide full control over payment backends, including charging users or rerouting funds.
One surprising finding was what researchers didn’t see much of. Despite the focus on AI, LLM API keys were relatively rare. Where they did appear, the impact was limited. As the researchers noted, leaked LLM keys typically allow attackers to submit new requests, but not access past conversations or stored prompts.
There was also a subtler issue running through the dataset: poor cleanup. Researchers detected 26,424 hardcoded Google Cloud endpoints, but nearly two-thirds pointed to resources that no longer existed. Deleted projects and abandoned buckets don’t leak data directly, but they signal weak security hygiene and create noise that attackers can still exploit.
Importantly, this isn’t an Android-only problem. Cybernews previously scanned 156,000 iOS apps and found nearly identical patterns, with 70.9% containing hardcoded secrets and hundreds of terabytes of exposed data.
