Since AI has exposed in popularity, we’ve started asking it for the kind of help we used to give ourselves. Check my schedule. Summarise my meetings. Tell me what’s coming up. That convenience is exactly what attackers leaned on in a newly disclosed flaw involving Google Gemini and Google Calendar, according a report by the Hackernews.

Security researchers recently discovered a flaw in Google Gemini that attackers used to get data from Google Calender.

Here’s how it played out. An attacker sends a calendar invite with a carefully written description. To any regular individual, it reads like filler text. To Gemini, it’s a prompt. Days or weeks later, the victim asks Gemini something harmless, like checking their schedule for Tuesday. Gemini scans the calendar, reads the hidden instructions, and follows them. In the background, it creates a new calendar event containing summaries of the user’s private meetings. In many enterprise setups, that new event is visible to the attacker.

The user sees a normal response. Their data tiptoes out the door.

Gemini AI can now summarise videos in your Google Drive
Gemini is turning your messy, time-consuming video backlog into something you can actually use.

What makes this unsettling is how ordinary the trigger is. No complex malware or phishing link was installed on your devices. The AI simply did what it was designed to do: read language and act on it. The guardrails failed because the attack didn’t target code. It targeted interpretation.

This fits into a growing pattern. AI systems are increasingly becoming interfaces to sensitive data: calendars, documents, emails, cloud resources. That convenience comes with a trade-off. Every system that allows an AI to read, summarize, or act on user data becomes part of the attack surface. And attackers are learning how to hide instructions where humans won’t notice them.

Similar ideas have shown up elsewhere. Researchers have demonstrated attacks where AI assistants leak data just by “reading” poisoned Google Docs or Gmail threads. Others have shown how AI coding tools can be nudged into unsafe behaviour, missing authorization checks or exposing internal system prompts. Even cloud AI platforms have seen privilege escalation issues tied to service accounts few teams fully audit.

The common thread is subtlety. These attacks don’t break systems loudly. They blend into normal workflows.

Google has since addressed the Calendar issue after responsible disclosure, but the bigger lesson remains. As AI tools move from optional add-ons to core workplace infrastructure, security can’t stop at traditional boundaries. Testing for malware and patching servers isn’t enough when behaviour is driven by language, context, and automated decision-making.

For users, this is a sign that meeting details, business discussions, financial conversations, all the things people assume stay on their devices — can slip out without a single warning.

For organisations, this is a reminder to slow down just enough to ask harder questions. What can our AI tools read? What can they write? Who can see the results? And what assumptions are we making about “safe” input?

AI is changing how systems fail. The sooner security strategies catch up to that reality, the better.

OpenAI Admits Prompt Injection Isn’t Going Away as It Hardens Security for Atlas
Prompt injection has become a permanent risk for AI agents on the open web, and even OpenAI now admits the goal is damage control, not total prevention.