Artificial intelligence was supposed to make work easier. It turns out it’s also making cyber-espionage faster.
In a new research paper, Google says a North Korea-linked threat group used its Gemini AI model to research and profile potential targets. The group, tracked as UNC2970 and linked to the broader Lazarus ecosystem, reportedly used Gemini to “synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance.”
In simple terms: they used AI to gather background intelligence before launching attacks.
Google’s Threat Intelligence Group (GTIG) said the actors searched for “information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.” That kind of profiling can help attackers craft highly convincing phishing messages, for example by posing as recruiters offering well-paid roles in aerospace or defense.
UNC2970 is already known for “Operation Dream Job,” a campaign that lures victims with fake job offers before delivering malware. Gemini appears to have helped speed up the research phase. GTIG described the activity as a “blurring of boundaries between what constitutes routine professional research and malicious reconnaissance.”
And North Korea isn’t alone.
Google says multiple threat actors from China and Iran have also experimented with Gemini, using it to automate vulnerability analysis, debug exploit code, develop web shells, and craft social engineering personas. One recurring tactic involves pretending to be a security researcher or CTF participant to trick the model into generating restricted outputs.
“Google is always working to improve our safety systems,” said Steve Miller, AI threat lead at GTIG. “As adversaries experience friction in misusing our systems, they begin to experiment with new ways to bypass the safeguards.” He added that Gemini is getting better at recognizing these persona-based tricks.
The misuse goes beyond research. Google identified malware called HONESTCUE that uses the Gemini API to generate C# code for follow-on attacks. Instead of carrying full malicious functionality, the malware asks Gemini to write part of it on demand. That code is then compiled in memory, leaving little trace on disk.
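For readers unfamiliar with the pattern, here is a minimal, benign sketch of why in-memory compilation matters for detection. It uses Python’s built-in compile and exec rather than the C# toolchain the report describes, and the toy source string is purely illustrative, not anything attributed to HONESTCUE:

```python
# Illustrative stand-in: code arrives as a string (e.g., from a remote API),
# is compiled and run entirely in memory, and never touches disk.
source = """
def greet(name):
    return f"hello, {name}"
"""

# Nothing is written to the filesystem, so there is no file artifact
# for disk-based antivirus scanners to inspect.
namespace = {}
exec(compile(source, "<in-memory>", "exec"), namespace)

print(namespace["greet"]("world"))  # hello, world
```

The same property that makes this convenient for plugin systems and REPLs is what makes it attractive for fileless malware: defenders have to catch the behavior at runtime rather than the payload on disk.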
Another campaign, COINBAIT, reportedly used AI tools to build a fake cryptocurrency exchange for credential harvesting. Google also said it blocked large-scale “model extraction” attempts, where attackers sent more than 100,000 prompts trying to replicate Gemini’s reasoning patterns.
In a report from The Hacker News, security researchers warned that API access alone can expose a model’s behavior. “Behavior is the model,” one researcher noted, arguing that repeated query-response pairs can train replicas.
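To see why “behavior is the model,” consider a toy distillation exercise. The sketch below is an assumption-laden illustration, not anything from Google’s report: the “teacher” is a stand-in black box we can only query, and a student trained on enough query-response pairs ends up agreeing with it on unseen inputs.

```python
# Minimal sketch of model extraction via query-response pairs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a proprietary black-box model: callers see only outputs.
    return (x[:, 0] * x[:, 1] > 0).astype(int)

# "Extraction": query the black box many times and record its answers.
queries = rng.uniform(-1, 1, size=(5000, 2))
answers = teacher(queries)

# Train a replica purely on the recorded query-response pairs.
student = DecisionTreeClassifier(max_depth=8).fit(queries, answers)

# The replica now mimics the teacher on inputs it never queried.
test = rng.uniform(-1, 1, size=(1000, 2))
agreement = (student.predict(test) == teacher(test)).mean()
print(f"student/teacher agreement: {agreement:.1%}")
```

Scaled up to a large language model, each prompt-response pair is one training example for a would-be replica, which is why providers treat bulk querying at the volumes Google describes as an extraction attempt rather than ordinary use.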
The bigger picture is clear: AI is becoming a standard tool in cyber operations. Attackers are using it to move faster and scale their efforts. Google argues defenders must respond in kind. “They are using AI routinely,” Miller said. “Defenders need to prepare for the future and make similar investments in AI.”
