
Last month, Google warned that Gmail users could be victims of a new form of attack from hackers wishing to get their hands on their data.
Hackers using the internet to trick people into handing over their data and passwords isn’t anything new, but latest trends show bad actors getting more creative.
In a blog post last month on June 13, Google emphasized the seriousness of these attempts to get people’s data.
In the post it read: “With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves.
Advert
“One such emerging attack vector is indirect prompt injections.”
Ultimately, hackers are using Google Gemini, the built in AI tool in Gmail and Workplace, against users, tricking the AI into helping them extract information from users.

Hackers are sending emails with hidden instructions that prompt Gemini to generate fake phishing warnings that appear to come from Google, tricking users into sharing their account password or visiting malicious sites.
Advert
Mozilla's 0din security team found proof of one of the attacks last week.
Their report showed how the AI could be fooled into displaying a fake security alert. It would claim that the user’s passwords or account had been compromised, ultimately prompting the user to take action and inevitably handing over their passwords.
The trick works by embedding the prompt in white text that blends into the email background. So, if a user was to click ‘summarize this email’ Gemini would process the hidden message, not just the visible text.
Odin advisors have said that Google’s 1.8 billion Gmail users need to ignore any Google warnings within AI summaries as this is now how Google issues user warning.
Advert

In the Google blog post, the company explained how it is constantly making improvements to its technologies to better fight against these attacks and hackers.
The blog read: “Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle.
“From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker. This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources.
Advert
“This layered approach to our security strategy strengthens the overall security framework for Gemini – throughout the prompt lifecycle and across diverse attack techniques.”
Topics: Google, Technology, Artificial Intelligence, News