,

Phishing Attacks Target Vulnerability in Google Gemini

dominic11047@gmail.com Avatar

A flaw in Google Gemini has enabled cybercriminals to exploit the artificial intelligence for phishing purposes, utilizing summarized emails as part of their strategy. Despite awareness by Google since last year, security specialists state that the issue remains unresolved.

By inserting invisible text into an email—concealed with HTML tricks such as white text or hidden formatting—criminals can hide a message from the recipient’s view.

The email appears harmless when opened, but Gemini scans everything, including what is hidden. When the recipient requests a summary of the email, the AI agent inadvertently includes the concealed text in its output. This text could direct Gemini to generate a warning that the user’s Gmail password has been compromised.

Given that such notifications appear to come from Gemini itself, recipients are more inclined to trust them and act on urgent instructions, such as changing a password or contacting a supposed support number.

Google’s spam filters typically identify suspicious links or attachments, which criminals avoid. This makes these malicious messages less likely to be flagged by existing defenses, allowing cybercriminals to bypass security measures and access inboxes without using obvious red flags.

Challenges for Detection

Identifying such deceptive messages presents a significant technical challenge. Some filters look for urgent messages, URLs, or phone numbers within Gemini’s output, flagging content for further review. Other methods can detect and neutralize hidden text within the body of an email.

Educating employees to be wary of any urgent requests to take action—regardless of their origin—is one of the most effective defenses against such attacks. Organizations must ensure that staff are trained to be suspicious of messages, even when they appear to come from trusted AI assistants like Gemini.

Turning AI Against Users

This is not the first time cybercriminals have attempted to use AI in phishing attacks. A technique known as polymorphic phishing involves the use of AI to randomize elements of fraudulent emails, such as sender names and subject lines. This helps evade detection systems trained on patterns found in blanket email campaigns.

Interestingly, Google has long highlighted the capabilities of Gemini in assisting with cybersecurity efforts. Gemini plays a crucial role in Google’s Threat Intelligence platform, designed to provide users with a more comprehensive understanding of the threat landscape and smarter insights into potential attacks.

Latest Posts