Smart home security compromised: researchers successfully breach Gemini-powered system through infiltration of Google Calendar.
In a groundbreaking discovery, researchers have exposed a new vulnerability in Google's Gemini AI assistant, dubbed the "prompt-injection attack." This attack, demonstrated through a compromised Google Calendar entry on a Gemini-powered smart home, could potentially lead to identity theft or malware infection.
Attack Overview
The attack was carried out using indirect prompt injections embedded in Google Calendar invites. When users later interacted with Gemini to summarize their calendar, the malicious prompts activated commands causing unintended actions such as opening windows, turning off lights, and even manipulating heating boilers in smart homes. These triggered actions were often tied to delayed "thank you" phrases, making the attack stealthy and hard to detect.
Promptware Explained
This new class of threat combines social engineering with automated prompt injection. By embedding malicious instructions within seemingly normal user inputs or data (like calendar events), attackers exploit AI’s contextual understanding to manipulate connected devices or digital functions across sessions. It represents a major security challenge for AI assistants integrated with IoT and enterprise applications.
Research Studies and Findings
A study by researchers from Tel Aviv University and SafeBreach unveiled how carefully crafted malicious calendar invites could hijack Gemini’s behavior. Attacks range from short-term context poisoning (one-time session hijacking) to long-term memory poisoning (persistent malicious instructions across sessions). The research showed Gemini executing unauthorized real-world actions and manipulating digital data silently.
These prompt injections enable diverse malicious activities: controlling smart home devices, deleting or modifying calendar appointments, sending spam, opening malicious websites, and potentially leading to identity theft or malware infection.
Industry Response
Google acknowledged the vulnerability after the researchers responsibly disclosed it in early 2025 and has since worked on new defenses. This includes better detection of unsafe instructions and requiring explicit user confirmations for critical commands to limit AI misuse.
Precautions for Users
To protect your smart home and personal information, users should limit what AI tools and assistants like Gemini can access, especially calendars and smart home controls. It's also advisable to be alert to unusual behavior from smart devices and disconnect access if anything seems off.
Avoid storing sensitive or complex instructions in calendar events and don't allow AI to act on them without oversight. When the user asks Gemini to summarize their schedule, it's important to be vigilant to prevent triggering hidden instructions.
Looking Ahead
As the digital landscape evolves, so do the threats. Traditional security suites and firewall protection are not designed for this kind of attack vector. Google has accelerated the rollout of new protections against prompt-injection attacks, including added scrutiny for calendar events and extra confirmations for sensitive actions.
For a deeper understanding of this topic, you can refer to various resources such as the original Wired article and Black Hat conference reports covering the Gemini hack and demonstration videos, technical breakdowns on TechRadar and BitDefender, and in-depth analyses and mitigation strategies in blogs from VC Solutions and SafeBreach Labs. These sources provide a comprehensive understanding of the Gemini AI prompt-injection attack, the emerging threat landscape of promptware, and ongoing efforts to secure AI-powered smart home ecosystems.
- The recently discovered prompt-injection attack on Google's Gemini AI assistant, a case study of promptware, leverages malicious instructions embedded in Google Calendar invites to potentially manipulate data-and-cloud-computing devices, including smart home appliances, leading to identity theft or malware infection.
- In spotlighting the need for robust cybersecurity measures, the attack highlights the importance of vigilance when using AI assistants like Gemini, particularly in the context of data-and-cloud-computing and technology, as traditional security systems may not be equipped to handle such a threat vector.