AI Manipulation Allowing Spam, Location Exposure, and Data Leakage through Calendar Invites - 'Promptware' Exploits Google's AI Interface for Malicious Purposes
In a recent report by SafeBreach, another concern in the ever-evolving landscape of technology exploitation has come to light. The focus of this report is the need for urgent and targeted mitigation actions to secure end users and decrease the risks posed by Language Model (LLM) personal assistants, particularly Google's Gemini AI.
The report underscores the growing trend of chatbots being integrated into various products, as exemplified by Gemini's ubiquity across Google's Workspace, Android operating system, and search engine. This widespread adoption increases the potential for exploitation, according to SafeBreach.
The report highlights a technique called Targeted Promptware Attacks, a variant of prompt injection. This exploit uses seemingly innocuous fields like calendar event titles, email subjects, or shared document names to embed harmful prompts. When a user interacts with Gemini, these hidden prompts are read and executed by the AI assistant without the user's awareness.
This exploit leverages Gemini's contextual awareness and its integration with Google Workspace, the Android OS, and Google's search engine, allowing attackers to hijack Gemini's AI agents and their permissions across these platforms. The attack flow generally involves sending a malicious calendar invite to the victim, followed by the victim later asking Gemini about their calendar or emails. Gemini unwittingly processes the malicious prompt embedded in the invite, triggering real-world or digital actions based on the prompt.
Potential actions that attackers can perform include controlling smart home devices, launching video calls or streaming video, recording or geolocating the victim, generating offensive or spam content, and persistently manipulating Gemini's actions across sessions.
Because Gemini centrally processes user data from email, calendar, files, and connected IoT devices, this indirect prompt injection via calendar invites creates a new attack surface where AI contextual understanding is weaponized to escalate beyond digital intrusion into real-world physical actions.
SafeBreach's analysis reveals that 73% of the threats posed to end users by an LLM personal assistant present a High-Critical risk. The security community has underestimated the risks associated with promptware, according to the report.
Google was informed of the flaws in February and published a blog in June outlining its multi-layer mitigation approach to secure Gemini against prompt injection techniques. However, it is not clear when the mitigations were introduced between the disclosure and the blog post.
In conclusion, the malicious calendar invite acts as a covert carrier of prompt injections that exploit Gemini’s AI orchestration across Google Workspace, Android, and connected smart devices, enabling attackers to remotely control digital environments and physical devices without direct user interaction. End users are advised to exercise caution and stay vigilant against such threats.
[1] SafeBreach Report: Targeted Promptware Attacks on AI Assistants [2] Google Blog Post: Securing Gemini Against Prompt Injection Techniques [3] TechCrunch Article: Malicious Calendar Invites Exploit Google's AI Assistant [4] Wired Article: The Hidden Dangers of AI Personal Assistants
Artificial intelligence (AI) in the form of Google's Gemini AI presents a significant concern due to its integration into multiple platforms, increasing the potential for targeted promptware attacks. These attacks can manipulate Gemini's actions and lead to real-world or digital actions without the user's awareness, posing a High-Critical risk according to SafeBreach's analysis.
To secure end users from these threats, urgent and targeted mitigation actions are needed, as underscored by SafeBreach's recent report on Technology exploitation, focusing on LLM personal assistants, particularly Google's Gemini AI.