AI Model's Default Setting Exposes Users to History's First "Zero-Click" AI Assault, Yet No Data Leak Occurs
Brand-New Take:
In aEquestrian revelation for tech community, security experts at Aim Labs have spotlighted a significant threat dubbed 'EchoLeak'. This vulnerability, exclusively zero-click in nature, targeted the cutting-edge AI agent within Microsoft 365 Copilot, potentially allowing cyber predators to pilfer users' sensitive data without a peep.
Microsoft collaborated with Aim Labs after receiving their findings, leading to the tech behemoth assigning the vulnerability the identifier CVE-2025-32711. With this, EchoLeak became the initial known zero-click attack to compromise an AI agent (as reported by Fortune).
Back in January, the researchers disclosed their findings to Microsoft. Subsequently, the tech company deemed the vulnerability critical and kicked off work to resolve the issue, which was later completed in May. Notably, Microsoft refrained from requiring any user actions for resolving this issue, indicating no real-world exploitation by miscreants.
According to the researchers, EchoLeak represented a violation of LLM scope, giving wrongdoers the capability to manipulate the AI model to access sensitive data. This sensitive information included but was not limited to chat histories, OneDrive documents, SharePoint content, Teams conversations, and more.
However, Gruss, Microsoft's executive, noted that Microsoft Copilot's default configuration might have exposed most organizations to malicious onslaughts before the issue was remedied. Despite this, Gruss suggested that no customers were adversely affected by the vulnerability.
A Microsoft representative responded to the issue, stating:
Stay in the Know:
Get our website's newsletter delivering the latest news, reviews, and guides for Windows and Xbox enthusiasts.
"We owe a debt of gratitude to Aim Labs for spotting and responsibly reporting this issue so it could be addressed before our customers encountered any impact."
Microsoft took steps to address the vulnerability, updating its products to bolster Microsoft 365 Copilot's security. Enhancements were made to both the runtime defenses and overall transparency. Here's a look at some of the key security improvements:
- Secure by Default and Secure by Design: Microsoft Copilot Studio now diligently adheres to these core security principles. This includes Out-of-box Cross-Prompt Injection Attack (XPIA) protection, which safeguards against injection attacks by offering real-time monitoring and intervention during an agent's runtime. Additionally, Microsoft Copilot Studio provides visibility into each AI agent's threat protection status, authentication requirements, and applicable security policies, enhancing security confidence for developers building agents.
- Comprehensive Visibility and Monitoring: Introduced by Microsoft were near-real-time monitoring and quick detection of potential security breaches within custom AI agents through Audit Logs for Jailbreak/XPIA Events. Administrators gained deeper insight into security events, facilitating rapid response and compliance management.
- Data Loss Prevention (DLP) Integration Across Apps: Extending Microsoft Purview Data Loss Prevention to Copilot across core productivity apps (Word, Excel, PowerPoint) is instrumental in preventing data oversharing. This means Copilot will block actions such as summarizing or content generation in documents marked with sensitive labels when DLP rules exclude them from AI processing. Moreover, chat interactions with Copilot will be curbed in such scenarios, diminishing the risk of sensitive data leakage through the AI assistant.
This rollout began as a public preview in May 2025 and is expected to hit the mainstream market soon, enabling organizations to protect their valuable information while seamlessly leveraging AI.
Time will tell how Microsoft tackles security challenges for its AI tools, particularly in the wake of former Microsoft security architect Michael Bargury demonstrating 15 ways to breach Copilot's security guardrails.
- Microsoft, in collaboration with Aim Labs, updated Microsoft 365 Copilot's software to strengthen its cybersecurity, particularly addressing the EchoLeak vulnerability discovered earlier.
- The update to Microsoft 365 Copilot includes enhanced runtime defenses and improved transparency, such as the Secure by Default and Secure by Design principles, Out-of-box Cross-Prompt Injection Attack (XPIA) protection, and visibility into each AI agent's threat protection status.
- With the update, Microsoft Copilot Studio will now provide comprehensive visibility and monitoring, as well as Data Loss Prevention (DLP) integration across apps, to prevent data oversharing and sensitive data leakage through the AI assistant.
- Teams utilizing Microsoft 365 Copilot will benefit from these security improvements, as the rollout began as a public preview in May 2025 and is expected to reach the mainstream market soon, ensuring organizations can protect their sensitive data while harnessing the power of AI technology.