AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

Update: 2025-06-12 17:04 IST
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
  • whatsapp icon

In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails.

The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn’t need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios.

Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant.

How EchoLeak Worked

Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI’s models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats.

EchoLeak took advantage of this feature. Here’s a breakdown of the exploit process:

  • A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message.
  • When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it’s relevant.
  • The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image.
  • As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker’s server.

Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft’s Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers.

The Bigger Concern: LLM Scope Violations

The vulnerability isn’t just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands.

“This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,” Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot.

Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability.

Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,” their report concludes.

As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

Tags:    

Similar News