← BackJan 4, 2026

Eurostar AI Chatbot: Guardrail Bypass, Prompt Injection, and XSS Vulnerabilities – A Case Study in Secure LLM Deployment

Eurostar’s public AI chatbot was found to contain four critical weaknesses: a guardrail bypass that allowed prompt injection, leakage of system prompts, HTML injection leading to self‑XSS, and unverified conversation/message IDs that could enable cross‑user attacks. The findings highlight that even well‑intentioned LLM integrations still expose classic web and API flaws, and that robust guardrail enforcement, input validation, and signed message context are essential. The disclosure experience demonstrates the importance of a clear vulnerability disclosure program and ongoing security monitoring for AI‑powered interfaces.

Eurostar’s public AI chatbot, introduced to streamline travel inquiries, appears at first glance to be a typical bot that prompts “The answers in this chatbot are generated by AI.” However, a closer inspection revealed far more complex behaviour: the system accepted free‑form text, returned responses with embedded HTML, and enforced a custom guard‑rail layer. **Architecture Overview** The chatbot operates as a client‑side single‑page application that posts the full chat history to a REST endpoint at https://site‑api.eurostar.com/chatbot/api/agents/default. Each request contains every user and bot message, along with a `guard_passed` flag (PASSED, FAILED, UNKNOWN) and, when permitted, a cryptographic signature. The backend verifies the most recent message’s signature before forwarding the request to an LLM; if the guard fails, it returns a uniform refusal string with no signature. **Security Findings** 1. **Guardrail Bypass** – The backend validates only the signature of the latest message. Earlier messages are accepted verbatim, allowing an attacker to craft a harmless final prompt that passes the guardrail while inserting a malicious earlier message that drives prompt injection. By manipulating a system‑level request, an attacker could obtain the model name and system prompt, effectively bypassing the guardrail layer. 2. **Prompt Injection and Information Disclosure** – With the guardrail bypassed, injected prompts such as “” cause the LLM to reveal internal configuration. A subsequent injection extracted the full system prompt, exposing the bot’s instruction set and link‑generation logic. 3. **HTML Injection / Self‑XSS** – The system prompt instructs the model to embed HTML links (e.g., to the help centre). Because model output is rendered as raw HTML in the chat window, an attacker could instruct the bot to output arbitrary markup, including `