Companies Deploy Hidden Prompts To Poison AI Memory

Microsoft security researchers reported a growing trend called AI Recommendation Poisoning, where companies embed hidden instructions in 'Summarize with AI' buttons to inject persistent 'remember' prompts into assistants via URL parameters. Over 60 days they identified 50 prompt-based attempts from 31 companies across 14 industries, and Microsoft says it has implemented mitigations in Copilot. The technique can bias recommendations on health, finance, and other critical topics.
Scoring Rationale
High novelty and cross-industry evidence from Microsoft, strongly credible and actionable; official mitigations increase trustworthiness.
Practice with real Logistics & Shipping data
90 SQL & Python problems · 15 industry datasets
250 free problems · No credit card
See all Logistics & Shipping problemsStep-by-step roadmaps from zero to job-ready — curated courses, salary data, and the exact learning order that gets you hired.
Sources
- Read OriginalManipulating AI memory for profit: The rise of AI Recommendation Poisoningmicrosoft.com
- Read OriginalMicrosoft: Poison AI buttons and links may betray your trusttheregister.com


