Research · agentic systems · safety evaluation · software engineering
MIT Study Finds Agentic AI Safety Gaps
Relevance Score: 8.3
MIT-led researchers analyze 67 deployed agentic AI systems and find widespread gaps in safety disclosure. Around 70% provide documentation and nearly half publish code, but only about 19% disclose formal safety policies and fewer than 10% report external safety evaluations. The authors warn that as agents gain autonomy and handle emails, files, and transactions, public transparency about testing and guardrails has not kept pace.


