The Trusted Document Problem: Why Indirect Prompt Injection Is Now Your AI Agent's #1 Security Risk
On April 1, 2026, the Center for Internet Security published a formal report titled Prompt Injections: The Inherent Threat to Generative AI, warning organizations that prompt injection is a serious...

Source: DEV Community
On April 1, 2026, the Center for Internet Security published a formal report titled Prompt Injections: The Inherent Threat to Generative AI, warning organizations that prompt injection is a serious and growing attack vector against any system that routes external content into an LLM. Two weeks earlier, China's CNCERT issued a public advisory about the OpenClaw AI agent, which was found vulnerable to indirect prompt injection attacks capable of silently exfiltrating API keys and private conversation logs — with researchers identifying more than 21,000 publicly exposed vulnerable instances as of January 2026, and no malicious user interaction required to trigger the attack. The attack vector was not a jailbreak in a chat window. It was instructions hidden inside documents the agent was asked to process. These are not isolated events. They are the leading edge of a threat pattern that has been accelerating since AI agents gained tool access at scale. Indirect prompt injection is a class o