SafeBrowse: A Trust Layer for AI Browser Agents (Prevent Prompt Injection & Data Exfiltration)
Source: DEV Community
If your agent can browse the web, download files, connect tools, and write memory, a stronger model is helpful, but it is not enough. I built SafeBrowse to sit on the action path between an agent and risky browser-adjacent surfaces. It does not replace the planner or the model. Instead, it evaluates what the agent is trying to do and returns typed verdicts like ALLOW, BLOCK, QUARANTINE_ARTIFACT, or USER_CONFIRM.

The short version: your model decides what it wants to do. SafeBrowse decides what it is allowed to do.

Today, the Python client is live on PyPI as safebrowse-client, and the full project is here:

- GitHub: https://github.com/RobKang1234/safebrowse-sdk
- PyPI: https://pypi.org/project/safebrowse-client/

Why I built this

A lot of agent safety discussion still sounds like "just use a better model" or "add more prompt instructions." That helps, but it does not solve the actual runtime problem. A browsing agent can still get into trouble through:

- prompt injection hidden in normal web pages
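To make the verdict idea concrete, here is a minimal sketch of the pattern: a gate on the action path that maps a proposed action to one of the typed verdicts named above. The function name, the toy policy rules, and the example domains are all illustrative assumptions for this post, not the actual safebrowse-client API or SafeBrowse's real policy.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    QUARANTINE_ARTIFACT = "quarantine_artifact"
    USER_CONFIRM = "user_confirm"

# Toy blocklist for illustration only; real policy is far richer.
BLOCKED_DOMAINS = {"evil.example"}

def evaluate_action(action: str, target: str) -> Verdict:
    """Hypothetical gate: return a verdict for a proposed agent action.

    This is not the safebrowse-client API, just the shape of the pattern:
    the model proposes (action, target); the gate decides what is allowed.
    """
    # Crude host extraction for the sketch.
    host = target.split("/")[2] if "://" in target else target
    if host in BLOCKED_DOMAINS:
        return Verdict.BLOCK
    if action == "download":
        # Hold downloaded files for scanning instead of handing them straight to the agent.
        return Verdict.QUARANTINE_ARTIFACT
    if action == "write_memory":
        # Persistent writes get a human in the loop.
        return Verdict.USER_CONFIRM
    return Verdict.ALLOW
```

In use, the agent's executor would call the gate before every side-effecting step, e.g. `evaluate_action("download", "https://ok.example/report.pdf")` returns `Verdict.QUARANTINE_ARTIFACT`, so the file is parked rather than opened. The key design point is that the verdict is typed, so the executor can branch on it mechanically instead of parsing free-form model output.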