jFriedensreich

If I read this correctly, it's completely absurd. Secrets should never even touch an agent's sandbox: not as a file, not as an env var, not as anything. Agents can only be allowed to reach services via proxies that handle secrets, permissions, and auditing completely transparently, and the agents don't even get secrets to access these proxies; they authenticate with their own identity, e.g. with client certificates. I am not aware of any other method that could work. The proxies obviously also cannot be reachable outside the direct connection, so if agents exfiltrate their identity and proxy setup somehow, the usefulness outside is zero.
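A minimal sketch of what that proxy pattern could look like: the sandboxed agent sends requests with no real credential, and the proxy (running outside the sandbox) strips anything the agent attached and injects the real key before forwarding. All names here (`UPSTREAM`, `REAL_API_KEY`) are illustrative assumptions, not from any real project.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://api.example.com"  # hypothetical upstream service
# The real credential lives only in the proxy process, never in the sandbox.
REAL_API_KEY = os.environ.get("REAL_API_KEY", "proxy-only-secret")

def inject_auth(headers: dict) -> dict:
    """Drop whatever the agent sent and attach the real credential."""
    clean = {k: v for k, v in headers.items() if k.lower() != "authorization"}
    clean["Authorization"] = f"Bearer {REAL_API_KEY}"
    return clean

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # The agent never sees REAL_API_KEY; it only sees this proxy endpoint.
        req = Request(UPSTREAM + self.path, headers=inject_auth(dict(self.headers)))
        with urlopen(req) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(body)

# To run: bind to an address reachable only from the sandbox's network,
# e.g. HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```

Auditing and per-agent permission checks would slot into the same handler; the key point is that the secret and the agent are on opposite sides of the process boundary.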

jossclimb

This sounds like the approach the nono project took: it injects a phantom token, so the sandboxed agent never gets to see the real key; it only gets a session-scoped, time-limited dummy key. https://nono.sh/docs/cli/features/credential-injection
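A toy illustration of the phantom-token shape (not nono's actual implementation, just the idea as described): the sandbox is handed a dummy token with a TTL, and an interposition layer outside the sandbox swaps it for the real key on egress. `REAL_KEY` and the helper names are made up for the sketch.

```python
import secrets
import time

REAL_KEY = "sk-real-api-key"  # held outside the sandbox (illustrative value)
_phantoms: dict = {}          # phantom token -> (real key, expiry timestamp)

def mint_phantom(real_key: str, ttl_seconds: float = 900) -> str:
    """Session-scoped dummy token that the sandboxed agent is allowed to see."""
    phantom = "phantom-" + secrets.token_urlsafe(16)
    _phantoms[phantom] = (real_key, time.monotonic() + ttl_seconds)
    return phantom

def swap(phantom: str) -> str:
    """Called by the interposition layer on egress: phantom in, real key out."""
    real_key, expiry = _phantoms[phantom]
    if time.monotonic() > expiry:
        raise PermissionError("phantom token expired")
    return real_key
```

If the agent exfiltrates the phantom, it's worthless outside the session: it only maps to the real key inside the interposition layer, and only until it expires.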

rossjudson

Can create security risk "if you're not careful"?

The security risk is created whether you're careful or not. The best you can do is reduce the size of the fresh attack surface you're creating.

https://infisical.com/blog/secure-secrets-management-for-cur...

tanbablack

This is a really important area to tackle. Secret management for AI agents is something most teams are ignoring right now.

One adjacent risk worth noting: the URLs these agents visit during research. Even with proper secret management, if an agent browses a poisoned page during research, the injected instructions could override its behavior before secrets ever come into play.
