Notes on using generative AI securely
These claims are not new, but some seem to me under-discussed.
- You often do better restricting what Claude can do by means other than manipulating Claude's settings directly. For example, I made a separate AWS profile, with custom rules, for Claude to use. Restricting Claude this way is cleaner, easier, and more precise than curating a Bash allowlist within Claude's settings.
- More generally, thinking in terms of the broader system (not just Claude's intrinsic abilities and settings) will help you find the right security posture with Claude.
- This includes monitoring, observability, and recovery systems. It makes sense to invest in these earlier in a project than you otherwise would: you can recover more easily if Claude overreaches, and Claude itself makes such systems much easier to set up.
- Human employees provide useful analogies. Many companies grant employees broad permissions even knowing that mistakes happen (or even that some employees are bad actors).
- But this comes with, again, serious investment in permissions, monitoring, and recovery systems. Do not let Claude make unrestricted HTTP requests just because your employer lets you use the Internet.
- Human alert fatigue is a security risk in any human-Claude collaboration. A robust, sensible allowlist minimizes this.
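As a sketch of the separate-AWS-profile approach above: a named profile that assumes a narrowly scoped role, whose attached policy allows only what Claude needs. The profile name, account ID, role name, bucket, and permitted actions below are all illustrative assumptions, not a recommended policy.

```ini
# ~/.aws/config — a hypothetical named profile for Claude
[profile claude]
region = us-east-1
role_arn = arn:aws:iam::123456789012:role/claude-restricted
source_profile = default
```

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

The enforcement then lives in AWS itself: whatever commands Claude composes, the assumed role cannot exceed the policy.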
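One cheap version of the monitoring investment above is an audit trail of every command run on Claude's behalf. A minimal Python sketch, assuming a hypothetical log file name (in practice the log would live somewhere Claude cannot modify):

```python
import datetime
import subprocess

# Hypothetical append-only audit log; illustrative name only.
LOG_PATH = "claude_audit.log"

def run_logged(cmd):
    """Run a command on Claude's behalf, recording it first for later review."""
    with open(LOG_PATH, "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} {' '.join(cmd)}\n")
    return subprocess.run(cmd, capture_output=True, text=True)

result = run_logged(["echo", "recoverable"])
```

Even this trivial version pays off at recovery time: you know exactly what was run, and when, before deciding what to roll back.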
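The HTTP restriction above can be as simple as an explicit host allowlist checked before any request is made. A minimal sketch, with illustrative host names (the allowlist itself is an assumption, not a recommendation):

```python
from urllib.parse import urlparse

# Hypothetical set of hosts Claude may contact; names are illustrative.
ALLOWED_HOSTS = {"api.github.com", "docs.python.org"}

def is_allowed(url):
    """Permit only HTTPS requests to explicitly allowlisted hosts."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(is_allowed("https://api.github.com/repos"))  # True
print(is_allowed("http://api.github.com/repos"))   # False: not HTTPS
print(is_allowed("https://evil.example.com/x"))    # False: not allowlisted
```

A short, deliberately chosen allowlist like this also addresses the alert-fatigue point: requests outside it are rare enough that each denial is worth a human look.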