Nate Meyvis

Notes on using generative AI securely

These claims are not new, but some seem to me under-discussed.

  1. You often do better restricting what Claude[1] can do by means other than manipulating its settings directly. I made a separate AWS profile, with custom rules, for Claude.[2] Restricting Claude this way is cleaner, easier, and more accurate than curating a Bash allowlist within Claude's settings.
  2. More generally, thinking in terms of the broader system (not just Claude's intrinsic abilities and settings) will help you find the right security posture with Claude.
  3. This includes monitoring, observability, and recovery systems. It makes sense to invest in these earlier in a project than you otherwise might: you can recover more easily if Claude overreaches, and Claude itself makes these systems much easier to set up.
  4. Human employees provide useful analogies. Many companies give robust permissions to employees even when they know that mistakes happen (or even that some are bad actors).
  5. But that trust comes with, again, serious investment in permissions, monitoring, and recovery systems. Do not let Claude make unrestricted HTTP requests just because your employer lets you use the Internet.
  6. Human alert fatigue is a security risk in any human-Claude collaboration. A robust, sensible allowlist minimizes this.
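A minimal sketch of the kind of allowlist I have in mind, in Python. The programs and subcommands listed here are illustrative assumptions, not a recommended policy:

```python
import shlex

# Illustrative allowlist: programs Claude may invoke, plus optional
# restrictions on their first argument. Entries are examples only.
ALLOWED = {
    "git": {"status", "diff", "log"},  # read-only subcommands only
    "ls": None,                        # None means any arguments are fine
    "cat": None,
}

def is_allowed(command: str) -> bool:
    """Return True if a shell command passes the allowlist."""
    try:
        parts = shlex.split(command)
    except ValueError:
        return False  # reject anything we cannot even parse
    if not parts:
        return False
    program, args = parts[0], parts[1:]
    if program not in ALLOWED:
        return False
    subcommands = ALLOWED[program]
    if subcommands is None:
        return True
    return bool(args) and args[0] in subcommands
```

Keeping the list short and predictable is what fights alert fatigue: every rejection stays meaningful, so the human keeps reading the alerts.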
  [1] Claude is my tool of choice, and "Claude" is easy to type and read. Most of this applies to any analogous tool.

  [2] AWS supports this brilliantly, but many relevant environments have fine-grained permissions systems.
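To make the AWS approach concrete: one way to bound a dedicated Claude profile is to attach an IAM policy scoped to read-only actions on a single bucket. This snippet just builds such a policy document; the bucket name and action list are hypothetical examples, not what I actually use:

```python
import json

def claude_readonly_policy(bucket: str) -> str:
    """Return an IAM policy JSON allowing read-only access to one S3 bucket.

    The actions and resources here are illustrative; scope yours to
    whatever the project actually needs.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

Attach a policy like this to the IAM user or role behind a separate named profile (created with `aws configure --profile claude`, say), and Claude's reach is bounded no matter what its own settings permit.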

#Claude #generative AI #psychology of software #security #software