Frequent corrections I make to LLM code
Here I'm collecting some tendencies I often correct in LLM-generated code.
First, some caveats:
- LLM-generated code changes a lot over time, between users, and (especially) across environments. I've seen these patterns often enough to flag them, but I can only guess at why I see them and whether others will see the same ones.
- This isn't intended as a criticism of LLM-generated code. Insofar as these tendencies reflect mistakes, the mistakes are primarily mine, for not guiding the LLM better.
Now, the list (a short sketch of each pattern follows it):
- Overreliance on mocks in tests.
- Over-eagerness to construct elaborate fallbacks.¹
- Over-eagerness to write brittle, one-off scripts to test changes that should already be testable through the existing test suite.
- Generating lots of one-off types for entities that are already expressible with existing types (or that do not require type definitions at all).
- Importing things in the middle of functions, especially tests, even when the import is cheap.
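To make these concrete, here are minimal sketches of each pattern next to the corrected form. All names in them (`compute_total`, `FakePriceService`, and so on) are hypothetical. First, the mock-heavy test:

```python
from unittest import mock

def compute_total(service, items):
    """Sum the prices of items via a price-lookup service."""
    return sum(service.get_price(item) for item in items)

# The mock-heavy version: it mostly verifies call wiring, not behavior.
def test_total_mocked():
    service = mock.Mock()
    service.get_price.return_value = 10
    assert compute_total(service, ["apple"]) == 10
    service.get_price.assert_called_once_with("apple")

# Often preferable: a lightweight fake that exercises the real logic.
class FakePriceService:
    def __init__(self, prices):
        self.prices = prices

    def get_price(self, item):
        return self.prices[item]

def test_total_with_fake():
    service = FakePriceService({"apple": 10, "pear": 5})
    assert compute_total(service, ["apple", "pear"]) == 15
```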
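The elaborate-fallback pattern, again as an invented sketch; the strict version follows the footnote's advice to fail loudly when you control the callers:

```python
import os

# The fallback-laden version: it silently papers over a missing or
# malformed setting, hiding the real problem.
def load_timeout(config):
    try:
        value = config.get("timeout")
        if value is None:
            value = os.environ.get("TIMEOUT", "30")
        return float(value)
    except (TypeError, ValueError):
        return 30.0  # silent default

# When you control the callers: fail loudly and fix them instead.
def load_timeout_strict(config):
    return float(config["timeout"])  # KeyError/ValueError point at the bug
```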
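For the one-off-script pattern, the fix is usually to fold the ad-hoc check into the existing suite (`parse_version` is a hypothetical function under test):

```python
# An LLM might write a throwaway check_parse.py and ask you to eyeball
# its output:
#
#     print(parse_version("1.2.3"))
#
# Folding the same check into the test suite keeps it running forever.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
```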
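The one-off-type pattern, sketched with an invented `User` dataclass standing in for an existing type:

```python
from dataclasses import dataclass

@dataclass
class User:  # the existing type
    name: str
    email: str

# The one-off wrapper an LLM might generate for a single call site...
@dataclass
class UserNameAndEmail:
    name: str
    email: str

def get_contact_wrapped(user: User) -> UserNameAndEmail:
    return UserNameAndEmail(user.name, user.email)

# ...when a plain tuple (or User itself) already expresses it.
def get_contact(user: User) -> tuple[str, str]:
    return user.name, user.email
```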
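And the buried-import pattern, using the standard library's `json` so the sketch is self-contained:

```python
import json  # preferred: the dependency is visible at the top of the module

# The pattern to correct: an import buried in the function body. It works,
# but it hides the dependency and gets duplicated across functions.
def test_roundtrip_with_buried_import():
    import json as json_again
    assert json_again.loads(json_again.dumps({"a": 1})) == {"a": 1}

# The same test, relying on the module-level import.
def test_roundtrip():
    assert json.loads(json.dumps({"a": 1})) == {"a": 1}
```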
All of these are good candidates for nudges in AGENTS.md, CLAUDE.md, or whatever other steering mechanism your coding environment provides.
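As one possible phrasing (a sketch, not a recommended standard), a nudge file might include entries like:

```markdown
- In tests, prefer real objects or lightweight fakes over mocks.
- Do not add fallback paths for inputs we control; fail loudly instead.
- Extend the existing test suite instead of writing one-off check scripts.
- Reuse existing types; do not define new ones for a single call site.
- Put all imports at the top of the module.
```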
¹ I like Postel's Law as much as (actually, much more than) the next person, but anticipating too many different kinds of inputs inflates the code and often introduces subtle bugs. When you control the code that's calling the module in question, it's often better to fail loudly and fix the callers.