The case for worrying about AI-specific cognitive debt
I like to divide the space of opinions about cognitive debt like this:
- Agentic coding leads to significantly more cognitive debt than traditional coding, and that's a problem.
- Agentic coding leads to significantly more cognitive debt than traditional coding, but that's not a problem.
- Agentic coding and traditional coding have basically analogous cognitive-debt profiles.
This is a very rough division, and you could refine and subdivide these positions endlessly. But it's the division I keep coming back to.
I'm on Team 3, for reasons I explain here and here.1 But there's no wrong time to look at all sides of an important question, especially when things are unusually new and uncertain.
So: what are the likeliest reasons that "Team 1" (call it "cognitive-debt doomerism") would turn out to be more correct than my view?
- Even as AI makes us more competent, it pushes us to work at the edges of our competence. The areas in which we're barely competent (or, perhaps, just competent enough to be dangerous) shift, but they still exist. Arguably, conditional on someone's clearing minimum competence in a given domain, their competence in that domain will tend to be lower in an AI-first world. Doing lots of work in lower-competence areas makes for cognitive debt.
- Cognitive-debt optimists ("Team 2" and "Team 3" above, in different ways) like to say that AI will also give us tools to overcome cognitive debt and, in time, encourage us to work in ways that minimize it. But experience tells us that the industry, overall, does not easily adopt cognitive-debt-mitigation techniques, even when they're well-known and relatively inexpensive. As a matter of organizational behavior, it will be much easier to scale the accumulation of cognitive debt than its mitigation.
- The same as above, but replacing organizational behavior with individual psychology. The impulse to make more and more, faster and faster, is simply too strong. The impulse to scrutinize our work, and to really want to find mistakes in it, is simply too weak.
- LLM outputs are much stronger at small scales than at large ones; their attempts at macro-level coherence are superficial unless they can lean on structures as rigid as, say, those of a math proof. Better models and tooling won't much change these fundamental structural facts. Codebases are more like gardens or works of art than math proofs, so agentically engineered codebases will be fundamentally less intelligible. That will lead to more cognitive debt.
- Tooling and process improvements alone can't mitigate cognitive debt; that takes a combination of training, mentorship, and long experience with the code. AI will push us away from all three and cause a shortage of engineers who can understand codebases at all, let alone engineers who actually understand some specific codebase.
To be clear: I think those arguments are full of false claims, and I don't think any combination of them gives good reason for cognitive-debt pessimism. But that's my current best attempt at a suite of pessimistic arguments.