Cognitive debt and optimism
Even after trying to make the best case I can for cognitive-debt doomerism, I'm still very much not a doomer, for reasons I lay out here and here. I've tried to argue against the doomerist view with maximal charity, but there are some less-charitable-sounding impressions that seem to me both (i) true and (ii) important for understanding the conversation.
Ordinarily I keep my grouchier-sounding opinions to myself, but in this case something important seems to me to be missing from the commentary. I fear that early-career programmers, for example, will get a distorted view of things (and underestimate their own potential).
So, here are a few of my grouchy-sounding views:
- The divide between cognitive-debt optimism and doomerism seems to correlate (not perfectly, but positively) with one's tendency to think that humans can accomplish things with software. Software deficiencies (including cognitive debt) are a problem; we are in the business of working as hard as we can to solve such problems; and we now have utterly magical new tools at our disposal. I simply cannot believe that we cannot and will not, collectively, make wonderful cognitive-debt-mitigation tools. (Have you seen modern MacBooks? Have you used uv and ruff? Do you understand how good SQLite is? The best things we make are jaw-droppingly great.[1] Are we really just going to throw up our hands at the problem of cognitive debt and not make ourselves better tools for handling it?) Warnings about cognitive debt very often strike me as expressions of not just pessimism but complacency: with all our knowledge and skill, and a large quota of near-magic, can we really not do anything about this problem?
- A lot of cognitive-debt fearmongering rests, explicitly or implicitly, on the idea that you can evaluate AI by counting the problems in AI-assisted codebases per unit time (whether that's wall-clock time or development time). This is simply a mistake: per unit time we're also getting much more progress and many more results, so problems per unit time is the wrong denominator. That's not to say that there's no conversation to be had here: living software systems include non-software components that are more time-sensitive (e.g., humans). But a lot of cognitive-debt doomerism explicitly or implicitly rests on the naive-counting method.
- As far as I can tell, there's an inverse correlation between cognitive-debt doomerism and both the desire and the propensity to actually make things with software. Again: the correlation is not perfect, and I am emphatically not saying that the doomers are all just wagging their fingers at us from the sidelines. But it is hard not to notice that the doomer voices are disproportionately those who, independently of (and in many cases prior to) AI, are not in the everyday business of actually making software. We need critics, and we also need to know how AI is affecting our non-programming colleagues, and I genuinely want to know how the cognitive-debt situation looks to them. But it's striking to me that the most severe warnings come disproportionately (not always!) from people whose daily work is not the hands-on, direct making of things.
- Many doomerist arguments have compared the cognitive-debt situation in AI-assisted and non-AI-assisted codebases. In many cases, the comparisons they're making seem fictional. Fictions are often useful, and in fact I like the idea of holding our AI-assisted selves to higher standards by semi-fictional means. But, to take one example, pre-AI code review simply was not (in general) a lost paradise of robust, artisanal code improvement.
- It can be easier to sound wise and careful when the point you're making is pessimistic. This is particularly true if you're writing about technology in 2026.
I characterized those views as "grouchy-sounding," but they are primarily optimistic. I look at the industry and see a ton of talent, vision, tenacity, and capacity for making great software, including great software for making software better. I do want the cognitive-debt doomers to keep writing, especially insofar as their views are intellectually honest and at least potentially constructive. But I hope readers will consider that the future of software development, judged against relevant standards and against the real history of the profession, might be quite a lot better than the discourse would have them believe.
[1] These examples are just off the top of my head. If you think other stuff is better, the point stands. (And if you think I don't know about it, please tell me!)