YAGNI in 2026
YAGNI, the principle that you shouldn't pre-emptively build things you might need later, has long served me well.[1] Here are my best guesses for why:
- We're usually wrong when we anticipate future uses of software. (By "future use" I don't mean, e.g., "this system is designed to process orders but the integration isn't turned on yet." I mean "maybe we'll need to process refunds one day, so let's build the order-ingestion system to handle refunds too.")
- Even when we're right about future uses, there are often so many small details we don't know in advance that pre-emptive implementations tend to be worse than useless.
- If you encapsulate and modularize your code properly, building the thing later will not be much harder, and will often be easier, than building it now.
- Handling more things in a system often trades off against handling fewer things more faithfully. It's usually wrong to do current, real things worse in order to do imagined, future things better.
- Building things you don't need yet often distracts you from the hard and necessary work of modeling the current, real use cases.
I suspect YAGNI applies even better to work in 2026 than it did before:
- A reasonable argument for building things you don't yet need is that your developers are much better equipped to work in the relevant part of the codebase now. (Later, they're likely to be focused on something else, and it can be hard to re-learn the relevant context.) AI-assisted coding, however, eliminates most of the cost of re-learning a codebase.
- The marginal cost of building the new thing later is, thanks to AI, much lower than it used to be...
- ...and AI tools are improving so fast that you're likely to get a better, faster implementation later than you would now.
- AI tools do worse when they're building anticipated future things (or so it seems to me). Put another way: the human tendency to do better work on immediate, real problems than imagined future ones is not mitigated, and is probably amplified, by AI assistance.
There are plenty of real, current problems in our software; we don't need speculative work to keep us busy. The case for deferring what you don't yet need is, I think, stronger than ever.
[1] That's not to say I know it to be true. Dogmatism has not served me well.