A few days ago I wrote that my experimental spaced-repetition software uses a very primitive scheduler. I was and am resisting the urge to improve it, even though there are several clear routes to doing so.
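To make "very primitive" concrete, here is one way such a scheduler might look. This is a hypothetical sketch, not the actual code: the `Card` fields and the double-on-success, reset-on-failure rule are my own illustration of roughly the level of simplicity being described.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: int = 1                      # days until next review
    due: date = field(default_factory=date.today)

def review(card: Card, remembered: bool, today: date) -> None:
    """Double the interval on success, reset to one day on failure."""
    card.interval_days = card.interval_days * 2 if remembered else 1
    card.due = today + timedelta(days=card.interval_days)

def due_cards(cards: list[Card], today: date) -> list[Card]:
    """A card is shown whenever its due date has arrived."""
    return [c for c in cards if c.due <= today]
```

The whole scheduler fits in a dozen lines, and every obvious improvement (per-card ease factors, fuzzed intervals, graded recall) is exactly the kind of barely-more-complicated step the rest of this post is wary of.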
My reasons for that are more general than I suggested. The step from the absolute minimal implementation to a barely-more-complicated one is dangerous. It reminds me of Billy Collins claiming that going from the title to the first lines of a poem is like stepping into a canoe: "a lot of things can go wrong."
Why is the seemingly innocuous process of going from the minimum viable algorithm to the next-most-intricate one so hazardous?
- It's that much harder to debug.
- Readers of the code can no longer tell which parts are the essential core and which are incremental.
- The best solution is likely to require a totally different approach, and the first implementation is liable to blind you to it. (This effect is a lot more powerful than I suspected.)
- Someone becomes the person who implemented the barely-more-complicated thing, and the code gets associated with them.
- Dependencies get pulled in. Like kudzu.
- Accidental features of the implementation get encoded as success metrics for the implementation.
- It's not encapsulated or documented properly (especially if it's done in a rush, as these things tend to be), so it's hard to understand or change.
- Given two things, one slightly more complicated than the other, data gathered about the more complicated one is often much harder to interpret.
I've read that when you start a hobby, the first gear you buy should be the cheapest reasonable option, and your next purchase (should the hobby stick) should be the best you can afford. Most of my hobbies don't require gear like that, but in my limited use of that advice, it's served me well. Algorithms and subsystems seem to be like that, too.