Agentic coding and microservices
Here's a brief argument from Ben Borgers that AI pushes us to use microservices, and that this is mostly1 a good thing. I'd summarize the argument thus:
- LLMs work vastly better with strong encapsulation;
- Microservices are the surest way to get strong encapsulation;
- So LLMs push us that way.
LLMs surely do work better with strong encapsulation. I've never met a person, agent, or team that didn't work better with proper encapsulation. As I've written about many times (e.g., here), encapsulation is still the most underrated concept in software.
Precisely because benefiting from encapsulation is not new and not specific to LLMs, I doubt that LLMs either do or should push us toward microservices. In fact, I've recently been struck by how AI has been pushing me to make several of my projects more monolithic, for a few reasons:
- Microservices have always had problems when different services need access to the same data. To extend Borgers' central example: an image-generation microservice might start with dedicated computational resources and image-related data, but eventually you'll want to share data across services. So, for example, you'll want to access user data during the request ("I don't know whether I can do this unless I know what jurisdiction the user is in and whether they're a minor") or give other services access to this service's data ("image search will work 10x as efficiently if it uses information in your database").2 Using AI does not prevent these problems, and indeed can confront you with those problems more quickly.3
- It's usually faster to deploy a monolith than it is to deploy all the microservices, and rapid AI-assisted development benefits from faster deployments.4
- You can get all sorts of coupling, some of it quite subtle, even if you use microservices; conversely, you can enforce tight encapsulation boundaries even in a monolith. That a boundary is a service-to-service boundary is neither necessary nor sufficient to achieve encapsulation.
- LLMs are quite happy--at least in my experience--to add more functionality to a service; they quite rarely say "let's define a new service for this." Moreover, a strongly worded AGENTS.md (and frequent reminders) is often enough to get well-encapsulated subsystems with AI, whether or not you have microservices.
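For concreteness on that last bullet, here's the kind of strongly worded AGENTS.md guidance I have in mind (the wording is mine, not from any particular project):

```markdown
# AGENTS.md (excerpt)

- Each subsystem lives in its own top-level directory and exposes exactly ONE
  public module; NEVER import another subsystem's internals.
- If a change requires reaching across a subsystem boundary, STOP and propose
  an interface change instead of doing it inline.
- Do not read from or write to another subsystem's tables directly.
```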
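To make the data-sharing problem from the first bullet concrete, here is a minimal sketch. All the names (`UserRecord`, `can_generate_image`, and so on) are mine and purely illustrative, not from Borgers' piece: the same policy check is a direct in-process lookup in a monolith, but becomes a cross-service call, with the availability and schema coupling that implies, once user data lives in another service.

```python
# Illustrative sketch (names are mine, not Borgers'): the same policy check,
# first as a direct in-process lookup, then behind a cross-service call.

from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    jurisdiction: str
    age: int

# In a monolith, the image-generation path can consult user data directly.
USERS = {
    "u1": UserRecord("u1", "DE", 17),
    "u2": UserRecord("u2", "US", 30),
}

RESTRICTED = {"DE"}  # pretend these jurisdictions restrict generation for minors

def can_generate_image(user_id: str) -> bool:
    user = USERS[user_id]
    return not (user.jurisdiction in RESTRICTED and user.age < 18)

# Across microservices, the same check needs a network hop (stubbed here), so
# the image service now depends on the user service's availability and schema.
def fetch_user_from_user_service(user_id: str) -> UserRecord:
    return USERS[user_id]  # stand-in for an HTTP/gRPC call to a separate service

def can_generate_image_distributed(user_id: str) -> bool:
    user = fetch_user_from_user_service(user_id)
    return not (user.jurisdiction in RESTRICTED and user.age < 18)
```

The check itself is identical either way; what changes is how much machinery sits between the image service and the user data it needs.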
So, to my surprise, I've found myself working more monolithically, if anything, in an AI-first world. Or, at least, my projects are more monolithic than I expected them to be two months ago.
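On the "encapsulation without microservices" point above: inside a monolith you can still draw a hard boundary by exposing a single narrow facade and keeping everything else private. A sketch, with hypothetical names:

```python
# Illustrative sketch: a hard boundary inside a monolith. The rest of the
# codebase imports only ImageSubsystem; _ImageStore is private by convention
# (and that convention can be tool-enforced with import-boundary linters).

class _ImageStore:
    """Private implementation detail; nothing outside this module touches it."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

class ImageSubsystem:
    """The narrow facade: the only name other parts of the monolith may use."""

    def __init__(self) -> None:
        self._store = _ImageStore()

    def save_image(self, name: str, data: bytes) -> None:
        self._store.put(name, data)

    def load_image(self, name: str) -> bytes:
        return self._store.get(name)
```

The boundary here is exactly as strong as your discipline (or tooling) makes it, which is the point: the service-to-service hop was never what did the encapsulating.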
For further reading, here is the always-excellent Dan Luu on monoliths vs. microservices (from the pre-LLM era, but still quite relevant).
I could be wrong about exactly what Borgers thinks is good and bad: most of the piece discusses the benefits of microservices in an AI world, and there's a paragraph about what goes wrong when you have "dozens" of them.↩
There are cases where you can avoid these problems: e.g., by passing in relevant user information with every request. You cannot, however, permanently and in general avoid this problem with this technique. I hope to write more about this in a future post, but you can get the flavor of the problem by thinking about how (i) this requires you to fetch and pass user information for every request even though you only need it sometimes, and (ii) if you do this at every interface you can end up with cyclic dependencies (what if the user service needs to know something about the generated images?).↩
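Point (ii) can be made concrete with a tiny dependency-graph check. The service names follow the running example, but the graph and the code are an illustrative sketch, not anyone's real architecture:

```python
# Illustrative sketch: detect a call cycle in a service dependency graph
# via depth-first search.

def has_cycle(deps: dict[str, set[str]]) -> bool:
    """Return True if the service dependency graph contains a cycle."""
    seen: set[str] = set()
    stack: set[str] = set()  # nodes on the current DFS path

    def visit(node: str) -> bool:
        if node in stack:
            return True  # back edge: a call cycle
        if node in seen:
            return False
        seen.add(node)
        stack.add(node)
        if any(visit(dep) for dep in deps.get(node, ())):
            return True
        stack.remove(node)
        return False

    return any(visit(node) for node in deps)

# Image generation needs user data; the user profile wants image data: a cycle.
SERVICE_DEPS = {
    "image": {"user"},   # jurisdiction / minor check on every request
    "user": {"image"},   # e.g., the profile page reports image counts
}
```

With `SERVICE_DEPS` as above, `has_cycle` reports a cycle; drop either edge and the graph is clean again, which is exactly the design pressure the footnote describes.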
This is one of the jarring feelings of AI-assisted engineering: doing the sorts of refactors that have usually been prompted by years of pain, but after just a few weeks of development.↩
It is often faster to deploy one microservice than a whole monolith, but precisely because microservices usually are not, and cannot be, as independent as we initially fantasize them to be, we often need to deploy more than just the one microservice.↩