A first guide to building APIs with AI
Simon Willison's Agentic Engineering Patterns project makes me want to compile notes on building APIs in an AI-first world.
(I'm writing as someone who has maintained many APIs and who is optimistic about building usable APIs quickly with AI. I have only basic working-professional knowledge of building APIs at scale, of securing APIs, and related subjects.)
- Most importantly: build the API, or at least consider it. It's a bit like migrations in that it's plenty of work, but work that AI is good at. And it gets easier every time.
- Expose the documentation for the API programmatically: the easier it is for AI to find the documentation, the better. Telling the AI to call `/api/help` is in general better than putting all that information into another document, where it threatens to clutter a context window when it's not helping.
- Relatedly, don't clutter up the documentation. Humans often need more examples than AI does, and human-facing API documentation often includes identical text in the descriptions of many endpoints. Do document the API completely, but remember that AI consumers pay a cost for everything you put there.
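As a minimal sketch of what a machine-readable `/api/help` payload might look like (the endpoint names and the recipe-site framing here are illustrative assumptions, not a real API):

```python
import json

# Hypothetical /api/help payload for a small recipe API.
# Descriptions are deliberately terse: AI consumers pay a
# context-window cost for every word you include.
API_HELP = {
    "endpoints": [
        {
            "method": "GET",
            "path": "/api/recipes",
            "description": "List recipes. Query param: q (search term).",
        },
        {
            "method": "POST",
            "path": "/api/recipes/candidates",
            "description": "Propose a recipe for review (non-destructive).",
        },
    ],
}

def api_help() -> str:
    """Return the /api/help response body as JSON."""
    return json.dumps(API_HELP)
```

An AI client can fetch this once, keep only what it needs, and drop the rest out of context.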
- Consider building a non-destructive API intended for safe AI usage. Note that "safe AI usage" can include writes: these writes can be "candidates," to be reviewed by a human (or another AI!) before they're committed. (I mean "committed" in a general sense; it'll be different by domain. For the recipe site, it just means "added to your recipe collection.")
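One way to sketch those "candidate" writes, assuming an invented in-memory store (all class and method names here are hypothetical, for illustration only):

```python
import itertools

# Hypothetical sketch: an AI client can propose a recipe, but nothing
# reaches the real collection until a human (or a reviewing AI)
# approves it.
class RecipeStore:
    def __init__(self):
        self.recipes = []      # the committed collection
        self.candidates = {}   # pending writes, keyed by candidate id
        self._ids = itertools.count(1)

    def propose(self, recipe: dict) -> int:
        """Stage a write; return a candidate id for later review."""
        cid = next(self._ids)
        self.candidates[cid] = recipe
        return cid

    def approve(self, cid: int) -> None:
        """Commit a reviewed candidate to the collection."""
        self.recipes.append(self.candidates.pop(cid))

    def reject(self, cid: int) -> None:
        """Discard a candidate without touching the collection."""
        self.candidates.pop(cid)

store = RecipeStore()
cid = store.propose({"name": "Shakshuka"})
assert store.recipes == []  # nothing committed until review
store.approve(cid)
```

The AI gets a fully functional write path, but every mutation passes through a review gate before it counts.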
- Your AI tools know a lot about rate limiting, monitoring, alarming, and so on. In my experience, however, they're very fallback-happy, which can be terrible in this context. Do review this code carefully. Check explicitly for fallbacks that disguise bugs or introduce security holes. (Prompting the same or a different AI to comb through the code for bad fallbacks is well worth doing.¹)
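Here's a contrived illustration of the kind of bad fallback to hunt for, using an invented rate limiter (all names are hypothetical):

```python
# A limiter that's broken, standing in for a misconfigured or
# unreachable rate-limiting service.
class BrokenLimiter:
    def check(self, client_id):
        raise RuntimeError("limiter misconfigured")

def rate_limit_ok_bad(client_id, limiter):
    # Bad: if the limiter fails, silently allow every request.
    # The fallback disguises the bug and opens a security hole.
    try:
        return limiter.check(client_id)
    except Exception:
        return True

def rate_limit_ok_good(client_id, limiter):
    # Better: let the failure propagate (or fail closed),
    # so the misconfiguration surfaces immediately.
    return limiter.check(client_id)

# The bad version happily waves through every client:
assert rate_limit_ok_bad("client-1", BrokenLimiter()) is True
```

The bad version passes every test that doesn't break the limiter, which is exactly why these fallbacks survive review.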
¹ Perhaps by the time you read this, AI tools will be better about distinguishing good and bad defensive code. As I write this, they are far too eager to implement fallbacks in a wide variety of situations. (But that's another post.)