Nate Meyvis

Humane, adaptive AI bootstrapping

In the AI near-future, we'll be able to make solid tools of modest scope quickly. The promise and potential of AI have many other aspects, but "please make an X" is a relatively obvious and predictable one. I care about making the "solid" as solid as possible, the "quickly" as quick as possible, and about making this kind of bootstrapping feel calm and humane.

Here's how I've been trying to improve my bootstrapping. It's extremely naive, but naive algorithms are often the right place to start.

  1. Locate a couple of my favorite projects and think about what I like about them.
  2. Ask AI to create a bootstrapper skill that scaffolds a project with good standard tools, my favorite patterns, and the other things I like about those projects. Note that this includes separate test and production environments, automated deployments with GitHub Actions, and full-blooded authentication. If I can add these reliably enough that they're unlikely to break or over-complicate the project, there is no reason not to have them from the beginning.
  3. Use the bootstrapper skill to make some projects. Keep notes about what I like and don't like about the projects and the process.
  4. Factor out things like authentication (some projects don't need it) into separate skills or separate Markdown files within the main skill in order to make skills more intelligible to you and to the AI.
  5. Load up the best model you can and tell it to look at (i) the bootstrapper and all related skills, (ii) all the source code from the projects, (iii) the whole git history of those projects, and (iv) the whole GitHub deployment history of those projects. Tell it to "superthink" about what has gone wrong and been frustrating and to improve the bootstrapper accordingly. Encourage it to ask you questions. Remind it that bloated SKILL.md files are a constant temptation and that it should factor out skills and other .md files as appropriate.
  6. Use the new bootstrapper, see how it does, and repeat.
  7. Make a bootstrapping-improvement skill, with instructions and permissions to fetch the relevant artifacts and notes on common mistakes.

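To make steps 2 and 4 concrete: after factoring, a bootstrapper skill might be a small directory whose SKILL.md stays short and defers to sibling Markdown files. This is an illustrative sketch, not my actual skill — the file names and step wording are invented, though the YAML frontmatter (`name`, `description`) follows the standard SKILL.md convention:

```markdown
---
name: bootstrapper
description: Scaffold a new project with my standard tools and patterns.
---

# Bootstrapper

1. Ask which optional pieces this project needs.
2. Set up the base project: repo, test and production environments, standard tooling.
3. If the project needs authentication, follow auth.md.
4. For automated deployments with GitHub Actions, follow deploy.md.
5. Record anything that went wrong in notes.md for the next improvement pass.
```

Keeping SKILL.md to a table of contents like this, with auth.md, deploy.md, and the rest alongside it, is what makes the factoring in step 4 pay off: each file stays small enough to read, and to hand to the model, on its own.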
I've been at this for a while and can feel the flywheel accelerating. Here are some miscellaneous notes on the process:

  1. Encapsulation, and more generally the management of complexity, is still the heart of software engineering. This takes a very different form with AI, and it still feels strange to spend so much time handling Markdown decomposition and imports. But wrangling and cross-referencing English the way we wrangle and cross-reference code feels like the future, or at least some significant part of it.
  2. There must be a limit to the utility of meta-bootstrapping. The bootstrapping-improvement skill feels useful, and perhaps there's room for a bootstrapping-improvement-improvement skill, but I don't think further recursion would be useful.
  3. It's important, though, to summon the stamina and discipline to keep making new skills and to help the AI understand the things you find yourself repeating or failing at. Sometimes we skip this because we correctly judge that it wouldn't be useful, but the skipping is just as often a mistake in judgment or a simple lack of effort. I'm struck by how many successful users of AI say that their superpower is "just go make yet another skill."
  4. The removal of overhead is different from, and often more important than, full automation. My skill to have the AI help me request and use SSL certificates doesn't attempt to automate much; it's more of a hand-holding tool intended to keep me from getting confused or frustrated. I don't register so many subdomains that saving 10 seconds here or there will matter much (yet!). Keeping me un-stuck and happy is much more important than automating this.
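A hand-holding skill of this kind can be little more than an annotated checklist that tells the AI to narrate rather than automate. The following is an invented sketch in that spirit, not my actual SSL skill — every name and step here is illustrative:

```markdown
---
name: ssl-hand-holding
description: Walk me through requesting and installing an SSL certificate.
---

# SSL certificates

Do not try to automate this. At each step, tell me exactly what to do,
what output to expect, and what the common failure modes look like.

1. Confirm which subdomain we're securing and where its DNS is managed.
2. Walk me through requesting the certificate, one command at a time.
3. Wait for me to paste the output before suggesting the next step.
4. When something looks wrong, explain what it probably means before
   proposing a fix.
```

The point is the tone of the instructions: the skill's job is to keep me oriented, not to save keystrokes.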

#Claude #bootstrapping #future of work #generative AI #psychology of coding