Accountability and AI
Here is the FLUX Collective on accountability and AI. Their view (as I understand it) is that AI outputs don't fit our existing concept of responsibility. Organizations will therefore have a hard time handling the mistakes and successes of AIs--not just in accounting for them, but even in making them collectively intelligible, or intelligible as mistakes, in the first place.
I'm not so sure that our collectives lack such a conceptual scheme. The authors are surely right, however, that we treat AI outputs differently: not just because differences of speed, cost, or quality make us apply a different calculus, but because we place those outputs in a different ontological category.
In my experience so far, with AI-generated code, we are:
- Less likely to excuse inextensibility;
- More likely to demand not just testing but meaningful testing;
- More likely to demand systemic remediation for errors;
- Faster to condemn the addition of superfluous code; and
- Less likely to attribute system failures to unpredictable emergent phenomena of complex systems.
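The testing point deserves a concrete illustration. Here is a hypothetical sketch (the `parse_price` helper and both tests are invented for this example) of the difference between the testing we might once have accepted and the meaningful testing we now tend to demand of AI-generated code:

```python
# Hypothetical helper of the kind an AI assistant might generate.
def parse_price(text: str) -> float:
    """Parse a price string like "$1,234.50" into a float."""
    return float(text.replace("$", "").replace(",", ""))

# A superficial test: exercises only the happy path, so it proves
# little about whether the code is actually correct.
def test_parse_price_superficial():
    assert parse_price("$5.00") == 5.0

# A meaningful test: probes the edge cases a skeptical reviewer
# would actually doubt -- thousands separators, missing currency
# symbols, negative amounts.
def test_parse_price_meaningful():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("0.99") == 0.99
    assert parse_price("-$2.00") == -2.0
```

Both tests pass, but only the second would count as evidence that the output deserves trust.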
These are, at least mostly, reasonable practices that make systems better. Think of self-driving cars or automated medicine: we hold non-human agents to higher standards, possibly because the conceptual schemes we apply to their outputs differ.
I'm optimistic that we already have most of what we need to resolve the issues the FLUX team raises. We expect our plumbing to work and our light switches to activate lights. Arguably, there's even quasi-agentic software we already treat this way: our compilers "do things," and many of us think of them as infrastructure that should have very low error rates.1 We might conduct a blameless retrospective for a larger system in which a lightbulb or pipe is embedded, but we insist--directly or indirectly--that these items be demonstrably reliable.
We have often been right, pragmatically and ethically, to retrospect blamelessly, to interpret our colleagues' code generously, and so on. Insofar as we withhold that generosity from AI-generated software, however, we might be better off for it.
1. Dear everyone who's ever worked on compilers: thank you.