On estimating the value of GitHub Copilot
Forget, for a second, whether GitHub Copilot is sustainable or not, ethical or not, and so on.
What's it worth?
That is, supposing Copilot functions as advertised and is legal and ethical to use, how much does it help a programmer?
I suspect this value is overstated, both in general and (especially) among people who are excited about AI.
That's because people are misunderstanding the value of autocomplete.
More powerful, but less predictable, autocomplete is only a good tradeoff if the "more powerful" is way more powerful and the "less predictable" isn't too bad:
- Most importantly: the way we generate value is by writing maintainable, intelligible code. The time savings of a great autocomplete are usually single-digit seconds. The time costs of having something in your code base that nobody ever properly understood are easily measured in the tens of hours. It doesn't even have to be a bug: it could be something that needs changing in the ordinary course of development. This framing is a crude simplification--there are more possible outcomes than these two. But when the bad outcomes are four orders of magnitude worse than the good outcomes are good, the bad outcomes had better be very rare--much rarer than Copilot's are likely to be.
- The time savings from autocomplete matter less than its ability to keep you in a flow state. Less predictable autocomplete is likelier to break that flow.
- The cases where Copilot can offer the biggest savings--efficient generation of boilerplate code--are likely to be less relevant over time. Better frameworks and better development practices will (we may hope) remove more and more of these tasks. And if they don't, IDEs are likely to provide better support for boilerplate generation by traditional means. You might disagree with some of this; the broader point is that AI does not have as distinctive an edge in the case of snippet / boilerplate generation as it does in other applications.
- The best users of Copilot are likely already expert users of other autocomplete tools, which diminishes the marginal value Copilot provides over what they have now.
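The asymmetry in the first bullet can be sketched as a back-of-the-envelope expected-value calculation. Every number here is an illustrative assumption chosen to match the rough magnitudes above ("single-digit seconds" saved, "tens of hours" lost), not a measurement:

```python
# Back-of-the-envelope expected value of one autocomplete suggestion.
# All numbers are illustrative assumptions, not measurements.

SECONDS_SAVED_PER_GOOD_SUGGESTION = 5           # "single-digit seconds"
SECONDS_LOST_PER_BAD_OUTCOME = 20 * 3600        # "tens of hours" of maintenance pain

def expected_seconds_saved(p_bad: float) -> float:
    """Net expected time saved per suggestion, given the probability p_bad
    that the suggestion eventually causes a costly misunderstanding."""
    p_good = 1.0 - p_bad
    return (p_good * SECONDS_SAVED_PER_GOOD_SUGGESTION
            - p_bad * SECONDS_LOST_PER_BAD_OUTCOME)

# Because the cost is ~4 orders of magnitude larger than the benefit,
# even a tiny failure rate pushes the expected value negative.
for p_bad in (0.0, 0.00001, 0.0001, 0.001):
    print(f"p_bad={p_bad:<8} expected net seconds: {expected_seconds_saved(p_bad):+.2f}")
```

Under these assumed numbers the break-even failure rate is about 7 in 100,000 suggestions; the exact threshold doesn't matter, only that the tolerance for bad outcomes is tiny when the payoff ratio is this lopsided.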
Note that the above arguments do not apply to DALL-E or other exciting AI / ML tools, which have very different payoffs and risks, and which replace very different kinds of workflows. My argument is not that AI in general is too risky to use, but rather that Copilot is fighting excellent alternatives and tackling a problem with an unfavorable cost-benefit profile for AI.