Listening notes: Jason Strasser on the Risk of Ruin podcast

Jason was on the Risk of Ruin podcast recently. He was characteristically thought-provoking and eloquent. So, for example, he notes that a lot of practical poker knowledge--or, at least, his practical poker knowledge--consisted in being able to describe situations to himself in real time and have a game plan for those situations. There's an art to carving up the space of possibilities in a way that's both real enough to be useful and coarse-grained enough to be intelligible. This is an underrated facet of poker thinking and also useful across disciplines.

Mostly, though, I was struck by a software parallel to something Jason said about risk.

Jason notes that a risk profile by itself is meaningless, in the sense that (e.g.) the fact that a strategy wins X times for every Y times it loses says nothing about whether it is good for you to use that strategy. That is primarily because the win-loss frequency offers no guarantee that the strategy has any edge in the first place. (It does you no good to win often if the losses, when they occur, are too large.)
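
To make that concrete with made-up numbers (this is just a sketch, not anything Jason said): a strategy can win the vast majority of the time and still lose money, because the expectation depends on the size of the losses as well as their frequency.

```python
# Invented numbers for illustration: a strategy that wins 19 times for every
# 1 loss, but whose losses are much larger than its wins.
win_probability = 0.95
average_win = 1.0    # units gained on a winning trade/hand
average_loss = 25.0  # units lost on the rare losing one

expected_value = win_probability * average_win - (1 - win_probability) * average_loss
print(expected_value)  # -0.30: a "95% win rate" strategy that loses money over time
```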

This reminds me of the excitement in software around type systems, formal verification, and elaborate testing disciplines. All of those have their place; all provide guarantees of desirable formal properties, given certain background relationships between what they are verifying and the real world; and all are silent on whether those background relationships actually hold. So, in software, you might have a type system and static verification setup that guarantees a system will behave "correctly," in the sense of provably transforming certain inputs in certain ways.
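
As a rough illustration of the kind of guarantee I mean (the function and the property here are invented, and I'm using Python's hypothesis library as a lightweight stand-in for heavier machinery): you can check a formal property over a huge space of inputs without ever touching the question of whether that property is the one the world cares about.

```python
from hypothesis import given, strategies as st

def shipping_cost(weight_kg: float, rate_per_kg: float) -> float:
    """Cost is weight times rate -- non-negative for non-negative inputs."""
    return weight_kg * rate_per_kg

@given(
    st.floats(min_value=0.0, max_value=1e6),
    st.floats(min_value=0.0, max_value=1e3),
)
def test_cost_is_never_negative(weight_kg: float, rate_per_kg: float) -> None:
    # This property holds on every generated input -- but nothing here asks
    # whether "weight times rate" is how anyone is actually billed.
    assert shipping_cost(weight_kg, rate_per_kg) >= 0
```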

That's a great guarantee to have, but only if the right correspondence holds between the code you're verifying and the world. One reason--obvious, but very important--for this is that provably well-behaved code often does not accomplish any real-world result that anyone cares to bring about. A less obvious reason has to do with what Jimmy Koppel calls the three levels of software.

Providing a guarantee about some implementation of something says, in general, rather little about whether a software system can correctly represent something over time. Exactly how and why this disconnect happens is a tricky metaphysical question. But that the disconnect happens is indisputable. You can see it in a lot of codebases that use strong typing, various kinds of formal verification, or the like: the most common and obvious symptom is that all sorts of things end up annotated as Any (or whatever the analogue of Any is in the system at hand). This is usually because the system had to be modified to handle cases that the carefully constructed types could not faithfully represent. And that is usually because those types were not correctly representing anything at all. If you are designing a wicker cabinet shipping system, all the formal verification in the world will not help you if the objects in your code do not properly represent a wicker cabinet, a shipping container, or whatever else.
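
Here is a minimal sketch of that degradation in Python's typing vocabulary; the class names and the "new requirement" are invented to echo the wicker example.

```python
from dataclasses import dataclass
from typing import Any, Literal

# The original, carefully constructed type, built on the assumption that
# "all wicker is bamboo or rattan":
@dataclass
class WickerCabinet:
    material: Literal["bamboo", "rattan"]
    weight_kg: float

# Then the new requirement arrives: not all wicker is bamboo or rattan,
# and the system has to ship things that are not cabinets at all. The fix
# that actually ships is the escape hatch, and the type stops representing
# anything:
@dataclass
class ShippableItem:
    material: Any        # was Literal["bamboo", "rattan"]; now anything goes
    weight_kg: float
    extra: Any = None    # "whatever else the carrier needs"
```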

A further analogy between Jason's comments and the software phenomena I've described is psychological. An "attractive risk-reward profile" (on the trading side) or heavy-duty formalism (in software) often confers a sense of entitlement, which leads people to attribute failures either to bad luck or to an unfair system. They're really not so different, the gambler whose martingale strategy failed him and the software engineer lamenting that his beautiful type system was vitiated by the "surprising" new requirement that it be able to express the fact that not all wicker is made of bamboo or rattan.

Of course, I love testing and use way more type annotations than most programmers who do what I do. (And Jason loves not going broke every time a trade goes wrong.) There are just a lot of ways to misuse these tools. The human capacity for error and self-delusion is always there, and it's almost never a mistake to worry about it a bit more.
