My post on Bryan Caplan's model drew a lot of feedback. Some expansions and clarifications:
What I most remember from Prof. Caplan's Conversation with Tyler is what you might call either an interdisciplinary tendency or a willingness to consider the question itself, as opposed to the question as discussed in a single literature. This raises my confidence that he will revisit his modeling methods--or at least that he, of all people, should.
You can think of a Caplan-style Excel model in at least two ways: as a traditional academic artifact, or as a set of claims about the best inferences to draw from the data, all things considered. If you take it to be the latter, your work will (or should) be judged against the state-of-the-art techniques in managing data. Do you get from data to results responsibly? Are the computations doing what you think they're doing? Can you prove it? Are your data versioned and managed responsibly?
Whether or not you were trained that way, that's the pond you're swimming in.
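To make the "versioned and managed responsibly" standard concrete, here is a minimal sketch of one common practice: fingerprinting the input data so that anyone rerunning the analysis can verify they are starting from the same file. The filename is hypothetical, and this is one illustrative technique among many (full data-versioning tools go much further).

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return a short SHA-256 digest of a data file, so the exact
    inputs behind a published result can be recorded and re-checked."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:12]

# Hypothetical usage: record the digest alongside the results, so a
# reader rerunning the model can confirm they have identical inputs.
# digest = fingerprint("returns_to_education.csv")
# print(f"input: returns_to_education.csv sha256={digest}")
```

The point isn't this particular snippet; it's that "can you prove it?" has cheap, standard answers in the data world, and reviewers there expect to see them.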
Aside: When I worked in bioinformatics, this kind of situation came up all the time. Someone would present a model and get asked questions like: Why didn't you use machine learning? Why use the 2011 data and not the 2014 data? How did you handle edge cases? What happens if you tweak the age cutoffs? And there were, broadly, two kinds of answers: (1) "Because that's how I made my model" / "Kindly go ask my grad students" and (2) an answer showing that those questions really do concern the presenter. (Imagine if Caplan's model needed to work on data eight orders of magnitude bigger.)
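The "tweak the age cutoffs" question has a standard answer too: a sensitivity sweep. Below is a toy sketch, with made-up data and a hypothetical `mean_premium` function standing in for whatever the real model computes; the habit it illustrates is what matters, not the numbers.

```python
def mean_premium(records, age_cutoff):
    """Average earnings premium for people at or above a cutoff age.
    `records` is a list of (age, premium) pairs -- toy data standing
    in for whatever the real model actually consumes."""
    selected = [p for age, p in records if age >= age_cutoff]
    return sum(selected) / len(selected) if selected else 0.0

# Hypothetical sweep: if the headline number moves a lot as the cutoff
# shifts by a few years, that's worth knowing -- and worth reporting.
toy = [(22, 0.10), (25, 0.12), (30, 0.15), (40, 0.18), (55, 0.20)]
for cutoff in (22, 25, 30):
    print(cutoff, round(mean_premium(toy, cutoff), 3))
```

An answer of type (2) in the aside above is exactly this: the presenter has already run the sweep and can say how robust the result is.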
There are truly hard questions about the division of intellectual labor here. It's hard to imagine reading Reviel Netz with the quantitative work done by someone other than Reviel Netz. There will be cases where the costs of communication and synchronization across people mean that the best solution really is for the Caplan/Netz figure to use whatever tools they know.
Which is another argument for T-shaped competence. If there really is no good alternative to Caplan doing the model himself (though I think there is), then the value in turning this into something better is enormous. And the unit of energy Caplan spent researching the fourth-most-relevant academic discipline to his education research probably would have been better spent improving this side of things.
And this is also brutal evidence for context being that which is scarce (here this plays out very literally!). I'm not surprised at the connection between T-shaped competence and context.
An important lens on this episode is that of Bryan trying to attract the best talent to his Case Against Education project. In some respects he succeeded, but on the quantitative side of things he (by his own lights) did not. (See my original post for more on the market for attention in this domain.) This helps me understand why Tyler Cowen is thinking so deeply about context while he's also thinking about cultivating and finding talent.
Keep in mind that part of Caplan's point is not just that nobody paid attention to his spreadsheets but that people did pay attention to his conclusions, and that the disparity is troubling. Surely he's right about that.
A constant lesson of the Internet age (and the pandemic era) is that it's hard to reason about, prepare for, and adjust to growth. A lot of human capital is stored in people--basically, all of us--who in some way need to do things differently if we want to optimize our intellectual output to the current world (and the future). Whatever the right solution here is, it certainly involves a lot of kindness and intellectual charity.