- Some people divide education into the accumulation of human capital, on the one hand, and signaling, on the other. This distinction isn’t perfect, especially for grad students (e.g., because there’s also compensated labor, which doesn’t fit neatly into either category). But it’s still useful. Wherever feasible, choose human capital accumulation over signaling. So, weigh the costs carefully before you spend any real effort publishing or going to conferences. You could be reading, writing, and thinking.
- If your reaction to the previous paragraph was: “But publishing just is writing plus thinking, plus a few extra logistical things”—either ignore the publishing process for a while or learn more about its realities ASAP.
- You will occasionally have ideas that aren’t suitable for a full paper (if only because you don’t have time to write the paper right now). Take 30-60 minutes to sketch the idea anyway; write a 3-page memo to yourself. These memos will be invaluable. Also, some of the ideas won’t survive this process, and you’ll know to abandon them.
- Choose a committee that will cause you to do your best work. Any causal path is OK. The intentional impartation of wisdom from them to you (by telling you things they know, commenting brilliantly on your drafts, etc.) is an important path, but not the only one. Consider also: motivation by fear; failing to scare you into a stunned lack of productivity; leaving you alone to do productive work; being an exemplar (“moving as an object of desire,” in Aristotelian terms); and so on.
- Read more primary literature.
- Read a lot of secondary literature, too, but don’t get stuck in your subfield. Some fraction of the literature in your subfield is essential to you; you’d be irresponsible not to read it. If a given article isn’t in that category, ask yourself: is this really the best thing I can be reading? What other disciplines discuss my questions in different ways? What other subfields in my discipline do this? I worked in ancient philosophy. I studied more classics papers than most of my peers, I think, and I only regret not having done this more.
- Papers and research questions that come to you as you’re reading primary literature tend to be much better than those that come from secondary literature. (This is one reason to read more primary literature!)
- Writing is a deeply personal, creative process wherein the operation of your intellect is by far the most important input. But it’s also a mechanical process involving mechanical tools. So learn some tools well. I know way too many students who lost significant work because of computer crashes, electronic disorganization, etc. There is no excuse for this when there are many reliable, free cloud services. For most of us, avoiding big mistakes in this area (e.g., hours of lost work) matters much more than the differences between tools, so just pick some you’re comfortable with and be sure you have a reliable system of saving and storing your work.
- You’ll take wrong turns sometimes. For this reason, it’s nice to have old versions of your work available. OneDrive and Google Docs do versioning automatically, so there’s no need to use cutting-edge version-control software (e.g., git or Subversion) for this. Note that many otherwise tech-savvy people are uninformed about the modern state of Microsoft software. Don’t be ashamed to use Word. (I make my living in software, and I do most of my serious writing with Word + OneDrive.)
- Don’t worry if your work doesn’t look like everyone else’s. Do worry if your work looks a lot like everyone else’s.
NFL "survivor" pools are an ideal form of gambling. A bunch of people enter, everyone picks a team every week, and if your team loses, you're out. You can't pick the same team twice. The last person standing wins.
A quick Web search suggests that a lot of people begin by asking the first question I asked: Given some interval of the NFL season, which set of picks gives you the best chance of getting through that interval without losing a game?
A compressed version of my work on this question is:
- Recognize that this is a graph problem (there are different ways to approach this, but I've found it easiest to treat every permissible prefix of games--that is, every way of picking the first N games, for N from 1 to the length of the interval--as a node). The goal is to find a path to the node corresponding to the highest cumulative probability, among all nodes representing full pick sets (prefixes as long as the interval).
- Recall that problems where we need to maximize products can be converted into problems where we need to maximize sums (just take the logarithm of each factor and exploit the fact that the sum of logarithms is the logarithm of the product; see, e.g., section 6.1.7 of the second edition of Skiena's The Algorithm Design Manual, or use a slide rule until this is intuitive).
- Figure out, either by estimation or by trying it, that brute forcing this problem will make you look at way too many nodes.
- Decide to implement A* search, which requires an admissible heuristic.
- Implement a first, very naive heuristic: put a lower bound on the distance between a node and the goal by calculating the path cost if, from that node, there were a path including only games with the biggest possible favorite (in the NFL these are something like 15-1 to 20-1 favorites).
- When the search is still too slow, try a stronger heuristic: for each node, calculate the cost of a path corresponding to the best games for each remaining week in the interval, but excluding games in which you've already chosen the favorite. That is, pretend that, at any node, you are barred in the future from picking any team you've already picked, but you don't have to worry about whether those future picks conflict with each other. This takes a bit longer to calculate, but it turns out to be far more effective: it runs through the whole season in roughly half a second on my computer, and that's in Python and without taking much care to optimize anything.
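The steps above can be sketched in Python. Everything here is illustrative: the three-week schedule and its win probabilities are made up (real numbers would come from betting lines), and the heuristic is the stronger one from the last bullet: the best remaining game in each future week among teams not yet used, pretending future picks never conflict with one another.

```python
import heapq
import itertools
import math

# Toy schedule: weeks[w] maps each pickable favorite to a hypothetical
# win probability.
weeks = [
    {"Chiefs": 0.85, "Bills": 0.80, "Eagles": 0.70},
    {"Chiefs": 0.75, "Bills": 0.82, "Eagles": 0.78},
    {"Chiefs": 0.88, "Bills": 0.60, "Eagles": 0.72},
]

def cost(p):
    # Maximizing a product of probabilities = minimizing a sum of -log(p).
    return -math.log(p)

def heuristic(week, used):
    # Admissible lower bound on remaining cost: for each future week, take
    # the best game among teams not yet used, ignoring whether those future
    # picks conflict with each other. A relaxation never overestimates.
    h = 0.0
    for w in range(week, len(weeks)):
        options = [p for team, p in weeks[w].items() if team not in used]
        if not options:
            return math.inf  # no legal pick remains from this state
        h += cost(max(options))
    return h

def best_picks():
    # A* over states (week, frozenset of teams already used).
    tie = itertools.count()  # tiebreaker so the heap never compares states
    start = (0, frozenset())
    frontier = [(heuristic(*start), 0.0, next(tie), start, [])]
    best_g = {}
    while frontier:
        f, g, _, (week, used), path = heapq.heappop(frontier)
        if week == len(weeks):  # any full pick set is a goal
            return path, math.exp(-g)  # picks and survival probability
        if best_g.get((week, used), math.inf) <= g:
            continue
        best_g[(week, used)] = g
        for team, p in weeks[week].items():
            if team in used:
                continue
            g2 = g + cost(p)
            state2 = (week + 1, used | {team})
            heapq.heappush(
                frontier,
                (g2 + heuristic(*state2), g2, next(tie), state2, path + [team]),
            )
    return None, 0.0

picks, prob = best_picks()
```

This sketch sidesteps the multiple-goal issue by treating any state whose week index runs past the schedule as a goal; the dummy-week trick described below (a fake final week with one overwhelming favorite) would work equally well.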
This makes for a fun and straightforward project. The goal is easy to define and the problem can be solved by combining a few fundamental ideas, finding a happy "middle ground" heuristic that is easy to calculate but also sufficiently informative, and taking care of some implementation details. (For example, we can choose any node corresponding to a full pick set, so there's not a single goal node we know in advance if we describe the graph as above. One way to get around this problem is to calculate shortest paths to every node, but this graph is too big for that to be a good idea. There are several alternatives here, but my preferred one is to add a dummy Week 18 in which there is only one game, in which the Browns or a fake team are huge favorites, and to call any node including that game a goal.)
What's most important, though, is not to fool yourself into mistaking the easily defined question for the right one. If this is your whole analysis, you will be making very big mistakes in any survivor pool! There are many reasons for this:
- You don't know in advance how many games you need to pick correctly to win.
- Your estimates of win probabilities will change between now and future weeks (a whole bunch of the lines will move significantly, even if you don't know which direction they're going to move in).
- Most importantly, there is value in picking games that other people don't pick: when an upset wipes out a lot of the field but you picked a different game, that's a big windfall for you. The process I described above entirely ignores this effect, which drives correct strategy in survivor pools.
Because of this, I've come to think of this as a nerd sniping scenario. (In the metaphor, you are you, the resistance problem is the probability-maximizing problem, and the truck is the consequences of making gambling decisions without considering the effects of your opponents' choices.)
The A* search technique is still useful: It helps you estimate the value of picking one week correctly (how much does having picked that team handicap your ability to choose high-value paths after that?); it is useful in endgame scenarios (though in those cases, brute-force search will do fine); and it can be useful for estimating how many weeks the pool is likely to take. As so often happens, when you're answering a hard question ("Which team do I pick in the survivor pool?"), it can be useful to answer a related and simpler one ("Which team allows me the best chance at sweeping the season?"), but only if that doesn't cause you to make mistakes that matter more than the answer you find.
Many poker players will have a natural interest in insurance, which is in some sense the most popular form of gambling we have. On this subject a recent Andreessen Horowitz podcast episode is worth checking out. A few points of particular interest, greatly simplified, are that:
(1) Insurers and the insured have a common interest in promoting the well-being of the insured; the insurer doesn't want to pay out, and the insured doesn't want the bad thing triggering the payout to happen.
(2) Insurers are more able to learn about their clients than they used to be, especially insofar as they can get access to their clients' smartphone data.
(3) We should expect future data-rich and data-savvy insurers to use their data in two ways: First, they will offer different prices and different products to different clients. Less risky clients will no longer have to suffer so much of a penalty by virtue of their not being able to distinguish themselves from more risky clients, and more risky clients will be less able to hide in large, undifferentiated populations. Second, because of (1), they will bundle insurance with various kinds of assistance. There are already examples of this (such as health insurers subsidizing gym memberships), but we will see more diverse, nuanced, and personalized kinds of assistance.
One might think that people will object, on privacy grounds, to handing over data to insurance companies. I don't think this would prevent insurance arrangements involving frequent data transfers from coming into being. People seem very willing to trade a lot of privacy for a little money, especially when the deal comes with the prospect of a big company certifying them as superior to their neighbors.
So far, these are mostly hypothetical suggestions, but there are non-hypothetical models that can help us think about these future scenarios and their likelihood. The poker staking marketplace is, I think, one such model. Its evolution seems to me to suggest that (3)-type scenarios are likely to become the status quo.
There are many kinds of staking arrangements, but they generally involve a backer putting up the money for a player (the "horse") to buy in with; in exchange, the backer is entitled to some fraction of the player's winnings over a certain time period. There are many possible benefits to the horse of such an arrangement, and among them is being shielded from the worst effects of a big downswing; it's not his own money he's lost. In this way, backing arrangements, although they have functions other than insurance, share important features with insurance.
I have been watching these deals play out since 2003 or so, and it's striking to me how much the market has shifted to include arrangements such as are predicted in (3) above. The most common form of this is bundling staking and coaching; many of the biggest and most successful "stables" require horses to take coaching along with the backing. This takes many forms, some of which are semi-personalized (PowerPoint lectures from a teacher to students of a certain type) and others of which are highly personalized (one-on-one hand history reviews, close observation of a horse by a backer, and other kinds of one-on-one coaching).
Stables also tend to track and audit their horses more diligently than was common ten years ago. So, for example, they are more likely to require tournament receipts, submission of online hand histories, or good answers to text-message spot checks. This encourages horses not to get sloppy with their play and not to play drunk, and it makes it harder for them to lose their backers' money at blackjack while pretending they are suffering bad luck at the poker tables.
These arrangements tend to play out in ways that we would predict. First, insuring only horses who are willing to participate in coaching allows backers to keep from insuring the laziest and most stubborn players. Second, the coaching and other monitoring allow backers to get rid of horses who don't improve fast enough or who simply don't play well enough. Third, getting high-quality coaching along with backing induces horses to agree to terms that are otherwise less favorable to them. In this way, both backers and horses benefit from the economies of scale that backers enjoy in offering coaching.
Fourth, extra data and better record-keeping allow for mutually beneficial dissolutions of staking deals: Occasionally a stable will be able to, with the horse's permission, sell the rights to an underperforming horse's action. This usually happens when there is a risk of moral hazard if the horse continues to play under the current arrangement, when the horse could use a metaphorical change of scenery, or when the environment of a different stable is better for the horse. It seems to me that this sort of thing is more common now than it used to be, and that the increased availability of and reliance on data keeps the horse-transfer market from being a pure "market for lemons."
In this way, the market seems to me to have evolved in ways consistent with Frank Chen and Mike Paulus's suggestions on the podcast. While I don't have access to backers' results, I strongly suspect that backers who have been innovative about bundling insurance with other products and who have been diligent about collecting and managing data have done very well, compared to their competitors.
Listeners and prospective podcasters often ask Andrew Brokos and me about the tools we use for the Thinking Poker Podcast. Happily, podcasting requires plenty of effort but little capital: The necessary software is free or cheap, and there's no need to use a fancy computer in any part of the process. These tools have served us well:
Recording. We record our interviews with Skype. I record the Skype calls with Audio Hijack. (I use a Mac, so many Skype-related programs are not available to me.)
Editing. We used Audacity when we edited the show ourselves. It was not too hard for an amateur to learn, and its features are (I'm told) robust enough that experts should be satisfied with it also. It's free. Now we hire a freelance editor, whom we're happy with.
Equipment. Andrew and I each have a Blue Yeti microphone. For a long time, the $100 I spent on mine was the only money I'd put into the show.
Distribution. Andrew handles this, so the details could have changed without my knowing about it, but I believe he uploads the show to thinkingpoker.net. It was not difficult to get our show listed in the iTunes directory, but it was harder to get them to list it in the correct category.
Although I’m a baseball fan first and a fantasy baseball player second, I do play fantasy vigorously. As I’ve transitioned into being a professional computer/data guy, I’ve used baseball as a way to learn tools and practice “data science” skills. One nice feature for a beginner is that there’s enough data-cleaning work to give me a small taste of what it’s like, but not nearly as much as in other domains. With all the excellent data sets out there, I can get to the fun stuff quickly without completely neglecting data collection.
Valuing fantasy baseball players divides, as a first approximation, into two projects: predicting players’ outputs and using those outputs to generate fantasy values. I know many serious fantasy players do their own projections, but at least for now I’m guessing I can help myself more by working on the second step than on the first. I’ll make some obvious adjustments for injuries and similar news, but that’s about it for now.
That second step, generating fantasy values from projected statistics, is complicated, but my league’s (fairly common) structure lets me make certain simplifications. It’s a head-to-head league in which each of twelve categories is worth a point, and whoever gets the most points wins.
The suppositions I’ve chosen to make so far are:
(1) That a player’s fantasy value is, for each category, proportional to the sum of the probabilities that his contributions to a given category cause a fantasy team to win that category.
(2) That the value of a unit of output in a category is inversely proportional to the standard deviation of the distribution of team scores in that category in the fantasy league (the narrower the spread, the likelier one extra unit is to swing the category).
(3) That a player’s output in a category can be represented by his average projected weekly output in that category.
(4) That the means and standard deviations I’ve derived from my league’s last two years of results are a fair guide to how the league will play this year.
One could criticize these suppositions in many ways. The last one, for example, is particularly suspect. It might be better to create a model based on 2014 projections and estimate the values that way, but there are some advantages to my method. One advantage is that it was great fun to scrape all the results from ESPN.com. (Many complications arose here, but BeautifulSoup and Requests were enough for the bulk of the work.)
(2) and (3) are similarly approximate. Supposition (2) might cause me to overvalue Billy Hamilton, for example, who will get many of his stolen bases when his fantasy teams have already locked up the category for the week. Using supposition (3) leads me to ignore differences in variance of outcomes between players, which is probably an oversimplification. And criticisms like these could be multiplied.
All that said, I think that a very useful first approximation results from this kind of strategy: use the projections to guess at what a player will contribute every week, use standard deviations to estimate the number of category wins that those contributions will add, and add those category-wins up.
Nothing I’ve described is statistically or technologically difficult, and I’ve chosen a similarly simple-but-reasonable method for adjusting player values at each position. (Estimate the number n of players who will be drafted at each position, set the value of the (n+1)th-best player at that position to zero, and adjust everyone else accordingly.)
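The whole pipeline fits in a few lines. This is a sketch with made-up projections and league numbers: weekly output is divided by the category's historical team-score standard deviation (a z-score-style weighting), the per-category contributions are summed, and then the replacement-level zeroing described above is applied.

```python
# Hypothetical weekly projections for a few players at one position, and the
# league's historical weekly team-total standard deviations per category.
# All numbers are invented for illustration.
projections = {
    "Player A": {"HR": 0.9, "SB": 0.2, "RBI": 4.5},
    "Player B": {"HR": 0.5, "SB": 1.1, "RBI": 3.0},
    "Player C": {"HR": 0.3, "SB": 0.1, "RBI": 2.0},
}
league_sd = {"HR": 3.0, "SB": 2.5, "RBI": 9.0}

def raw_value(stats):
    # A player's estimated weekly category-win contribution: output divided
    # by the spread of weekly team totals in that category, summed across
    # categories.
    return sum(stats[cat] / league_sd[cat] for cat in stats)

def positional_values(players, n_drafted):
    # Replacement-level adjustment: zero out the (n+1)th-best player at the
    # position and shift everyone else accordingly.
    vals = {name: raw_value(stats) for name, stats in players.items()}
    ranked = sorted(vals.values(), reverse=True)
    replacement = ranked[n_drafted] if n_drafted < len(ranked) else 0.0
    return {name: v - replacement for name, v in vals.items()}

values = positional_values(projections, 2)
```

With these toy numbers, the third-best player's value is zeroed out and the other two are measured against him, which is exactly the "adjust everyone else accordingly" step.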
My results are already surprising: they differ significantly from Baseball Prospectus’ Player Forecast Manager. What’s surprising about this is that I’m using the same PECOTA projections that BP’s tool is, so that any differences in our fantasy valuations are not due to differences in projections, only to differences in deriving fantasy value from projections. (I’ve also configured the tool to use the same categories that my league uses.) Even differences in determining positional value can’t account for too much of the differences, because even filtering each set of results by position leaves wide disparities. (I can’t give many examples here, because PECOTA and PFM data are subscriber-only, but two of the many claims that result from my method are that BP is--for my fantasy league--overvaluing Starlin Castro and undervaluing Jason Kipnis.)
Perhaps metarationality requires me to defer to BP completely here; it seems clear that I at least ought to partially adjust my beliefs once I see that a company of very smart people, plenty of whom get hired away by MLB teams, disagree with me. However I decide to weight my own results and PFM's come draft day, right now I’m left with a disjunctive conclusion. I think one of these must be true:
(1) BP’s calculations are way better than mine. This would be interesting as an example of a situation where a sensible basic method diverges enormously from the best method.
(2) The only major league feature that PFM does not explicitly account for--that my league is head-to-head--makes a big difference to player values. This would be interesting as a fact about fantasy baseball.
(3) BP’s calculations are not way better than mine. This would be interesting for many reasons. Although I’m not at all sure that my algorithm is better than or even roughly as good as theirs, this is a possibility worth at least considering. If it were true, it might mean that my league is idiosyncratic and that I’m doing a good job of adjusting to it; it might also mean that I got lucky to choose a different and better basic approach from PFM’s.
There’s much more to say here, but I’ll say it better after I work harder to find patterns in the differences between my results and PFM’s, and after I work to improve my algorithms. (The answer to the title question is currently: I don't know.) For now I’ll simply suggest that serious fantasy players might do better to spend less time projecting players and more time valuing players based on projections.
Some good poker books are ones I'm glad I've read; others are ones I think my opponents would benefit from reading. Those categories don't always overlap: I got better by working to understand the examples from Mathematics of Poker, but I don't spend much time worrying that my opponents will have read it. Meanwhile, the various Harrington books were probably valuable to other players, but only the first tournament book affected my game much. Ed Miller's Poker's 1% fits both categories. I think I'm already a better player for having read it, and working through the book's suggested training regimen will certainly help much more. Meanwhile, I can easily think of dozens of friends and opponents whom I think would benefit even more than I have.
With that said, this is one of the most unusual poker books I've ever read, and working with it will for most readers require a leap of faith. The book can be read as an extended argument for structuring your thinking about poker in a certain way. Ed's approach is roughly this:
(1) Identify a property of ideal poker strategies: namely, that they obey certain principles about bluffing frequencies and street-to-street folding (and not-folding) frequencies;
(2) Estimate these frequencies with a combination of Mathematics of Poker-style theory, informal argument, and good old-fashioned guesswork (that's where the leap of faith comes in);
(3) Explain how one goes about instantiating this strategy and identifying exceptions to it;
(4) Give lots of examples of the strategy.
The result might be Ed's best book, and that's saying something. It is, I think, his most epistemically explicit: he is admirably clear about what is deduction, what is guesswork, and what is approximation. And the book would have been much worse otherwise. Though Ed has obviously done his theoretical homework, which equally obviously informs his frequency recommendations, not much is proven. It is to Ed's credit that he makes it clear that, however theoretically motivated his recommendations are, only some of them are given airtight arguments. That doesn't stop Poker's 1% from being a very good book, although Ed's most motivated readers will want to read it in tandem with Mathematics of Poker in order to solidify their theoretical understanding of such subjects as street-to-street folding frequencies.
Much excellent work in mathematics and philosophy (and indeed every other subject with which I'm familiar) begins with a fairly simple, easily justified claim and gets as much mileage out of that claim as possible. This book can be read in the same way. It's informative and fun to let Ed begin with something every student of the game already believes--once you're in the pot, you can't give up too often--and draw out consequence after consequence of that claim. You probably know that "fit or fold" is a bad way to play, but Ed will show you just how bad it is, and how to take a broader and more useful view of "fitting" a board. You probably know that elite players look "hyper-aggressive" and "sticky," and you'll get a better sense of why they play that way. You probably know that when the board hasn't been particularly helpful to either player, both players have to fight for it, but Ed will help you figure out how much to fight, and how hard.
The examples Ed gives deserve careful study. They are thoroughly explained, usefully surprising, and accompanied by nice graphics. If you read the book you might be tempted to skip them to get on to the next surprising bit of general advice. Resist that temptation.
Although Poker's 1% has plenty to say about how to continue in a hand, it focuses more on how often to continue in a hand. I found myself wanting more material on check-calling vs. double-barreling, raising vs. calling, and so on. A book can only contain so much, though, and what Ed says on these subjects is more than enough to give me a nice extra advantage against my usual $2-5 and $5-10 opponents.
I imagine some of my opponents will get much better by reading this book, but others might get worse. I think this book will be widely misread, mostly by people who skim it once, never do the study the book recommends, and interpret Ed as advocating maniacal play. Others will, I think, feel out of control at the table, not knowing when to replace basic methods of hand-reading with "frequency-based" considerations. Others will have trouble dividing up their "continuing" ranges (as I mentioned in the paragraph above). Still others face the risk of playing more big pots than they're used to and giving off tells in them (due to nerves and situation-specific inexperience).
Though readers will risk being led astray, careful ones will benefit. I agree with Ed that this book contains a basic plan for reorienting oneself to the game and adopting patterns of thought like those of the best players. Time and careful study will bring me to a more refined view of just how good Ed's various approximations are, of whether Ed does enough to describe exceptions to his various rules, and so on. Even now, however, it's clear that Ed has produced a unique, exciting, and thoroughly readable book that will reward its most careful students tremendously (much more than the $50 on the sticker). It might be the anti-Ace on the River: Barry Greenstein claimed that his book would help any player move up one level, and I suspect that the readers of Poker's 1% will--depending on their aptitude and the care with which they read and study--improve either not at all or a whole lot.