(from the intro to a paper I’m writing)

The dealer calmly deals two cards to each of the eight players seated around the table. I look at my cards and see a pair of queens. My first impulse is to fold, as pocket pairs seldom win hands in Texas Hold’em, but they’re face cards, so I call the blind and play the hand. No one raises, and the flop reveals two more queens. I have four of a kind, the best hand I’ve ever had, and I am certain I will win this round. Yet, try as I may, I am unable to slow-play anyone into raising my bets, and ultimately the best hand of cards I’ve ever had nets little more than the antes of the players who chose to stay in.

A few hands later I’m dealt an ace and a queen. The flop reveals another ace and queen. I’ve got top two pair with good odds of winning the hand. The turn card is of no help to anyone, and I succeed in slow-playing another player into investing in the pot. The river card reveals a king. The odds are well in my favor that the other player doesn’t have the cards to beat my two pair, so I go all in and he calls. Just as I’m thinking I’ve won a big pot, he turns over an ace and a king. He beat me on the river card.

There are two important lessons here: the best hand does not equate to the best payoff, and if you play the odds rather than the player, you lose.

I account myself a pretty good poker player, having played in numerous tournaments and even won a few. I have several strategies I use as guidelines to play, and they serve me fairly well. But there’s one thing I know with certainty: no matter how much I study the game, the player I own today may own me tomorrow. Poker is a game of psychology far more than it is a game of chance, and to win you have to play the player, not the game. Thus, despite having calculable odds and bounded rules, poker is an unbounded game, where the best hand doesn’t always win, and the very best hands often net poor payoffs.

What does this have to do with prediction in PolMil, you ask? The poker analogy is, in fact, quite apt. In poker, as in PolMil, I have numerous indicators of my opponent’s likely choices. I can observe his play, discern his patterns of cautious versus aggressive behavior, note how he bets when he has a good hand or when he bluffs, or simply watch for tells. All of these observations give me insight into what he is likely to do in any given situation. So, in some limited respect, I can attempt to predict what he will do and adjust my play to suit. Given enough observations of a single opponent, I can even build a mathematical model that will predict his play in any given situation.

But there’s a problem: he is doing the same thing to me. It’s not a one-way system but a complex interchange of observation and adjustment to suit what we each believe the other is likely to do. The model I’ve constructed tells me what he will do in the aggregate; it does not tell me what he is likely to do relative to me and my particular style of play. What’s more, if I adapt my play to rigidly adhere to my model’s predictions, I am certain to lose, as my play will, in turn, become predictable. Give me an opponent with a deterministic (read: numeric) view of play any day; I will get rich off him in short order. To defeat an opponent who believes he has predicted my behavior, I need do little more than roll dice.

The key notion to understand is that politics, like poker, is an activity in which the most successful players are those who are most adaptable to any given situation and, most importantly, understand their own vulnerabilities. Stated simply, players whose actions are predictable lose (and players who strictly play the odds are always predictable).

In the example above, where I lost the hand to paired aces and kings, I was playing the odds. But here’s the thing: so was the other guy. He knew he’d paired the ace, so his chances already looked pretty good. When the king hit the table on the river, he knew the odds were very high in his favor (just like me). This is black swan country. Each of us viewed the probability that the other guy held a better hand as very low. For him, the probability paid off. For me, I was bitten by that shady swan, as the low-probability event took the entirety of my chip stack. Thus, another reflection of reality is revealed: although particular hands appear with well-defined probabilities, those probabilities are modified by the fact that the players are interacting and making conscious decisions about risk based on necessarily incomplete information. So while the math may look very well behaved, the reality is that the tails are, in fact, very fat.

So the overall point of this lengthy preamble is twofold. First, where we intend to interact with an opponent, and that opponent is anticipatory and adaptive, accurate prediction is simply not possible. Second, if we fool ourselves into believing that it is possible, we add more vulnerability to our portfolio. In more scientific parlance, the problem is not generalizable, and no amount of data makes it tractable. This isn’t to say that there aren’t some very good ways to model specific situations or anticipate the actions of an opponent (to anticipate is not the same as to predict), but it is to say that the traditional methodology of hypothesis testing is very likely to be misleading due to the aforementioned lack of generalizability.

Nice beginning, though I don’t understand the “fat tails” reference – perhaps this means I should never play cards with you. You are of course right in that “anticipate” is not the same as “predict”; I hope to see the day when everyone sees this. I’d be interested to see what solutions you present in the paper. (PS: have you dropped your Acrasian blog completely?)

As for the Acrasian blog, I’ll get back to it at some point.

“Fat tails” is a reference to the normal distribution, under which events more than a few standard deviations from the mean are highly unlikely to occur. In reality, many distributions have decidedly more kurtosis than the normal, that is, fatter tails, and thus higher likelihoods of events occurring many standard deviations from the mean.
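To put rough numbers on that, here is a small illustrative sketch (not from the post itself): it compares the chance of landing more than four standard deviations from the mean under a normal distribution versus a fat-tailed Student’s t with three degrees of freedom, scaling by each distribution’s own SD so the comparison is fair.

```python
import math
import random

random.seed(42)
N = 200_000
DF = 3  # degrees of freedom: a heavily fat-tailed Student's t

# Exact two-sided probability of a normal draw beyond 4 standard deviations
normal_tail = math.erfc(4 / math.sqrt(2))

# Simulate Student's t draws as Z / sqrt(chi2 / df); scale the threshold by
# the t distribution's own SD, sqrt(df / (df - 2)), so "4 SDs" is comparable
sd_t = math.sqrt(DF / (DF - 2))
hits = 0
for _ in range(N):
    z = random.gauss(0, 1)
    chi2 = 2 * random.gammavariate(DF / 2, 1)  # chi-square with DF dof
    if abs(z / math.sqrt(chi2 / DF)) > 4 * sd_t:
        hits += 1

print(f"P(beyond 4 SDs), normal:      {normal_tail:.6f}")
print(f"P(beyond 4 SDs), Student's t: {hits / N:.6f}")
```

The normal puts roughly 6 in 100,000 draws that far out; the fat-tailed distribution lands there about two orders of magnitude more often, which is exactly the “more SDs away from mean” point.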

Good post, Jon. A couple of additional thoughts:

1) For professional gaming, the implication (and I think many of us would agree here) is that a game should highlight possibilities, but conversely put much less weight on claiming to predict probable outcomes. The game functions more as a heuristic device than a predictive model.

2) The intel community, on the other hand, needs to make predictions, even if those are caveated, or consist of a range of likely outcomes. We could have an interesting discussion as to how well they do that, and whether the existing methodologies for assessing predictive accuracy (many of which show quite good results) are really valid and reliable.
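On assessing predictive accuracy: one standard, simple scoring rule is the Brier score, the mean squared error between probability forecasts and 0/1 outcomes. A minimal sketch with entirely made-up forecasts (the analysts and numbers are hypothetical) shows how it rewards well-calibrated, bolder calls over permanent hedging:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is perfect; always saying 50/50 earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record of five events: 1 = happened, 0 = did not
outcomes = [1, 0, 1, 1, 0]
analyst_a = [0.6, 0.4, 0.6, 0.6, 0.4]  # always hedges near 50/50
analyst_b = [0.9, 0.2, 0.8, 0.7, 0.1]  # bolder, and mostly right

print(f"hedged analyst: {brier_score(analyst_a, outcomes):.3f}")
print(f"bolder analyst: {brier_score(analyst_b, outcomes):.3f}")
```

Scores like this are one of the “existing methodologies” the discussion alludes to; whether they capture what matters in PolMil forecasting is the open question.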

3) Your argument about the need to incorporate odds-based assessment, psychological assessment, and a healthy respect for the messiness of known and unknown unknowns tends to reinforce what Tetlock (2005) has shown about cognitive styles and predictive accuracy. It might also explain why research (Green 2002) suggests that role players tend to be better predictors than game theorists (he didn’t test poker-players, alas, although I suspect they would have done well).

Some previous thoughts on this at http://paxsims.wordpress.com/2010/10/31/game-theory-role-playing-and-forecasting/

Role-playing is actually great training for playing poker.

Excellent post Jon! Amen. My wife’s Uncle Tommy Tierney (who is a regular in Hold ’em tournaments at Foxwoods) would say Poker IS role playing.

Many references to “fat tails” and the related “long tails” refer to exponential and related distributions

http://upload.wikimedia.org/wikipedia/commons/e/ec/Exponential_pdf.svg

(The inverse Gaussian is my current “pet distribution” to represent things like analytic decision times)

in which considerably more of the distribution sits in the tails (fat), or the tails extend with significant probability to much higher values (long), than in a normal distribution.

Part of our problem understanding “the real world” is that we are typically taught to fall back on a normal distribution when we don’t know what type of distribution to use. That works for linear systems, where errors accumulate arithmetically. In truth, errors in non-linear systems are multiplicative (because of feedback), leading to exponential-family distributions of results rather than normal ones.
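That additive-versus-multiplicative distinction is easy to see in simulation. A rough sketch (the shock size and step count are illustrative only): apply the same small Gaussian shocks once additively and once multiplicatively, then compare how often each result lands beyond three standard deviations of its own mean.

```python
import random
import statistics

random.seed(1)
TRIALS = 20_000
STEPS = 50

additive, multiplicative = [], []
for _ in range(TRIALS):
    shocks = [random.gauss(0, 0.1) for _ in range(STEPS)]
    additive.append(sum(shocks))  # linear system: errors add
    m = 1.0
    for s in shocks:
        m *= (1 + s)              # feedback: errors multiply
    multiplicative.append(m)

def tail_ratio(xs):
    """Fraction of results beyond mean + 3 SD (about 0.13% if normal)."""
    mu = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return sum(x > mu + 3 * sd for x in xs) / len(xs)

print(f"additive tail beyond +3 SD:       {tail_ratio(additive):.4%}")
print(f"multiplicative tail beyond +3 SD: {tail_ratio(multiplicative):.4%}")
```

The additive results stay near the normal benchmark; the multiplicative ones blow well past it, which is the fat-tail behavior the comment describes.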

One of the projects I just did was a queuing model that demonstrates the difference between assuming times to make decisions are normal versus inverse Gaussian (I presented on this at MORS this year). The result: when you use normal distributions, with server systems set up based on “average of averages,” the system VERY rarely fails (<1%). Use the inverse Gaussian and suddenly your system is seeing balking and reneging over 5% of the time. This has HUGE implications for systems engineering. The system you think will handle loads over 99% of the time ACTUALLY fails over 5% of the time. In a particularly bad case, you go from a 98% success rate to an 82% success rate, in effect going from playing Russian roulette with a 50-round magazine to playing with nearly a 5-round magazine.
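I can’t reproduce the MORS queuing model here, but the effect it rests on is easy to sketch: give decision times the same mean and variance under a normal and an inverse Gaussian assumption, and compare how often a decision takes more than three times the mean. (All parameter values are illustrative, not from the actual model.)

```python
import math
import random

def inv_gauss(mu, lam):
    """One inverse-Gaussian draw (Michael-Schucany-Haas algorithm)."""
    y = random.gauss(0, 1) ** 2
    x = mu + (mu * mu * y) / (2 * lam) - (mu / (2 * lam)) * math.sqrt(
        4 * mu * lam * y + mu * mu * y * y)
    return x if random.random() <= mu / (mu + x) else mu * mu / x

random.seed(11)
N = 200_000
MU = 1.0                   # mean decision time (arbitrary units)
SD = 0.5                   # both distributions get this standard deviation
LAM = MU ** 3 / SD ** 2    # IG shape: variance is mu^3 / lam

# Fraction of decision times exceeding 3x the mean under each assumption
slow_normal = sum(random.gauss(MU, SD) > 3 * MU for _ in range(N)) / N
slow_ig = sum(inv_gauss(MU, LAM) > 3 * MU for _ in range(N)) / N

print(f"P(time > 3x mean), normal:           {slow_normal:.5f}")
print(f"P(time > 3x mean), inverse Gaussian: {slow_ig:.5f}")
```

Same mean, same variance, yet the inverse Gaussian produces very-slow decisions roughly a hundred times more often; feed those into a queue sized on averages and the balking and reneging follow.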

The rampant extent to which this sort of innumeracy plagues our senior decision-makers, and the analytic community’s lack of intestinal fortitude to “fess up” and stop simply telling seniors what they want to hear, are major factors in why we have failed to come to grips with any of this ten years on.

That last paragraph is the primary motivation for writing this paper.

Here is an interesting paper that is making the rounds –

Conclusion to the abstract: “…we propose policy makers face a dilemma: prevent terrorism using normative methods that incorporate the likelihood of attack, or prevent blame by preventing terrorist attacks the public find most blameworthy.”

Hi Jon,

Thanks for this article, and the interesting discussion that followed it.

I would like to comment on one sentence: “I need do little more than roll dice.”

I agree that semi-random patterns of action can be a successful strategy in poker, or any game where one mind is fully in control against other minds. But with large organizations and policy, it’s hard to imagine that they will do random things just to throw the opponent off, even if that is what they really should be doing. (Crazy-like-a-fox might work for people like Ahmadinejad: every time he says something preposterous the price of oil goes up, which benefits him and makes it very difficult to know what is really going on inside his head. But many governments have to answer to their constituents.)

I guess this does relate to the paper that Brian shared. When policy makers do things, they need to make them ‘defensible.’ I agree with your conclusion, and I think Rex’s point 1 definitely follows from it. Determining the range of possible outcomes, and keeping them acceptable if possible, is about all we can do.

When I say I need do little more than roll dice, what I’m actually saying, far too cleverly, is that I need to apply mixed strategies. All successful competitors do this, particularly where the principal payoff is survival. While policy makers need to make their policies defensible, it does not necessarily follow that they forgo deception or misdirection, which are forms of mixed strategy. Under conditions of imperfect information, it is not possible to predict opponent behavior, and I further claim that it is dangerous to think that such a thing can be done. But, getting to Rex’s point 1, it is appropriate (and possible) to anticipate what mixed strategies an opponent might apply and to mitigate them through preparation. Wargames, I believe, can be highly effective at this.
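The “roll dice” point is the classic mixed-strategy result, and a toy sketch makes it concrete. Assume matching pennies against a hypothetical “exploiter” who best-responds to my observed frequencies (both players here are my own invention): a predictable player gets ground down, while a 50/50 mixer holds the game’s value.

```python
import random
from collections import Counter

random.seed(3)
ROUNDS = 10_000

def payoff(me, them):
    """Matching pennies: I win +1 when the choices match, lose 1 otherwise."""
    return 1 if me == them else -1

def exploiter(counts):
    """Best-responds to my observed frequencies by mismatching
    whichever side I have played most often so far."""
    if not counts:
        return random.choice("HT")
    likely = max(counts, key=counts.get)
    return "T" if likely == "H" else "H"

def play(my_strategy):
    counts, total = Counter(), 0
    for _ in range(ROUNDS):
        me = my_strategy()
        total += payoff(me, exploiter(counts))
        counts[me] += 1
    return total / ROUNDS

predictable = play(lambda: "H")            # always heads: fully predictable
mixed = play(lambda: random.choice("HT"))  # "roll the dice": a 50/50 mix

print(f"always-heads vs exploiter: {predictable:+.3f} per round")
print(f"coin-flip vs exploiter:    {mixed:+.3f} per round")
```

The predictable player loses almost every round once the exploiter has seen a little history; the dice-roller breaks even no matter how much history the exploiter accumulates. That is the sense in which predictable players lose.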

Thanks Jon!

I understand now. But part of my understanding comes from the fact that I have taken an excellent course (via ‘The Teaching Company’) on Game Theory. Otherwise, I would not understand what ‘mixed strategies’ are.

I think this implies that most role players (and honestly I think most people) benefit from a course on Game Theory. With that, I am in complete agreement.

Best,

Skip