  I asked the assembled group, “Who thinks this was a bad decision?” Not surprisingly, no one did: everybody agreed the company had gone through a thoughtful process and made a decision that was reasonable given what they knew at the time.

  It sounded like a bad result, not a bad decision. The imperfect relationship between results and decision quality devastated the CEO and adversely affected subsequent decisions regarding the company. The CEO had identified the decision as a mistake solely because it didn’t work out. He obviously felt a lot of anguish and regret because of the decision. He stated very clearly that he thought he should have known that the decision to fire the president would turn out badly. His decision-making behavior going forward reflected the belief that he made a mistake. He was not only resulting but also succumbing to its companion, hindsight bias. Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable. When we say, “I should have known that would happen,” or, “I should have seen it coming,” we are succumbing to hindsight bias.

  Those beliefs develop from an overly tight connection between outcomes and decisions. That is typical of how we evaluate our past decisions. Like the army of critics of Pete Carroll’s decision to pass on the last play of the Super Bowl, the CEO had been guilty of resulting, ignoring his (and his company’s) careful analysis and focusing only on the poor outcome. The decision didn’t work out, and he treated that result as if it were an inevitable consequence rather than a probabilistic one.

  In the exercise I do of identifying your best and worst decisions, I never seem to come across anyone who identifies a bad decision where they got lucky with the result, or a well-reasoned decision that didn’t pan out. We link results with decisions even though it is easy to point out indisputable examples where the relationship between decisions and results isn’t so perfectly correlated. No sober person thinks getting home safely after driving drunk reflects a good decision or good driving ability. Changing future decisions based on that lucky result is dangerous and unheard of (unless you are reasoning this out while drunk and obviously deluding yourself).

  Yet this is exactly what happened to that CEO. He changed his behavior based on the quality of the result rather than the quality of the decision-making process. He decided he drove better when he was drunk.
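
  The gap between decision quality and outcome quality can be made numeric. Here is a minimal Python sketch, not from the book and using an invented bet, of a clearly good decision that still produces a bad outcome four times out of ten:

    import random

    def simulate(p_win, win_amount, lose_amount, trials=100_000):
        """Estimate the average payoff of a repeated bet by simulation."""
        total = 0
        for _ in range(trials):
            total += win_amount if random.random() < p_win else -lose_amount
        return total / trials

    # An invented bet that wins 60% of the time: a good decision by
    # expected value...
    ev = 0.60 * 100 - 0.40 * 100   # = +20 per bet
    print(f"expected value per bet: {ev:+.0f}")
    print(f"simulated average:      {simulate(0.60, 100, 100):+.1f}")

    # ...yet any single trial still loses 40% of the time. Judging the
    # decision by one outcome ("resulting") condemns a good bet 4 times in 10.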

  Quick or dead: our brains weren’t built for rationality

  The irrationality displayed by Pete Carroll’s critics and the CEO should come as no surprise to anyone familiar with behavioral economics. Thanks to the work of many brilliant psychologists, economists, cognitive researchers, and neuroscientists, there are a number of excellent books that explain why humans are plagued by certain kinds of irrationality in decision-making. (If you are unaware of these books, see the Selected Bibliography and Recommendations for Further Reading.) But here’s a summary.

  To start, our brains evolved to create certainty and order. We are uncomfortable with the idea that luck plays a significant role in our lives. We recognize the existence of luck, but we resist the idea that, despite our best efforts, things might not work out the way we want. It feels better for us to imagine the world as an orderly place, where randomness does not wreak havoc and things are perfectly predictable. We evolved to see the world that way. Creating order out of chaos has been necessary for our survival.

  When our ancestors heard rustling on the savanna and a lion jumped out, making a connection between “rustling” and “lions” could save their lives on later occasions. Finding predictable connections is, literally, how our species survived. Science writer, historian, and skeptic Michael Shermer, in The Believing Brain, explains why we have historically (and prehistorically) looked for connections even if they were doubtful or false. Incorrectly interpreting rustling from the wind as an oncoming lion is called a type I error, a false positive. The consequences of such an error were much less grave than those of a type II error, a false negative. A false negative could have been fatal: hearing rustling and always assuming it’s the wind would have gotten our ancestors eaten, and we wouldn’t be here.
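
  Shermer’s point is about asymmetric costs, and a toy expected-cost calculation makes the evolutionary logic concrete. The numbers below are invented for illustration, assuming a false positive wastes a little energy while a false negative can be fatal:

    COST_FALSE_POSITIVE = 1       # fleeing from what was only wind
    COST_FALSE_NEGATIVE = 1000    # ignoring what was actually a lion

    p_lion = 0.01  # assume only 1% of rustles are really lions

    # Strategy A: always assume "lion" -> only false positives occur.
    cost_assume_lion = (1 - p_lion) * COST_FALSE_POSITIVE

    # Strategy B: always assume "wind" -> only false negatives occur.
    cost_assume_wind = p_lion * COST_FALSE_NEGATIVE

    print(f"always flee: expected cost {cost_assume_lion:.2f}")  # ~0.99
    print(f"never flee:  expected cost {cost_assume_wind:.2f}")  # 10.00

  Even when lions are rare, fleeing at every rustle is roughly ten times cheaper in expectation than ever ignoring one: that is the selection pressure toward type I errors.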

  Seeking certainty helped keep us alive all this time, but it can wreak havoc on our decisions in an uncertain world. When we work backward from results to figure out why those things happened, we are susceptible to a variety of cognitive traps, like assuming causation when there is only a correlation, or cherry-picking data to confirm the narrative we prefer. We will pound a lot of square pegs into round holes to maintain the illusion of a tight relationship between our outcomes and our decisions.

  Different brain functions compete to control our decisions. Nobel laureate and psychology professor Daniel Kahneman, in his 2011 best-selling Thinking, Fast and Slow, popularized the labels of “System 1” and “System 2.” He characterized System 1 as “fast thinking.” System 1 is what causes you to hit the brakes the instant someone jumps into the street in front of your car. It encompasses reflex, instinct, intuition, impulse, and automatic processing. System 2, “slow thinking,” is how we choose, concentrate, and expend mental energy. Kahneman explains how System 1 and System 2 are capable of dividing and conquering our decision-making but work mischief when they conflict.

  I particularly like the descriptive labels “reflexive mind” and “deliberative mind” favored by psychologist Gary Marcus. In his 2008 book, Kluge: The Haphazard Evolution of the Human Mind, he wrote, “Our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.” The first system, “the reflexive system, seems to do its thing rapidly and automatically, with or without our conscious awareness.” The second system, “the deliberative system . . . deliberates, it considers, it chews over the facts.”

  The differences between the systems are more than just labels. Automatic processing originates in the evolutionarily older parts of the brain, including the cerebellum, basal ganglia, and amygdala. Our deliberative mind operates out of the prefrontal cortex.

  Colin Camerer, a professor of behavioral economics at Caltech and leading speaker and researcher on the intersection of game theory and neuroscience, explained to me the practical folly of imagining that we could just get our deliberative minds to do more of the decision-making work. “We have this thin layer of prefrontal cortex made just for us, sitting on top of this big animal brain. Getting this thin little layer to handle more is unrealistic.” The prefrontal cortex doesn’t control most of the decisions we make every day. We can’t fundamentally get more out of that unique, thin layer of prefrontal cortex. “It’s already overtaxed,” he told me.

  These are the brains we have and they aren’t changing anytime soon.* Making more rational decisions isn’t just a matter of willpower or consciously handling more decisions in deliberative mind. Our deliberative capacity is already maxed out. We don’t have the option, once we recognize the problem, of merely shifting the work to a different part of the brain, as if you hurt your back lifting boxes and shifted to relying on your leg muscles.

  Both deliberative and reflexive mind are necessary for our survival and advancement. The big decisions about what we want to accomplish recruit the deliberative system. Most of the decisions we execute on the way to achieving those goals, however, occur in reflexive mind. The shortcuts built into the automatic processing system kept us from standing around on the savanna, debating the origin of a potentially threatening sound while its source devoured us. Those shortcuts keep us alive, routinely executing the thousands of decisions that make it possible for us to live our daily lives.

  We need shortcuts, but they come at a cost. Many decision-making missteps originate from the pressure on the reflexive system to do its job fast and automatically. No one wakes up in the morning and says, “I want to be closed-minded and dismissive of others.” But what happens when we’re focused on work and a fluff-headed coworker approaches? Our brain is already using body language and curt responses to get rid of them without flouting conventions of politeness. We don’t deliberate over this; we just do it. What if they had a useful piece of information to share? We’ve tuned them out, cut them short, and are predisposed to dismiss anything we do pick up that varies from what we already know.

  Most of what we do daily exists in automatic processing. We have habits and defaults that we rarely examine, from gripping a pencil to swerving to avoid an auto accident. The challenge is not to change the way our brains operate but to figure out how to work within the limitations of the brains we already have. Being aware of our irrational behavior and wanting to change is not enough, in the same way that knowing that you are looking at a visual illusion is not enough to make the illusion go away. Daniel Kahneman used the famous Müller-Lyer illusion to illustrate this.

  MÜLLER-LYER ILLUSION

  [Figure: three horizontal lines of equal length with differently angled arrow ends, shown with vertical measurement lines.]

  Which of these three lines is longest? Our brain sends us the signal that the second line is the longest, but you can see from adding the measurement lines that they are all the same length.

  We can measure the lines to confirm they are the same length, but we can’t make ourselves unsee the illusion.

  What we can do is look for practical work-arounds, like carrying around a ruler and knowing when to use it to check against how your brain processes what you see. It turns out that poker is a great place to find practical strategies to get the execution of our decisions to align better with our goals. Understanding how poker players think can help us deal with the decision challenges that bedevil us in our workplaces, financial lives, relationships—even in deciding whether or not passing the ball was a brilliant play.

  Two-minute warning

  Our goal is to get our reflexive minds to execute on our deliberative minds’ best intentions. Poker players don’t need to know the underlying science to understand the difficulty of reconciling the two systems. Poker players have to make multiple decisions with significant financial consequences in a compressed time frame, and do it in a way that lassoes their reflexive minds to align with their long-term goals. This makes the poker table a unique laboratory for studying decision-making.

  Every poker hand requires making at least one decision (to fold your starting cards or play them), and some hands can require up to twenty decisions. During a poker game in a casino card room, players get in about thirty hands per hour. An average hand of poker takes about two minutes to complete, including the time it takes for the dealer to gather, shuffle, and deal the cards between hands. Poker sessions typically last for several hours, with many decisions in every hand. This means a poker player makes hundreds of decisions per session, all of which take place at breakneck speed.
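
  The arithmetic behind “hundreds of decisions” is easy to check with round numbers. A back-of-the-envelope sketch, where the session length and decisions per hand are assumptions layered on the figures in the text:

    hands_per_hour = 30      # from the text: about thirty hands per hour
    session_hours = 8        # assumption: a typical long session
    decisions_per_hand = 3   # assumption: between 1 (fold) and up to 20

    hands = hands_per_hour * session_hours        # 240 hands
    decisions = hands * decisions_per_hand        # roughly 720 decisions
    print(f"{hands} hands, roughly {decisions} decisions per session")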

  The etiquette and rules of the game discourage players from slowing down the game to deliberate, even when huge financial consequences ride on the decision. If a player takes extra time, another player can “call the clock” on them. This gives the deliberating player all of seventy seconds to make up their mind. That is an eternity in poker time.

  Every hand (and therefore every decision) has immediate financial consequences. In a tournament or a high-stakes game, each decision can be worth more than the cost of an average three-bedroom house, and players have to make those decisions more quickly than we decide what to order in a restaurant. Even at lower stakes, most or all of the money a player has on the table is potentially at stake in every decision. Poker players, as a result, must become adept at in-the-moment decision-making or they won’t survive in the profession. That means finding ways to execute their best intentions (deliberated in advance) within the constraints of the speed expected at the table. Making a living at poker requires interpolating between the deliberative and reflexive systems. The best players must find ways to harmonize otherwise irresolvable conflicts.

  In addition, once the game is over, poker players must learn from that jumbled mass of decisions and outcomes, separating the luck from the skill, the signal from the noise, and guarding against resulting. That’s the only way to improve, especially when those same under-pressure situations will recur in a variety of forms.

  In poker, solving the problem of how to execute matters even more than innate talent. All the talent in the world won’t matter if a player can’t execute: that means avoiding common decision traps, learning from results in a rational way, and keeping emotions out of the process as much as possible. Players with awe-inspiring talent clean up on their best nights but go broke plenty of other nights if they haven’t confronted this challenge. The poker players who stand the test of time have a variety of talents, but what they share is the ability to execute in the face of these limitations.

  We all struggle to execute our best intentions. Poker players have the same struggle, with the added challenges of time pressure, in-your-face uncertainty, and immediate financial consequences. That makes poker a great place to find innovative approaches to overcoming this struggle. And the value of poker in understanding decision-making has been recognized in academia for a long time.

  Dr. Strangelove

  It’s hard for a scientist to become a household name. So it shouldn’t be surprising that for most people the name John von Neumann doesn’t ring a bell.

  That’s a shame because von Neumann is a hero of mine, and should be to anyone committed to making better decisions. His contributions to the science of decision-making were immense, and yet they were just a footnote in the short life of one of the greatest minds in the history of scientific thought. (And, not coincidentally, he was a poker player.)

  After a twenty-year period in which he contributed to practically every branch of mathematics, this is what he did in the last ten years of his life: played a key role on the Manhattan Project, pioneered the physics behind the hydrogen bomb, developed the first computers, figured out the optimal way to route bombers and choose targets at the end of World War II, and created the concept of mutually assured destruction (MAD), the governing geopolitical principle of survival throughout the Cold War. Even after being diagnosed with cancer in 1955 at the age of fifty-two, he served in the first civilian agency overseeing atomic research and development, attending meetings, though in great pain, in a wheelchair for as long as he was physically able.

  Despite all he accomplished in science, somehow von Neumann’s legacy in popular culture is as one of the models for the title character in Stanley Kubrick’s apocalyptic comedy, Dr. Strangelove: a heavily accented, crumpled, wheelchair-bound genius whose strategy of relying on mutually assured destruction goes awry when an insane general sends a single bomber on an unauthorized mission that could trigger the automated firing of all American and Soviet nuclear weapons.

  In addition to everything else he accomplished, John von Neumann is also the father of game theory. After finishing his day job on the Manhattan Project, he collaborated with Oskar Morgenstern to publish Theory of Games and Economic Behavior in 1944. The Boston Public Library’s list of the “100 Most Influential Books of the Century” includes Theory of Games. William Poundstone, author of a widely read book on game theory, Prisoner’s Dilemma, called it “one of the most influential and least-read books of the twentieth century.” The introduction to the sixtieth-anniversary edition pointed out how the book was instantly recognized as a classic. Initial reviews in the most prestigious academic journals heaped it with praise, like “one of the major scientific achievements of the first half of the twentieth century” and “ten more such books and the progress of economics is assured.”

  Game theory revolutionized economics, evidenced by at least eleven economics Nobel laureates connected with game theory and its decision-making implications, including John Nash (a student of von Neumann’s), whose life story was chronicled in the Oscar-winning film A Beautiful Mind. Game theory has broad applications outside economics, informing the behavioral sciences (including psychology and sociology) as well as political science, biomedical research, business, and numerous other fields.

  Game theory was succinctly defined by economist Roger Myerson (one of the game-theory Nobel laureates) as “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.” Game theory is the modern basis for the study of the bulk of our decision-making, addressing the challenges of changing conditions, hidden information, chance, and multiple people involved in the decisions. Sound familiar?

  Fortunately, you don’t need to know any more than this about game theory to understand its relevance. And the important thing for this book is that John von Neumann modeled game theory on a stripped-down version of poker.

  Poker vs. chess

  In The Ascent of Man, scientist Jacob Bronowski recounted how von Neumann described game theory during a London taxi ride. Bronowski was a chess enthusiast and asked him to clarify. “You mean, the theory of games like chess?”

  Bronowski quoted von Neumann’s response: “‘No, no,’ he said. ‘Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games,’ he said, ‘are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’”

  The decisions we make in our lives—in business, saving and spending, health and lifestyle choices, raising our children, and relationships—easily fit von Neumann’s definition of “real games.” They involve uncertainty, risk, and occasional deception, prominent elements in poker. Trouble follows when we treat life decisions as if they were chess decisions.

  Chess contains no hidden information and very little luck. The pieces are all there for both players to see. Pieces can’t randomly appear or disappear from the board or get moved from one position to another by chance. No one rolls dice after which, if the roll goes against you, your bishop is taken off the board. If you lose at a game of chess, it must be because there were better moves that you didn’t make or didn’t see. You can theoretically go back and figure out exactly where you made mistakes. If one chess player is more than just a bit better than another, it is nearly inevitable the better player will win (if they are white) or, at least, draw (if they are black). On the rare occasions when a lower-ranked grandmaster beats a Garry Kasparov, Bobby Fischer, or Magnus Carlsen, it is because the higher-ranked player made identifiable, objective mistakes, allowing the other player to capitalize.
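
  Von Neumann’s distinction can even be sketched computationally. For a perfect-information game like chess there is, in principle, an exact procedure (minimax) that returns a certain value for any position; introduce a chance node, as poker and “real games” do, and the best any procedure can return is an expected value. A minimal illustrative sketch in Python, using invented toy game trees:

    # A toy perfect-information game tree: leaves hold known values;
    # players alternate picking the best child. As with chess (in
    # principle), minimax gives "a right procedure in any position."
    def minimax(node, maximizing=True):
        if isinstance(node, (int, float)):   # leaf: exact, known value
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # With a chance node (dice, hidden cards), no move has a certain
    # value; outcomes can only be weighted by probability.
    def expectimax(node, maximizing=True):
        kind, children = node
        if kind == "leaf":
            return children
        values = [expectimax(c, not maximizing) for c in children]
        if kind == "chance":
            return sum(values) / len(values)  # assume equally likely outcomes
        return max(values) if maximizing else min(values)

    print(minimax([[3, 5], [2, 9]]))          # 3: certain with best play

    gamble = ("chance", [("leaf", 3), ("leaf", -9)])
    print(expectimax(("max", [gamble, ("leaf", 1)])))  # 1: sure thing beats EV -3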