  Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, it is harder to draw on past experiences. Businesses driven by hit products, like movies or books, are good examples. Producers and publishers have a notoriously difficult time anticipating results, because success and failure are based largely on social influence, an inherently unpredictable phenomenon.

  3. Make a prediction. With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.

  Sometimes when you find the right reference class, you see that the success rate is not very high. So to improve your chance of success, you have to do something different from everyone else. One example is the play calling of National Football League coaches in critical game situations, including fourth downs, kickoffs, and two-point conversion attempts. As in many other sports, the conventional wisdom about how to handle these situations is handed down from one generation of coaches to the next. But this stale decision-making process means scoring fewer points and winning fewer games.

  Chuck Bower, an astrophysicist at Indiana University, and Frank Frigo, a former world backgammon champion, created a computer program called Zeus to assess the play-calling decisions of pro football coaches. Zeus uses the same modeling techniques that have succeeded in backgammon and chess programs, and the creators loaded it with statistics and the behavioral traits of coaches. Bower and Frigo found that only four teams in the thirty-two-team league made crucial decisions that agreed with Zeus over one-half of the time, and that nine teams made decisions that concurred less than one-quarter of the time. Zeus estimates that these poor decisions can cost a team more than one victory per year, a large toll in a sixteen-game season.

  Most coaches stick to the conventional wisdom, because that is what they have learned and they are averse to the perceived negative consequences of breaking from past practice. But Zeus shows that the outside view can lead to more wins for the coach willing to break with tradition. This is an opportunity for coaches willing to think twice.23

  4. Assess the reliability of your prediction and fine-tune. How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect are clear, you can have more confidence in your forecast.
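
  One common way to make “adjust toward the mean” concrete (a sketch of the standard shrinkage idea, not a formula given in the text) is to weight your own estimate by how reliable predictions in the domain have been and let the reference-class average supply the rest. The function name and numbers below are hypothetical.

```python
def adjust_toward_mean(estimate, reference_mean, reliability):
    """Shrink a forecast toward the reference-class mean.

    reliability is a judgment between 0 (forecasts in this domain have
    no track record of success) and 1 (forecasts are highly reliable).
    """
    return reference_mean + reliability * (estimate - reference_mean)

# A publisher's inside view says a new book will sell 200,000 copies; the
# reference class of comparable titles averages 10,000 copies, and forecasts
# in this category have been only weakly predictive (hypothetical figures).
print(adjust_toward_mean(200_000, 10_000, 0.2))  # 48,000.0
```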

  The main lesson from the inside-outside view is that while decision makers tend to dwell on uniqueness, the best decisions often derive from sameness. Don’t get me wrong. I’m not advocating for bland, unimaginative, imitative, or risk-free decisions. I am saying there is a wealth of useful information based on situations that are similar to the ones that we face every day. We ignore that information to our own detriment. Paying attention to that wealth of information will help you make more effective decisions. Remember this discussion the next time a contender for the Triple Crown goes off at highly optimistic odds.

  CHAPTER TWO

  Open to Options

  How Your Telephone Number Can Influence Your Decisions

  DANIEL KAHNEMAN’S significant contributions to our understanding of how people think and act should be a staple of any professional’s training. During one meeting I had with him, his comment about the anchoring-and-adjustment heuristic really stuck with me. Here’s an example of how this heuristic works, based on an exercise I did with my students at Columbia Business School. I gave them a form requesting two numbers.1 If you have never done this exercise, take a moment and jot down your responses.

  1. The last four digits of your phone number:

  _____________________

  2. An estimate of the number of doctors in New York City’s Manhattan borough:

  _____________________

  The bias associated with the anchoring-and-adjustment heuristic predicts that the phone numbers will influence the doctor estimates. In my class, the students with phone numbers ending in 0000–2999 guessed an average of 16,531, while those with numbers ending in 7000–9999 reckoned 29,143, roughly 76 percent higher. Kahneman reported a similar pattern when he administered the test to his students. (As best as I can tell, there are approximately 20,000 doctors in Manhattan.)

  Of course, individuals know that the last four digits of their phone number have nothing to do with the population of doctors in Manhattan, but the mere act of contemplating an arbitrary number prior to making an estimate unleashes a powerful bias. It is also safe to say that the students would have given different estimates had I reversed the order of the questions.

  In deciding, people often start with a specific piece of information or trait (the anchor) and adjust as necessary to come up with a final answer. The bias is that people tend to make insufficient adjustments from the anchor, leading to off-the-mark responses. The final answer systematically ends up too close to the anchor, whether or not the anchor is sensible.2

  But the point that Kahneman emphasized was that even if you explain anchoring to a group, it does not sink in. You can run an experiment right after a discussion of the concept and still see the bias in action. The main reason, psychologists believe, is that anchoring is predominantly subconscious.

  Mental Models Rule Your World

  Anchoring is symptomatic of this chapter’s broader decision mistake: an insufficient consideration of alternatives. To be blunter, you can call it tunnel vision. Failure to entertain options or possibilities can lead to dire consequences, from a missed medical diagnosis to unwarranted confidence in a financial model. So what’s going on in our heads that causes us to focus too narrowly?

  One of my favorite explanations comes from Philip Johnson-Laird, a psychologist known for his theory of mental models. Johnson-Laird argues that when we reason, “We use perception, the meanings of words and sentences, the significance of the propositions that they express, and our knowledge. Indeed, we use everything we’ve got to think of possibilities, and we represent each possibility in a mental model of the world.”3

  A few facets of Johnson-Laird’s description bear emphasis. First, people reason from a set of premises and only consider compatible possibilities. As a result, people fail to consider what they believe is false. Consider a hand of cards, about which only one of the following three statements is true:

  • It contains a king, an ace, or both.

  • It contains a queen, an ace, or both.

  • It contains a jack, a ten, or both.

  Given these statements, can the hand contain an ace?

  Johnson-Laird has presented this problem to many bright people, and most believe the answer is yes. But that is wrong. If there were an ace in the hand, the first two statements would be true, violating the condition that only one of the statements is true.4 You can think of the premises and their alternatives as a beam of light that shines only on perceived possible outcomes, leaving lots of viable alternatives in the dark.
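
  If you want to check this conclusion mechanically, here is a small brute-force sketch (my illustration, not part of Johnson-Laird’s exercise) that enumerates every possible combination of the named card ranks and keeps only the hands for which exactly one statement is true; none of them contains an ace.

```python
from itertools import combinations

cards = ["king", "ace", "queen", "jack", "ten"]

def statements(hand):
    return [
        "king" in hand or "ace" in hand,    # statement 1
        "queen" in hand or "ace" in hand,   # statement 2
        "jack" in hand or "ten" in hand,    # statement 3
    ]

# Keep only the hands consistent with exactly one true statement.
consistent = [
    hand
    for r in range(len(cards) + 1)
    for hand in combinations(cards, r)
    if sum(statements(hand)) == 1
]

# Does any consistent hand contain an ace?
print(any("ace" in hand for hand in consistent))  # False
```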

  Second, and related, is the point that how a person sees a problem—how it’s described to him, how he feels about it, and his individual knowledge—shapes how he reasons about it. Since we are poor logicians, a problem’s presentation strongly influences how we choose. Prospect theory’s findings over the last four decades, including common heuristics and associated biases, substantiate this point. We’ll see a number of these biases in our tunnel-vision mistakes.

  Last, a mental model is an internal representation of an external reality, an incomplete representation that trades detail for speed.5 Once formed, mental models replace more cumbersome reasoning processes, but are only as good as their ability to match reality. An ill-suited mental model will lead to a decision-making fiasco.6

  Our minds are just trying to get an answer—the proper diagnosis for a sick patient, the right price for an acquisition, what will happen next in a novel—and have routines to get the answer quickly and often efficiently. But getting the right solution expeditiously means homing in on what seems to us to be the most likely outcomes and leaving out a lot of what could be. For most of our evolutionary past, this worked well. But the causal patterns that worked in a natural environment tens of thousands of years ago often do not hold in today’s technological world. So when the stakes are sufficiently high, we must slow down and swing the light over the full range of possible outcomes.

  Content with the Plausible

  Tunnel vision is the source of a slew of mistakes, and we need only look as far as the anchoring-and-adjustment heuristic, and its related bias, to see the first. Why don’t people make sufficient adjustments from an anchor to come up with accurate estimates? Work by Nicholas Epley, a psychologist at the University of Chicago Business School, and Thomas Gilovich, a psychologist at Cornell University, suggests that we start with an anchor and then move toward the right answer. But most of us stop adjusting once we reach a value we deem plausible or acceptable.

  In one experiment, the psychologists asked subjects to answer six questions that had natural anchors. For instance, they asked the participants to estimate the freezing point (degrees Fahrenheit) for vodka, where the natural anchor is thirty-two degrees, the freezing point for water. They then asked the subjects for a range specifying their highest and lowest plausible estimates. For the vodka question, the mean estimate was twelve degrees, and the range of values was from twenty-three to minus seven degrees (vodka freezes at minus twenty degrees). According to Epley and Gilovich, these results suggest that the adjustment from the anchor “entails a search for a plausible estimate” and that the subjects terminate the adjustment once they reach what they believe is a reasonable answer.7

  You can also see the consequence of anchoring and adjustment in negotiation. Gregory Northcraft and Margaret Neale, psychologists who study negotiation tactics, presented a group of real estate agents with identical background material on a specific house—its size, amenities, and recent comparable-house transactions. To measure the anchoring effect, the researchers gave some agents different listing prices for the same house. Sure enough, the agents who saw a high listing price appraised the house for substantially more than those who saw a low price (see figure 2–1). Notable, too, is that less than 20 percent of the agents reported using the listing price in their appraisals, insisting instead that their assessments were independent. This bias is pernicious in large part because we are so unaware of it.8

  FIGURE 2-1

  Real estate brokers subconsciously anchor on given values

  Source: Adapted from Gregory B. Northcraft and Margaret A. Neale, “Experts, Amateurs, and Real Estate: An Anchoring-and-Adjustment Perspective on Property Pricing Decisions,” Organizational Behavior and Human Decision Processes 39, no. 1 (1987): 84–97.

  Anchoring is also relevant in high-stakes political and business negotiations. When information is limited or the situation is ambiguous, anchors can strongly influence the outcome; studies show, for instance, that the party making the first offer often benefits from a strong anchoring effect. If you are sitting on the other side of the negotiating table, developing and recognizing a full range of outcomes is the best protection against the anchoring effect.9

  Judging Books by Their Covers

  In his book How Doctors Think, Dr. Jerome Groopman describes a trim and fit forest ranger who found himself in a hospital emergency room with chest pains. The doctor on duty listened carefully to the ranger’s symptoms, reviewed a checklist for heart disease, and ordered some standard tests. All came out fine. The results, along with the man’s healthy look, prompted the doctor to assure the patient there was an “about zero” chance his heart was the source of the problem.

  The next day, the forest ranger came back in with a heart attack. Fortunately, he survived. But the doctor who had seen him the previous day was beside himself. On reflection, the doctor realized he had fallen prey to a bias that arises from the representativeness heuristic. This bias, the second of our decision mistakes, says we often rush to conclusions based on representative categories in our mind, neglecting possible alternatives. The well-worn aphorism “don’t judge a book by its cover” speaks to this bias, encouraging us to remain open to options even as our mind seeks to shut them down. In this case, the doctor’s error was to rule out a heart attack because the patient appeared to be a model of health and fitness. “You have to be prepared in your mind for the atypical and not so quickly reassure yourself, and the patient, that everything is okay,” the doctor later mused.10

  The availability heuristic, judging the frequency or probability of an event based on what is readily available in memory, poses a related challenge. We tend to give too much weight to the probability of something if we have seen it recently or if it is vivid in our mind. Groopman tells of a woman who came to the hospital suffering from a low-grade fever and a high respiratory rate. Her community had recently experienced a wave of viral pneumonia, creating mental availability for the physician. He diagnosed her as having a subclinical case, suggesting she had the pneumonia but that the symptoms had yet to surface. Instead, it turned out she had a case of aspirin toxicity. She had taken too many aspirin in an attempt to treat a cold, and her fever and respiratory rate were classic symptoms. But the doctor overlooked them because of the vividness of the viral pneumonia. Like representativeness, availability encourages us to ignore alternatives.11

  Think carefully about how the representativeness and availability heuristics may intrude on your decisions. Have you ever judged someone solely based on how he or she looks? Have you ever feared flying more after hearing of a plane crash? If the answer is yes, you are a normal human. But you also risk misunderstanding, or missing altogether, plausible outcomes.

  Is the Trend Your Friend?

  Let’s play a little game. Look at a random sequence of squares and circles (figure 2–2). What shape do you expect next?

  FIGURE 2-2

  Source: Adapted from Jason Zweig, Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich (New York: Simon & Schuster, 2007).

  The minds of most people strongly suggest the same answer: another square. This leads us to the third common mistake, a tendency to extrapolate inappropriately from past results. Scott Huettel, a psychologist and neuroscientist at Duke University, and his colleagues confirmed this finding when they placed subjects in a brain-reading functional magnetic resonance imaging (fMRI) machine and showed them random patterns of circles and squares. After one symbol, people did not know what to expect next. But after two in a row, they automatically expected a third, even though they knew the series was random. Two may not be a trend, but our brains sure think so.12

  This mistake is tough because our minds have a deep-seated desire to make out patterns and our prediction process is very rapid (the researchers call it “automatic and obligatory”). This pattern recognition ability evolved through the millennia and was profoundly useful for most of human existence. “In a natural environment, almost all patterns are predictive,” says Huettel. “For example, when you hear a crash behind you, it’s not something artificial; it means that a branch is falling, and you need to get out of the way. So, we evolved to look for those patterns. But these causal relationships don’t necessarily hold in the technological world that can produce irregularities, and in which we look for patterns where none exist.”13

  Extrapolation puts a finer point on a number of other mistakes as well. We can restate the problem of induction as inappropriately projecting into the future, based on a limited number of observations. Failure to reflect reversion to the mean is the result of extrapolating earlier performance into the future without giving proper weight to the role of chance. Models based on past results forecast in the belief that the future will be characteristically similar to history. In each case, our minds—or the models our minds construct—anticipate without giving suitable consideration to other possibilities.

  When in Doubt, Rationalize Your Decision

  Cognitive dissonance is one facet of our next mistake, the rigidity that comes with the innate human desire to be internally and externally consistent.14 Cognitive dissonance, a theory developed in the 1950s by Leon Festinger, a social psychologist, arises when “a person holds two cognitions—ideas, attitudes, beliefs, opinions—that are psychologically inconsistent.”15 The dissonance causes mental discomfort that our minds seek to reduce.

  Many times we resolve the discomfort by figuring out how to justify our actions. Consider, for example, the man who recognizes that wearing a seat belt improves safety but who doesn’t buckle up. To reduce the dissonance, he may rationalize the decision by noting that the seat belt is uncomfortable or by claiming that his above-average driving ability will keep him out of harm’s way. A little self-delusion is OK for most of us, because the stakes are generally low and it lets us sleep at night.