Think Twice: Harnessing the Power of Counterintuition

  Remarkably, the least capable people often have the largest gaps between what they think they can do and what they actually achieve.9 In one study, researchers asked subjects to rate their perceived ability and likely success on a grammar test. Figure 1-1 shows that the poorest performers dramatically overstated their ability, thinking that they would be in the next-to-highest quartile. They turned in results in the bottom quartile. Furthermore, even when individuals do acknowledge that they are below average, they tend to dismiss their shortcomings as inconsequential.

  FIGURE 1-1

  The least competent are often the most confident

  Source: Justin Kruger and David Dunning, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–1134.

  The second is the illusion of optimism. Most people see their future as brighter than that of others. For example, researchers asked college students to estimate their chances of having various good and bad experiences during their lives. The students judged themselves far more likely to have good experiences than their peers, and far less likely to have bad experiences.10

  Finally, there is the illusion of control. People behave as if chance events are subject to their control. For instance, people rolling dice throw softly when they want to roll low numbers and hard for high numbers. In one study, researchers asked two groups of office workers to participate in a lottery, with a $1 cost and a $50 prize. One group was allowed to choose their lottery cards, while the other group had no choice. Luck determined the probability of winning, of course, but that’s not how the workers behaved.

  Before the drawing, one of the researchers asked the participants at what price they would be willing to sell their cards. The mean offer for the group that was allowed to choose cards was close to $9, while the offer from the group that had not chosen was less than $2. People who believe that they have some control have the perception that their odds of success are better than they actually are. People who don’t have a sense of control don’t experience the same bias.11
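  A back-of-the-envelope way to see the gap is to read each group’s asking price as an implied probability of winning. The short Python sketch below assumes a risk-neutral reading of the $50 prize; the pool size mentioned in the final comment is an illustrative assumption, since the study as described here does not report it.

    # A minimal sketch: read the mean asking price for a $1 lottery card with a
    # $50 prize as an implied estimate of the chance of winning (risk-neutral reading).
    PRIZE = 50.0

    def implied_win_probability(asking_price: float) -> float:
        """Asking price divided by the prize, interpreted as a perceived probability of winning."""
        return asking_price / PRIZE

    choosers = implied_win_probability(9.0)      # group allowed to pick its own cards
    non_choosers = implied_win_probability(2.0)  # group that was simply handed cards

    print(f"Choosers price their cards as if they had a {choosers:.0%} chance of winning")
    print(f"Non-choosers price theirs as if they had a {non_choosers:.0%} chance")
    # With, say, 50 equally likely cards in the pool (an assumed figure), the true
    # chance would be 2 percent for everyone, choice or no choice.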

  I must concede that my occupation, active money management, may be one of the best examples of the illusion of control in the professional world. Researchers have shown that, in aggregate, money managers who actively build portfolios deliver returns lower than the market indexes over time, a finding that every investment firm acknowledges.12 The reason is pretty straightforward: markets are highly competitive, and money managers charge fees that diminish returns. Markets also have a good dose of randomness, assuring that all investors see good and poor results from time to time. Despite the evidence, active money managers behave as if they can defy the odds and deliver market-beating returns. These investment firms rely on the inside view to justify their strategies and fees.

  The Odds of Success Are Poor … But Not for Me

  A vast range of professionals commonly lean on the inside view to make important decisions with predictably poor results. This is not to say these decision makers are negligent, naïve, or malicious. Encouraged by the three illusions, most believe they are making the right decision and have faith that the outcomes will be satisfactory. Now that you are aware of the distinction between the inside and outside view, you can measure your decisions and the decisions of others more carefully. Let’s look at some examples.

  Corporate mergers and acquisitions (M&A) are a multitrillion-dollar global business year in and year out. Corporations spend vast sums identifying, acquiring, and integrating companies in order to gain a strategic edge. There is little doubt that companies make deals with the best of intentions.

  The problem is that most deals don’t create value for the shareholders of the acquiring company (shareholders of the companies that are bought do fine, on average). In fact, researchers estimate that when one company buys another, the acquiring company’s stock goes down roughly two-thirds of the time.13 Given that most managers have an explicit objective of increasing value—and that their compensation is often tied to the stock price—the vigor of the M&A market appears moderately surprising. The explanation is that while most executives recognize that the overall M&A record is not good, they believe that they can beat the odds.

  “A high-quality beachfront property” is how the chief executive officer of Dow Chemical described Rohm and Haas after Dow agreed to acquire the company in July 2008. Dow was undaunted by the bidding war, which had driven the price premium it had to pay to a steep 74 percent. Instead, the CEO declared the deal “a decisive step towards establishing Dow as an earnings-growth company.”14 The enthusiasm of Dow’s management had all the hallmarks of the inside view. When the deal was announced, the stock price of Dow Chemical slumped 4 percent, putting the deal on top of a growing pile of losses suffered through acquisitions.

  Basic math explains why most companies don’t add value when they acquire another firm. The change in value for the buyer equals the difference between the increase in cash flow from combining the two companies (synergies) and the amount over the market value that the acquirer pays (premium). Companies want to get more than they pay for. So if synergies exceed the premium, the price of the buyer’s stock will rise. If not, it will fall. In this case, the value of the synergy—based on Dow’s own figures—was less than the premium it paid, justifying a drop in price. Glowing rhetoric aside, the numbers were not good for the shareholders of Dow Chemical.15
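  To make the arithmetic concrete, here is a minimal sketch in Python of the synergy-versus-premium calculation described above. The deal figures are hypothetical placeholders chosen only to illustrate the logic, not Dow’s actual numbers.

    # A minimal sketch of the buyer's math described above. The figures are
    # hypothetical placeholders, not the actual Dow/Rohm and Haas deal numbers.
    def value_change_for_buyer(synergies: float, premium_paid: float) -> float:
        """Change in the acquirer's value: synergies captured minus the premium paid over market value."""
        return synergies - premium_paid

    # Hypothetical deal: a target with a market value of 10.0 (say, in billions),
    # bought at a 74 percent premium, with expected synergies worth 5.0.
    target_market_value = 10.0
    premium_paid = 0.74 * target_market_value
    synergies = 5.0

    delta = value_change_for_buyer(synergies, premium_paid)
    print(f"Premium paid: {premium_paid:.1f}, synergies: {synergies:.1f}, "
          f"value change for buyer: {delta:+.1f}")
    # Synergies below the premium imply a loss of value for the buyer, and a lower stock price.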

  The Plural of Anecdote Is Not Evidence

  A few years ago, my father was diagnosed with late-stage cancer. After the chemotherapy failed, he was basically out of options. One day, he called seeking my advice. He had read a magazine advertisement about an alternative cancer treatment that claimed near-miraculous results and pointed to a Web site full of glowing testimonials. If he sent me the information, would I tell him what I thought?

  It didn’t take long to do the research. No well-constructed studies had shown the treatment’s efficacy, and the evidence in favor of the approach amounted to a collection of anecdotes. When my father called back, I could hear in his voice that his mind was made up. Despite the substantial cost and taxing travel, he wanted to pursue this long-shot alternative. When he asked me what I thought, I told him, “I try to think like a scientist. And based on everything I can see, this won’t work.” Hanging up the phone, I felt torn. I wanted to believe the story and go with the inside view. I wanted my father to be well again. But the scientist in me admonished me to stick with the outside view. Even considering the power of the placebo effect, hope is not a strategy.

  My father died shortly after that episode, but the experience compelled me to think about how we decide about our medical treatments. For a long time, the paternalistic model reigned in relationships between physicians and patients. Physicians would diagnose a condition and select the treatment that seemed best for the patient. Patients nowadays are more informed and generally want to take part in making decisions. Physicians and patients frequently discuss the pros and cons of various treatments and together select the best course of action. Indeed, studies show that patients involved in making those decisions are more satisfied with their medical treatment.

  But research also suggests that patients regularly make choices that are not in their best interests, often due to a failure to consider the outside view.16 In one study, researchers presented subjects with a fictitious disease and various treatments. Each subject had a choice between two treatments. The first, the control treatment, had 50 percent effectiveness. The second was one of twelve options that combined a positive, neutral, or negative anecdote about a fictional patient with four possible levels of effectiveness, ranging from 30 percent to 90 percent.

  The stories made a huge difference and swamped the base-rate data in the decision-making process. Table 1-2 tells the tale. Patients selected a treatment with 90 percent effectiveness less than 40 percent of the time when it was paired with a story about a failed patient. Conversely, nearly 80 percent of the patients selected a treatment with 30 percent effectiveness when it was matched with a success story. The results of this study were fully consistent with my father’s behavior.

  TABLE 1-2

  Are anecdotes more important than antidotes?

  Percent of subjects choosing the treatment

  BASE RATE            90%   70%   50%   30%
  Positive anecdote     88    92    93    78
  Neutral anecdote      81    81    69    29
  Negative anecdote     39    43    15     7

  Source: Angela K. Freymuth and George F. Ronan, “Modeling Patient Decision-Making: The Role of Base-Rate and Anecdotal Information,” Journal of Clinical Psychology in Medical Settings 11, no. 3 (2004): 211–216.

  While it’s good for patients to be informed and engaged, they run the risk of being influenced by sources that rely predominantly on anecdotes, including friends, family, the Internet, and mass media. Doctors might find anecdotes to be an effective way of getting their points across to patients. But doctors and patients should be careful not to lose sight of the scientific evidence.17
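  A short sketch makes the cost of following the story instead of the statistics explicit. It uses the base rates from the study above; the cohort size of one hundred is an arbitrary illustration.

    # A minimal sketch of why the base rate, not the anecdote, drives the expected outcome.
    # Effectiveness figures come from the study described above; the cohort size is arbitrary.
    cohort = 100

    def expected_successes(effectiveness: float, patients: int) -> float:
        """Expected number of patients helped, given a treatment's base rate of effectiveness."""
        return effectiveness * patients

    # The anecdote-driven choice: a 30 percent treatment with a success story,
    # versus the 90 percent treatment that came with a failure story.
    story_pick = expected_successes(0.30, cohort)
    base_rate_pick = expected_successes(0.90, cohort)

    print(f"30% treatment (chosen for its success story): about {story_pick:.0f} of {cohort} helped")
    print(f"90% treatment (passed over for its failure story): about {base_rate_pick:.0f} of {cohort} helped")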

  On Time and Within Budget—Maybe Next Time

  You will be familiar with this example if you have ever been part of a project, whether it involved renovating a house, introducing a new product, or meeting a work deadline. People find it hard to estimate how long a job will take and how much it will cost. When they are wrong, they usually underestimate the time and expense. Psychologists call this the planning fallacy. Here again, the inside view takes over: most people simply imagine how they will complete the task. Only about one-quarter of people incorporate base-rate data, from their own experience or from that of others, when laying out planning timetables.

  Roger Buehler, a professor of psychology at Wilfrid Laurier University, did an experiment that illustrates the point. Buehler and his collaborators asked college students to estimate when they would complete a school assignment at three levels of confidence: 50, 75, and 99 percent. For example, a subject might say that there was a 50 percent chance that he would finish the project by next Monday, a 75 percent chance he’d be done by Wednesday, and a 99 percent chance by Friday.

  Figure 1-2 shows how accurate the estimates were: when the deadline arrived for which the students had given themselves a 50 percent chance of finishing, only 13 percent actually turned in their work. At the point when the students thought there was a 75 percent chance they’d be done, just 19 percent had completed the project. All the students were virtually sure they’d be done by the final date. But only 45 percent turned out to be right. As Buehler and his fellow researchers note, “Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”18
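  The gap between confidence and delivery can be laid out directly. The short Python sketch below tabulates the figures reported above for Buehler’s students; nothing in it goes beyond those three data points.

    # A minimal sketch comparing stated confidence with actual completion rates,
    # using the figures reported for Buehler's students.
    calibration = [
        # (stated probability of finishing by a given date, share who actually finished by then)
        (0.50, 0.13),
        (0.75, 0.19),
        (0.99, 0.45),
    ]

    for stated, actual in calibration:
        gap = stated - actual
        print(f"Stated confidence {stated:.0%}: {actual:.0%} finished on time "
              f"(overconfidence gap of {gap:.0%})")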

  This work has an interesting twist. While people are notoriously poor at guessing when they’ll finish their own projects, they’re pretty good at guessing about other people. In fact, the planning fallacy embodies a broader principle. When people are forced to look at similar situations and see the frequency of success, they tend to predict more accurately. If you want to know how something is going to turn out for you, look at how it turned out for others in the same situation. Daniel Gilbert, a psychologist at Harvard University, ponders why people don’t rely more on the outside view: “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t.” The reason is that most people think of themselves as different from, and better than, those around them.19

  FIGURE 1-2

  There’s a huge gap between when people believe they will complete a task and when they actually do

  Source: Roger Buehler, Dale Griffin, and Michael Ross, “It’s About Time: Optimistic Predictions in Work and Love,” in European Review of Social Psychology, vol. 6, ed. Wolfgang Stroebe and Miles Hewstone (Chichester, UK: John Wiley & Sons, 1995), 1–32.

  Now that you are aware of how the inside-outside view influences the way people make decisions, you’ll see it everywhere. In the business world, it will show up as unwarranted optimism for how long it takes to develop a new product, the chance that a merger deal succeeds, and the likelihood a portfolio of stocks will do better than the market. In your personal life, you’ll see it in the parents who believe their seven-year-old is destined for a college sports scholarship, debates about what impact video games have on kids, and the time and cost it will take to remodel a kitchen.

  Even people who should know better forget to consult the outside view. Years ago, Daniel Kahneman assembled a group to write a curriculum to teach judgment and decision making to high school students. Kahneman’s group included a mix of experienced and inexperienced teachers as well as the dean of the school of education. After about a year, they had written a couple of chapters for the textbook and had developed some sample lessons.

  During one of their Friday afternoon sessions, the educators discussed how to elicit information from groups and how to think about the future. They knew that the best way to do this was for each person to express his or her view independently and to combine the views into a consensus. Kahneman decided to make the exercise tangible by asking each member to estimate the date the group would deliver a draft of the textbook to the Ministry of Education.

  Kahneman found that the estimates clustered around two years and that everyone, including the dean, estimated between eighteen and thirty months. It then occurred to Kahneman that the dean had been involved in similar projects. When asked, the dean said he knew of a number of similar groups, including ones that had worked on the biology and mathematics curriculum. So Kahneman asked him the obvious question: “How long did it take them to finish?”

  The dean blushed and then answered that 40 percent of the groups that had started similar programs had never finished, and that none of the groups completed it in less than seven years. Seeing only one way to reconcile the dean’s optimistic answer about this group with his knowledge of the shortcomings of the other groups, Kahneman asked how good this group was compared with the others. After a pause, the dean responded, “Below average, but not by much.”20

  How to Incorporate the Outside View into Your Decisions

  Kahneman and Amos Tversky, a psychologist who had a long collaboration with Kahneman, published a multistep process to help you use the outside view.21 I have distilled their five steps into four and have added some thoughts. Here are the four steps:

  1. Select a reference class. Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common—even if they are not common for you—identifying a reference class is straightforward. Mind the details. Take the example of mergers and acquisitions. We know that the shareholders of acquiring companies lose money in most mergers and acquisitions. But a closer look at the data reveals that the market responds more favorably to cash deals and those done at small premiums than to deals financed with stock at large premiums. So companies can improve their chances of making money from an acquisition by knowing what deals tend to succeed.

  2. Assess the distribution of outcomes. Once you have a reference class, take a close look at the rate of success and failure. For example, fewer than one of six horses in Big Brown’s position won the Triple Crown. Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.

  In his book Full House, Stephen Jay Gould, who was a paleontologist at Harvard University, showed the importance of knowing the distribution of outcomes after his doctor diagnosed him with mesothelioma. His doctor explained that half of the people diagnosed with the rare cancer lived only eight months (more technically, the median mortality was eight months), seemingly a death sentence. But Gould soon realized that while half the patients died within eight months, the other half went on to live much longer. Because of his relatively young age at diagnosis, there was a good chance he would be one of the fortunate ones. Gould wrote, “I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances—substantial time.” Gould lived another twenty years.22
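  Gould’s insight is easy to miss, so here is a minimal sketch of it. The survival times below are invented purely to illustrate a right-skewed distribution with an eight-month median; they are not real mesothelioma data.

    # A minimal sketch of Gould's point: a median of eight months says nothing about
    # the length of the right tail. These survival times are invented for illustration only.
    from statistics import mean, median

    survival_months = [2, 3, 5, 6, 8, 8, 12, 36, 120, 240]

    print(f"Median survival: {median(survival_months)} months")  # 8.0: half die within eight months
    print(f"Mean survival: {mean(survival_months)} months")      # 44: pulled up by the long-lived half
    long_lived = sum(1 for m in survival_months if m >= 24)
    print(f"Surviving two years or more: {long_lived} of {len(survival_months)} patients")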

  Two other issues are worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading. This is an important issue in personal finance, where advisers make asset allocation recommendations for their clients based on historical statistics. Because the statistical properties of markets shift over time, an investor can end up with the wrong mix of assets.