'A masterful introduction to the state of the art in managerial decision-making. Surprisingly, it is also a pleasure to read' – Daniel Kahneman, author of Thinking, Fast and Slow

A lively, research-based tour of nine common decision-making traps – and practical tools for avoiding them – from a professor of strategic thinking

We make decisions all the time. It's so natural that we hardly stop to think about it. Yet even the smartest and most experienced among us make frequent and predictable errors. So, what makes a good decision? Should we trust our intuitions, and if so, when? How can we avoid being tripped up by cognitive biases when we are not even aware of them? You're About to Make a Terrible Mistake! offers clear and practical advice that distils the latest developments in behavioural economics and cognitive psychology into actionable tools for making clever, effective decisions in business and beyond.


Praise for

YOU’RE ABOUT TO MAKE

A TERRIBLE MISTAKE!

‘Finally! Actionable advice for leaders based on decades of decision science. Succinct, accurate, and even-handed’

Angela Duckworth, author of Grit

‘An elegant synthesis of the best scientific work on human judgment that will be useful whether your aspirations in life are modest – become a smarter consumer of news – or whether they are grandiose – run a large company or country’

Philip E. Tetlock, co-author of Superforecasting

‘The best, funniest, most useful guide to cognitive bias in business. If you make decisions, you need to read this book’

Safi Bahcall, author of Loonshots

‘Olivier Sibony has that rare and magical ability to take complex concepts and package them in a fast-paced, easy-to-understand narrative. This book should be required reading for anyone looking to improve their decision process’

Annie Duke, author of Thinking in Bets

SWIFT PRESS

First published in the United States of America by Little, Brown Spark 2020

First published in Great Britain by Swift Press 2020

Copyright © Olivier Sibony, 2019

Translation copyright © Kate Deimling, 2020

Originally published in France by Débats Public.

Translated from the 2019 edition, published by Flammarion.

The right of Olivier Sibony to be identified as the Author of this Work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

A CIP catalogue record for this book is available from the British Library

ISBN: 978-1-80075-000-5

eISBN: 978-1-80075-001-2

For Anne-Lise

CONTENTS

Introduction

You’re About to Make a Terrible Mistake (Unless You Read On)

PART 1: THE NINE TRAPS

1. “Too Good Not to Be True”

The Storytelling Trap

2. “Steve Jobs Was Such a Genius”

The Imitation Trap

3. “I’ve Seen This Before”

The Intuition Trap

4. “Just Do It”

The Overconfidence Trap

5. “Why Rock the Boat?”

The Inertia Trap

6. “I Want You to Take Risks”

The Risk Perception Trap

7. “The Long Term Is a Long Way Off”

The Time Horizon Trap

8. “Everyone’s Doing It”

The Groupthink Trap

9. “I’m Not Thinking of Myself, of Course”

The Conflict of Interest Trap

PART 2: DECIDING HOW TO DECIDE

10. Human, All Too Human

Are Cognitive Biases the Root of All Evil?

11. Lose a Battle, Win the War

Can We Overcome Our Own Biases?

12. When Failure Is Not an Option

Collaboration Plus Process

13. A Good Decision Is a Decision Made the Right Way

Is Paul the Psychic Octopus a Good Decision Maker?

PART 3: THE DECISION ARCHITECT

14. Dialogue

Confronting Viewpoints

15. Divergence

Seeing Things from a Different Angle

16. Dynamics

Changing Your Decision-Making Processes and Culture

Conclusion

You’re About to Make Excellent Decisions

Acknowledgments

Appendix 1: Five Families of Biases

Appendix 2: 40 Techniques for Better Decisions

Notes

INTRODUCTION

You’re About to Make a Terrible Mistake (Unless You Read On)

Unless you’ve been living in a cave for at least a decade, you have heard about cognitive biases. Particularly since the publication of Daniel Kahneman’s Thinking, Fast and Slow, terms like “overconfidence,” “confirmation bias,” “status quo bias,” and “anchoring” have become part of daily conversations at the water cooler. Thanks to decades of research by cognitive psychologists and the behavioral economists they inspired, we are now familiar with a simple but crucially important idea: when we make judgments and choices—about what to buy, how to save, and so on—we are not always “rational.” Or at least not “rational” in the narrow sense of economic theory, in which our decisions are supposed to optimize for some preexisting set of goals.

THE RATIONALITY OF BUSINESS DECISIONS

This is true, too, of business decisions. Just type “biases in business decisions” into your favorite search engine, and many millions of articles will confirm what experienced managers know: when executives make business decisions (even important strategic ones), their thought process does not remotely resemble the rational, thoughtful, analytical approach described in business textbooks.

My own discovery of this fact took place long before I’d heard of behavioral science, when I was a young business analyst freshly hired by McKinsey & Company. The first client I was assigned to work with was a midsize European company contemplating a large acquisition in the United States. The deal, if it went through, would more than double the size of the company and transform it into a global group. Yet after we spent several months researching and analyzing the opportunity, the answer was clear: the acquisition did not make sense. The strategic and operational benefits expected from the merger were limited. The integration would be challenging. Most importantly, the numbers did not add up: the price our client would have to pay was far too high for the acquisition to have any chance of creating value for his shareholders.

We presented our findings to the CEO. He did not disagree with any of our assumptions. Yet he dismissed our conclusion with an argument we had not anticipated. By modeling the acquisition price in U.S. dollars, he explained, we had missed a key consideration. Unlike us, when he thought about the deal, he converted all the numbers into his home currency. Furthermore, he was certain that the U.S. dollar would soon appreciate against that currency. When converted, the dollar-based cash flows from the newly acquired American company would be higher, and easily justify the acquisition price. The CEO was so sure of this that he planned to finance the acquisition with debt denominated in his home currency.

I was incredulous. Like everyone else in the room (including the CEO himself), I knew that this was the financial equivalent of committing one crime to cover up another. Finance 101 had taught me that CEOs are not foreign exchange traders, and that shareholders do not expect companies to take bets on currencies on their behalf. And this was a gamble: no one could know for sure which way exchange rates would move in the future. If, instead of appreciating, the dollar kept falling, the deal would go from bad to horrible. That was why, as a matter of policy, a large dollar-based asset should be evaluated (and financed) in dollars.
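For readers who want to see the arithmetic of that wager, here is a minimal sketch in Python with entirely hypothetical numbers; the price, cash flows, discount rate, and exchange-rate path are illustrative assumptions, not figures from the actual deal. The same dollar cash flows yield a negative net present value when the deal is valued in dollars, and a positive one once an assumed appreciation of the dollar is built into the conversion. That sign flip is the whole bet.

# Hypothetical illustration of the currency wager described above.
# Every number here is invented for the sketch; none comes from the actual deal.

def npv(cash_flows, rate):
    """Net present value of cash flows received at the end of years 1, 2, ..., n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

price_usd = 1000.0                # acquisition price, millions of dollars (hypothetical)
cash_flows_usd = [90.0] * 15      # expected dollar cash flows over 15 years (hypothetical)
discount_rate = 0.08              # required return, assumed equal in both currencies

# View 1: value the deal in dollars, as the analysts did.
npv_in_usd = npv(cash_flows_usd, discount_rate) - price_usd

# View 2: the CEO's view. Convert everything into his home currency,
# assuming the dollar appreciates 5 percent a year against it.
spot = 1.0                        # home-currency units per dollar today
fx_path = [spot * 1.05 ** t for t in range(1, 16)]
cash_flows_home = [cf * fx for cf, fx in zip(cash_flows_usd, fx_path)]
npv_in_home = npv(cash_flows_home, discount_rate) - price_usd * spot

print(f"NPV valued in dollars:            {npv_in_usd:7.0f}")   # negative: the deal destroys value
print(f"NPV with assumed dollar tailwind: {npv_in_home:7.0f}")  # positive: the FX assumption 'justifies' the price

If the dollar instead depreciates at the same pace, the home-currency value falls well below even the dollar-based estimate. Adding the currency bet does not make the deal safer; it widens the range of outcomes in both directions.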

To a starry-eyed twentysomething, this was a shock. I had expected thorough analysis, careful consideration of multiple options, thoughtful debate, quantification of various scenarios. And here I was, watching a CEO who basically trusted his gut instinct and not much else knowingly take an unjustifiable risk.

Of course, many of my colleagues were more jaded. Their interpretations divided them into two camps. Most just shrugged and explained (albeit in more tactful terms) that the CEO was a raving lunatic. Wait and see, they said—he won’t last. The others offered a diametrically opposite explanation: the man was a genius who could formulate strategic visions and perceive opportunities well beyond what we consultants were able to comprehend. His disregard for our myopic, bean-counting analytics was proof of his superior insight. Wait and see, they said—he’ll be proven right.

I did not find either explanation particularly satisfactory. If he was crazy, why was he the CEO? And if he was a genius, gifted with powers of strategic divination, why did he need to ask us to apply our inferior methods, only to ignore our conclusions?

THE REVERSE ANNA KARENINA PRINCIPLE OF STRATEGY

The passage of time brought some answers. This CEO was certainly not a madman: before this deal, and even more so after it, he was regarded in his home country as one of the most respected business leaders of his generation.

He was also astoundingly successful. The acquisition turned out to be a great success (yes, the dollar did rise). Several big bets later, many of them equally risky, he had turned a near-bankrupt provincial company into a global industry leader. “See,” some of my colleagues might have said, “he was a genius after all!”

If only it were that simple. During the following twenty-five years, as a consultant to CEOs and senior executives in multinational companies, I had a chance to observe many more strategic decisions like this one. I soon realized that the sharp contrast between the textbook decision-making process and the reality of how choices were made was not a quirk of my first client. It was the norm.

But another, equally important conclusion struck me too: although some of these unorthodox decisions had a happy ending, most did not. Errors in strategic decision-making are not exceptional at all. If you doubt it, just ask the people who observe them most closely: in a survey of some two thousand executives, only 28 percent said their company “generally” makes good strategic decisions. The majority (60 percent) felt bad decisions were just as frequent as good ones.

Indeed, our firm regularly produced voluminous reports warning business leaders against the risks of bad decisions. Along with other consulting firms and an army of academics, we felt compelled to blow the whistle on specific types of strategic decisions that proved especially perilous. But apparently no one listened. Watch out for overpaid acquisitions, we told executives—who immediately proceeded, like my first client, to buy bigger and more expensive companies, quite often destroying shareholder value in the process. Budget your investments carefully, we suggested, as plans are usually far too optimistic—and optimistic they remained. Don’t let yourself be pulled into a price war, we wrote—but by the time our clients paid attention to this advice, they were deep in the trenches, under heavy fire. Don’t let competitors “disrupt” you with new technologies, we warned—only to watch incumbent upon incumbent go out of business. Learn to cut your losses and stop reinvesting in a failing venture, we advised—and this advice, too, fell on deaf ears.

For each of these mistakes, there were, of course, a few specific examples, presented as cautionary tales. These were striking and memorable, even entertaining for readers given to Schadenfreude. (You will find more such stories—thirty-five of them, to be precise—in this book.)

But the individual stories were not the point. The point was that, when it comes to certain types of decisions, failures are much more frequent than successes. Of course, this is not an absolute, hard-and-fast rule: some acquirers did manage to create value through acquisitions, some incumbents did revitalize their core business before being disrupted, and so on. These successes gave some hope to those facing the same situation. But statistically speaking, they were the exception. Failure was the rule.

In short, when our clients made strategic decisions that turned out great, it was sometimes because they broke the rules and acted unconventionally, as my first client had. But when they failed, they rarely did so in a new, creative way. Instead, they made precisely the same poor decisions that others had made before them. It was just the reverse of Tolstoy’s famous observation about families in Anna Karenina: as scholars of strategic differentiation have long theorized, every successful strategy is successful in its own way. But all strategic failures are alike.

THE BAD MAN THEORY OF FAILURE—AND WHY IT FAILS

The standard explanation for these failures remains the one most of my colleagues had offered on my first assignment: blame the bad, the incompetent, the crazy CEOs! Whenever a company runs into trouble, the stories we read in the business press put the blame squarely on the company’s leadership. Books recounting these failures generally list the “inexcusable mistakes” of the people in charge and attribute them without hesitation to character flaws. The usual ones are straight out of the eight-hundred-year-old list of the seven deadly sins. Sloth (under the more business-friendly name “complacency”), pride (usually called “hubris”), and of course greed (no translation necessary) top the list. Wrath, envy, and even gluttony make cameo appearances.* That just leaves lust . . . well, for that, read the news.

Just as we lionize the leaders of successful companies (the Great Man Theory of leadership and success), we seem to unquestioningly embrace the Bad Man Theory of Failure. Good CEOs produce good results; bad results are the fault of bad CEOs. This explanation feels morally satisfying and provides justification for holding CEOs accountable (including, importantly, when they are generously compensated for successes). It also seems, at least superficially, logical: if CEOs, despite being copiously forewarned, repeat the mistakes that others have made, there must be something seriously wrong with them.

However, it does not require much digging to see the problems with this theory. First, defining good decisions and good decision makers by the results they will eventually achieve is circular, and therefore useless. If you are making decisions (or selecting people who will make them), you need a way to know what works (or who is good) before the results are in. In practice, as I learned from the divided opinions of my colleagues about my first client, there is no sure way, at the time a decision is made, of telling who is good and who isn’t. Even knowing whether an individual decision is “good” or “bad” would, by this definition of “good,” require an ability to read the future.

Second, if all companies tend to make the same mistakes, it is not at all logical to attribute those mistakes to the decision maker, who is different every time. Sure, incompetent decision makers might all make bad decisions. But wouldn’t we expect them to make different bad decisions? If we observe one thousand identical errors, this seems to call for one explanation, not one thousand different ones.

Third and most importantly, calling these CEOs incompetent or crazy is blatantly absurd. Those who become the CEOs of large, established corporations have put in decades of hard work, consistently demonstrating an exceptional range of skills and establishing an impressive track record of success. Short of invoking some mysterious psychological transformation associated with the deleterious effects of supreme power (“whom the Gods would destroy, they first make mad”), it simply makes no sense to assume that so many leaders of large enterprises are mediocre strategists and bad decision makers.

If we rule out the Bad Man Theory of Failure, we’re left with an intriguing problem. Bad decisions are not made by bad leaders. They are made by extremely successful, carefully selected, highly respected individuals. These leaders get advice from competent colleagues and advisors, have access to all the information they could hope for, and are generally incentivized in healthy and appropriate ways.

These aren’t bad leaders. These are good, even great, leaders who make predictable bad decisions.

BEHAVIORAL SCIENCE TO THE RESCUE

To this puzzle, behavioral science brings a much-needed solution. Because humans do not conform to the economists’ theoretical model of rational decision-making, they make mistakes. And not just any mistakes: systematic, non-random, predictable mistakes. These systematic deviations from economic rationality are the errors we have learned to call biases. No need to postulate mad decision makers: we should expect sane people, including CEOs, to make the same mistakes others have made before them!

This realization goes a long way toward explaining the popularity of behavioral science among leaders in business and government. But so far, the most visible manifestations of this popularity have not concerned the decisions of CEOs. Instead, they have taken two forms you have certainly heard about—unconscious-bias training and nudging.

The “unconscious biases” that training aims to eradicate are those we bring to bear in our interactions with people, especially those who belong to minority groups. A growing number of organizations are aware of the problems posed by sexism, racism, and other biases, and train their employees to recognize and fight them. Training makes participants aware that, despite their good intentions, they are susceptible to these biases, and it usually exposes them to different images or models in order to change their unconscious associations. (Whether or not such mandatory training interventions are effective is a hotly debated topic, and not the focus of this book.)

In contrast to these attempts at making biases disappear, the second approach aims to use them productively. This is what the “Nudge” movement, launched by Richard Thaler and Cass R. Sunstein in their book of the same title, does.

The starting point is a debate as old as political science: if the choices of citizens produce outcomes that, as judged by the citizens themselves, are not optimal, what should government do? Some argue government should intervene actively. If, for instance, people don’t save enough, they can be given tax incentives to do so; if they eat too much, taxes and bans can be put in place to deter them. Others, however, retort that adults should make their own choices, which may include making their own mistakes: so long as their choices do not harm others, it is not for government to tell them what to do and what not to do.

Thaler and Sunstein’s great insight is that between these two views, the paternalistic and the libertarian, there is a third way, which they dubbed “libertarian paternalism.” Choices can be presented in a way that gently “nudges” people toward the best behavior (again, as judged by themselves) without coercing them in any way. For instance, changing the order in which options are presented, and especially changing the option that will be selected by default if an individual does nothing, can make a large difference in many situations.

The UK government was the first to adopt nudging as a policy tool by creating the Behavioural Insights Team, more often referred to as the Nudge Unit. National, regional, and local government institutions (the Organisation for Economic Co-operation and Development counts more than two hundred) have created their own nudge units to assist policymakers in various areas, ranging from tax compliance to public health to waste disposal.

Businesses have adopted the “nudge” terminology as well, sometimes even setting up “corporate behavioral science units.” Some, particularly in finance, have managed to exploit systematic anomalies in trading behavior to their advantage. For the most part, however, the methods businesses “discover” by applying behavioral economics are not new. As Thaler has written elsewhere, “Nudges are merely tools, and these tools existed long before Cass and I gave them a name.” Indeed, exploiting other people’s biases is one of the oldest ways to do business, legitimately or otherwise. When experts in “behavioral marketing” claim to analyze consumers’ biases in order to influence them more effectively, this often leads them to rediscover well-known advertising techniques. And of course, Thaler notes wryly, “Swindlers did not need to read our book to know how to go about their business.”

BEHAVIORAL STRATEGY

There is a third way of using behavioral science. Decision makers who adopt it do not aim to correct the biases of their own employees, as in unconscious-bias training. Nor do they attempt to exploit the biases of others, as with nudges and their corporate equivalents. They want to tackle biases in their own strategic decisions.

Once you think about it, this makes a lot of sense. If you believe your strategic decisions make a difference, and if you accept that biases in decisions result in errors, then your own biases might produce strategic errors. Even if you are a competent, careful, and hardworking executive, you might end up making avoidable, predictable mistakes. This is precisely the mysterious problem of bad decisions by good leaders that we discussed above. Except it is not “them”—it’s you. And it is not mysterious—it is behavioral.

In academia, a new stream of strategy research, appropriately called behavioral strategy, focuses on this topic. In the words of some of its leaders, it aims “to bring realistic assumptions about human cognition, emotions, and social behavior to the strategic management of organizations.” Keywords like cognition, psychology, behavior, and emotion now appear frequently in scholarly strategy journals. (In 2016, they appeared in more than one-fifth of papers in Strategic Management Journal.) Practitioner-oriented publications also reflect the growing interest in this topic. And surveys of decision makers show that many of them feel the need to tackle the bias problem to improve the quality of their decisions: a McKinsey survey of some eight hundred corporate board directors found that “reducing decision biases” was the number one aspiration of “high-impact” boards.

In short, many business leaders now realize that they should do something about biases in their own strategic decisions. But do what, exactly? Answering that question is the focus of this book.

THREE CORE IDEAS

Here is a very short overview of the answer. It can be summarized in three core ideas, each developed in one of the three parts of this book.

First idea: our biases lead us astray, but not in random directions. There is method to our madness. We may be irrational, but we are predictably irrational, as Dan Ariely memorably put it. In the strategic decisions of organizations, combinations of biases result in recurring patterns of strategic error that we can learn to recognize. These patterns explain the frequency with which we observe bad outcomes of certain types of strategic decisions, those where failure is not the exception but the rule. The first part of this book presents nine such patterns, nine decision traps into which our biases drive us.

Second idea: the way to deal with our biases is not to try to overcome them. Contrary to much of the advice that you may have read on the topic, you will generally not be able to overcome your own biases. Moreover, you don’t need to. Consider a question that skeptics of behavioral science have often raised: how do humans achieve so much, despite their limitations? Or: “If we’re so stupid, how did we get to the moon?” The answer, of course, is that “we,” individual humans, did not land on the moon. A large and sophisticated organization, NASA, did. We have cognitive limitations that we may not be able to overcome, but organizations can make up for our shortcomings. They can produce choices that are less biased and more rational than our individual decisions would be. As I will show in part 2, this requires two key ingredients: collaboration and process. Collaboration is needed because many people are more likely to detect biases than a lonely decision maker is. Good process is required to act on their insights.

Third idea: while organizations can overcome individual biases, this does not just happen by chance. Left to their own devices, groups and organizations do little to curb individual biases. Often, they even exacerbate them. Fighting the effects of biases requires thinking critically about how decisions are made, or “deciding how to decide.” A wise leader, therefore, does not see herself as someone who simply makes sound decisions; because she realizes she can never, on her own, be an optimal decision maker, she views herself as a decision architect in charge of designing her organization’s decision-making processes.

In part 3, I will present three principles that decision architects use to design effective strategic decision processes. I will illustrate them with forty practical techniques implemented in organizations around the world, from start-ups to multinational corporations. These techniques are by no means “forty habits” that you should adopt by Monday morning. My hope in presenting this list is to prompt you to select the ones that may work for your organization or team, but also to encourage you to invent your own.

My essential aim in writing this book is to inspire you to view yourself as the architect of the decision processes on your team, in your department, or in your company. If, before your next important decision, you give some thought to deciding how you will decide, you will be on the right track. And you will, perhaps, avoid making a terrible mistake.

* Yes, gluttony. A Fortune cover story about J. C. Penney, which will be discussed in chapter 1, notes: “There were hints that the board was not as focused as it could be. Ackman had consistently complained about the chocolate-chip cookies served at Penney’s board meetings. . . . Other Penney directors also expressed concern about the caliber of cuisine served at their meetings.”

PART 1

THE NINE TRAPS

1

“TOO GOOD NOT TO BE TRUE”

The Storytelling Trap

This story is completely true, because I made up the whole thing.

—Boris Vian, Froth on the Daydream

In 1975, in the wake of the first oil shock, the French government launched an advertising campaign to encourage energy savings. Its tagline: “In France, we don’t have oil, but we do have ideas.” That same year, two men approached Elf Aquitaine, the French state-owned oil major. The two had no prior experience in the oil industry but claimed to be inventors of a revolutionary method for discovering oil underground without drilling. Their method, they explained, would allow a specially equipped airplane to “sniff” oil from a high altitude.

The so-called technology was, of course, a fraud—and not even a particularly sophisticated one. The con artists had fabricated, ahead of time, the images that the miraculous machine would produce during test runs. When the trials took place, they simply used a remote control to make images of oil reserves appear on the screen.

The story may seem preposterous, but the leaders of Elf Aquitaine—from the scientists in the R&D department to the CEO—bought it. When the time came to commit large sums of money to test the new process, they convinced the prime minister and the president of France to sign off. Remarkably, the scam went on for more than four years and cost the company roughly one billion francs. From 1977 to 1979, the amounts paid to the con men even surpassed the dividends that Elf Aquitaine paid the French state, its controlling shareholder.

This story is so incredible that when a younger audience hears it today, their reaction (especially if they’re not French) is, at best, condescending pity, and, at worst, sarcastic attacks on the intelligence (or integrity) of the French leaders. How could such an obvious scam fool the top management of one of the biggest French companies, not to mention the entire French government? How could anyone be so foolish as to believe in oil-sniffing airplanes? Serious businesspeople would never fall for such a ridiculous story!

Or would they? Fast-forward thirty years to 2004. The place: California. A start-up called Terralliance is raising money. Its founder, Erlend Olson, has no experience in the oil industry: he is a former NASA engineer. What is his pitch? You guessed it! He wants to perfect a technology for detecting oil from airplanes.

The same scam takes place again, only the set and the actors have changed. This time the investors are Goldman Sachs, the venture capital firm Kleiner Perkins, and other big-name investment firms. The “inventor” has the rugged charm of a Texas cowboy. The rustic Boeing 707 that Elf Aquitaine purchased has made way for Sukhoi jets, bought surplus from the Russian army. History repeats itself so neatly that approximately the same amount of money, adjusted for inflation, is invested: half a billion dollars. Needless to say, the results are just as disappointing as they were the first time around: “sniffing” oil from airplanes, apparently, is quite difficult.

When smart, experienced professionals, highly skilled in their field, make large, consequential decisions, they can still be strangely blind. This is not because they decide to throw caution to the wind and take wild risks—in both oil “sniffing” cases, the investors did a considerable amount of due diligence. But while they thought they were critically examining the facts, they had already reached a conclusion. They were under the spell of storytelling.

THE STORYTELLING TRAP

The storytelling trap can derail our thinking about all kinds of managerial decisions, including ordinary ones. Consider the following case, adapted from a real (and typical) story.

You are the head of sales in a company that operates in an intensely competitive market for business services. You’ve just had a troubling call from Wayne, one of your highest-performing sales representatives. He told you that, twice in a row, your most formidable competitor, Grizzly, won business against your company. In both cases, Grizzly quoted a price that was much lower than yours. Wayne has also heard that two of your best salespeople have just resigned: the word is they’re going to work for Grizzly. On top of that, he told you that there are rumors circulating that Grizzly is aggressively pitching some of your oldest, most loyal clients. Before hanging up, Wayne suggested that at the next management meeting you review your pricing levels, which, based on his day-to-day interactions with customers, seem increasingly unsustainable.

This call is cause for concern. But as an experienced professional, you do not lose your cool. You know, of course, that you must check the information that was just shared with you.

Right away, you call another sales rep, Schmidt, in whom you have complete confidence. Has he also noticed an atmosphere of unusually intense competition? As a matter of fact, Schmidt was planning on bringing this up with you! Without hesitating, he confirms that Grizzly has been especially aggressive recently. Schmidt just renewed a contract with one of his most loyal clients, despite a quote from Grizzly that was 15 percent lower than his. Schmidt only managed to keep the client thanks to his strong, longstanding personal relationship with the company’s president. However, he adds, another contract is up for renewal soon. That one will be harder to keep if the price differential between Grizzly’s offer and yours is this large.

You thank Schmidt for his time and hang up. Your next call is to the head of the human resources department: you want to check Wayne’s report of salespeople who joined the competition. HR does indeed confirm that both departing sales reps, in exit interviews, said that they were going to Grizzly, drawn by the promise of higher performance-based bonuses.

Taken together, this information is starting to worry you. The first warning could have been just an insignificant incident, but you took the time to verify it. Could Wayne be right? Do you need to consider price cuts? At the very least, you’ll put the question on the agenda of the next executive committee meeting. You have not decided to start a price war—yet. But the question is now on the table, with potentially devastating consequences.

To understand what brought you to this point, let’s retrace your reasoning to Wayne’s phone call. Whether purposely or not, what Wayne did is the essence of storytelling: he constructed a story by giving meaning to isolated facts. Yet the story he told is not at all self-evident.

Let’s consider the same facts critically. Two salespeople have quit? Given the historical attrition rate of your sales force, maybe there is nothing unusual about this. The fact that they’re leaving you for your largest competitor is not unusual, either: where would they be more likely to go? Then, both Wayne and Schmidt sound the alarm by complaining about the aggressiveness of the competition. When they manage to renew contracts and keep their clients, they take all the credit for it, attributing it to their strong relationships. Coming from sales reps, this is hardly surprising. Most importantly, how many deals are we really talking about? Wayne failed in his attempt to win two new clients, but did not lose any. Schmidt kept an existing client and is managing your expectations about an upcoming renegotiation. All in all, so far, you have not lost (or won) a single contract! If this information is considered without the distorting lens of the first story, it really doesn’t add up to much.

So how did you get to the point of seriously considering a price cut? The storytelling trap was laid. You believed that you were objectively checking the facts Wayne presented, but you were actually seeking to corroborate what he said. To really check Wayne’s story, for instance, you could have asked: How many new clients did all your other sales reps sign in recent weeks? Are you actually losing market share? Does the low price offered by Grizzly to one of your clients truly correspond to the same scope of work?

Asking these questions (and many others) would have helped you spot the only issue that might justify a price cut: a significant erosion of your company’s value proposition relative to your competitors. If such a problem existed, you might want to cut prices. But those are not the questions you asked. Your definition of the problem was shaped by Wayne’s initial story. Instead of searching for data that could disprove that story, you instinctively went looking for information that would confirm it.

It is easy to see how the same way of thinking can lead others astray—including the management of the French oil company and the American venture capital investors. When someone tells us a good story, our natural tendency is to search first and foremost for elements that corroborate it—and, of course, to find them. We think we’re doing rigorous fact-checking. Checking the facts is essential, of course: Wayne’s information, for instance, could have been factually incorrect. But one can draw a false conclusion from accurate facts. Fact-checking is not the same as story-checking.

The power of storytelling is based on our insatiable need for stories. As Nassim Taleb notes in The Black Swan, “Our minds are wonderful explanation machines, capable of making sense out of almost anything, capable of mounting explanations for all manner of phenomena.” Neither Wayne, faced with some isolated facts, nor you, once those clues were in your hands, could imagine that the pattern they produced could be a fluke; that, taken together, they could mean nothing at all. Our first impulse is to see them as elements of a coherent narrative. The idea that they could be a mere coincidence does not occur to us spontaneously.

CONFIRMATION BIAS

The mental mechanism that makes us fall into this trap has a familiar name: confirmation bias. It’s one of the more universal sources of reasoning errors.

Confirmation bias is especially powerful in politics. We have long known that people’s susceptibility to political arguments depends on their preexisting opinions: when they watch the same debate between candidates, supporters of each side think that their champion has “won.” Each side is more receptive to its own candidate’s arguments and less attentive to the points the opponent scores—a phenomenon also known as myside bias. The same phenomenon occurs when individuals on opposite sides of the political fence are presented with identical facts and arguments on topics about which they already have firm opinions. It is even stronger when the two sides can choose the information sources they expose themselves to: doing so makes it even easier for them to ignore the data that inconveniently contradicts their positions.

The impact of confirmation bias on political opinions has become exponentially larger with the rise of social media. By design, social media overexposes its members to their friends’ posts, which tend to match and therefore bolster each user’s existing opinions. This is the now-familiar “echo chamber” or “filter bubble” phenomenon. Furthermore, social media often spreads incorrect or misleading information, now famously known as “fake news.” There is little doubt that, under the influence of confirmation bias, many social media users take fake news at face value when it supports their preexisting beliefs. And confirmation bias does not just affect political opinions: even our reading of scientific facts is susceptible to it. Whether the subject is climate change, vaccines, or GMOs, we tend to uncritically accept accounts that confirm our opinions, while immediately searching for reasons to ignore those that challenge them.

You might think that this is a matter of education and intelligence, and that only obtuse, distracted, or blindly partisan readers fall into these traps. Surprisingly, this is not the case: myside bias has little to do with intelligence. For example, when Americans are presented with a study showing that a German car is dangerous, 78 percent of them think it should be banned on American roads. But if they’re given identical data suggesting that a Ford Explorer is deemed dangerous in Germany, only 51 percent think that the German government should act. This is a blatant example of myside bias: national preference colors the way respondents interpret the same facts. Troublingly, the outcome of this experiment doesn’t vary based on the intelligence of its subjects. The most intelligent subjects give the same response as those with a lower IQ. Intelligence does not guard against confirmation bias.

Obviously, not all people are equally naive or credulous. Some studies have reported a negative correlation between the inclination to believe the most ridiculous fake news stories and traits such as scientific curiosity or strong critical thinking skills. But whatever our critical thinking abilities may be, we all buy more easily into a good story that bolsters our opinions than one that troubles or challenges us.

Confirmation bias even slips into judgments that we think (and hope) are completely objective. For instance, a series of studies conducted by Itiel Dror, a cognitive neuroscience researcher at University College London, showed that forensic scientists—made famous by television shows like CSI—are also subject to confirmation bias.

In one of his most striking studies, Dror showed fingerprint examiners pairs of “latent” and “exemplar” prints (taken, respectively, from a crime scene and a fingerprint database) and asked them if the two were a match. In fact, the experts had seen these pairs of prints some months earlier in the course of their day-to-day work. But since they could not recognize these pairs among the hundreds that they examine every year, they believed they were dealing with new prints from new cases. The “evidence” was presented along with information that could bias the examiner—for example, “the suspect confessed” or, on the contrary, “the suspect has a solid alibi.” In a significant proportion of cases, the experts contradicted their own previous readings of the data in order to provide a conclusion that was compatible with the “biasing” information supplied. Even if we are very competent and well-intentioned, we can be the victims of our biases without realizing it.

CHAMPION BIAS AND EXPERIENCE BIAS

For confirmation bias to be activated, there must be a plausible hypothesis, such as the ones Dror provided in his fingerprint experiments. And in order for the hypothesis to be plausible, its author must be believable.

In the example where you stepped into the shoes of the head of sales who received Wayne’s phone call, one of the things that led you to believe Wayne’s story was your confidence in him. If you had received the same call from one of your weakest salespeople, you might have written it off as the whining of an underperformer. Of course, we trust some people more than others, and what we know about the bearer of a message affects its believability. But we often underestimate how easily a story with a credible source can win us over. When the reputation of the messenger outweighs the value of the information he bears, when the project champion is more important than the project, we fall for champion bias.

And who is the champion in whom we have the most confidence? Ourselves! Faced with a situation we need to make sense of, the story that is immediately available to our mind, the one that we will then try to confirm, comes from our memory, our experience of apparently analogous situations. This is experience bias.

Champion bias and experience bias were both at work in the story of J. C. Penney. In 2011, this middle-market retailer, which operated some 1,100 department stores, was searching for a new CEO to breathe life into the aging company. Its board of directors found itself a “champion,” a savior with the perfect résumé: Ron Johnson. A true retailer, Johnson had successfully transformed merchandising at Target. But most of all, he was credited (along with Steve Jobs, of course) with creating and developing the Apple Stores, which revolutionized electronics retailing and became one of the most stunning successes in the history of retail. What better leader could J. C. Penney find to spearhead its reinvention? No one doubted that Johnson would produce results just as spectacular as those he had achieved at Apple.

Johnson suggested a strategy that was a radical break from tradition, and he implemented it with rare vigor. In essence, he took inspiration from the strategy that had made the Apple Stores successful: an innovative store design, offering a new in-store experience in order to attract a new consumer target. But he applied it even more energetically to J. C. Penney, because he was now transforming an existing company instead of creating one from scratch.

Johnson’s zeal for change knew no bounds, and his Apple inspiration was evident. Conscious that brand power played a key role in the Apple Stores’ success, Johnson struck costly exclusive agreements with major brands and began reorganizing stores around brands, not departments. Remembering that Apple had spent extravagantly to create a luxurious setting for its products, Johnson invested large sums in redesigning J. C. Penney stores and rebranding them “jcp.” Mirroring Apple’s inflexible policy of fixed prices, with no sales or discounts, Johnson broke with Penney’s practice of nonstop promotions and ubiquitous rebate coupons, replacing them with everyday low prices and modest monthly sales. Fearing that J. C. Penney’s staff would not implement these changes energetically enough, Johnson replaced a large portion of its management team, often with former Apple executives.

Surprisingly, none of these changes were tested on a small scale or with focus groups before they were implemented across the company. Why? Because, as Johnson explained, Apple disdained tests, and that never stopped it from being wildly successful. Did anyone harbor doubts about this radical break in strategy? “I don’t like negativity,” Johnson would reply. “Skepticism takes the oxygen out of innovation.”

To say that the results of this strategy were disastrous would be an understatement. Regular J. C. Penney customers no longer recognized the store or found coupons to draw them there. Other customers, whom Johnson wanted to wow with the new “jcp,” were not impressed. By the end of 2012, sales were down 25 percent, and Penney’s annual losses were approaching $1 billion, despite 20,000 layoffs to reduce costs. The stock price was down 55 percent.

Johnson’s first full year at the helm would also be his last. Seventeen months after his arrival, the board of directors finally ended the experiment. It rehired Johnson’s predecessor, who tried as best he could to undo everything Johnson had done.

The board of directors had believed in its champion, and the champion trusted his experience. Both believed in a great story. What business story is more irresistible than the promise of a savior who can repeat his amazing success by once again breaking all the rules? Once sold on that story, the board (and the CEO himself) ignored all the signs that the strategy was failing. On the contrary, everywhere they looked, they found reasons to confirm their initial beliefs. Confirmation bias and the power of storytelling were at work.

ALL BIASED

Of course, we all believe that, had we been J. C. Penney board members, we wouldn’t have bought Johnson’s story. His mistakes—like those of the leaders of Elf Aquitaine in the oil-sniffing plane scandal—seem ridiculous. How incompetent, how arrogant these people must have been!

No wonder we react this way: after a shipwreck, we blame the captain. The financial press consistently attributes the failures of large corporations to their leaders’ mistakes. Business books are full of these kinds of stories, usually centered on the leader’s character flaws: pride, personal ambition, delusions of grandeur, bullheadedness, inability to listen to others, and, of course, greed.

How reassuring it is to blame every disaster on an individual’s faults! This way, we can keep on thinking that we would not have made the same mistakes in their shoes. It also lets us conclude that such errors must be highly unusual. Unfortunately, both conclusions are false.

First of all, let’s state the obvious: the leaders we discuss here are not stupid. Far from it! Before these failures, and sometimes still afterward, they were regarded as highly skilled executives, and much more than that: business wizards, visionary strategists, role models for their peers. The bosses of Elf Aquitaine, pure products of the French meritocracy, were certainly not considered naive, and neither were the Goldman Sachs or Kleiner Perkins investors. As for Ron Johnson, an article on his departure from Apple described him as “humble and imaginative,” “a mastermind,” and “an industry icon.” As evidence of his reputation, it’s worth noting that J. C. Penney’s stock price shot up by 17 percent when his arrival was announced.

More importantly, while these stories are spectacular, the mistakes they illustrate are far from exceptional. As we shall see in the following chapters, there are many types of decisions for which error and irrationality are not the exception but the rule. In other words, these examples—like those that follow—are not chosen because they are out of the ordinary but, on the contrary, because they are all too ordinary. They represent archetypes of recurring errors that push leaders in predictable but wrong directions.

Instead of dismissing these examples as exceptions, we should ask ourselves a simple question: how could widely admired decision makers, surrounded by carefully selected teams, heading time-tested organizations, have fallen into traps that seem very crude to us? The simple answer is that when we are in the grip of a great story, confirmation bias can become irresistible. As we will see, the same reasoning applies to the biases we will discover in the coming chapters.

“JUST GIVE ME THE FACTS”

Many executives believe themselves immune to the dangers of storytelling. The antidote, they say, is simple: put your trust in facts, not stories. “Facts and figures!” What trap could they possibly fall into?

The very same one, it turns out. Even when we believe that we’re making a decision on the basis of facts alone, we are already telling ourselves a story. We cannot consider objective facts without finding, consciously or not, a story that makes sense of them. One illustration of this danger comes from those who should be, both by method and by temperament, obsessed with the facts and immunized against confirmation bias: scientists.

In the past couple of decades, a growing number of published scientific results have turned out to be impossible to replicate. Particularly in medicine and experimental psychology, a “replication crisis” is raging. One of the most cited articles on the issue is simply titled “Why Most Published Research Findings Are False.” Of course, explanations for the phenomenon are many, but confirmation bias plays an essential role.

In theory, the scientific method should guard against the risk of confirmation bias. If, for example, we are testing a new drug, our experiment should not aim to confirm the hypothesis that the treatment works. Instead, we should test the “null hypothesis” that the drug has no effect. When the results allow for rejecting this null hypothesis with sufficient probability, the alternative hypothesis—that the drug has an effect—is plausible, and the conclusion of the study is positive. On paper, the process of scientific discovery goes against our natural instincts: it seeks to disprove an initial hypothesis.

In practice, however, things are more complicated. A research project is a long effort, during which researchers make many decisions. As they define their research questions, conduct their experiments, decide which “outlier” data points to exclude, choose statistical analysis techniques, and select which results to submit for publication, researchers face many methodological questions and may have a choice among several acceptable answers. Leaving aside cases of scientific fraud (which are rare), these choices are the holes through which confirmation bias slips in. With the best intentions, and in complete good faith, a researcher can influence her results in the direction of her desired hypothesis. If these influences are subtle enough, they may remain undetected in the peer-review process. This is one of the reasons why scientific journals can end up publishing “false positives,” studies that are technically solid and pass all the required tests of statistical significance but turn out to be impossible for other researchers to replicate.
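A minimal simulation sketch makes this mechanism concrete. In the Python sketch below, the data are generated so that the null hypothesis is true by construction (the “treatment” has no effect), the statistics are deliberately simplified (a normal approximation rather than a full t-test), and the three “defensible” analysis choices are invented for illustration: the full sample, the sample with apparent outliers removed, and an arbitrary subgroup. Letting the analyst report whichever looks best pushes the realized false-positive rate well above the nominal 5 percent, with no fraud anywhere.

import random
import statistics
from statistics import NormalDist

random.seed(0)

def p_value(control, treated):
    """Two-sided p-value for a difference in means (normal approximation, large samples)."""
    n1, n2 = len(control), len(treated)
    se = (statistics.variance(control) / n1 + statistics.variance(treated) / n2) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def drop_outliers(xs):
    """One 'acceptable' choice: discard points more than 2 standard deviations from the mean."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= 2 * s]

def experiment(n=50):
    # The null hypothesis is true by construction: both groups come from the same distribution.
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    return control, treated

honest_hits = flexible_hits = 0
trials = 5000
for _ in range(trials):
    control, treated = experiment()
    # Honest analyst: one test, specified in advance.
    honest_hits += p_value(control, treated) < 0.05
    # Flexible analyst: reports whichever of three "defensible" analyses looks best.
    candidates = [
        p_value(control, treated),                                  # full sample
        p_value(drop_outliers(control), drop_outliers(treated)),    # outliers removed
        p_value(control[: len(control) // 2], treated[: len(treated) // 2]),  # a subgroup
    ]
    flexible_hits += min(candidates) < 0.05

print(f"False-positive rate, pre-specified analysis: {honest_hits / trials:.3f}")   # close to 0.05
print(f"False-positive rate, pick-the-best analysis: {flexible_hits / trials:.3f}")  # noticeably higher

The honest analyst's rate stays near 5 percent because the test was fixed in advance; the flexible analyst's rate drifts upward because, across many small and individually reasonable choices, chance gets several attempts to look like a finding.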

The authors of a 2014 piece in Psychology, Public Policy, and Law, for instance, had to add an erratum to a published article: a mistake in statistical analysis had led them to overestimate their results. And what was the subject of their article? The effect of cognitive biases, especially confirmation bias, on the court testimony of mental health experts! As the authors noted in their correction, their mistake “ironically demonstrates the very point of the article: that cognitive biases can easily lead to error—even by people who are highly attuned to and motivated to avoid bias.”

Ironic indeed . . . but telling: however hard we try to be “objective,” our interpretations of facts and figures are always subject to our biases. We can only view them through the prism of a story we are unconsciously trying to confirm.

THE ILLUSION MACHINE

Let’s return to the two stories of the oil-sniffing airplanes. Confirmation bias and the power of storytelling help explain how so many smart, experienced people managed to get things so totally wrong. While the details of the 1975 scam and the 2004 pipe dream differ, both featured skillful “inventors” who targeted their victims with a tailor-made story.