The Black Swan Problem

Håkan Jankensgård

Description

An incisive framework for companies seeking to increase their resilience. In The Black Swan Problem: Risk Management Strategies for a World of Wild Uncertainty, renowned risk and finance expert Håkan Jankensgård delivers a startling discussion of how firms should navigate a world of uncertainty and unexpected events. The book examines three fundamental, high-level strategies for creating resilience in the face of "Black Swan" risks (highly unlikely but devastating events): insurance, buffering, and flexibility. The author also presents:

* Detailed case studies, stories, and examples of major firms that failed to anticipate Black Swans and, as a result, were either wiped out or suffered a major strategy disruption
* An organizational perspective on Black Swans that extends the usual academic focus on individual biases and primes organizations for proactive rather than reactive action
* Practical applications and tactics to mitigate Black Swan risks and protect corporate strategies against catastrophic losses and the collateral damage they cause
* Strategies and tools for turning Black Swan events into opportunities, reflecting the fact that resilience can be used for strategic advantage

An expert blueprint for companies seeking to anticipate, mitigate, and process tail risks, The Black Swan Problem is a must-read for students and practitioners of risk management, executives, founders, managers, and other business leaders.

Page count: 512

Year of publication: 2022




Table of Contents

Cover

Title Page

Copyright

Dedication

Prologue

Note

Acknowledgements

CHAPTER ONE: The Swans Revisited

THE NATURE OF RANDOMNESS

THE MOVING TAIL

THE ROLE OF EXPECTATIONS

WHAT MAKES US SUCKERS?

THE RELATIVITY OF BLACK SWANS

MEET THE PREPPERS

NOTES

CHAPTER TWO: Corporate Swans

THE BOARD'S PERSPECTIVE

SWANS ATTACK

STRATEGY SWANS

THE SWAN WITHIN

THE GROWTH FETISH

THE FEAR FACTOR

THE CHIEF EXECUTIVE SWAN

SWANS ON THE RISE

NOTES

CHAPTER THREE: The Black Swan Problem

TAIL RISK AND FIRM VALUE

UNDERSTANDING WIPEOUTS

STRATEGY DISRUPTION

ALL YOU ZOMBIES

THE AFFORDABILITY ISSUE

THE CONUNDRUM

NOTES

CHAPTER FOUR: Greeting the Swan

RANDOMNESS REDUX

THE ROADS NOT TAKEN

FUNCTIONAL STUPIDITY

THE SWANMAKERS

ON TOOLS AND MODELS

A SWAN RADAR FOR THE BOARD

NOTES

CHAPTER FIVE: Taming the Swan

DRAWING THE LINE

DISTANCE TO WIPEOUT

RISK CAPITAL

STRESS TESTING

RESILIENCE VS ENDURANCE

QUANTITATIVE MODELS

LIQUIDITY IS KING

NOTES

CHAPTER SIX: Catching the Swan

ANTIFRAGILITY

RESTORING THE TRUE PATH

BUYING ON THE CHEAP

OPPORTUNITY CAPITAL

FLIGHT TO SAFETY

RISK AS STRATEGY

NOTES

CHAPTER SEVEN: Riding the Swan

RISK SHIFTING

A BEAUTIFUL STRATEGY

FUEL FOR GROWTH

NARCISSISM REDEEMED

A TAIL OF TWO COMPANIES

END OF THE RIDE

SWANS TO THE RESCUE?

NOTES

Epilogue

Note

Index

End User License Agreement

List of Tables

Chapter 1

TABLE 1.1 The prepper's list

Chapter 4

TABLE 4.1 Value chain vulnerability

Chapter 7

TABLE 7.1 Leverage and growth

List of Illustrations

Chapter 2

FIGURE 2.1 Corporate Swan rates 1955–2020

Chapter 3

FIGURE 3.1 Collateral damage

FIGURE 3.2 The risk management paradox

Chapter 4

FIGURE 4.1 The Swan radar

Chapter 5

FIGURE 5.1 Risk Capacity

FIGURE 5.2 Resilience‐endurance matrix

FIGURE 5.3 Shift in risk profile (risk measure over time)

Chapter 6

FIGURE 6.1 Strategic cash

Chapter 7

FIGURE 7.1 Riding the Swans

Guide

Cover

Table of Contents

Title Page

Copyright

Dedication

Prologue

Acknowledgements

Begin Reading

Epilogue

Index

End User License Agreement

FOUNDED IN 1807, JOHN Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Asia, and Australia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding.

The Wiley Corporate F&A series provides information, tools, and insights to corporate professionals responsible for issues affecting the profitability of their company, from accounting and finance to internal controls and performance management.

The Black Swan Problem

Risk Management Strategies for a World of Wild Uncertainty

 

 

HÅKAN JANKENSGÅRD

 

 

 

 

 

 

 

 

 

This edition first published 2022

Copyright © 2022 by John Wiley & Sons, Ltd.

Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley publishes in a variety of print and electronic formats and by print‐on‐demand. Some material included with standard print versions of this book may not be included in e‐books or in print‐on‐demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising here from. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging‐in‐Publication Data:

Names: Jankensgård, Håkan, author.

Title: The black swan problem : risk management strategies for a world of wild uncertainty / Håkan Jankensgård.

Description: First edition. | Chichester, United Kingdom : Wiley, 2022. | Includes index.

Identifiers: LCCN 2021062804 (print) | LCCN 2021062805 (ebook) | ISBN 9781119868149 (cloth) | ISBN 9781119868156 (adobe pdf) | ISBN 9781119868163 (epub)

Subjects: LCSH: Risk management. | Organizational effectiveness.

Classification: LCC HD61 .J355 2022 (print) | LCC HD61 (ebook) | DDC 658.15/5—dc23/eng/20220112

LC record available at https://lccn.loc.gov/2021062804

LC ebook record available at https://lccn.loc.gov/2021062805

Cover Design and Image: Wiley
Illustration inspired by: © rainman_in_sun/Getty Images

My dear children, Wilma and August, I wrote this book for you.

‘Our world is dominated by the extreme, the unknown, and the very improbable … and all the while we spend our time engaged in small talk, focusing on the known and the repeated.’

Nassim Nicholas Taleb

Prologue

MY FIRST INTRODUCTION TO Black Swan thinking did not come from the famous book by Nassim Nicholas Taleb published in 2007.1 It came well before that, courtesy of my instructor as I was training to get a driver's licence for motorcycles. We had stopped at an intersection; when the arrow turned green, I just rode off onto the highway in a nice left turn. The instructor immediately told me to pull over. Once we did, he started lambasting me for having made the turn without looking even once to my right. He was visibly upset, or at least played his part well. ‘But the arrow was green,’ I meekly responded, ‘so the others must stop.’ In my worldview, that was how it worked. My experience, while not exactly extensive, supported that notion. In fact, nothing had ever suggested otherwise. ‘Do you think,’ he yelled, ‘that you can trust others to do what they are supposed to do? Never assume that!’ The penny dropped. By operating on a very naïve assumption about how the world worked, based on a handful of observations, I had managed to make myself a sucker. I was setting myself up, unnecessarily, for a highly improbable major calamity.

The deeper meaning of the Black Swan idea is not to be paranoid and develop trust issues. Rather it is to resist the temptation to base our course of action on pristine ideas and models of how the world ought to be. Especially dangerous is our inclination to form expectations of a benign world based on recent observations that all seem to indicate stability. Taleb's choice of metaphor – the Black Swan – was meant to convey precisely this problem of induction. It had always been taken for granted that all swans are white, an assumption supported by millions of observations across centuries. Imagine the surprise when the black variety was sighted in Australia subsequent to the ‘discovery’ of that continent. All it took was one single observation to turn completely on its head what people had internalized as self‐evident and true for hundreds of years. Likewise, a single rogue or inattentive driver can invalidate, in an instant, hundreds of observations of other drivers yielding when the arrow is green. A recurring theme in Taleb's Black Swan framework is that we fail to make provisions for such outliers because they are outside our mental models and decision‐support tools.

So what is this famed and metaphorical creature, more precisely? A Black Swan, according to Taleb, has three attributes. Before it occurs, it is considered extremely unlikely, to the extent we consider it possible at all. When it occurs, its consequences are massive. After it has occurred, it makes perfect sense to us that something like that could happen. The last attribute is due to our brains being awesome ‘explanation machines’ that crave coherence. Dots will be ambitiously connected until coherence is established. The world inhabited by these Black Swans is characterized by uncertainty that is ‘wild’ rather than benign, meaning that there is always a potential for great dislocations and non‐linearities. Even when we manage to bring our focus to the tail end of the distribution, we frequently find that it is in flux, with events greatly surpassing anything we have seen or heard before.

The impact of the Black Swan idea has been substantial. While it may not have trickled down into popular culture just yet, the educated public seems largely aware of it, whether or not they have actually read the book. Admittedly, Taleb has drawn a fair amount of criticism for his work. Often, the target has been his abrasive and self‐aggrandizing style of writing and the use of fictional characters (the Tonys and the Valerias). Critics also claim to have found more than a few inconsistencies in his web of ideas. I do not wish to dwell on such trivial matters here. Instead, I would like to point to a habit among adopters of the Black Swan concept of rendering it synonymous with low‐probability high‐impact events (usually disastrous ones). We certainly did not need a risk philosopher, somebody might go on to say, to point that out. In this view, Taleb just popularized what we already knew.

Reducing the Black Swan to a mere shorthand for low‐probability high‐impact events does the construct significant injustice. It trivializes it and detracts from its core ideas. In this book, I will re‐emphasize the importance of expectations in the formation of Black Swans. This is a crucial point because it means that we can turn at least some Black Swans into ‘mere’ tail risk of a more calculated sort by controlling and improving our expectations in various ways. I will also stress how differences in expectations have strategic implications. No doubt, a more realistic appreciation of the nature of randomness is valuable for its own sake, but it gets even more interesting when we realize that others may systematically get themselves into perilous positions because they operate on unrealistic assumptions about randomness. What is a Black Swan to others need not be to us, and we may stand to benefit from this in various ways.

The main ambition of this book, however, is to place the Black Swan framework in a corporate context and explore its ramifications for business strategy. Taleb, motivated as he is by the grand sweep of history and civilization, and the philosophical underpinnings of randomness, does not take more than a fleeting interest in the firm as an institution. Yet, if we accept the premises of the Black Swan framework, we should ask what the implications are for how businesses are to be run. Businesses, after all, are stewards of a large part of humanity's resources, and innumerable people depend on them for their livelihood as well as the hope of making it big. Extreme uncertainty may deliver great risk but also holds out the promise of great opportunity.

As it turns out, firms are peculiar entities in several ways. To begin with, they operate under a mandate to maximize value, which introduces a focus on cost minimization that hugely shapes the corporate attention span for extreme events. There are also interesting features like separation between ownership and control, limited liability and multiple layers of stakeholders with different viewpoints. All these features profoundly influence how we should process the fact that uncertainty is wild rather than benign, and this book is about exploring them. How are massive consequences to be understood when we take the vantage point of a firm? When is it justifiable to spend corporate resources in preparation for highly improbable catastrophes? When should we have a strategy for Black Swan events? When and how can we make wild uncertainty work for us rather than against?

We start out, however, by recapping some of the core themes in the Black Swan framework (Chapter 1, ‘The Swans Revisited’). In this chapter, we take a fresh look at the question of what constitutes a Black Swan. We also revisit the issue of what kind of randomness we face as decision‐makers in the real world, as well as the crucial role of expectations. We establish the relative nature of Swans: players in the same industry can have very diverging expectations and are therefore likely to display different levels of preparation and vulnerability.

In Chapter 2 (‘Corporate Swans’), we begin our exploration of what the Black Swan framework means for corporate management. I argue that the expectations of the firm's Board of Directors provide the correct perspective for determining what counts as a Black Swan. This has the interesting effect of increasing the number of Swans because of the structural information disadvantage the directors find themselves in as compared to the executive team. Even more troubling, the rate also goes up because the executives are themselves a potential source of Black Swans. We go on to explore several other Swans that are specific to the corporate domain, over and above the ones generated by the natural world and complex systems at the level of society.

In Chapter 3 (‘The Black Swan Problem’), we dive deeper into the question of how risk, and tail risk more specifically, relates to firm value. The risk of wipeout and strategy disruption provides economic arguments in favour of managing tail risk. Ultimately, though, we want to be able to demonstrate that managing tail risk produces benefits that are larger than the costs. This proves harder than expected, because when the probability of something goes towards zero – as it does with Black Swans – the cost of risk also tends towards zero. The conundrum is that when the cost of risk is near‐zero, there is little apparent justification for spending corporate resources on tail risk management. The second part of the Black Swan problem is the difficulty of mustering a proactive response to extreme events. A whole host of biases, on both the individual and organizational level, work against it.

In Chapter 4 (‘Greeting the Swan’), we discuss the need to alter our attitude to randomness as a first basic step in developing a strategy for dealing with the Black Swan phenomenon. We also need to recognize that the resources and patience available for managing the risk associated with extreme events are very small. Running through this book is a persistent tension between tail risk management and economic efficiency. Another part of adapting our mindset to a world of wild uncertainty is to own up to the delicate matter that Black Swans are sometimes made closer to home by people we want to trust. The Board of Directors need to extend their Swan map to cover known company‐wreckers like acquisitions, derivative portfolios and, yes, the executive team.

In Chapter 5 (‘Taming the Swan’), we arrive at the issue of how to make ourselves robust to wild uncertainty, continuing to survive and thrive even in much‐worse‐than‐expected scenarios. We dig deeply into the role of buffers and flexibility in achieving resilience, with particular emphasis on risk capital, a set of financial resources that absorbs shocks and ensures survival and strategy execution. Stress testing, I argue, is a particularly important, but currently under‐used, tool in the corporate battle against Black Swans. In stress tests, we push resilience to its limits to learn about breaking points that can inform risk management strategies. The chapter contains extensive discussions on the role of models in the process of developing resilience. On the one hand, they help us see by reducing complexity and clarifying important mechanisms. On the other hand, they can make us blind to whatever is outside the model, creating a potential source of vulnerability.

In Chapter 6 (‘Catching the Swan’), we learn about the concept of antifragility, which goes beyond resilience to identify things that actually benefit from disorder. This is where we fully explore the strategic implications of differences in expectations and levels of preparedness. Black Swans have the potential to decimate corporate strategies, which may open up opportunities for stronger competitors. We move past the idea of designing our risk management strategy exclusively by looking at our own vulnerabilities – we now incorporate the vulnerabilities of our closest competitors.

In Chapter 7 (‘Riding the Swan’), we switch into a risk‐loving mode, searching for positive Black Swans. When our tinkering has yielded what is beginning to look like a beautiful, potentially world‐conquering strategy, the time has come to press the gas pedal to the floor. We go for maximum growth by pulling all the levers at our disposal, all the while tolerating huge amounts of risk. We reach the disconcerting conclusion that all the things that were undesirable when building resilience are now in high demand.

Note

1 Taleb, N. N., 2007. The Black Swan: The Impact of the Highly Improbable. Random House: New York.

Acknowledgements

THE AUTHOR WISHES TO express a heartfelt thanks to the following people for their feedback throughout the writing process:

Petter Kapstad

Stanley Myint

John Fraser

Vesa Hakanen

Ulrich Adamheit

Martin Stevens

Aswath Damodaran

Tomas Sörensson

Seth Bernström

Tom Aabo

Niclas Andrén

Jacob Pedersen

Carl Montalvo

CHAPTER ONE: The Swans Revisited

THE BLACK SWAN OF the popular imagination is one that swoops down from a clear blue sky, creating massive disorder in a very short amount of time. We expect it to be sudden and dramatic. The archetypical Black Swan is perhaps the 9/11 attack on the Twin Towers in New York in 2001. Virtually nobody had been able to imagine such a thing. It was simply not on the mental map that something like that could even exist. Yet it happened, and in a single stroke, the world was a different place. The path we were on changed. The attack led to a whole new security apparatus, the war on terror, and the war in Iraq, to mention but a few of its consequences.

Actually, the ‘out of the blue’ aspect is not part of the original framework. Some Swans cited by Taleb take years if not centuries to play out. According to Taleb, Black Swans have just three attributes, none of which refers to suddenness. First, they are highly improbable. Second, they are highly consequential. Third, they make perfect sense after the fact.1 When people talk about Black Swans, it is usually the first two aspects they focus on, as if the term were essentially shorthand for low‐probability high‐impact risks. Simplifying in this way is wholly consistent with the reason the Black Swan problem exists in the first place, reflecting as it does our tendency to reduce the dimensions of the phenomenon before us to something more tractable and convenient.

Equating Black Swans with ‘mere’ low probability high impact risk, however, is to do the concept significant injustice. In reality, the Black Swan framework is valuable because it represents an altogether different way of approaching the world. Taleb asks us to reconsider some of our core assumptions about the very nature of the randomness we face as decision‐makers and the inferences we make based on what we can observe. Furthermore, he brings our attention to the crucial role of expectations and attitudes in dealing with uncertainty. The problem, Taleb explains, is one of not being humble enough with respect to the limitations of our knowledge. If we believe the world consists of a certain kind of randomness and that we can have mastery over it, we may be in for some pretty bad surprises if those beliefs do not conform with reality. We can try to impose crisp and stylized ideas that appeal to our aesthetic sensibilities as much as we want, but the chaotic world we live in refuses to bend. This insistence on abstract beauty is what Taleb has in mind when he labels something as ‘Platonic’, after the famed Greek philosopher who saw loveliness in order and maintained that it could be superimposed on the messy reality we can observe with our senses (Taleb, 2007, p.19).

THE NATURE OF RANDOMNESS

Randomness refers to unpredictability. It applies whenever the outcome for some variable, such as the number of visitors to the Louvre on a given weekday, cannot be known with certainty beforehand. It is a function of our inability to know and predict the future. Try as we might, we never seem able to build those perfect forecasting algorithms that get it right all the time. In fact, as Taleb is at pains to point out, our overall track record in forecasting is awful (more on this later).

Why is there a general failure to predict what the future will bring? To answer this question, first consider that one very basic source of randomness is the physical world itself, which is constantly changing through processes that we do not fully comprehend. Science marches on, chipping away at the ignorance that produces apparent randomness. But despite the many laws of nature that have been uncovered, we never know where the next lightning will strike or how ocean currents will respond to changes in melting ice sheets. In the end, there are too many variables and too many complicated feedback loops in these highly dynamic systems. On top of that, there is human civilization itself. While once rudimentary and mostly local, over time society has become complex beyond imagination. Technical innovations have made possible advanced systems that increasingly connect people across different parts of the globe. It is fundamentally unknowable what outcomes these vast and interconnected systems of interacting people and technologies will produce. Human agency by itself ensures that the future keeps bringing surprises, as the 9/11 attack illustrates. It should be clear that we are up against a complexity that is beyond our ability to predict successfully.

The difficulties we face in predicting the future are related to the problem of induction, a classic problem in philosophy. While data can certainly teach us a great deal about the workings of the world, the philosopher and sceptic David Hume made us realize that we cannot arrive at secure knowledge on the basis of empirical observations. The problem of induction says that no matter how many observations you obtain, you cannot know for sure that the observed pattern is going to hold in the future. This inherent limitation is at the heart of the Black Swan concept. Any knowledge obtained through observation, Taleb says, is fragile. It is what the Black Swan metaphor itself is meant to convey. Recall that millions of observations on white swans had seemingly verified the notion that all swans are white, and it only took one observation of a black one to falsify it. Along the same lines, Peter Bernstein (1996) observed in his epic story about risk that ‘… history repeats itself, but only for the most part’2 (emphasis added). This sentence really sums it all up and explains why induction is treacherous ground for making assumptions about the future.

Once we capitulate to the fact that we cannot predict the future, the next best thing would be to be able to characterize randomness itself, i.e. describe it. In that way, we would have some idea about the scope for deviations from what we expect. A description of randomness would involve some degree of quantification of things like the range within which the values of a variable can be assumed to fall and how the outcomes are distributed within that range (frequencies). We might occasionally find such descriptions of random processes to be practically relevant insofar as they help us make informed decisions and our future wellbeing depends on the outcome of the variable in question. They are potentially helpful, for example, in coming up with a reasonable analysis of the trade‐off between risk and return in different kinds of investment situations.

When characterizing randomness, a useful first distinction is between uncertainty and known odds.3 Uncertainty simply means that the odds are not known, indeed cannot be known. When randomness is of this sort, there is no way of knowing with certainty the range of outcomes and their respective probabilities. Known odds, in contrast, means that we have fixed the range of outcomes and the associated probabilities. The go‐to example is the roll of a die, in which the six possible outcomes have equal probabilities. Drawing balls of different colours out of an urn is another favourite textbook example of controlled randomness.

Uncertainty, it turns out, is what the world has to offer. In fact, known odds hardly exist outside man‐made games. This is the case for exactly the same reasons that forecasting is generally unsuccessful: there are some hard limits to our theoretical knowledge of the world.4 There is ample data, for sure, which partly makes up for it. But the world generates only one observable outcome at a time, out of an infinite number of possibilities, through mechanisms and interactions that are beyond our grasp. There is nothing to say that we should be able to objectively pinpoint the odds of real‐world phenomena. Whenever a bookie, for example, offers you odds on the outcome of the next presidential election, it is a highly subjective estimate (tweaked in favour of the bookie).

Whenever data exists, it is of course possible to try to use it to come up with descriptions of the randomness in a stochastic process. Chances are that we can ‘fit’ the data to one of the many options available in our library of theoretical probability distributions. Once we have, we have seemingly succeeded in our quest to describe randomness, or to turn it into something resembling known odds. This is the frequentist approach to statistical inference, in which observed frequencies in the data provide the basis for probability approximations. Failure rates for a certain kind of manufacturing process, for example, can serve as a reasonably reliable indication of the probability of failure in the future.

It is important to see, however, that even when we are able to work with large quantities of data, we are still in the realm of uncertainty. The data frequencies typically only approximate one of the theoretical distributions. What is more, the way we collect, structure, and analyse these data points determines how we end up characterizing the random process and therefore the probabilities we assign to different outcomes. To the untrained eye, they might seem like objective and neutral probabilities because they are data‐driven and obtained by ‘scientists’. However, there is always some degree of subjectivity involved in the parameterization. The model used to describe the process could end up looking different depending on who designs it. Hand a large dataset over to ten scientists and ask them what the probability of a certain outcome is, and you may well get ten different answers. Because of the problem of induction, as discussed, there is always the possibility that the dataset, i.e. history, is a completely misleading guide to the future. Whenever we approximate probabilities using data, we assume that the data points we use are representative of the future.
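The point that fitted models can mislead, and badly understate the tail in particular, can be illustrated with a small simulation. This is only a sketch under assumed conditions: the heavy‐tailed ‘true’ process (Student's t with 3 degrees of freedom), the sample size, and the threshold are all illustrative choices, not figures from this book.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Suppose the "true" process is heavy-tailed: Student's t with 3 degrees
# of freedom. An analyst who only sees the data might fit a normal
# distribution to it using the sample mean and standard deviation.
data = rng.standard_t(df=3, size=100_000)
mu, sigma = data.mean(), data.std()

threshold = 8.0  # a far-tail outcome, several "fitted" standard deviations out

# Tail probability according to the fitted normal model: P(X > threshold)
z = (threshold - mu) / sigma
normal_tail = 0.5 * math.erfc(z / math.sqrt(2))

# Tail frequency actually present in the data
empirical_tail = (data > threshold).mean()

print(f"fitted-normal estimate of P(X > {threshold}): {normal_tail:.2e}")
print(f"empirical frequency of X > {threshold}:       {empirical_tail:.2e}")
# The fitted normal understates the observed tail by orders of magnitude,
# even though it was estimated from the very same data.
```

Both numbers are ‘data‐driven’, yet they disagree wildly: the choice of distribution family, made before any parameter is estimated, already decides how much weight the tail can carry.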

THE MOVING TAIL

At this point, we are ready to conclude that the basic nature of randomness is uncertainty. Known odds, probabilities in the purest sense of the word, are an interesting man‐made exception to that rule. If we accept that uncertainty is what we are dealing with, a natural follow‐up question is: What is uncertainty like? A distinction we will make in this regard is between ‘benign’ and ‘wild’ uncertainty.5 Benign uncertainty means that we do not have perfect knowledge of the underlying process that generates the outcomes we observe, but the observations nonetheless behave as if they conform to some statistical process that we are able to recognize. Classic examples of this are the distribution of things like height and IQ in a population, which the normal distribution seems to approximate quite well.

While the normal distribution is often highlighted in discussions about ‘well‐behaved’ stochastic processes, many other theoretical distributions appear to describe real‐world phenomena with some accuracy. There is nothing, therefore, in the concept of benign uncertainty that rules out deviations from the normal distribution, such as fat tails or skews. It merely means that the data largely fits the assumptions of some theoretical distribution and appears to do so consistently over time. It is as if we have a grip on randomness.

Wild uncertainty, in contrast, means that there is scope for a more dramatic type of sea change. Now we are dealing with outcomes that represent a clear break with the past and a violation of our expectations as to what was even supposed to be possible. Imagine long stretches of calm and repetition punctuated by some extreme event. In these cases, what happened did not resemble the past in the least. Key words to look out for when identifying wild uncertainty are ‘unprecedented’, ‘unheard of’, and ‘inconceivable’, because they (while overused) signal that we might be dealing with a new situation, something that sends us off on a new path.

The crucial aspect of wild uncertainty is precisely that the tails of the distributions are in flux. In other words, the historically observed minimum and maximum outcomes can be surpassed at any given time. I will refer to the idea of an ever‐changing tail of a distribution as The Moving Tail. With wild uncertainty, an observation may come along that is outside the established range – by a lot. Such an event means that the tail of the distribution just assumed a very different shape. Put another way, there was a qualitative shift in the tail. Everything we thought we knew about the variable in question turned out to be not even in the ballpark.

An illustration of wild uncertainty and of a tail in flux is provided by ‘the Texas freeze’, which refers to a series of severe blizzards that took place in February 2021, spanning a 10‐day period. The blizzards and the accompanying low temperatures badly damaged physical structures, and among those afflicted were wellheads and generators related to the production and distribution of electricity. As the freeze set in, demand soared as people scrambled to get hold of whatever electricity they could to stay warm and keep their businesses going. In an attempt to bring more capacity to the market, the operator of the Texas power grid, ERCOT, hiked the price of electricity to the legally mandated price ceiling of $9,000/MWh. The price had touched that ceiling on prior occasions – but only for a combined total of three hours. The extremeness of this event lay in the fact that ERCOT kept it at this level for almost 90 consecutive hours.6 A normal trading range leading up to this point had been somewhere between $20 and $40/MWh.

Any analysis of this market prior to February 2021 would have construed tail risk as being about short‐lived spikes, which, when averaged out over several trading days, implied no serious market distress. The Texas freeze shifted the tail. It was a Black Swan. The consequences for market participants were massive,7 and there was nothing in the historical experience that convincingly pointed to the possibility that the price could or would remain at its maximum for 90 hours. After the fact, it looked obvious that something like that could happen. Prolonged winter freezes in Texas are very rare, but with the climate getting more extreme by the day, why not?

The ‘by a lot’ is actually an important qualifier of wild uncertainty. To see why, consider that whenever we have a dataset, some of the observations will represent the tail of the distribution. They are large but rare deviations from some more normal state. Let us say that we have, in a given dataset, a handful of observations that can be said to constitute the tail. There will be, by construction, a minimum and a maximum value, which are the most extreme values that history has had to offer so far.

Unless we are talking about a truly truncated distribution, like income having zero as the lower limit, it is a potential mistake to think that the ‘true’ underlying data‐generating process is somehow capped by the observed minimum and maximum values. If we feed all the observations we have into statistical software, we can ask it to analyse which random process most plausibly generated the patterns in the data. Now, if we immediately take the process identified by the program and draw random values based on it in a simulation, it will come up with a distribution that contains outcomes beyond the lowest/highest observed values in the dataset, and the probability of those outcomes does not drop to virtually zero. This will always happen as long as the approach is to assume that there is some underlying random process generating the data and use real data to approximate it. It is as if the software doing the fitting ‘gets it’ that if we have observed certain extreme values, even more extreme observations cannot be ruled out. If we have observed a drop in the S&P 500 of minus 58% over a certain period of time, who would say that a drop of minus 60% is outside the realm of possibilities? The simulated extremes will lie somewhere to the left (right) of the minimum (maximum) observed in the data. The tail we model in this way will encompass the observed tail and then some.
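A quick sketch makes the point concrete (the data-generating process and sample sizes below are my own illustrative assumptions): fit a distribution to a finite ‘history’, then simulate from the fitted process. The simulated extremes routinely overshoot the observed minimum and maximum:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical history: 1,000 observations from a fat-tailed process.
history = stats.t.rvs(df=4, size=1000, random_state=rng)

# Fit a candidate random process to the data ...
df, loc, scale = stats.t.fit(history)
# ... and draw a large simulated sample from the fitted process.
simulated = stats.t.rvs(df, loc=loc, scale=scale, size=500_000,
                        random_state=rng)

print("observed range: ", history.min(), history.max())
print("simulated range:", simulated.min(), simulated.max())
```

With half a million simulated draws, outcomes beyond the historical extremes appear with probability meaningfully above zero: the modelled tail encompasses the observed tail and then some, exactly as described above.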

The upshot of this discussion is that experiencing an outlier that is only somewhat more extreme than the hitherto observed minimum/maximum should fall within the realm of benign uncertainty. We should not be surprised or taken aback by it. The fitted distribution hands us an implied probability for such an outcome that is meaningfully separate from zero. We have to add ‘by a lot’ for it to count as wild uncertainty, because then the tail has shifted dramatically and in a way that was by no means implied by the historical track record. It is an outlier so extreme that it has a probability of effectively zero, even when the underlying random process we use to form a view of the future has been fitted to all the tail events in the historical track record.

Under conditions of wild uncertainty, it is clear that the concept of probability starts looking increasingly subjective and unverifiable. Indeed, Taleb calls probability ‘the mother of all abstract concepts’ (Taleb, 2007, p. 133) and maintains that we cannot calculate probabilities of shocks (Taleb, 2012, p. 8).8 It is important to see, though, that his scorn is reserved mostly for those who insist on using the symmetric normal distribution and its close relatives. The properties of the normal are seductive because we can derive, with relative ease, all sorts of interesting results, but it is, Taleb maintains, positively dangerous as a guide to decision‐making in a world of wild uncertainty. Why? Primarily because of how it rules out extreme outliers and blinds us to them. A key feature of the normal distribution is that its tails quickly get thinner the further you move from the mean, which implies that the likelihood of extreme outcomes gets lower and lower. In fact, as we move away from the mean, the assigned probabilities drop very fast – much too fast, in Taleb's view (Taleb, 2007, p. 234). The stock market crash in October 1987, for example, saw a return of minus 20.5%. The odds of a drop of at least that magnitude would have been roughly one in a trillion according to the normal. In other words, anyone going by that distribution would have considered it, for practical purposes, an impossible event.
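The 1987 example can be reproduced as a back-of-the-envelope calculation. The mean and standard deviation below are illustrative assumptions of mine (the one-in-a-trillion figure rests on its own parameter choices; with the values used here the implied probability comes out smaller still), but the conclusion is insensitive to the exact inputs:

```python
from scipy import stats

# Assumed (illustrative) parameters for daily S&P 500 returns:
mu, sigma = 0.0003, 0.01   # mean ~0.03%, standard deviation ~1%

# Under the normal distribution, a one-day drop of 20.5% is roughly
# a 20-sigma event.
p = stats.norm.cdf(-0.205, loc=mu, scale=sigma)
print(f"P(return <= -20.5%) = {p:.3e}")
```

Whatever the precise parameters, anyone relying on the normal distribution would have classified the crash as a practical impossibility.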

The first priority, therefore, is to avoid the normal distribution like the plague. In its place, if we still feel compelled to work with probabilities, Taleb offers the idea of fractals. Fractals refer to a geometrical pattern in which the structure and shape of an object remain similar across different scales. The practical implication is that the likelihood of extreme events decreases at a much slower rate than under the normal distribution. If one subscribes to this view, the probability of finding an exceptionally large oil field is not materially lower than that of finding a large or medium‐sized one, because the geological processes that generate them are scale‐independent. This relation between frequency and size is associated with the so‐called power law distributions, which we will relate to socio‐economic processes in Chapter 7. According to Taleb, the idea of fractals should be our default, the baseline mental model for how probabilities change as we move further out on the tail (Taleb, 2007, p. 262).
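The difference in tail behaviour is easy to see numerically. The sketch below (parameters are illustrative) compares the survival function – the probability of exceeding x – for a standard normal against a Pareto power law with tail exponent 2:

```python
from scipy import stats

normal = stats.norm()        # thin-tailed benchmark
pareto = stats.pareto(b=2)   # power law: P(X > x) ~ x**-2 for x >= 1

for x in [2, 4, 8, 16]:
    print(f"x={x:2d}  normal tail={normal.sf(x):.2e}  "
          f"power-law tail={pareto.sf(x):.2e}")
```

Under the power law, doubling x only quarters the tail probability, whereas the normal's tail collapses super-exponentially; by x = 16 the two differ by dozens of orders of magnitude. That scale-independence is the fractal intuition: an extreme outcome is never dramatically less likely than a merely large one.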

In many cases, we lack data that we can explore for mapping out the tail of a random process. In this kind of setting, uncertainty tends to be wild right out of the gate. Technological innovation fits right into this picture, because it brings novelty and injects it into the existing, already volatile, world order. New dynamics are set in motion, triggering unintended consequences and side effects that ripple through the system in an unpredictable fashion. Because we keep innovating, we also keep changing the rules of the game, forever adding to the complexity. Two Black Swans that have sprung from the onward march of technology are the emergence of the internet and the more recent invasion of social media and mobile phones into our lives. There was no existing dataset that we could have studied prior to them that might have suggested that such transformations of our reality were about to happen. Or, more importantly, that they were even possibilities at all. To appreciate how technologies that we are completely immersed in today and take for granted are actually Black Swans, cases of wild uncertainty, consider the words of Professor Adam Alter of New York University:

‘Just go back twenty years [to 2000] … imagine you could speak to people and say, hey, you are going to go to the restaurant and everyone's going to be sitting isolated and looking at a small device, and then they're going to go back home and spend four hours looking at that device, and then you're going to wake up in the morning and look at that device … and people are going to be willing to have body parts broken to preserve the integrity of that device … people would say that is crazy'9

Alter's thought experiment of going back 20 years in time and imagining talking to people about something highly consequential that later happened is a useful one for deciding whether something is to be considered a Black Swan. If you imagine their reaction to what you describe would be that it is ridiculous or inconceivable, chances are that you have found one.

THE ROLE OF EXPECTATIONS

To continue our story, it becomes clear that any characterizations of random processes will be increasingly subjective as we move away from data‐driven approaches. We leave the world of inference from data and enter the realm of the imagination. Our faculties for reasoning and logic can partly make up for a lack of data – we can figure certain stuff out. When the imagination fails us, we have those truest of Black Swans, the inconceivable ones, the ‘unknown unknowns’. We have already mentioned the 9/11 attack as being in this category. In a similar way, the collapse of the Soviet Union was utterly unthinkable to the Western intelligentsia and political establishment at the time. George Kennan, an American diplomat and historian, commented as follows, based on a review of the history of international affairs in the modern era:

‘[It is] hard to think of any event more strange and startling, and at first glance inexplicable, than the sudden and total disintegration and disappearance … of the great power known successively as the Russian Empire and then the Soviet Union.’10

That is, nobody expected the Soviet Union to crumble at this point in time. One of the most crucial aspects of Black Swans is that they are always measured against expectations and prior knowledge. This is an underappreciated point. As noted, most people use the term loosely, largely equating it with high‐impact outcomes that were somehow shocking to us. With the considerable difference, perhaps, that calling it a Black Swan provides an air of complete unpredictability and that, therefore, one is not to be blamed for what just happened. Getting tail risk wrong may be an indictable offense. But Black Swans? They seem to absolve everyone of any responsibility for what went down, because nobody could have seen it coming.

The habit mentioned earlier of equating Black Swans with ‘mere’ tail risk misses out on what is perhaps its most important dimension, namely the expectations we had going into the situation. Because of the role of expectations, what is a Swan to you may not be one to me. It is, in Taleb's preferred terminology, a ‘sucker's problem'. Naïve and ignorant individuals are more prone to experience Black Swans simply because they fail to form realistic expectations. To illustrate this idea, Taleb uses the example of a turkey somewhere in the US as Thanksgiving approaches. Having walked about generously fed for its entire life, the turkey is unsuspecting of the calamity that is about to befall it. The butcher, however, is obviously not unsuspecting, and he is therefore not in for a Black Swan – exactly the same event, but wildly diverging expectations.

The relativity of Black Swans has wide implications. Whenever a high impact event occurs, this may or may not be shocking. The more interesting discussion to be had is about who was attuned to this possibility and who was caught out? A Black Swan always requires a vantage point. To the suckers, it appears as if the tail just moved, but not necessarily to someone who sees the world a bit differently. Whenever we hear the term Black Swan mentioned, therefore, what should immediately spring to mind is the follow‐up question ‘Well, a Black Swan to whom?’

The Covid‐19 pandemic is a case in point. Was this a Black Swan? It certainly meets the criterion of being a highly consequential event. Interestingly, Taleb himself has gone on record saying that C‐19 was not a Black Swan. His argument is that there is a history of pandemics, on which popular films had been based well before C‐19. A basic analysis of the connectivity of the modern world (i.e. means of travel) would also have pointed to the obvious plausibility of a global pandemic. Respected institutions had issued reports warning of global pandemics as far back as the early 2000s. Bill Gates gave a thoughtful talk on the subject in 2015, also issuing words of warning for those willing to listen.11 These considerations may well have sensitized students of history who also had the wisdom to internalize the possibility that this could happen in their own lifetime.

However, casual observation suggests that most of us have not reached such a state of immaculate wisdom. Many do not read books and have never heard of connectivity. All of our egocentric biases make going into denial about pandemics the easiest thing in the world, something which Albert Camus, the French philosopher, understood well:

‘Everybody knows that pestilences have a way of recurring in the world; yet somehow we find it hard to believe in ones that crash down on our heads from a blue sky.’12

For most of us, watching a film about something makes it seem even more unreal. It puts it in the same category as Bruce Willis drilling holes in meteorites about to smash into Earth – pure entertainment.13 If so, consuming films may turn us into even bigger suckers because they warp our expectations. Since we are now discounting zombies heavily as just movie entertainment, woe on us the day they show up on the doorstep, because we have not prepared one bit for that eventuality!

At any rate, even if we had read about pandemics and realized that the probability of another one is clearly not zero, there is something about the magnitude of the consequences. Just like the ‘business‐as‐usual’ risk, Black Swans have two dimensions: possibility and consequence. Granting the possibility of something is a binary situation: a recognition that such a thing could happen (as opposed to saying there is no way it ever could). Even if we are willing to entertain the possibility of something, we could still be suckers with respect to the consequences once the event is unleashed. This is why C‐19, for most people, was a genuine Black Swan. Pandemics, sure, I think I heard about that in school. But who could have imagined entire countries shutting down? Spending months on end in lock‐downs? Tourism coming to a near stand‐still? Mad dashes – knife fights even – for toilet rolls? It would take a pretty serious student of history, and one with a very fertile imagination at that, to envision the severity of the consequences along so many paths. Therefore, for a substantial majority of the planet's inhabitants, C‐19 was a Black Swan. Along the same lines, it would be questionable not to label the French Revolution with its decidedly wild consequences a Black Swan just because there had been revolutions prior to that.

When it comes to the possibility of something happening, history by now offers a pretty impressive palette of different types of events. We might think of these as ‘known unknowns’, as it is clearly within our reach to form an understanding of them. What even the most creative and superbly educated mind cannot envision, however, is the magnitude of the consequences of those events as they would play out in the present day. The dynamics are truly impossible to imagine, and hence the consequences. History repeats itself only in broad outline. With respect to the consequences, most of us are suckers, especially when it comes to our own lives and times. Whenever something impactful but entirely conceivable hits our vicinity, we are stunned no matter what.

WHAT MAKES US SUCKERS?

The previous section referred briefly to the ‘not in our lifetime’ perspective. Let us linger on this point for a bit, as it is one of the keys to understanding Black Swans and why we are essentially born suckers. Most of us will freely admit that humanity is in for one disaster or other. Sooner or later, that asteroid will knock us out of our pants, for sure, but it is always later, somewhere out in the distant future. It is not going to happen in my lifetime. Why? Because I am somehow special. Stuff only happens to other people, whereas I am destined to lead a glorious and comfortable existence. Based on such egocentric beliefs, we might coolly concede that in the larger scheme of things, something is sure to happen, yet almost completely discount the possibility as far as our lifetime and corner of the world is concerned. Not that we always say so publicly or even think in those terms outright, it is more of a tacit assumption.

This ‘because I'm special' protective mechanism goes a long way in setting the stage for Black Swans. However, it is only one of the many ways in which our outlook is warped, which brings us to the long catalogue of biases that have been identified and described by scholars. A bias can be said to be a predisposition to make a mistake in a decision‐making situation, because it leads us away from the decision that would be taken by a rational and well‐informed person who diligently weighs pros and cons. What biases tend to have in common is that they make us more of a sucker than we need to be. They are a staple of business books nowadays (those on risk management in particular) and may bore the educated reader. Since they are so fundamental to the concept of Black Swans, we must briefly review them nonetheless. What follows is a non‐exhaustive list of certain well‐documented biases that in various ways contribute to the Black Swan phenomenon. As is commonly pointed out, these biases have been mostly to our advantage over the long evolutionary haul, but are often liabilities in the unnatural and complex environment we find ourselves in today.

The narrative fallacy

In explaining why we are so poorly equipped to deal with randomness, Taleb focuses on what he refers to as ‘the narrative fallacy’, which he defines as ‘our need to fit a story or pattern to a series of connected or disconnected facts’ (Taleb, 2007, p. 309). We invent causes for things that we observe in order to satisfy our need for coherent explanations. It turns out that we do not suffer dissonance gladly, so our brain will supply any number of rationalizations to connect the dots. By reducing the number of dimensions of the problem at hand and creating a neat narrative, things become more orderly. Everything starts to hang together and make sense, and that is how the dissonance is resolved. Since we are lazy as well, we often converge on the rationalization that satisfies our craving with the least amount of resistance. However, when we force causal interpretations on our reality, and invent stories that satisfy our need for explanations, we make ourselves blind to powerful mechanisms that lie outside these simple narratives.

Confirmation bias

This is one of the leading causes of Swan‐blindness discussed in The Black Swan, where Taleb refers to confirmation as ‘a dangerous error’ (Taleb, 2007, p. 51). It has to do with the general tendency to adopt a theory or idea and then start to look for evidence that corroborates it. When we suffer from this bias, all the incoming data seems, as if by magic, to confirm that the belief we hold is correct; that the theory we are so fond of is indeed true. Whatever instances contradict the theory are brushed aside or ignored, or re‐interpreted (tweaked) in a way that supports our pre‐existing beliefs. Out the window goes Karl Popper's idea of falsification, the true marker of science and open inquiry. Using falsification as a criterion, a theory is discarded if evidence contradicting it becomes undeniable. In the specific context of managing risks, the confirmation bias is a problem because we will be too prone to interpret incoming observations of stability to suggest that the future will be similarly benign.

The optimistic bias

Research has shown that humans tend to view the world as more benign than it really is. Consequently, in a decision‐making situation, people tend to produce plans and forecasts that are unrealistically close to a best‐case scenario.14 The evidence shows that this is a bias with major consequences for risk taking. In the words of Professor Daniel Kahneman (2011): ‘The evidence suggests that an optimistic bias plays a role – sometimes the dominant role – whenever individuals or institutions voluntarily take on significant risks. More often than not, risk takers underestimate the odds they face, and do not invest sufficient effort to find out what they are.’15 Pondering on extreme and possibly calamitous outcomes will clearly not be a priority for an individual with an optimistic bent. Taking a consistently rosy view distorts expectations and therefore invites the Black Swan.

The myopia bias

Myopia, in the literature on the psychology of judgement, refers to the tendency to focus more on short‐term consequences than long‐term implications. Because of our desire for instant gratification, we tend to place much less weight on future gains and losses relative to those in the near‐term. Professors Meyer and Kunreuther call this the most ‘crippling’ of all biases, resulting in gross underpreparedness for disasters that could have been mitigated with relatively simple measures.16 This was the case, for example, with the tsunami in the Indian Ocean in 2004. Only a few years prior, in Thailand, relatively inexpensive mitigation measures had been discussed – and dismissed. The reason? There were many reasons, but among other things, there was a worry that it might cause unnecessary alarm among tourists. Such minuscule short‐term benefits got the upper hand over preparing for events with colossal consequences.

The overconfidence bias

Humans are prone to overrate their own abilities and the level of control they have over a situation. The typical way of exemplifying this tendency is to point to the fact that nearly everyone considers himself an above‐average driver. Taleb prefers the more humorous example of how most French people rate themselves well above the rest in terms of the art of love‐making (Taleb, 2007, p. 153). As for the effect of overconfidence on decision‐making, it is profound – and not in a favourable way. Professor Scott Plous (1993) argues that a large number of catastrophic events, such as the Chernobyl nuclear accident and the Space Shuttle Challenger explosion, can be traced to overconfidence. He offers the following summary: ‘No problem […] in decision‐making is more prevalent and more potentially catastrophic than overconfidence.’17 Overconfidence has been used to explain a wide range of observed phenomena, such as entrepreneurial market entry and trading in financial markets, despite available data suggesting high failure rates.

Considering the above, one is inclined to agree with Taleb when he remarks that ‘… it is as if we have the wrong user's manual' (Taleb, 2007, prologue xxii) for navigating successfully in a world of wild uncertainty. We crave simple but coherent narratives. We value elegant theories and become committed to them. We think we are special and that the world around us is benign. We are equipped with a mind that was created for an existence with much fewer variables and more direct cause‐and‐effect mechanisms. Reflecting deeply about interconnected systems was not key to survival in our evolutionary past. In a somewhat shocking passage, Taleb says that ‘our minds do not seem made to think and introspect’ because, historically speaking, it has been ‘a great waste of energy’ (ibid.).

In fact, information, which potentially helps us rise above sucker‐status, is costly to acquire and process. Imagine that I bring up the possibility of nuclear terror affecting a major US city. Such a scenario involves hundreds of thousands of dead and an upheaval of life as we know it, before even considering what the countermeasures might be. Any firm with operations in the US is likely to be greatly affected by this calamity. Now what is your gut reaction to this proposed topic of conversation? In all likelihood, your kneejerk reaction is to immediately try to shut it down. The sheer unpleasantness of the topic makes us not want to go there, even for a brief moment. It is too much to take in, and frankly too boring, so, to save us the mental energy, we are perfectly willing to resort to the handy tactic of denial.

As problems go, extreme and abstract possibilities remote from everyday practicalities are not inspiring enough to energize us. They are out of sight and therefore out of mind. We are unable to maintain a focus on them for long enough. Our thoughts will gravitate towards something more tangible, some action that yields a more gratifying sense of accomplishment here and now. It often takes a herculean effort to process remote possibilities and we are rarely in the mood for it. They are therefore not necessarily ‘unknown unknowns’; rather, they can be thought of as ‘unknown knowables’. Unknown knowables is meant to convey that it is within our reach to form an understanding of the possibility and most of its consequences, but we fail to do so because of our laziness or disinterest. That makes it, for practical purposes, a Black Swan, on par with the unknown unknowns. At least to some, that is, because others might be prepared to take up the challenge.

THE RELATIVITY OF BLACK SWANS

Earlier in this chapter, we noted that the popular view of Black Swans is that they strike quickly and unexpectedly. Except that there is nothing in the Black Swan framework that says it has to be sudden or even happen within a reasonably short time‐period, like a few months. In fact, many of the examples discussed in Taleb's book are episodes that may seem like distinct and well‐delineated events in a history book, but were prolonged affairs with a long lead‐up. World Wars I and II are both in this category. The rise of Christianity is mentioned as another Black Swan event. A dominant Christianity would no doubt have appeared like an absurd proposition to someone living around the time of the birth of Jesus. Its consequences were certainly immense, so it meets this criterion too. It also took centuries to gain a foothold and start making its impact felt. The rise of the internet and social media were mentioned earlier as examples of technology‐driven Black Swans. They too emerged gradually over many years, infiltrating our lives one small step after the other. Therefore, from the viewpoint of a decision‐maker in the real world (which is the perspective that Taleb urges us to take) they were not instantaneous.