Reframe

Eric Knight
Description

Why can't we eliminate terrorism by killing terrorists? Why can't we learn anything about climate change by discussing the weather? And what do fishermen in Turkey have to teach us about international relations? Often we compound our problems by focusing on the apparent crux of the matter. In Reframe, Eric Knight encourages us to step back and observe our world from afar. By tackling problems from original perspectives and discarding the magnifying glass, we will discover hidden solutions. A remarkably innovative and compelling book from one of the world's most exciting young thinkers, Reframe illustrates how we can cast a fresh eye on seemingly insoluble difficulties by seeing the wood for the trees.

Format: EPUB

Year of publication: 2012




For my mother and father, with love and gratitude

Table of Contents

Title Page

Dedication

INTRODUCTION WHY PEOPLE ARE SMART but act so dumb…

 

CHAPTER 1 THE WALL STREET BANKER GENE and why we all have it…

 

CHAPTER 2 HOW TO SPOT GUERILLAS in the mist…

 

CHAPTER 3 CROSSING THE BORDER into Tea Party America…

 

CHAPTER 4 MEDIA MAGNATES and intellectual magnets…

 

CHAPTER 5 FREEDOM FIGHTERS in lab coats…

 

CHAPTER 6 THE ALL-YOU-CAN-EAT GUIDE to carbon slimming…

 

CHAPTER 7 THE VALLEY OF DEATH and how to climb out of it…

 

CHAPTER 8 HEDGEHOG VERSUS FOX why nimble is better…

 

CONCLUSION CHANGING THE WORLD one frame at a time…

 

ACKNOWLEDGEMENTS…

 

INDEX…

Copyright

INTRODUCTION

WHY PEOPLE ARE SMART

but act so dumb

I first realised I was missing a part of the world during the summer I spent in the jungles of Costa Rica. Hidden beneath the giant arms of the ceiba tree, I felt at peace. Costa Rica offered my twenty-year-old self a refuge from the pace of the modern world. There was a thrill that came with anonymity. I had spent my first two years of university learning Spanish and, with a carefully cultivated tan, I now delighted in being mistaken as tico. I loved bartering with locals at the market for fruit and vegetables. I savoured the simple taste of gallo pinto, rice with beans. Whenever I could, I would carry a day’s supplies in a small string bag. The idea that I might get lost and be able to survive on the bare necessities was my ultimate escapist dream.

In Costa Rica I was posted to a little village called Grano de Oro as a volunteer aid worker with a group of eleven other Australians and Canadians. Grano de Oro, we were told, needed a community hall. The Cabécars, the local indigenous people, would rest there after days of journeying through dense jungle before making their way to the regional markets in Turrialba. The hall would offer shelter and a chance to mingle. Every few months the US government organised a helicopter drop of food, toys, clothes and other supplies at a spot nearby. The hall would also make it easier to distribute these things up into the mountains.

Our modest mission that summer never happened. The building materials for the hall never arrived. Warm tropical rain set in and the road into the mountains became impossibly treacherous. Bored and restless to change the world, I began spending more time at the pulpería, the corner shop where people from the village and the mountains hung out and played pool. As my Spanish improved and the people at the pulpería went from being strangers to friends, I learnt something that profoundly changed the way I saw the world.

The thing was, no one in Grano de Oro wanted this community hall. They wouldn’t say no if you offered it to them for free, but they already had one which more than did the job. What they really wanted – and the reason they had signed the government papers – was people to help mentor the kids in their community, the ones my age. The world was changing and these kids were missing out on the benefits of economic progress. Solving the economic challenges facing Grano de Oro was more complicated than building a new community hall. But seeing this required more patience and a different approach than first met the eye.

As I went to the pulpería day after day, playing more pool than I liked and speaking more Spanish than I knew how, I began to realise that my Costa Rican friends didn’t want the life I had imagined for them. It did them no favours to put up a few planks or clean up their gardens so they could grow vegetables and settle into life in the jungle. When I asked them what they wanted to do with their lives, their answers shocked me. Alejandro wanted to become an accountant. Henry wanted to be a banker or a businessman. Nazareth wanted to study international relations, and the rest wanted to be lawyers.

“Why on earth would you want to do that?” I asked them indignantly. “I’ve tried it, and believe me, what you’ve got is much better.” It was only later that I realised the mistake I was making. In projecting my own dreams for them, I was robbing them of the freedom to have their own.

Solving the economic challenges of Grano de Oro was about education and mentoring, not about building something you could bounce a ball against or take a photo of. The building was an obvious answer to Grano’s problems, but it wasn’t the best one. My friends eventually went to university in San Jose, but I was the one who learnt the lesson that summer. By seeing the problem from one angle, I had missed the answers lying just out of view.

1.

This is a book about our trickiest problems, how they have answers, and why we miss them. I’m interested in political and economic problems, mainly. In the chapters ahead we will travel through history: from the spectacular financial bubbles which plagued Dutch finance in the seventeenth century to Lawrence of Arabia’s great campaigns in the Middle East, from the fears of ecological catastrophe in eighteenth-century England to the flow of Mexican migrants crossing the US border every day. Each of these problems is very different, but I want to persuade you that we make the same mistake in each case. We focus on what’s immediately apparent and we miss the bigger picture.

To see what I mean, picture a table resting against a wall, with several objects sitting on it: a box of pins, a candle and some matches. How would you light the candle and attach it to the wall so that no wax drips onto the table?

Don’t worry if you find this tricky. The experiment was invented more than half a century ago by Karl Duncker, a German psychologist, to examine the way we think through puzzles. The most common answer is to pin the candle to the wall at an angle so that the wax runs down the wall. The correct answer is to empty the box of pins and use it as a candleholder. When you pin the box to the wall and place the candle inside, there is no risk of wax splashing on the table.

The Duncker candle problem isn’t an intelligence test. It doesn’t reflect how well educated you are or how big your brain is. Young children are among the best at getting the right answer. The reason we struggle with it is that we are so used to seeing the box in terms of one purpose – as a container for pins – that we miss the other way of using it. Once the solution is revealed to us, we get it. No one makes the same mistake twice with the Duncker candle problem. You will now never forget to see a box of pins as a potential candleholder.

The Duncker problem is a neat party trick. But in this book I want to pull it out of the realm of puzzles and apply it to the world of politics. My contention is that we often struggle with our trickiest political problems because of how we see them. We tend to view the world in set ways. We become so intent on analysing a problem one way that we lose all the subtlety needed to get to the best answer. When this happens, we need to readjust how we interpret the world around us. We can solve seemingly insoluble problems by changing the way we think about them.

I am aware this is an optimistic – some would even say simplistic – view of the world. I’ll justify it as the book goes on. But to fully appreciate why we miss the answers latent in the world around us, we need to go one step beyond the Duncker experiment. It’s not just that we fixate on one thing and not others. It’s that we fixate on certain kinds of things. Let me show you what I mean.

Suppose for a moment that you are a randomly selected person living in the United States in 2006. Think about what you know of the place and ask yourself the following question: which of the following is most likely to kill you next year? Take a close look at each line and select which cause of death, in column “A” or “B”, is more likely in each case.

CAUSES OF DEATH

A                                        B
All accidents (unintentional injuries)   Stroke
Suicide                                  Diabetes
High blood pressure                      Influenza and pneumonia
Homicide                                 Alzheimer’s disease
War                                      Syphilis

If you answered “A” to any of the options above, you have illustrated the point I am about to make. These causes of death are taken from the records of the US National Center for Health Statistics for 2007. In each case, the likelihood of being killed by “B” is higher than for “A”. In fact, in almost every case the likelihood of “B” outnumbers “A” by a factor of at least two to one.

The reason why many of us were drawn to column “A” is because it contains all the causes of death we are most likely to see and hear about. We have all read stories of people who committed suicide, but deaths by diabetes rarely make the papers. Some of our relatives probably suffer from high blood pressure, and it seems a whole lot worse than when they last had the flu. As for murder and war, they are the stuff of the nightly news.

Two cognitive psychologists working in the 1970s, Amos Tversky and Daniel Kahneman, conducted an experiment much like the one above. They went through a list of causes of death with interview subjects and recorded their reactions. We will learn more about Tversky and Kahneman’s work in Chapter 1, but their conclusions were basically this: people overestimated the likelihood of certain causes of death – murder, suicide, fatal accidents – because these were the most dramatic and easy to see. What they underestimated were the silent killers – asthma, stroke and so on.

In the chapters ahead, I call this impulse the magnifying glass trap. Most people think of a magnifying glass as a visual aid, but I think of it in the opposite way. The magnifying glass trap is the tendency to zoom in and fix on one corner of the universe and miss those elements of a solution lying just outside the lens. Sometimes we are lured into the trap by shiny objects: those parts of a problem which are visually compelling and graphic. At other times we are lured by intellectual trinkets and shiny ideas. Both can distract us on our mission to solve complex problems. It’s only when we cast them aside that we have a chance of making progress.

2.

I’ve presented two ideas so far. One: we tend to view the world through a magnifying glass. Two: we tend to point the magnifying glass in the direction of shiny objects. These ideas should prompt you to ask a very good question. If we look for the answers in the wrong places, what should we do about it?

I will suggest in this book that we should “reframe” the problem. It is important to be clear what I mean by this. Reframing is not a linguistic tool, a trick to disguise or evade difficult problems. Rather, it is an intellectual choice we must make. Seeing the answer to our problem requires us to have the right elements of the problem – the right system – in focus. Instead of a magnifying glass, think for a moment of the lens on a camera. When the aperture is set at one width, we see a flower. That’s one system in focus. Widen the aperture and we see a meadow – that’s another system in focus. Widen it further and we see the mountains – a third system. Having a particular system in focus alters our ability to see the answer. Focus on the flower, and the mountains are invisible. Reframe the problem – remove the magnifying glass – and we may arrive at the right answer.

In the chapters ahead I will reframe some of our most intractable political debates. I will be ambitious in choosing what we take on: the biggest political headaches of the last decade. We will begin by looking at the way a Wall Street banker did a deal in the late ’90s. We will also examine how a US general fought a war in the Middle East. We will consider climate change, immigration and more. In each case, these debates have become stuck because someone has fallen into the magnifying glass trap.

Reframe is as much a book about human psychology as it is about politics. I’m less interested in telling you what to think and more interested in showing you an alternative way to approach sticky situations. In the end the story I will tell is an optimistic one of how we can make the world more peaceful and prosperous. We can resolve our trickiest problems. In some cases we already are solving them – we just don’t see it. The most common mistake is to search for the answers in the wrong place without thinking to adjust our point of view. Sometimes this mistake is made by others. In this book I want us to ask a different question: when is the mistake our own?

CHAPTER 1

THE WALL STREET BANKER GENE

and why we all have it

When Robert Merton and Myron Scholes accepted the Nobel Prize for Economics in late 1997, they were already waist-deep in one of the most spectacular crashes of modern finance. The pair were awarded the prize for their work in financial mathematics. Their accomplishment? A tool to model complex financial products. Globalisation had opened up highways for money to flow across borders faster than ever before, but to all intents and purposes the money was invisible.

Merton and Scholes’s models were installed in the engine room of Wall Street’s most exclusive hedge fund, Long-Term Capital Management. By late 1998, the fund had collapsed. The real story of LTCM’s demise was a surprising one. More important than the amount of money lost were the implications for financial modelling. Merton and Scholes had treated history as if it was a sequence of events to which savvy traders reacted, but it turned out that the world was more complicated than that. The problem was that the LTCM models were more like a magnifying glass than a mirror. They zoomed in on micro events on the trading floor, but they missed the powerful macroeconomic processes at play in the global economy.

Before going any further, it’s worth noting what a big deal LTCM was. The fund was run by a high-finance dream team. John Meriwether, a legendary bond trader at Salomon Brothers in the 1980s, was its executive director. The team he put together was hand-picked from Salomon Brothers when it collapsed in the 1990s. The rest were ex-faculty members from Harvard Business School and MIT. Merton and Scholes sat on the board of directors. When the fund finally came together in 1994, it was oversubscribed to the tune of billions of dollars. America’s richest people, associated through firms like Merrill Lynch and UBS, the investment banks, lined up out the door for a chance to put their dollars to work in the fund. It didn’t take long for LTCM to become one of the biggest private money-making machines in history.

The package LTCM offered its investors was known as fixed-income arbitrage. It was the ultimate investment opportunity: maximum return, minimum risk. It worked like this: computers were programmed to scan the markets for potentially attractive investments. If two assets were found which were virtually identical but traded at different prices in different parts of the world, the fund bought the cheaper asset and squeezed out a profit from the difference in market prices. The strategy worked because these discrepancies were hard to find and usually too small for ordinary investors to take advantage of. With its superior technology and number-crunching ability, LTCM backed itself to find the anomalies before anyone else and exploit them ruthlessly.
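To make the mechanics concrete, here is a minimal sketch in Python of the kind of scan described above. It is not LTCM’s actual system (the asset names, prices and threshold are invented for illustration), but it captures the basic logic: find two near-identical assets trading at different prices, buy the cheaper, sell the dearer, and keep the difference if the prices converge.

def find_relative_value_trades(pairs, min_spread=0.25):
    """Scan pairs of near-identical assets for exploitable price gaps.

    Each pair is (name_a, price_a, name_b, price_b), with prices in dollars.
    All inputs here are hypothetical; LTCM's real models were far more complex.
    """
    trades = []
    for name_a, price_a, name_b, price_b in pairs:
        spread = round(price_a - price_b, 2)
        if abs(spread) >= min_spread:
            # Buy the cheaper asset and sell the dearer one; the profit is
            # only realised if the two prices eventually converge.
            cheap, dear = (name_b, name_a) if spread > 0 else (name_a, name_b)
            trades.append({"buy": cheap, "sell": dear, "spread": abs(spread)})
    return trades

# Hypothetical example: a newly issued bond and an older, nearly identical one.
pairs = [("30-year Treasury (new issue)", 100.50,
          "29.75-year Treasury (old issue)", 99.90)]
print(find_relative_value_trades(pairs))
# [{'buy': '29.75-year Treasury (old issue)',
#   'sell': '30-year Treasury (new issue)', 'spread': 0.6}]

The fragility the rest of the chapter describes is already visible in the sketch: it simply assumes the two prices will converge, and in a general crash both legs of the trade can fall together.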

LTCM described itself as a market-neutral fund. In other words, it promised to make investors money no matter what the state of the market – up or down – or the performance of any particular asset class – stocks, bonds and so on. This feat was possible because instead of picking a particular asset (for example, sub-prime mortgages in the United States) and making money from its spectacular ascent, LTCM made money when the same asset was traded at different prices.

When the fall came, it was swift. On Friday 21 August 1998, LTCM lost $550 million in a single day. Over a four-month period that year, LTCM lost close to half its value, wiping out US$4.6 billion in investor capital. Reeling from the losses, John Meriwether wrote a letter to investors on 2 September asking for emergency capital to carry the fund through a difficult time. News of the letter swept Wall Street. The amount of money at stake was so large that when the New York Federal Reserve heard what was happening, it paid the fund a visit. On 20 September, Peter Fisher, the executive vice-president of the New York Federal Reserve, some of his colleagues from the US Federal Reserve and a string of bankers from Goldman Sachs and JP Morgan turned up at the fund’s New Haven offices. They asked to inspect the books. What they discovered was extraordinary.

The model which LTCM had been peddling was known in the finance industry as “relative betting”. The “relative” part referred to the fact that there were two almost identical assets. The “betting” was the assumption that the prices would eventually converge, ensuring a profit for the holder of the lower-priced asset. In theory, this would always happen over the long term, which in the world of finance meant roughly every seven years. Over the very long term, however, history showed that markets crashed on average once a decade. And when the crash came, it didn’t matter what relative positions existed in the market. All assets in the market went down.

LTCM’s troubles had started in May 1998, when the Asian financial crisis spurred a sell-off in American and European stock markets. By August the contagion had spread to bond markets after the Russian government, weakened by political unrest and flagging oil prices, defaulted on its debts. But instead of adjusting for these long-term trends, the brains behind LTCM bet on their ability to outfox the market. Instead of unwinding the fund’s investment positions, they doubled up and leveraged to the hilt. It was the wrong move: by the time Fisher and his intervention team arrived in September 1998, LTCM was exposed to debt thirty times its capital base. It needed more than emergency capital. It needed a cash injection the size of the GDP of a small country.

The lesson of LTCM was that its bosses had the wrong system in focus. By reacting to recent events, they had ignored long-run processes. LTCM’s managers had been so confident of their ability to beat history that they had not sufficiently stress-tested their financial tools. According to Niall Ferguson, the British economic historian, had they plugged as little as a decade’s worth of data into their models before setting out, they would have realised the weak point in their strategy. They had failed to turn to history. “If I had lived through the Depression,” Meriwether later said, “I would have been in a better position to understand events.”

1.

What is the precise mistake the people at LTCM made? Ignorance about the future is an age-old problem, so they can hardly be blamed for that. Nor can you fault the conclusions they drew from the data flickering across their computer screens. Given what they knew about converging asset prices, it made sense to hold onto their investments.

The error in judgement happened at the very start of the investment process. It came from the data they chose to feed into the system and rely on when making their decisions. By taking a small strip of history and plugging it into their model, they were reacting to short-term anomalies and not long-term trends. Long-Term Capital Management’s mistake was, ironically enough, to not think long-term enough.

There is nothing original about the punchline that someone on Wall Street was short-term in their thinking. But you’ll be glad to know that that is not my punchline. What’s more interesting, indeed truly remarkable, about the LTCM story is that these people should have fallen into the short-term trap. These were intelligent, well-educated, rational people who were specifically employed for their ability to avoid losses of just this kind. It would be easy to say that greed and a passion for making money got the better of them, but that is manifestly untrue.

Michael Lewis noted in a New York Times article in 1999 that the most conspicuous form of consumption among the “young professors” at LTCM was to reinvest their bonuses back into improving their model. Intellectual accuracy, in other words, was the ultimate prize. These were some of the most conservative economic thinkers and academics of their time. They had every incentive to make the right decisions. “When you asked them a simple question,” Lewis wrote, “they thought about it for eight months before they answered, and then their answer was so complicated you wished you had never asked.” To the extent that they were vulnerable to emotion and human error, their models were designed to detect and eliminate this. That they failed so spectacularly needs more than a little explaining. Let’s go back in time, fifty years earlier, to try to solve this riddle.

In the 1950s, economists trying to understand how people made decisions about things as simple as buying food at the supermarket, or as complex as buying shares on the stock market, referred to something called the “pigeon puzzle”. The pigeon puzzle was an experiment involving a pigeon and a series of incentives and punishments. When the pigeon did an approved task, it was rewarded with food. When it did a disapproved one, it received an electric shock. The conclusion of the pigeon puzzle was that pigeons tried to optimise outcomes, responding to the carrots and sticks. They tried to maximise their gains and minimise their losses. The pigeon was, in other words, a perfectly rational being.

As the Cold War engulfed the second half of the twentieth century, a key political debate was how best to organise human society. The question was a simple one. Were people like pigeons? Was it fair to assume that people were rational and able to allocate resources optimally in a market economy? Or was it safer to rely on a paternal government to organise how things were produced and distributed?

The answer? To a large extent people were rational. Unlikely as it might sound, a society governed by incentives and punishments resulted in a fairer allocation of resources than a socialist society ruled by benevolent dictate. Because people were rational, they were able to make choices in a marketplace which optimised their social outcomes, such as finding the right person to marry or buying fashionable clothes. The end result was that almost everyone was better off.

There was, however, a small catch. The assumption that people were pigeons could not account for the occasional, seemingly random moments of demonstrably bad human judgement. If people were so smart, why did they sometimes make decisions that were so dumb? If people always made the optimal choice, then why was the divorce rate so high? If we always chose in our best interest, why did some people have such demonstrably bad dress sense? The failure of LTCM was a case in point. It was in no one’s interest to lose US$4.6 billion in four months, and yet history had a way of repeating when it came to such mistakes. Clearly, something was up.

The easy way of resolving the apparent contradiction was to argue that the starting assumption was wrong: people were in fact irrational. The better answer came from an economist at Carnegie Mellon University, Herb Simon. Simon spent his career unpicking the conundrum that people, though rational, sometimes made decisions which went against their best long-term interests. A contemporary of the legendary free-marketeers Milton Friedman and Gary Becker, Simon dedicated his Nobel Prize lecture in 1978 to outlining why their rational choice theory was not a complete explanation of reality. Like the pigeon puzzle, rational choice theory contended that people were perfectly capable of deciding what was in their own best interests without any assistance. The extension of this was that all people acting in their best interests benefited society as a whole. Simon’s view was that the real world was a little more complicated.

In his opening remarks in Stockholm, Simon directed his audience to the words of the great nineteenth-century economist Alfred Marshall. Marshall had made the following observation: “Economics … is on the one side a study of wealth; and on the other, and more important side, a part of the study of man.” What was missing from Friedman and Becker’s economics, Simon argued, was an appreciation of human psychology. Theirs was the story of homo economicus. It needed an extra chapter on homo sapiens.

The “sapiens”, or thinking, part of Herb Simon’s theory was important. Simon had not been awarded the Nobel Prize for arguing that humankind was irrational. After all, neurologists and psychologists universally agreed that this was plainly wrong. The very fact that humankind had maintained itself for so many millennia was evidence that it was supremely rational. Self-preservation was a rational goal. Over the long run humans had done an exceptional job of securing and advancing their material self-interest. The problem, Simon argued, came down to how we reasoned our way through short-term events.

Simon’s theory was called “bounded rationality”. Its central claim was that although we were supremely rational, we tended to see the world through blinkers. Our ability to make decisions was “bounded” by various limits. There were limits on how much of the world we could see directly at any one time, the number of perspectives we could consider, our memory capacity, our level of expertise, and so on. These limits meant that we tended to solve complex problems by breaking them down and focusing on their most digestible parts. By using these intellectual shortcuts, people had a tendency to leave an awful lot of information out of the picture. Large tracts of the world remained outside the frame.

After inventing the notion of bounded rationality, Simon left it to others to add flesh through real-world experiments. Towards the end of his 1978 speech, Simon referred to two up-and-coming cognitive psychologists – Amos Tversky of Stanford University and Daniel Kahneman of Hebrew University, whom we met earlier – who had shown some promise in this regard. Tversky and Kahneman used social experiments to drill down and examine the detail of Simon’s theory.

In 2002, Kahneman was awarded a Nobel Prize for his contribution to a new field called behavioural economics. Through the 1980s and 1990s behavioural economists spent a lot of time examining the world of finance. Pointing out how the traders at LTCM fixated on small datasets to the exclusion of broader historical trends was the kind of research they did.

2.

The LTCM debacle showed that some well-paid Wall Street bankers viewed the world through a magnifying glass. Not too many surprises there. But the point behind the emerging academic discipline of behavioural economics was that it wasn’t just Wall Street bankers who walked into the magnifying glass trap. We all did. Let me show you what I mean.

Linda is thirty-one years old, single, outspoken and very bright. She majored in philosophy at university. As a student, she was deeply concerned with issues of discrimination and social justice, and she also participated in anti-nuclear demonstrations. Which is more probable?

(1) Linda is a bank teller

(2) Linda is a bank teller and a committed feminist

Most of us answer (2), but it cannot possibly be more probable. Consider the mathematics of the question. The chance that Linda fulfils both conditions – that she is both a bank teller and a feminist – is always less than the chance that she fulfils just one condition – that she is a bank teller. But the question feeds our cognitive biases. We reach for fast answers based on what we have seen before. As soon as we see the words “social justice” and “feminist”, we circle the second answer, forgetting to zoom out and consider the problem objectively. We leap to a conclusion, which can yield the wrong answer.
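Written as a worked piece of probability (a sketch of the reasoning, with T standing for “Linda is a bank teller” and F for “Linda is a feminist”), the conjunction rule makes the trap explicit:

P(T \cap F) = P(T)\,P(F \mid T) \le P(T), \qquad \text{since } 0 \le P(F \mid T) \le 1.

However vivid the description, multiplying by a further probability can never make the joint event more likely than the single one.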

Let’s try another one. Suppose I have just given you $1000 for participating in my experiment. Now I’m going to give you a choice. You can have either (a) a 50 per cent chance of winning another $1000 or (b) a guaranteed additional $500. Which would you prefer? Write the answer down.

Now clear your mind and answer a new question. Suppose you are on a quiz show and you have just won $2000. The host gives you an option in your final round. You can have either (c) a 50 per cent chance of losing $1000 or (d) a guaranteed loss of $500 and no more. Which option do you choose? Write down your second answer.

Tversky and Kahneman conducted an experiment similar to this in 1979 with two groups. One group answered the first question. Most people – 84 per cent – chose option (b), a guaranteed $500. The other group answered the second question and most people – 69 per cent – chose option (c), a 50–50 chance of losing $1000. The results are slightly distorted by asking you both questions in sequence, but the point should still be clear. The pay-off in both questions is identical, but our response to them is different. In both questions the gamble offers a 50 per cent chance of ending up with $2000 (and a 50 per cent chance of ending up with $1000), while the sure option leaves us with $1500. But the majority of us react differently depending on whether the choice is presented as a win or as a loss. We are “risk-averse for positive prospects”, but “risk-seeking for negative prospects”.
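Laid out as simple arithmetic (a sketch using only the dollar amounts given above, not part of Tversky and Kahneman’s original materials), the equivalence is easy to check:

# Final-wealth outcomes implied by each option described in the text.
endowment_1, endowment_2 = 1000, 2000

option_a = [endowment_1 + 0, endowment_1 + 1000]   # 50/50 gamble on winning $1000
option_b = [endowment_1 + 500]                     # guaranteed extra $500
option_c = [endowment_2 - 1000, endowment_2 - 0]   # 50/50 gamble on losing $1000
option_d = [endowment_2 - 500]                     # guaranteed loss of $500

print(option_a, option_c)  # [1000, 2000] [1000, 2000] -> same two outcomes
print(option_b, option_d)  # [1500] [1500]             -> same certain outcome

Only the wording, a gain in the first question and a loss in the second, separates the two choices.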

Tversky and Kahneman ran many such experiments on well-educated people. What they discovered supported the view that we all view the world with certain built-in biases. They gave each of these biases a name. The Linda experiment showed the “conjunction fallacy”, and the choice of different money options was called “prospect theory”.

Then there was the “representativeness bias”, the tendency to assume that the past predicted the future because recent events were fresh in our minds. At a roulette table, gamblers were likely to assume that red was more probable after a run of black. In fact, the probability of red or black each time was always 50–50. Our expectation was affected by what we had seen before. We assumed that the world had a way of balancing out the recent hot run of black, even though a new player to the table would never make this mistake.

In another case, people in one group gave a different estimate of the answer to the sum