In today's hyper-connected society, understanding the mechanisms of trust is crucial. Issues of trust are critical to solving problems as diverse as corporate responsibility, global warming, and the political system. In this insightful and entertaining book, Schneier weaves together ideas from across the social and biological sciences to explain how society induces trust. He shows the unique role of trust in facilitating and stabilizing human society. He discusses why and how trust has evolved, why it works the way it does, and the ways the information society is changing everything.
Table of Contents
A Note for Readers
Chapter 1: Overview
Part I: The Science of Trust
Chapter 2: A Natural History of Security
Chapter 3: The Evolution of Cooperation
Chapter 4: A Social History of Trust
Chapter 5: Societal Dilemmas
Part II: A Model of Trust
Chapter 6: Societal Pressures
Chapter 7: Moral Pressures
Chapter 8: Reputational Pressures
Chapter 9: Institutional Pressures
Chapter 10: Security Systems
Part III: The Real World
Chapter 11: Competing Interests
Chapter 12: Organizations
Chapter 13: Corporations
Chapter 14: Institutions
Part IV: Conclusions
Chapter 15: How Societal Pressures Fail
Chapter 16: Technological Advances
Chapter 17: The Future
Acknowledgments
Notes
References
Copyright
Credits
About the Author
A Note for Readers
This book contains both notes and references. The notes are explanatory bits that didn't make it into the main text. These are indicated by superscript numbers in both the paper and e-book formats. The references are indicated by links in the main text.
High-resolution versions of the figures can be found at www.schneier.com/lo.
Chapter 1
Overview
Just today, a stranger came to my door claiming he was here to unclog a bathroom drain. I let him into my house without verifying his identity, and not only did he repair the drain, he also took off his shoes so he wouldn't track mud on my floors. When he was done, I gave him a piece of paper that asked my bank to give him some money. He accepted it without a second glance. At no point did he attempt to take my possessions, and at no point did I attempt the same of him. In fact, neither of us worried that the other would. My wife was also home, but it never occurred to me that he was a sexual rival and I should therefore kill him.
Also today, I passed several strangers on the street without any of them attacking me. I bought food from a grocery store, not at all concerned that it might be unfit for human consumption. I locked my front door, but didn't spare a moment's worry at how easy it would be for someone to smash my window in. Even people driving cars, large murderous instruments that could crush me like a bug, didn't scare me.
Most amazingly, this worked without much overt security. I don't carry a gun for self-defense, nor do I wear body armor. I don't use a home burglar alarm. I don't test my food for poison. I don't even engage in conspicuous displays of physical prowess to intimidate other people I encounter.
It's what we call “trust.” Actually, it's what we call “civilization.”
All complex ecosystems, whether they are biological ecosystems like the human body, natural ecosystems like a rain forest, social ecosystems like an open-air market, or socio-technical ecosystems like the global financial system or the Internet, are deeply interlinked. Individual units within those ecosystems are interdependent, each doing its part and relying on the other units to do their parts as well. This is neither rare nor difficult, and complex ecosystems abound.
At the same time, all complex ecosystems contain parasites. Within every interdependent system, there are individuals who try to subvert the system to their own ends. These could be tapeworms in our digestive tracts, thieves in a bazaar, robbers disguised as plumbers, spammers on the Internet, or companies that move their profits offshore to evade taxes.
Within complex systems, there is a fundamental tension between what I'm going to call cooperating, or acting in the group interest; and what I'll call defecting, or acting against the group interest and instead in one's own self-interest. Political philosophers have recognized this antinomy since Plato. We might individually want each other's stuff, but we're collectively better off if everyone respects property rights and no one steals. We might individually want to reap the benefits of government without having to pay for them, but we're collectively better off if everyone pays taxes. Every country might want to be able to do whatever it wants, but the world is better off with international agreements, treaties, and organizations. In general, we're collectively better off if society limits individual behavior, and we'd each be better off if those limits didn't apply to us individually. That doesn't work, of course, and most of us recognize this. Most of the time, we realize that it is in our self-interest to act in the group interest. But because parasites will always exist—because some of us steal, don't pay our taxes, ignore international agreements, or ignore limits on our behavior—we also need security.
Society runs on trust. We all need to trust that the random people we interact with will cooperate. Not trust completely, not trust blindly, but be reasonably sure (whatever that means) that our trust is well-founded and they will be trustworthy in return (whatever that means). This is vital. If the number of parasites gets too large, if too many people steal or too many people don't pay their taxes, society no longer works. It doesn't work both because there is so much theft that people can't be secure in their property, and because even the honest become suspicious of everyone else. More importantly, it doesn't work because the social contract breaks down: society is no longer seen as providing the required benefits. Trust is largely habit, and when there's not enough trust to be had, people stop trusting each other.
The devil is in the details. In all societies, for example, there are instances where property is legitimately taken from one person and given to another: taxes, fines, fees, confiscation of contraband, theft by a legitimate but despised ruler, etc. And a societal norm like “everyone pays his or her taxes” is distinct from any discussion about what sort of tax code is fair. But while we might disagree about the extent of the norms we subject ourselves to—that's what politics is all about—we're collectively better off if we all follow them.
Of course, it's actually more complicated than that. A person might decide to break the norms, not for selfish parasitical reasons, but because his moral compass tells him to. He might help escaped slaves flee into Canada because slavery is wrong. He might refuse to pay taxes because he disagrees with what his government is spending his money on. He might help laboratory animals escape because he believes animal testing is wrong. He might shoot a doctor who performs abortions because he believes abortion is wrong. And so on.
Sometimes we decide a norm breaker did the right thing. Sometimes we decide that he did the wrong thing. Sometimes there's consensus, and sometimes we disagree. And sometimes those who dare to defy the group norm become catalysts for social change. Norm breakers rioted against the police raids of the Stonewall Inn in New York in 1969, at the beginning of the gay rights movement. Norm breakers hid and saved the lives of Jews in World War II Europe, organized the Civil Rights bus protests in the American South, and assembled in unlawful protest at Tiananmen Square. When the group norm is later deemed immoral, history may call those who refused to follow it heroes.
In 2008, the U.S. real estate industry collapsed, almost taking the global economy with it. The causes of the disaster are complex, but in large part it was brought about by financial institutions and their employees subverting financial systems to their own ends. They wrote mortgages to homeowners who couldn't afford them, and then repackaged and resold those mortgages in ways that intentionally hid the real risk. The rating agencies, which made money rating these bonds, gave them high ratings to ensure repeat rating business.
This is an example of a failure of trust: a limited number of people were able to use the global financial system for their own personal gain. That sort of thing isn't supposed to happen. But it did happen. And it will happen again if society doesn't get better at both trust and security.
Failures in trust have become global problems:
The Internet brings amazing benefits to those who have access to it, but it also brings with it new forms of fraud. Impersonation fraud—now called identity theft—is both easier and more profitable than it was pre-Internet. Spam continues to undermine the usability of e-mail. Social networking sites deliberately make it hard for people to effectively manage their own privacy. And antagonistic behavior threatens almost every Internet community.
Globalization has improved the lives of people in many countries, but with it came an increased threat of global terrorism. The terrorist attacks of 9/11 were a failure of trust, and so were the government overreactions in the decade following.
The financial network allows anyone to do business with anyone else around the world; but easily hacked financial accounts mean there is enormous profit in fraudulent transactions, and easily hacked computer databases mean there is also a global market in (terrifyingly cheap) stolen credit card numbers and personal dossiers to enable those fraudulent transactions.
Goods and services are now supplied worldwide at much lower cost, but with this change comes tainted foods, unsafe children's toys, and the outsourcing of data processing to countries with different laws.
Global production also means more production, but with it comes environmental pollution. If a company discharges lead into the atmosphere—or chlorofluorocarbons, or nitrogen oxides, or carbon dioxide—that company gets all the benefit of cheaper production costs, but the environmental cost falls on everybody else on the planet.
And it's not just global problems, of course. Narrower failures in trust are so numerous as to defy listing. Here are just a few examples:
In 2009–2010, officials of Bell, California, effectively looted the city's treasury, awarding themselves unusually high salaries, often for part-time work.
Some early online games, such as Star Wars Galaxy Quest, collapsed due to internal cheating.
The senior executives at companies such as WorldCom, Enron, and Adelphia inflated their companies' stock prices through fraudulent accounting practices, awarding themselves huge bonuses but destroying the companies in the process.
What ties all these examples together is that the interest of society was in conflict with the interests of certain individuals within society. Society had some normative behaviors, but failed to ensure that enough people cooperated and followed those behaviors. Instead, the defectors within the group became too numerous or too powerful or too successful, and ruined it for everyone.
This book is about trust. Specifically, it's about trust within a group. It's important that defectors not take advantage of the group, but it's also important for everyone in the group to trust that defectors won't take advantage.
“Trust” is a complex concept, and has a lot of flavors of meaning. Sociologist Piotr Sztompka wrote that “trust is a bet about the future contingent actions of others.” Political science professor Russell Hardin wrote: “Trust involves giving discretion to another to affect one's interests.” These definitions focus on trust between individuals and, by extension, their trustworthiness.1
When we trust people, we can either trust their intentions or their actions. The first is more intimate. When we say we trust a friend, that trust isn't tied to any particular thing he's doing. It's a general reliance that, whatever the situation, he'll do the right thing: that he's trustworthy. We trust the friend's intentions, and know that his actions will be informed by those intentions.2
The second is less intimate, what sociologist Susan Shapiro calls impersonal trust. When we don't know someone, we don't know enough about her, or her underlying motivations, to trust her based on character alone. But we can trust her future actions.3 We can trust that she won't run red lights, or steal from us, or cheat on tests. We don't know if she has a secret desire to run red lights or take our money, and we really don't care if she does. Rather, we know that she is likely to follow most social norms of acceptable behavior because the consequences of breaking these norms are high. You can think of this kind of trust—that people will behave in a trustworthy manner even if they are not inherently trustworthy—more as confidence, and the corresponding trustworthiness as compliance.4
In another sense, we're reducing trust to consistency or predictability. Of course, someone who is consistent isn't necessarily trustworthy. If someone is a habitual thief, I don't trust him. But I do believe (and, in another sense of the word, trust) that he will try to steal from me. I'm less interested in that aspect of trust, and more in the positive aspects. In The Naked Corporation, business strategist Don Tapscott described trust, at least in business, as the expectation that the other party will be honest, considerate, accountable, and transparent. When two people are consistent in this way, we call them cooperative.
In today's complex society, we often trust systems more than people. It's not so much that I trusted the plumber at my door as that I trusted the systems that produced him and protect me. I trusted the recommendation from my insurance company, the legal system that would protect me if he did rob my house, whatever educational system produces skilled plumbers and whatever insurance system bonds them, and—most of all—the general societal systems that inform how we all treat each other in society. Similarly, I trusted the banking system, the corporate system, the system of police, the system of traffic laws, and the system of social norms that govern most behaviors.5
This book is about trust more in terms of groups than individuals. I'm not really concerned about how specific people come to trust other specific people. I don't care if my plumber trusts me enough to take my check, or if I trust that driver over there enough to cross the street at the stop sign. I'm concerned with the general level of impersonal trust in society. Francis Fukuyama's definition nicely captures the term as I want to use it: “Trust is the expectation that arises within a community of regular, honest, and cooperative behavior, based on commonly shared norms, on the part of other members of that community.”
Sociologist Barbara Misztal identified three critical functions performed by trust: 1) it makes social life more predictable, 2) it creates a sense of community, and 3) it makes it easier for people to work together. In some ways, trust in society works like oxygen in the atmosphere. The more customers trust merchants, the easier commerce is. The more drivers trust other drivers, the smoother traffic flows. Trust gives people the confidence to deal with strangers: because they know that the strangers are likely to behave honestly, cooperatively, fairly, and sometimes even altruistically. The more trust is in the air, the healthier society is and the more it can thrive. Conversely, the less trust is in the air, the sicker society is and the more it has to contract. And if the amount of trust gets too low, society withers and dies. A recent example of a systemic breakdown in trust occurred in the Soviet Union under Stalin.
I'm necessarily simplifying here. Trust is relative, fluid, and multidimensional. I trust Alice to return a $10 loan but not a $10,000 loan, Bob to return a $10,000 loan but not to babysit an infant, Carol to babysit but not with my house key, Dave with my house key but not my intimate secrets, and Ellen with my intimate secrets but not to return a $10 loan. I trust Frank if a friend vouches for him, a taxi driver as long as he's displaying his license, and Gail as long as she hasn't been drinking. I don't trust anyone at all with my computer password. I trust my brakes to stop the car, ATMs to dispense money from my account, and Angie's List to recommend a qualified plumber—even though I have no idea who designed, built, or maintained those systems. Or even who Angie is. In the language of this book, we all need to trust each other to follow the behavioral norms of our group.
Many other books talk about the value of trust to society. This book explains how society establishes and maintains that trust.6 Specifically, it explains how society enforces, evokes, elicits, compels, encourages—I'll use the term induces—trustworthiness, or at least compliance, through systems of what I call societal pressures, similar to sociology's social controls: coercive mechanisms that induce people to cooperate, act in the group interest, and follow group norms. Like physical pressures, they don't work in all cases on all people. But again, whether the pressures work against a particular person is less important than whether they keep the scope of defection to a manageable level across society as a whole.
A manageable level, but not too low a level. Compliance isn't always good, and defection isn't always bad. Sometimes the group norm doesn't deserve to be followed, and certain kinds of progress and innovation require violating trust. In a police state, everybody is compliant but no one trusts anybody. A too-compliant society is a stagnant society, and defection contains the seeds of social change.
This book is also about security. Security is a type of societal pressure in that it induces cooperation, but it's different from the others. It is the only pressure that can act as a physical constraint on behavior regardless of how trustworthy people are. And it is the only pressure that individuals can implement by themselves. In many ways, it obviates the need for intimate trust. In another way, it is how we ultimately induce compliance and, by extension, trust.
It is essential that we learn to think smartly about trust. Philosopher Sissela Bok wrote: “Whatever matters to human beings, trust is the atmosphere in which it thrives.” People, communities, corporations, markets, politics: everything. If we can figure out the optimal societal pressures to induce cooperation, we can reduce murder, terrorism, bank fraud, industrial pollution, and all the rest.
If we get these pressures wrong in one direction, the murder rate skyrockets, terrorists run amok, employees routinely embezzle from their employers, and corporations lie and cheat at every turn. In extreme cases, an untrusting society breaks down. If we get them wrong in the other direction, no one speaks out about institutional injustice, no one deviates from established corporate procedure, and no one popularizes new inventions that disrupt the status quo—an oppressed society stagnates. The very fact that the most extreme failures rarely happen in the modern industrial world is proof that we've largely gotten societal pressures right. The failures we've had show we still have a lot further to go.
Also, as we'll see, evolution has left us with intuitions about trust better suited to life as a savannah-dwelling primate than as a modern human in a global high-tech society. That flawed intuition is vulnerable to exploitation by companies, con men, politicians, and crooks. The only defense is a rational understanding of what trust in society is, how it works, and why it succeeds or fails.
This book is divided into four parts. In Part I, I'll explore the background sciences of the book. Several fields of research—some closely related—will help us understand these topics: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archaeology, history, political science, law, philosophy, theology, cognitive science, and computer security.
All these fields have something to teach us about trust and security.7 There's a lot here, and delving into any of these areas of research could easily fill several books. This book attempts to gather and synthesize decades, and sometimes centuries, of thinking, research, and experimentation from a broad swath of academic disciplines. It will, by necessity, be largely a cursory overview; often, the hardest part was figuring out what not to include. My goal is to show where the broad arcs of research are pointing, rather than explain the details—though they're fascinating—of any individual piece of research.8
In the last chapter of Part I, I will introduce societal dilemmas. I'll explain a thought experiment called the Prisoner's Dilemma, and its generalization to societal dilemmas. Societal dilemmas describe the situations that require intra-group trust, and in which societies therefore use societal pressures to ensure cooperation: they're the central paradigm of my model. Societal dilemmas illustrate how society keeps defectors from taking advantage, taking over, and completely ruining society for everyone. They illustrate how society ensures that its members forsake their own interests when those interests run counter to society's. Societal dilemmas have many names in the literature: collective action problem, Tragedy of the Commons, free-rider problem, arms race. We'll use them all.
Part II fully develops my model. Trust is essential for society to function, and societal pressures are how we achieve it. There are four basic categories of societal pressure that can induce cooperation in societal dilemmas:
Moral pressure. A lot of societal pressure comes from inside our own heads. Most of us don't steal, and it's not because there are armed guards and alarms protecting piles of stuff. We don't steal because we believe it's wrong, or we'll feel guilty if we do, or we want to follow the rules.
Reputational pressure. A wholly different, and much stronger, type of pressure comes from how others respond to our actions. Reputational pressure can be very powerful; both individuals and organizations feel a lot of pressure to follow the group norms because they don't want a bad reputation.
Institutional pressure. Institutions have rules and laws. These are norms that are codified, and whose enactment and enforcement are generally delegated. Institutional pressure induces people to behave according to the group norm by imposing sanctions on those who don't, and occasionally by rewarding those who do.
Security systems. Security systems are another form of societal pressure. This includes any security mechanism designed to induce cooperation, prevent defection, induce trust, and compel compliance. It includes things that work to prevent defectors, like door locks and tall fences; things that interdict defectors, like alarm systems and guards; things that only work after the fact, like forensic and audit systems; and mitigation systems that help the victim recover faster and care less that the defection occurred.
Part III applies the model to the more complex dilemmas that arise in the real world. First I'll look at the full complexity of competing interests. It's not just group interest versus self-interest; people have a variety of competing interests. Also, while it's easy to look at societal dilemmas as isolated decisions, it's common for people to have conflicts of interest: multiple group interests and multiple societal dilemmas are generally operating at any one time. And the effectiveness of societal pressures often depends on why someone is considering defecting.
Then, I'll look at groups as actors in societal dilemmas: organizations in general, corporations, and then institutions. Groups have different competing interests, and societal pressures work differently when applied to them. This is an important complication, especially in the modern world of complex corporations and government agencies. Institutions are also different. In today's world, it's rare that we implement societal pressures directly. More often, we delegate someone to do it for us. For example, we delegate our elected officials to pass laws, and they delegate some government agency to implement those laws.
In Part IV, I'll talk about the different ways societal pressures fail. I'll look at how changes in technology affect societal pressures, particularly security. Then I'll look at the particular characteristics of today's society—the Information Society—and explain why that changes societal pressures. I'll sketch what the future of societal pressures is likely to be, and close with the social consequences of too much societal pressure.
This book represents my attempt to develop a full-fledged theory of coercion and how it enables compliance and trust within groups. My goal is to suggest some new questions and provide a new framework for analysis. I offer new perspectives, and a broader spectrum of what's possible. Perspectives frame thinking, and sometimes asking new questions is the catalyst to greater understanding. It's my hope that this book can give people an illuminating new framework with which to help understand how the world works.
Before we start, I need to define my terms. We talk about trust and security all the time, and the words we use tend to be overloaded with meaning. We're going to have to be more precise…and temporarily suspend our emotional responses to what otherwise might seem like loaded, value-laden, even disparaging, words.
The word society, as used in this book, isn't limited to traditional societies, but is any group of people with a loose common interest. It applies to societies of circumstance, like a neighborhood, a country, everyone on a particular bus, or an ethnicity or social class. It applies to societies of choice, like a group of friends, any membership organization, or a professional society. It applies to societies that are some of each: a religion, a criminal gang, or all employees of a corporation. It applies to societies of all sizes, from a family to the entire planet. All of humanity is a society, and everyone is a member of multiple societies. Some are based on birth, and some are freely chosen. Some we can join, and to some we must be invited. Some may be good, some may be bad—terrorist organizations, criminal gangs, a political party you don't agree with—and most are somewhere in between. For our purposes, a society is just a group of interacting actors organized around a common attribute.
I said actors, not people. Most societies are made up of people, but sometimes they're made up of groups of people. All the countries on the planet are a society. All corporations in a particular industry are a society. We're going to be talking about both societies of individuals and societies of groups.
Societies have a collection of group interests. These are the goals, or directions, of the society. They're decided by the society in some way: perhaps formally—either democratically or autocratically—perhaps informally by the group. International trade can be in the group interest. So can sharing food, obeying traffic laws, and keeping slaves (assuming those slaves are not considered to be part of the group). Corporations, families, communities, and terrorist groups all have their own group interests. Each of these group interests corresponds to one or more norms, which specify what each member of that society is supposed to do. For example, it is in the group interest that everyone respect everyone else's property rights. Therefore, the group norm is not to steal (at least, not from other members of the group9).
Every person in a society potentially has one or more competing interests that conflict with the group interest, and competing norms that conflict with the group norm. Someone in that we-don't-steal society might really want to steal. He might be starving, and need to steal food to survive. He just might want other people's stuff. These are examples of self-interest. He might have some competing relational interest. He might be a member of a criminal gang, and need to steal to prove his loyalty to the group; here, the competing interest might be the group interest of another group. Or he might want to steal for some higher moral reason: a competing moral interest—the Robin Hood archetype, for example.
A societal dilemma is the choice every actor has to make between group interest and his or her competing interests. It's the choice we make when we decide whether or not to follow the group norm. Those who do, cooperate; those who do not, defect. Those are both loaded terms, but I mean them to refer only to the action taken as a result of the dilemma.
Defectors—the liars and outliers of the book's title—are the people within a group who don't go along with the norms of that group. The term isn't defined according to any absolute morals, but instead in opposition to whatever the group interest and the group norm are. Defectors steal in a society that has declared that stealing is wrong, but they also help slaves escape in a society where tolerating slavery is the norm. Defectors change as society changes; defection is in the eye of the beholder. Or, more specifically, it is in the eyes of everyone else. Someone who was a defector under the former East German government was no longer a defector after the fall of the Berlin Wall. But those who followed the societal norms of East Germany, like the Stasi, were—all of a sudden—viewed as defectors within the new united Germany.
Figure 1: The Terms Used in the Book, and Their Relationships
Criminals are defectors, obviously, but that answer is too facile. Everyone defects at least some of the time. It's both dynamic and situational. People can cooperate about some things and defect about others. People can cooperate with one group they're in and defect from another. People can cooperate today and defect tomorrow, or cooperate when they're thinking clearly and defect when they're reacting in a panic. People can cooperate when their needs are cared for, and defect when they're starving.
When four black North Carolina college students staged a sit-in at a whites-only lunch counter inside a Woolworth's five-and-dime store in Greensboro, in 1960, they were criminals. So are women who drive cars in Saudi Arabia. Or homosexuals in Iran. Or the 2011 protesters in Egypt, who sought to end their country's political regime. Conversely, child brides in Pakistan are not criminalized and neither are their parents, even though in some cases they marry off five-year-old girls. The Nicaraguan rebels who fought the Sandinistas were criminals, terrorists, insurgents, or freedom fighters, depending on which side you supported and how you viewed the conflict. Pot smokers and dealers in the U.S. are officially criminals, but in the Netherlands those offenses are ignored by the police. Those who share copyrighted movies and music are breaking the law, even if they have moral justifications for their actions.
Defecting doesn't necessarily mean breaking government-imposed laws. An orthodox Jew who eats a ham and cheese sandwich is violating the rules of his religion. A Mafioso who snitches on his colleagues is violating omertà, the code of silence. A relief worker who indulges in a long, hot shower after a tiring journey, and thereby depletes an entire village's hot water supply, unwittingly puts his own self-interest ahead of the interest of the people he intends to help.
What we're concerned with is the overall scope of defection. I mean this term to be general, comprising the number of defectors, the rate of their defection, the frequency of their defection, and the intensity (the amount of damage) of their defection. Just as we're interested in the general level of trust within the group, we're interested in the general scope of defection within the group.
Societal pressures are how society ensures that people follow the group norms, as opposed to some competing norms. The term is meant to encompass everything society does to protect itself: both from fellow members of society, and from non-members who live within and amongst it. More generally, it's how society enforces intra-group trust.
The terms attacker and defender are pretty obvious. The predator is the attacker, the prey is the defender. It's all intertwined, and sometimes these terms can get a bit muddy. Watch a martial arts match, and you'll see each person defending against his opponent's attacks while at the same time hoping his own attacks get around his opponent's defenses. In war, both sides attack and defend at the tactical level, even though one side might be attacking and the other defending at the political level. These terms are value-neutral. Attackers can be criminals trying to break into a home, superheroes raiding a criminal mastermind's stronghold, or cancer cells metastasizing their way through a hapless human host. Defenders can be a family protecting its home from invasion, the criminal mastermind protecting his lair from the superheroes, or a posse of leukocytes engulfing opportunistic pathogens they encounter.
These definitions are important to remember as you read this book. It's easy for us to bring our own emotional baggage into discussions about security, but most of the time we're just trying to understand the underlying mechanisms at play, and those mechanisms are the same, regardless of the underlying moral context.
Sometimes we need the dispassionate lens of history to judge famous defectors like Oliver North, Oskar Schindler, and Vladimir Lenin.
Part I
The Science of Trust
Chapter 2
A Natural History of Security
Our exploration of trust is going to start and end with security, because security is what you need when you don't have any trust and—as we'll see—security is ultimately how we induce trust in society. It's what brings risk down to tolerable levels, allowing trust to fill in the remaining gaps.
You can learn a lot about security from watching the natural world.
Lions seeking to protect their turf will raise their voices in a “territorial chorus,” their cooperation reducing the risk of encroachment by other predators on the local food supply.
When hornworms start eating a particular species of sagebrush, the plant responds by emitting a molecule that warns any wild tobacco plants growing nearby that hornworms are around. In response, the tobacco plants deploy chemical defenses that repel the hornworms, to the benefit of both plants.
Some types of plasmids secrete a toxin that kills the bacteria that carry them. Luckily for the bacteria, the plasmids also emit an antidote; and as long as a plasmid secretes both, the host bacterium survives. But if the plasmid dies, the antidote decays faster than the toxin, and the bacterium dies. This acts as an insurance policy for the plasmids, ensuring that bacteria don't evolve ways to kill them.
In the beginning of life on this planet, some 3.8 billion years ago, an organism's only job was to reproduce. That meant growing, and growing required energy. Heat and light were the obvious sources—photosynthesis appeared 3 billion years ago; chemosynthesis is at least half a billion years older than that—but consuming the other living things floating around in the primordial ocean worked just as well. So life discovered predation.
We don't know what that first animal predator was, but it was likely a simple marine organism somewhere between 500 million and 550 million years ago. Initially, the only defense a species had against being eaten was to have so many individuals floating around the primordial seas that enough individuals were left to reproduce, so that the constant attrition didn't matter. But then life realized it might be able to avoid being eaten. So it evolved defenses. And predators evolved better ways to catch and eat.
Thus security was born, the planet's fourth oldest activity after eating, eliminating, and reproducing.
Okay, that's a pretty gross simplification, and it would get me booted out of any evolutionary biology class. When talking about evolution and natural selection, it's easy to say that organisms make explicit decisions about their genetic future. They don't. There's nothing purposeful or teleological about the evolutionary process, and I shouldn't anthropomorphize it. Species don't realize anything. They don't discover anything, either. They don't decide to evolve, or try genetic options. It's tempting to talk about evolution as if there's some outside intelligence directing it. We say “prehistoric lungfish first learned how to breathe air,” or “monarch butterflies learned to store plant toxins in their bodies to make themselves taste bad to predators,” but it doesn't work that way. Random mutation provides the material upon which natural selection acts. It is through this process that individuals of a species change subtly from their parents, effectively “trying out” new features. Those innovations that turn out to be beneficial—air breathing—give the individuals a competitive advantage and might potentially propagate through the species (there's still a lot of randomness in this process). Those that turn out to be detrimental—the overwhelming majority of them—kill or otherwise disadvantage the individual and die out.
By “beneficial,” I mean something very specific: increasing an organism's ability to survive long enough to successfully pass its genes on to future generations. Or, to use Richard Dawkins's perspective from The Selfish Gene, genes that helped their host individuals—or other individuals with that gene—successfully reproduce tended to persist in higher numbers in populations.
If we were designing a life form, as we might do in a computer game, we would try to figure out what sort of security it needed and give it abilities accordingly. Real-world species don't have that luxury. Instead, they try new attributes randomly. So instead of an external designer optimizing a species' abilities based on its needs, evolution randomly walks through the solution space and stops at the first solution that works—even if just barely. Then it climbs upwards in the fitness landscape until it reaches a local optimum. You get a lot of weird security that way.
You get teeth, claws, group dispersing behavior, feigning injury and playing dead, hunting in packs, defending in groups (flocking and schooling and living in herds), setting sentinels, digging burrows, flying, mimicry by both predators and prey, alarm calls, shells, intelligence, noxious odors, tool using (both offensive and defensive),1 planning (again, both offensive and defensive), and a whole lot more.2 And this is just in largish animals; we haven't even listed the security solutions insects have come up with. Or plants. Or microbes.
It has been convincingly argued that one of the reasons sexual reproduction evolved about 1.2 billion years ago was to defend against biological parasites. The argument is subtle. Basically, parasites reproduce so quickly that they overwhelm any individual host defense. The value of DNA recombination, which is what you get in sexual reproduction, is that it continuously rearranges a species' defenses so parasites can't get the upper hand. For this reason, a member of a species that reproduces sexually is much more likely to survive than a member of a species that clones itself asexually—even though an asexual individual passes twice as many of its genes to each offspring as a sexually reproducing individual does.
Life evolved two other methods of defending itself against parasites. One is to grow and divide quickly, something that both bacteria and just-fertilized mammalian embryos do. The other is to have an immune system. Evolutionarily, this is a relatively new development; it first appeared in jawed fish about 300 million years ago.3
A surprising number of evolutionary adaptations are related to security. Take vision, for example. Most animals are more adept at spotting movement than picking out details of stationary objects; it's called the orienting response.4 That's because things that move may be predators that attack, or prey that needs to be attacked. The human visual system is particularly good at spotting animals.5 The human ability, unique on the planet, to throw things long distances is another security adaptation. Related is what's called the size-weight misperception: the illusion that easier-to-throw rocks feel lighter than they actually are. It's related to our ability to choose good projectiles. Similar stories could be told about many human attributes.6
The predator/prey relationship isn't the only pressure that drives evolution. As soon as there was competition for resources, organisms had to develop security to defend their own resources and attack the resources of others. Whether it's plants competing with each other for access to the sun, predators fighting over hunting territory, or animals competing for potential mates, organisms had to develop security against others of the same species. And again, evolution resulted in all sorts of weird security. And it works amazingly well.
Security on Earth went on more or less like this for 500 million years. It's a continual arms race. A rabbit that can run away at 30 miles per hour—in short bursts, of course—is at an evolutionary advantage when the weasels and stoats can only run 28 mph, but at an evolutionary disadvantage once predators can run 32 mph.
Figure 2: The Red Queen Effect in Action
It's different when the evolutionary advantage is against nature. A polar bear has thick fur because it's cold in the Arctic. But its fur is only so thick, because the Arctic doesn't get colder in response to the polar bear's adaptations. That same polar bear, though, has fur that appears white so as to better sneak up on seals. A better-camouflaged polar bear means that only more wary seals survive and reproduce, which means that the polar bears need to be even better at camouflage to eat, which means that the seals need to be more wary, and on and on and on up to some physical upper limit on camouflage and wariness.
This only-relative evolutionary arms race is known as the Red Queen Effect, after Lewis Carroll's race in Through the Looking-Glass: “It takes all the running you can do, to keep in the same place.” Predators develop all sorts of new tricks to catch prey, and prey develop all sorts of new tricks to evade predators. The prey get more poisonous, so their predators get more poison-resistant, so the prey get even more poisonous. A species has to continuously improve just to survive, and any species that can't keep up—or bumps up against physiological or environmental constraints—becomes extinct.
Figure 3: The Red Queen Effect Feedback Loop
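The feedback loop in Figure 3 is simple enough to caricature in a few lines of code. Here is a minimal sketch of my own, not the book's: each side improves only when it falls behind the other, so absolute ability ratchets upward while relative advantage never does. The speeds, increments, and physiological cap are illustrative assumptions.

    # Red Queen sketch: predator and prey co-escalate until they hit
    # a physiological cap; neither ever gains a lasting advantage.
    def red_queen(generations=100, step=1.0, cap=60.0):
        prey, predator = 30.0, 28.0  # e.g., rabbit vs. stoat, in mph (assumed values)
        for gen in range(generations):
            if predator <= prey and predator < cap:
                predator = min(cap, predator + step)  # selection favors faster predators
            elif prey < cap:
                prey = min(cap, prey + step)          # now selection favors faster prey
            yield gen, prey, predator

    for gen, prey, predator in red_queen(10):
        print(f"gen {gen:2d}: prey {prey:.0f} mph, predator {predator:.0f} mph")

Run it long enough and both sides end up pinned at the cap, running as fast as they can just to stay in the same place.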
Along with becoming faster, more poisonous, and bitier, some organisms became smarter. At first, a little smarts went a long way. Intelligence allows individuals to adapt their behaviors, moment by moment, to suit their environment and circumstances. It allows them to remember the past and learn from experience. It lets them be individually adaptive. No one can put precise dates on it, but vertebrates first appeared about 525 million years ago—and intelligence continued to improve along various branches of the tree of life: mammals (215 million years ago), birds (75 million years ago), primates (60 million years ago), the genus Homo (2.5 million years ago), and then humans (somewhere between 200,000 and 450,000 years ago, depending on whose evidence you believe). When it comes to security, as with so many things, humans changed everything.
Let's pause for a second. This isn't a book about animal intelligence, and I don't want to start an argument about which animals can be considered intelligent, or what about human intelligence is unique, or even how to define the word “intelligence.” It's definitely a fascinating subject, and we can learn a lot about our own intelligence by studying the intelligence of other animals. Even my neat intelligence progression from the previous paragraph might be wrong: flatworms can be trained, and some cephalopods are surprisingly smart. But those topics aren't really central to this book, so I'm going to elide them. For my purposes, it's enough to say that there is a uniquely human intelligence.7
And humans take their intelligence seriously. The brain only represents 3% of total body mass, but uses 20% of the body's total blood supply and 25% of its oxygen. And—unlike other primates, even—we'll supply our brains with blood and oxygen at the expense of other body parts.
One of the things intelligence makes possible is cultural evolution. Instead of needing to wait for genetic changes, humans are able to improve their survivability through the direct transmission of skills and ideas. These memes can be taught from generation to generation, with the more survivable ideas propagating and the bad ones dying out. Humans are not the only species that teaches its young, but humans have taken this to a new level.8 This caused a flowering of security ideas: deception and concealment; weapons, armor, and shields; coordinated attack and defense tactics; locks and their continuous improvement over the centuries; gunpowder, explosives, guns, cruise missiles, and everything else that goes “bang” or “boom”; paid security guards and soldiers and policemen; professional criminals; forensic databases of fingerprints, tire tracks, shoe prints, and DNA samples; and so on.
It's not just intelligence that makes humans different. One of the things that's unique about humans is the extent of our socialization. Yes, there are other social species: other primates, most mammals and some birds.9 But humans have taken sociality to a completely different level. And with that socialization came all sorts of new security considerations: concern for an ever-widening group of individuals, concern about potential deception and the need to detect it, concern about one's own and others' reputations, concern about rival groups of attackers and the corresponding need to develop groups of defenders, recognition of the need to take preemptive security measures against potential attacks, and after-the-fact responses to already-occurred attacks for the purpose of deterring others in the future.10
Some scientists believe that this increased socialization actually spurred the development of human intelligence.11 Machiavellian Intelligence Theory—you might also see this called the Social Brain Hypothesis—holds that we evolved intelligence primarily in order to detect deception by other humans. Although the “Machiavellian” term came later, the idea first came from psychologist Nicholas Humphrey. Humphrey observed that wild gorillas led a pretty simple existence, with abundant and easily harvested food, few predators, and not much else to do but eat, sleep, and play. This was in contrast to gorillas in the laboratory, which demonstrated impressive powers of creative reasoning. So the obvious question is: what's the evolutionary advantage of being intelligent and clever if it's not required in order to survive in the wild? Humphrey proposed that the primary role of primate intelligence and creativity was to deal with the complexities of living with other primates. In other words, we evolved smarts not to outsmart the world, but to outsmart each other.
It's more than that. As we became more social, we needed to learn how to get along with each other: both cooperating with each other and ensuring everyone else cooperates, too. It involves understanding each other. Psychologist Daniel Gilbert describes it very well:
We are social mammals whose brains are highly specialized for thinking about others. Understanding what others are up to—what they know and want, what they are doing and planning—has been so crucial to the survival of our species that our brains have developed an obsession with all things human. We think about people and their intentions; talk about them; look for and remember them.
This makes evolutionary sense. Intelligence is a valuable survival trait when you have to deal with the threats from the natural world. But intelligence is an even more valuable survival trait when you have to deal with the threats from other intelligent individuals. An intelligent adversary is a different animal, so to speak, than an unintelligent adversary. An intelligent attacker is adaptive. An intelligent attacker can learn about its prey. An intelligent attacker can make long-term plans. An intelligent adversary can predict your defenses and incorporate them into his plans. If you're being attacked by an intelligent human, your most useful defense is to also be an intelligent human. Our ancestors grew smarter because those around them grew smarter, and the only way to keep up was to become even smarter.12 It's a Red Queen Effect in action.
In primates, the frequency of deception is directly proportional to the size of a species' neocortex: the “thinking” part of the mammalian brain. That is, the bigger the brain, the greater the capacity for deception. The human brain has a neocortex four times as large as that of its nearest evolutionary relative. Eighty percent of our brain is neocortex, compared to 50% in our nearest existing relative and 10% to 40% in non-primate mammals.13
And as our neocortex grew, the complexity of our social interactions grew as well. Primatologist Robin Dunbar has studied primate group sizes. Dunbar examined 38 different primate genera, and found that the volume of the neocortex correlates with the size of the troop. He established that the mean human group size is 150.14 This is the Dunbar number: the number of people with whom we can have explicit and personal encounters, whose history we can remember, and with whom we can experience some level of intimacy.15 Of course, it's an average. You personally might be able to keep track of more or fewer. This number appears regularly in human society: it's the estimated size of a Neolithic farming village; the size at which Hittite settlements split; and it's a basic unit in professional armies, from Roman times to the present day. It's the average size of people's Christmas card lists. It's a common department size in modern corporations.
So as our ancestors got smarter, their social groups got larger. Chimpanzees live in groups of approximately 60 individuals. Australopithecus—our ancestor from 4.5 million years ago—had an average group size of 70 individuals. When our first tool-using ancestors appeared 2 million years ago, the group size grew to 80. Homo erectus had a mean group size of 110, and Neanderthals 140. Homo sapiens: 150.
One hundred and fifty people is a lot to keep track of, especially if they're all clever, sneaky, duplicitous, and—as it turns out—murderous. There is a lot of evidence—both from the anthropological record and from ethnographic studies of contemporary primitive cultures—that humans are innately quite violent, and that intertribal warfare was endemic in primitive society. Several studies estimate that 15–25% of prehistoric males died in warfare.16
Economist Paul Seabright postulates that intelligence and murderousness are mutually reinforcing. The more murderous a species is, the greater the selective benefit of intelligence; smarter people are more likely to survive their human adversaries. And the smarter someone is, the more an adversary wants to kill him—and not just make him submit, as other species do.
Looking at the average weight of humans and extrapolating from other animals, humans should primarily hunt medium-sized rodents; indeed, early humans primarily hunted small game. And hunting small game is much more efficient for a bunch of reasons.17 Even so, all primitive societies hunt large game: antelopes, walrus, and so on. The theory is that although large-game hunting is less efficient, the skill set is the same as what's required for intertribal warfare. The groups that excelled at large-game hunting were more likely to survive the endemic warfare that existed in our evolutionary past. Group hunting also reinforced social bonds, which are a useful group survival trait.
A male killing another male of the same species—especially an unrelated male—eliminates a sexual rival. If you have fewer sexual rivals, you have more of your own offspring. Natural selection favors murderousness. On the other hand, attempting to murder another individual of the same species is dangerous; you might get yourself killed in the process. This means fewer offspring, which implies a counterbalancing natural selection against murderousness.
It's another Red Queen Effect, this one involving murder. Evolutionary psychologist David Buss writes:
As the motivations to murder evolved in our minds, a set of counter-inclinations also developed. Killing is a risky business. It can be dangerous and inflict horrible costs on the victim. Because it's so bad to be dead, evolution has fashioned ruthless defenses to prevent being killed, including killing the killer. Potential victims are therefore quite dangerous themselves. In the evolutionary arms race, homicide victims have played a critical and unappreciated role—they pave the way for the evolution of anti-homicide defenses.
There is considerable debate about how violent we really are, with the majority opinion coming down on the “quite violent” side, especially among males from ages 16 to 24. On the other hand, some argue that human violence has declined over the millennia, primarily due to the changing circumstances that come with civilization. We do know it's been traditionally very hard to convince soldiers to kill in war, and our experience with post-traumatic stress disorder shows that it has long-lasting ill effects. Our violence may be innate, but it depends a lot on context. We're comparable with other primates.18
But if we are so naturally murderous, how did our prehistoric ancestors come to trust each other? We know they did, because if they hadn't, society would never have developed. People would never have gathered into groups that extended past immediate family, let alone into villages and towns and cities. Division of labor would have never evolved, because people couldn't trust others to do their parts. We would never have established trade with the strangers we occasionally encountered, let alone with companies based halfway across the planet. Friendships wouldn't exist. Societies based on either geography or interest would be impossible. Any sort of governmental structure: forget it. It doesn't matter how big your neocortex is or how abstractly you can reason: unless you can trust others, your species will forever remain stuck in the Stone Age.
The answer to that question will make use of the concepts presented in this chapter—the Red Queen Effect, the Dunbar number, our natural intelligence and murderousness—and it will make use of security. It turns out that trust in society isn't easy, and that we're still getting it wrong.
Chapter 3
The Evolution of Cooperation
Two of the most successful species on the planet are humans and leafcutter ants of Brazil. Evolutionary biologist Edward O. Wilson has spent much of his career studying the ants, and argues that their success is due to division of labor.1 There are four different kinds of leafcutter workers: gardeners, defenders, foragers, and soldiers. Each type of ant is specialized to its task, and together the colony does much better than colonies of non-specialized ant species.
Humans specialize too, and—even better—we can adapt our specialization to the situation. A leafcutter ant is born to a particular role; we get to decide our specialization in both the long and short term, and change it if it's not working out for us.2
Division of labor is an exercise in trust. A gardener leafcutter ant has to trust that the forager leafcutter ants will bring leaf fragments back to the nest. I, specializing right now in book writing, have to trust that my publisher is going to print this book and bookstores are going to sell it. And that someone is going to grow food that I can buy with my royalty check. If I couldn't trust literally millions of nameless, faceless other people, I couldn't specialize.
Brazilian leafcutter ant colonies evolved trust and cooperation because they're all siblings. We had to evolve it the hard way.
We all employ both cooperating and defecting strategies. Most of the time our self-interest and group interest coincide, and we act in accordance with the group norm. Only sometimes do we act according to some competing norm. It depends on circumstance, and it depends on who we are. Some of us are more cooperative, more honest, more altruistic, and fairer. And some of us are less so. There isn't one dominant survival strategy that evolution has handed down to us; we have the flexibility to switch between different strategies.
One way to think of the relationship between society as a whole and its defectors is as a parasitic relationship. Take the human body as an example. Only 10% of the total number of cells in our human bodies are us—human cells with our particular genome. The other 90% are symbionts, genetically unrelated organisms.3 Our relationship with them ranges from mutualism (we both benefit) to commensalism (one benefits) to parasitism (one benefits and the other is harmed). The society of our bodies needs the cooperators to survive, and at the same time spends a lot of energy defending itself against the defectors.
Extending the analogy even further, our social systems are filled with parasites as well. Parasites steal stuff instead of buying it. They take more than their share in a communal situation. They overstay their welcome on their Aunt Faye's couch. They incur unsustainable debt, confident that bankruptcy laws—or some expensive lawyers—will enable them to bail out on their creditors when the going gets tough.
Parasites are all over the Internet. Crime is a huge business. Spammers are parasitic on e-mail. Griefers in online games are parasitic on more conventional players. File sharers copy music instead of paying for it; they're parasitic on the music industry, getting the benefit of commercial music without giving back any money in return.
Excepting the smallest and simplest cases, every society has parasites living inside it. And there is an evolutionary advantage to being a parasite as long as there aren't too many of them and they aren't too good at it.
Being a parasite is a balancing act. Biological parasites do best if they don't immediately kill their hosts, but instead let them survive long enough for the parasites to spread to additional hosts. Ebola is too successful, so it fails as a species. The common cold does a much better job of spreading itself; it infects, and in the end kills, far more people by being much less “effective.” Predators do best if they don't kill enough prey to wipe out the entire species. Spammers do better if they don't clog e-mail to the point where no one uses it anymore, and rogue banks are more profitable if they don't crash the entire economy. All parasites do better if they don't destroy whatever system they've latched themselves onto. Parasites thrive only if they don't thrive too well.
There's a clever model from game theory that illustrates this: the Hawk-Dove game. It was invented by geneticists John Maynard Smith and George R. Price in 1971 to explain conflicts between animals of the same species. Like most game theory models, it's pretty simplistic. But what it illuminates about the real world is profound.
The game works like this. Assume a population of individuals with differing survival strategies. Some cooperate and some defect. In the language of the game, the defectors are hawks. They're aggressive; they attack other individuals, and fight back if attacked. The cooperators are doves. They're pacific; they share with other doves, and retreat when attacked. You can think about this in terms of animals competing for food. When two doves meet, they cooperate and share food. When a hawk meets a dove, the hawk takes food from the dove. When two hawks meet, they fight and one of them randomly gets the food and the other has some probability of dying from injury.4
Set some initial parameters in the simulation: the value of sharing, the chance and severity of harm if two hawks fight each other, and so on. Program this model into a computer, set proportions for the initial population—50% hawks and 50% doves, for example—and let individuals interact with each other over multiple iterations.
What's interesting about this simulation is that neither strategy is guaranteed to dominate. Both hawks and doves can be successful, depending on the initial parameters. If the value of the food stolen is greater than the risk of death, the whole population becomes hawks. That is, if everyone is starving, people take what they can from each other without worrying about the consequences. Add a single dove, and it immediately starves. But as food gets less valuable (e.g., more plentiful) or fighting gets more dangerous, the population stabilizes into a mixture of hawks and doves. The more dangerous fighting is, the fewer hawks there will be. If food is reasonably plentiful and fighting reasonably dangerous, the population stabilizes into a mixture of mostly doves and fewer hawks. But unless you plug some really unrealistic numbers into the simulation—like starting out with a population entirely of doves—there will always be at least a few hawks in the mix.
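Here's what such a simulation might look like in code: a minimal sketch using replicator dynamics over the standard Hawk-Dove payoffs, rather than anything from the book. The values of V (the food), C (the expected cost of losing a fight), and the baseline payoff are illustrative assumptions.

    # Hawk-Dove replicator dynamics: p is the fraction of hawks.
    # Payoffs: two doves share V/2 each; a hawk takes V from a dove;
    # two hawks fight for an expected payoff of (V - C) / 2 each.
    def stable_hawk_fraction(V, C, p=0.5, iterations=1000):
        baseline = C  # constant added to keep all fitnesses positive
        for _ in range(iterations):
            w_hawk = baseline + p * (V - C) / 2 + (1 - p) * V
            w_dove = baseline + (1 - p) * V / 2  # doves get nothing from hawks
            p = p * w_hawk / (p * w_hawk + (1 - p) * w_dove)
        return p

    print(stable_hawk_fraction(V=10, C=4))   # valuable food, cheap fights: ~1.0, all hawks
    print(stable_hawk_fraction(V=2, C=10))   # plentiful food, dangerous fights: ~0.2

When fighting costs more than the food is worth, the population settles at a hawk fraction of roughly V/C, the stable mixture of mostly doves and fewer hawks that the text describes.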