Praise for LIARS & OUTLIERS

"Without trust, nothing can be achieved. Liars and Outliers is a brilliant analysis of the role of trust in society and business."
Klaus Schwab, Founder and Executive Chairman, World Economic Forum

"Schneier absolutely understands how profoundly trust oils the wheels of business and of daily life."
Margaret Heffernan, Author of Uncharted: How to Navigate the Future and Beyond Measure: The Big Impact of Small Changes

"Liars and Outliers is an absolutely fascinating and groundbreaking book. While written years before AI, which is often seen as a catch-all to solve all of humanity's problems, Schneier compellingly shows that in our complex society, there are no simple solutions."
Ben Rothke, Senior Information Security Manager, Tapad

"Brilliantly dissects, classifies, and orders the social dimension of security-a spectacularly palatable tonic against today's incoherent and dangerous flailing in the face of threats from terrorism to financial fraud."
Cory Doctorow, Author of Little Brother and Makers, Co-editor of BoingBoing.net

"Trust is the sine qua non of the networked age and trust is predicated on security. Bruce Schneier's expansive and readable work is rich with insights that can help us make our shrinking world a better one."
Don Tapscott, Co-author of Macrowikinomics: Rebooting Business and the World

Table of Contents

Cover

Table of Contents

Praise for Liars and Outliers

Title Page

Copyright

Introduction to the Paperback Edition

1 Overview

Notes

PART I: The Science of Trust

2 A Natural History of Security

Notes

3 The Evolution of Cooperation

Notes

4 A Social History of Trust

Notes

5 Societal Dilemmas

Notes

PART II: A Model of Trust

6 Societal Pressures

Notes

7 Moral Pressures

Notes

8 Reputational Pressures

Notes

9 Institutional Pressures

Notes

10 Security Systems

Notes

PART III: The Real World

11 Competing Interests

Notes

12 Organizations

Notes

13 Corporations

Notes

14 Institutions

Notes

PART IV: Conclusions

15 How Societal Pressures Fail

Notes

16 Technological Advances

Notes

17 The Future

Notes

Acknowledgments

References

Chapter 1

Chapter 1 Notes

Chapter 2

Chapter 2 Notes

Chapter 3

Chapter 3 Notes

Chapter 4

Chapter 4 Notes

Chapter 5

Chapter 5 Notes

Chapter 6

Chapter 6 Notes

Chapter 7

Chapter 7 Notes

Chapter 8

Chapter 8 Notes

Chapter 9

Chapter 9 Notes

Chapter 10

Chapter 10 Notes

Chapter 11

Chapter 11 Notes

Chapter 12

Chapter 12 Notes

Chapter 13

Chapter 13 Notes

Chapter 14

Chapter 14 Notes

Chapter 15

Chapter 15 Notes

Chapter 16

Chapter 16 Notes

Chapter 17

Chapter 17 Notes

About the Author

Index

End User License Agreement

List of Illustrations

Chapter 1

Figure 1: The Terms Used in the Book, and Their Relationships

Chapter 2

Figure 2: The Red Queen Effect in Action

Figure 3: The Red Queen Effect Feedback Loop

Chapter 3

Figure 4: Metaphorical Knobs to Control a Hawk-Dove Game

Chapter 4

Figure 5: Dunbar Numbers

Chapter 6

Figure 6: Societal Pressure Knobs

Figure 7: The Scale of Different Societal Pressures

Figure 8: How Societal Pressures Influence the Risk Trade-Off

Chapter 10

Figure 9: Security’s Diminishing Returns

Chapter 11

Figure 10: Competing Interests in a Societal Dilemma

Figure 11: Scale of Competing Interests

Chapter 14

Figure 12: How Societal Pressures Are Delegated

Chapter 15

Figure 13: Societal Pressure’s Feedback Loops

Chapter 16

Figure 14: Societal Pressure Red Queen Effect

Figure 15: The Security Gap



Praise for Liars and Outliers

“A rich, insightfully fresh take on what security really means!”

—DAVID ROPEIK

author of How Risky Is It, Really?

“Schneier has accomplished a spectacular tour de force: an enthralling ride through history, economics, and psychology, searching for the meanings of trust and security. A must read.”

—ALESSANDRO ACQUISTI

Associate Professor of Information Systems and Public Policy at the Heinz College, Carnegie Mellon University

“Liars and Outliers offers a major contribution to the understandability of these issues, and has the potential to help readers cope with the ever-increasing risks to which we are being exposed. It is well written and delightful to read.”

—PETER G. NEUMANN

Principal Scientist in the SRI International Computer Science Laboratory

“Whether it’s banks versus robbers, Hollywood versus downloaders, or even the Iranian secret police against democracy activists, security is often a dynamic struggle between a majority who want to impose their will, and a minority who want to push the boundaries. Liars and Outliers will change how you think about conflict, our security, and even who we are.”

—ROSS ANDERSON

Professor of Security Engineering at Cambridge University and author of Security Engineering

“Readers of Bruce Schneier’s Liars and Outliers will better understand technology and its consequences and become more mature practitioners.”

—PABLO G. MOLINA

Professor of Technology Management, Georgetown University

“Liars and Outliers is not just a book about security—it is the book about it. Schneier shows that the power of humour can be harnessed to explore even a serious subject such as security. A great read!”

—FRANK FUREDI

Professor Emeritus, School of Social Policy, Sociology and Social Research, The University of Kent at Canterbury, and author of On Tolerance: A Defence of Moral Independence

“This fascinating book gives an insightful and convincing framework for understanding security and trust.”

—JEFF YAN

Founding Research Director, Center for Cybercrime and Computer Security, Newcastle University

“By analyzing the moving parts and interrelationships among security, trust, and society, Schneier has identified critical patterns, pressures, levers, and security holes within society. Clearly written, thoroughly interdisciplinary, and always smart, Liars and Outliers provides great insight into resolving society’s various dilemmas.”

—JERRY KANG

Professor of Law, UCLA

“By keeping the social dimension of trust and security in the center of his analysis, Schneier breaks new ground with an approach that’s both theoretically grounded and practically applicable.”

—JONATHAN ZITTRAIN

Professor of Law and Computer Science, Harvard University and author of The Future of the Internet—And How to Stop It

“Eye opening. Bruce Schneier provides a perspective you need to understand today’s world.”

—STEVEN A. LEBLANC

Director of Collections, Harvard University and author of Constant Battles: Why We Fight

“An outstanding investigation of the importance of trust in holding society together and promoting progress. Liars and Outliers provides valuable new insights into security and economics.”

—ANDREW ODLYZKO

Professor, School of Mathematics, University of Minnesota

“What Schneier has to say about trust—and betrayal—lays a groundwork for greater understanding of human institutions. This is an essential exploration as society grows in size and complexity.”

—JIM HARPER

Director of Information Policy Studies, CATO Institute and author of Identity Crisis: How Identification Is Overused and Misunderstood

“Society runs on trust. Liars and Outliers explains the trust gaps we must fill to help society run even better.”

—M. ERIC JOHNSON

Director, Glassmeyer/McNamee Center for Digital Strategies, Tuck School of Business at Dartmouth College

“An intellectually exhilarating and compulsively readable analysis of the subtle dialectic between cooperation and defection in human society. Intellectually rigorous and yet written in a lively, conversational style, Liars and Outliers will change the way you see the world.”

—DAVID LIVINGSTONE SMITH

Associate Professor of Philosophy, University of New England and author of Less Than Human: Why We Demean, Enslave, and Exterminate Others

“Schneier tackles trust head on, bringing all his intellect and a huge amount of research to bear. The best thing about this book, though, is that it’s great fun to read.”

—ANDREW MCAFEE

Principal Research Scientist, MIT Center for Digital Business and co-author of Race Against the Machine

“Bruce Schneier is our leading expert in security. But his book is about much more than reducing risk. It is a fascinating, thought-provoking treatise about humanity and society and how we interact in the game called life.”

—JEFF JARVIS

author of Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live

“Both accessible and thought provoking, Liars and Outliers invites readers to move beyond fears and anxieties about security in modern life to understand the role of everyday people in creating a healthy society. This is a must-read!”

—DANAH BOYD

Research Assistant Professor in Media, Culture, and Communication, New York University

“Trust is the sine qua non of the networked age and trust is predicated on security. Bruce Schneier’s expansive and readable work is rich with insights that can help us make our shrinking world a better one.”

—DON TAPSCOTT

co-author of Macrowikinomics: Rebooting Business and the World

“An engaging and wide-ranging rumination on what makes society click. Highly recommended.”

—JOHN MUELLER

Senior Research Scientist, Mershon Center, Ohio State University and author of Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them

Liars and Outliers

Enabling the Trust That Society Needs to Thrive

 

 

Bruce Schneier

 

 

 

 

 

 

 

 

 

Copyright © 2026 by Bruce Schneier. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada and the United Kingdom.

ISBNs: 9781394375288 (Paperback), 9781118225561 (ePDF), 9781118239018 (ePub)

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

The manufacturer’s authorized representative according to the EU General Product Safety Regulation is Wiley-VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany, e-mail: [email protected].

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and the author have used their best efforts in preparing this work, including a review of the content of the work, neither the publisher nor the authors make any representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. For product technical support, you can find answers to frequently asked questions or reach us via live chat at https://support.wiley.com.

If you believe you’ve found a mistake in this book, please bring it to our attention by emailing our reader support team at [email protected] with the subject line “Possible Book Errata Submission.”

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Control Number: 2025944964

Cover image: © ioat/Shutterstock
Author photo: Norbert Shiller
Cover design: Wiley

A Note for Readers

This book contains both notes and references. The notes are explanatory bits that didn’t make it into the main text. These are indicated by superscript numbers in both the paper and e-book formats. The references are not indicated at all in the main text; they are collected at the back of the book, organized by printed page number and a bit of quoted text.

Introduction to the Paperback Edition

Hello, reader. Welcome to the paperback edition of Liars and Outliers. A lot has happened since I first wrote this book, both in the academic studies of trust and in the broader landscape of Internet technologies. What I want to talk about here is artificial intelligence (AI). In particular, I want to talk about AI and trust, power, and integrity. In this brief introduction, I want to make four basic arguments:

There are two kinds of trust—interpersonal and social—and we regularly confuse them. What matters here is social trust, which is about reliability and predictability in society.

Our confusion will increase with AI, and the corporations controlling AI will use that confusion to take advantage of us.

This is a security problem. This is a confidentiality problem. But it is even more of an integrity problem. And integrity is going to be the primary security challenge for AI systems of the future.

It’s also a regulatory problem, and it is government’s role to enable social trust, which means incentivizing trustworthy AI.

Okay, so let’s break these things down. Trust is a complicated concept, and the word is overloaded with many different meanings.

There’s personal and intimate trust. When we say we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. Let’s call this “interpersonal trust.”

There’s also a less intimate, less personal type of trust. We might not know someone personally or know their motivations, but we can still trust their behavior. This type of trust is more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.

Interpersonal trust and social trust are both essential in society. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This allows others to be trusting, which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always untrustworthy people—but most of us being trustworthy most of the time is good enough.

This is what I wrote about in 2012, in this book’s original release. In the coming chapters, you’ll read about four trust-enabling systems: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. You’ll read about how the first two are more informal than the last two, and how the last two scale better and allow for larger and more complex societies. They’re what enable trust among strangers.

What you won’t read about, because I didn’t understand it back then, is how different the first two and the last two are. Morals and reputation are person to person, based on human connection. They underpin interpersonal trust. Laws and security technologies are systems that compel us to act in a trustworthy manner. They’re the basis for social trust.

“Taxi driver” used to be one of the most dangerous professions in the US. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance, and we are competing for star rankings. But those rules, technology, and confidence only provide so much protection. I will talk about that later.

Lots of people write about the difference between living in high-trust and low-trust societies. That literature is important, but for this introduction, the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options.

That scale is important. You can ask a friend to deliver a package across town or you can pay the post office to do the same thing. The first choice involves trust based on morals and reputation. You know your friends and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become a global package delivery service like FedEx.

Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability: social trust.

But because we use the same word for both, we regularly confuse them. When we do that, we are making a category error. We do it all the time—with governments, with organizations, with systems of all kinds—and especially with corporations.

We might think of them as friends when they are actually services. Corporations are not moral; they are precisely as immoral as they can get away with.

Both language and the law make this an easy category error to make. We use the same grammar for people and for corporations. We imagine we have personal relationships with brands. We give corporations many of the same rights as people. Corporations benefit from this confusion because they profit when we think of them as friends.

We are about to make this same category error with AI. We’re going to think of AI as our friend when it is not.

There is a through line from governments to corporations to AI. Science fiction writer Charlie Stross calls corporations “slow AI.” They are profit-maximizing machines. The most successful ones do whatever they can to achieve that singular goal. David Runciman makes this point more fully in his book, The Handover. He describes governments, corporations, and AIs all as superhuman machines that are more powerful than their individual components. Science fiction writer Ted Chiang claims our fears of AI are basically fears of capitalism and that the paperclip maximizer is basically every start-up’s business plan.

This is the story of the Internet. Surveillance and manipulation are its business models. Products and services are deliberately made worse in the pursuit of profit.

We use these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.

It’s going to be the same with AI. And the result will be worse, for three reasons.

The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

I actually think that websites will largely disappear in our AI future. Static websites, where organizations make information generally available, are a recent invention—and an anomaly. Before the Internet, if you wanted to know when a restaurant opened, you would call and ask. Now, you check the website. In the future, you—or your AI agent—will once again ask the restaurant, the restaurant’s AI, or some intermediary AI. It’ll be conversational—the way it used to be.

This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s the best deal for you? Or was it because the AI company got a kickback from those companies? When you asked it to explain a political issue, did it bias that explanation toward the political party that gave it the most money? The conversational interface will help the AI hide its agenda.

The second reason is power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police because they’re the only law enforcement authority in town. We are forced to trust some corporations because there aren’t viable alternatives. Or, to be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AIs. In many instances, we will have no choice but to entrust ourselves to their decision-making.

The third reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant that acts as your advocate to others and as an assistant to you. This requires a greater intimacy than your search engine, email provider, cloud storage system, or phone. It might even have a direct neural interface. You’re going to want it with you 24/7, training on everything you do, so it can work on your behalf most effectively.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. It will converse with you in natural language. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you respond to.

All of this is a long-winded way of saying we need trustworthy AI: AI whose behavior is understood, whose limitations are understood, whose training is understood, and whose biases are understood and corrected for. AI whose values and goals are understood and that works in your interest. AI that won’t secretly betray your trust to someone else, and that is secure against hacking, so you know it delivers the results it promises.

Social trust is all about reliability and predictability, and we create social trust through laws and technologies. Here we need both, because failures will come from one of two places: the powerful corporations controlling the AIs (we’ve talked about that) or others manipulating the AIs, hacking them.

Almost all AI systems are going to be used in some sort of adversarial environment. By this, I mean someone will have a vested interest in what the AI produces or in the data it uses, which means it will be hacked.

When we think of AI hacks, there are three different levels. First, an adversary is going to want to manipulate the AI’s output (an integrity attack). Failing that, they will want to eavesdrop on it (a confidentiality attack). If that doesn’t work, they will want to disrupt it (an availability attack). Note that integrity attacks are the most critical.

Imagine an AI as an advisor in an international trade negotiation, or as a political strategist, or as a legal researcher. There will be an incentive for someone to hack the AI. Maybe a criminal; maybe a government. And it doesn’t matter how accurate, or capable, or hallucination-free an AI system is. If we can’t guarantee it hasn’t been hacked, it just won’t be trusted. Did the AI give a biased answer because a foreign power hacked it to serve its interests? We’re already seeing Russian attacks that deliberately manipulate AI training data. Or did the AI give a biased answer because a criminal group hacked it to run some scam? That’s coming next.

At the end of the day, AIs are computer programs. They are written in software that runs on hardware, which is attached to networks and interacts with users. Everything we know about cybersecurity applies to AI systems, along with all the additional AI-specific vulnerabilities, such as prompt injection and training-data manipulation.

But people will use—and trust—these systems, even though they’re not trustworthy.

Trustworthy AI requires AI security. And it’s a hard technical problem, because of the way machine learning (ML) systems are created and how they evolve.

We are used to the confidentiality problem and the availability problem. What’s new, and more important, is the integrity problem that runs through the entire AI system.

So let’s discuss integrity and what it means. It means ensuring that no one can modify the data—that’s the traditional security angle—but it’s much more. It encompasses the quality and completeness of the data and the code, over both time and space. Integrity means ensuring data is correct and accurate from the point of collection through all the ways it is used, modified, and eventually deleted.

We tend not to think of it this way, but we already have primitive integrity systems in our computers. The reboot process, which returns a computer to a known good state, is an integrity system. The Undo button, which prevents accidental data loss, is another integrity system. Integrity is also making sure data is accurate when collected and that it comes from a trustworthy sensor or source. Digitally signed data preserves integrity. It ensures that nothing important is missing and that data doesn’t change as it moves from format to format. Any system to detect hard drive errors, file corruption, or dropped packets is an integrity system. Checksums are integrity systems. Tesla manipulating odometer readings is an integrity attack.
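
To make that concrete, here is a minimal sketch in Python of a checksum-style integrity check of the kind described above (the file name is just a hypothetical example):

import hashlib

def sha256_of(path):
    # Return the SHA-256 digest of a file's contents, read in chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a digest while the data is known to be good...
known_good = sha256_of("sensor_log.csv")

# ...and verify it later. Any change to the file, whether deliberate
# tampering or silent corruption, produces a different digest.
if sha256_of("sensor_log.csv") != known_good:
    print("Integrity check failed: the data is not what was recorded.")

A digital signature adds the further guarantee that the recorded digest itself came from a trusted source.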

And just as exposing personal data on a website is a confidentiality breach even if no one accesses it, failing to guarantee the integrity of data is a breach, even if no one deliberately manipulated that data. Integrity breaches include malicious actions as well as inadvertent mistakes.

Most modern attacks against AI systems are integrity attacks. Putting small stickers on road signs to fool self-driving cars is an integrity attack. Prompt injection is an integrity attack. In both cases, the AI model can’t distinguish between legitimate data and malicious commands. Manipulations of the training data, the model, the input, the output, or the feedback are all integrity attacks.

Integrity is important for personal AIs, but it’s arguably even more important for AIs inside organizations. We can imagine a corporate AI trained on all the organization’s reports, analyzing decisions, and acting on its behalf. Privacy is important, but privacy has always been important. The integrity of that model is critical to the operation of the system. Without it, everything falls apart.

Think of this in terms of the evolution of the Internet. In cybersecurity, we have something called the CIA Triad, for confidentiality, integrity, and availability—the three properties security is supposed to provide.

Web 1.0 of the 1990s and early 2000s was all about availability. Individuals and organizations rushed to digitize their content, and this created the vast repository of human knowledge we know today. Making information available overshadowed all other concerns.

Web 2.0, the current Web, emphasizes confidentiality. This is the read/write Web, where your private data needs to remain private. Think of online banking, e-commerce, social media—anywhere you are an active participant. Confidentiality is paramount.

Web 3.0 is the distributed, decentralized, intelligent Web of tomorrow. Peer-to-peer social networking, distributed data ownership and storage, the Internet of Things, AI agents—all these things require verifiable, trustworthy data and computation: integrity. There is no real-time car-to-car communication without integrity. There is no drone coordination, smart power grid, or reliable mesh networking. And there are no useful AI agents.

I predict that integrity will be the key security problem of the next decade. And it’s a hard problem. Integrity means maintaining verifiable chains of trust from input to processing to output. It’s both data integrity and computational integrity. It’s authentication and secure auditing. Integrity hasn’t gotten a lot of attention, and it needs some real fundamental research.

In another context, I talked about this as a research question that rivals the Internet itself. The Internet was created to answer this question: Can we build a reliable network out of unreliable parts in an unreliable world? That’s an availability question. I have subsequently asked a similar question: Can we build a secure network out of insecure parts in an insecure world? I meant it as a question about confidentiality. Now I want to ask the same thing about integrity: Can we build an integrous system out of non-integrous parts in a non-integrous world? The answer isn’t obviously yes, but it isn’t obviously no, either.

I have been using this question as a call to research: into verifiable sensors and auditable system outputs, into integrity verification systems and integrity breach detection, into ways to test and measure the integrity of a process, into ways to recover from an integrity failure.

And we have a language problem. As security is to secure, as availability is to available, as confidentiality is to confidential, as integrity is to … what? It’s not integral, that’s wrong. There actually is a word, and I just used it a couple of paragraphs ago. It’s “integrous.” It’s a word that the Oxford English Dictionary lists as “rare” and “obsolete.” I am trying to revive the word and start the discipline of integrous system design.

But even with the right research and the right products, the market will not provide social trust on its own. Corporations are profit maximizers at the expense of society. The lures of surveillance capitalism are just too much to resist. They will build systems in their own interests, and they will under-invest in security to protect our interests.

Go back to the example of ride-sharing apps making the equivalent of taxi driving a safer profession. Catching a ride is safer than it used to be, but rider and driver ratings and self-imposed rules only go so far. In its latest US Safety Report, covering 2021 and 2022, Uber reported 36 physical assaults on a passenger or driver that resulted in a fatality, along with 2,717 sexual assaults. And when governments in the US states of Rhode Island and Colorado proposed regulations to make rideshares safer through fingerprinting and more stringent driver background checks, Uber and Lyft pushed back with lobbying and threats to leave those states. It’s government that provides the underlying mechanisms for social trust. Think about contract law, laws about property, laws protecting your personal safety, or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details.

Government can provide these underlying trust mechanisms with AI. We need AI transparency laws: When is the AI used, how is it trained, and what biases and values does it have? We need laws regulating AI and robotics safety (when and how they are permitted to affect the world). We need laws regulating their behavior as double agents (how much they can spy on us, and when they can manipulate us). We need minimum security standards for the computers AIs are running on and for any AI that interacts with the outside world. We need laws that enforce AI security, which means the ability to recognize when those laws are being broken, and we need penalties sufficiently large to incentivize trustworthy behavior.

Many countries are contemplating AI safety and security laws—the EU AI Act was passed in 2024—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.

AIs are not people; they don’t have agency. They are built, trained, and controlled by people: mostly people working for for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise, the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. It’s the human who needs to be responsible for what they do and for what their companies do—regardless of whether the action was due to humans, AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need trustworthy AI controllers.

And we need one final thing: public AI models. These are systems built by academia, nonprofit groups, or government itself that can be run by individuals.

The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model the government has licensed. It’s not even an open source model the public is free to examine and modify. “Open source” is a complicated term to apply to modern ML systems. They don’t have source code in the same way that conventional software does. Right now, the AI industry is trying to subvert the meaning of “open source” to allow for secret training data and mechanisms. And some models will legitimately need to keep their training data private. Imagine medical models trained on everyone’s personal health data. We have ways of ensuring their privacy and integrity, but they’re not open source.

A public model is a model built by the public for the public. It means political accountability, not just market accountability. This means openness and transparency, paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access and a foundation for a free market in AI innovations. The goal isn’t to replace corporate AI but to serve as a counterbalance to corporate AI.

We can never make AIs into our friends. We can make them into trustworthy services—agents and not double agents—but only if government mandates it. We can put limits on surveillance capitalism, set minimum security standards, and define and implement AI integrity—but only if government mandates it.

It is well within government’s power to do this. Most importantly, it is essential for government to do this because the point of government is to create social trust. And although now, in 2025, I am not expecting the US government to do much in this area (they’re doing all they can to make sure the AI industry remains unregulated), other countries can fill the gap.

I began by explaining the importance of trust in society: how interpersonal trust doesn’t scale to larger groups, and how that other, impersonal kind of trust—social trust (reliability and predictability)—is what governments create.

I know this is going to be hard. Today’s governments have a lot of trouble effectively regulating slow AI—corporations. In many countries, they failed to regulate the relatively slow-moving rideshare industry. Why should we expect them to be able to regulate fast AI?

But they must. We need governments to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.

So that’s three work streams to facilitate trust in AI. One: AI security, as we know it traditionally. Two: AI integrity, more broadly defined. And three: AI regulations, to align incentives. We need them all, and we need them all soon. That’s how we can create the social trust that society needs in this new AI era.

1 Overview

Just today, a stranger came to my door claiming he was here to unclog a bathroom drain. I let him into my house without verifying his identity, and not only did he repair the drain, he also took off his shoes so he wouldn’t track mud on my floors. When he was done, I gave him a piece of paper that asked my bank to give him some money. He accepted it without a second glance. At no point did he attempt to take my possessions, and at no point did I attempt the same of him. In fact, neither of us worried that the other would. My wife was also home, but it never occurred to me that he was a sexual rival and I should therefore kill him.

Also today, I passed several strangers on the street without any of them attacking me. I bought food from a grocery store, not at all concerned that it might be unfit for human consumption. I locked my front door, but didn’t spare a moment’s worry at how easy it would be for someone to smash my window in. Even people driving cars, large murderous instruments that could crush me like a bug, didn’t scare me.

Most amazingly, this worked without much overt security. I don’t carry a gun for self-defense, nor do I wear body armor. I don’t use a home burglar alarm. I don’t test my food for poison. I don’t even engage in conspicuous displays of physical prowess to intimidate other people I encounter.

It’s what we call “trust.” Actually, it’s what we call “civilization.”

All complex ecosystems, whether they are biological ecosystems like the human body, natural ecosystems like a rain forest, social ecosystems like an open-air market, or socio-technical ecosystems like the global financial system or the Internet, are deeply interlinked. Individual units within those ecosystems are interdependent, each doing its part and relying on the other units to do their parts as well. This is neither rare nor difficult, and complex ecosystems abound.

At the same time, all complex ecosystems contain parasites. Within every interdependent system, there are individuals who try to subvert the system to their own ends. These could be tapeworms in our digestive tracts, thieves in a bazaar, robbers disguised as plumbers, spammers on the Internet, or companies that move their profits offshore to evade taxes.

Within complex systems, there is a fundamental tension between what I’m going to call cooperating, or acting in the group interest; and what I’ll call defecting, or acting against the group interest and instead in one’s own self-interest. Political philosophers have recognized this antinomy since Plato. We might individually want each other’s stuff, but we’re collectively better off if everyone respects property rights and no one steals. We might individually want to reap the benefits of government without having to pay for them, but we’re collectively better off if everyone pays taxes. Every country might want to be able to do whatever it wants, but the world is better off with international agreements, treaties, and organizations. In general, we’re collectively better off if society limits individual behavior, and we’d each be better off if those limits didn’t apply to us individually. That doesn’t work, of course, and most of us recognize this. Most of the time, we realize that it is in our self-interest to act in the group interest. But because parasites will always exist—because some of us steal, don’t pay our taxes, ignore international agreements, or ignore limits on our behavior—we also need security.

Society runs on trust. We all need to trust that the random people we interact with will cooperate. Not trust completely, not trust blindly, but be reasonably sure (whatever that means) that our trust is well-founded and they will be trustworthy in return (whatever that means). This is vital. If the number of parasites gets too large, if too many people steal or too many people don’t pay their taxes, society no longer works. It doesn’t work both because there is so much theft that people can’t be secure in their property, and because even the honest become suspicious of everyone else. More importantly, it doesn’t work because the social contract breaks down: society is no longer seen as providing the required benefits. Trust is largely habit, and when there’s not enough trust to be had, people stop trusting each other.

The devil is in the details. In all societies, for example, there are instances where property is legitimately taken from one person and given to another: taxes, fines, fees, confiscation of contraband, theft by a legitimate but despised ruler, etc. And a societal norm like “everyone pays his or her taxes” is distinct from any discussion about what sort of tax code is fair. But while we might disagree about the extent of the norms we subject ourselves to—that’s what politics is all about—we’re collectively better off if we all follow them.

Of course, it’s actually more complicated than that. A person might decide to break the norms, not for selfish parasitical reasons, but because his moral compass tells him to. He might help escaped slaves flee into Canada because slavery is wrong. He might refuse to pay taxes because he disagrees with what his government is spending his money on. He might help laboratory animals escape because he believes animal testing is wrong. He might shoot a doctor who performs abortions because he believes abortion is wrong. And so on.

Sometimes we decide a norm breaker did the right thing. Sometimes we decide that he did the wrong thing. Sometimes there’s consensus, and sometimes we disagree. And sometimes those who dare to defy the group norm become catalysts for social change. Norm breakers rioted against the police raids of the Stonewall Inn in New York in 1969, at the beginning of the gay rights movement. Norm breakers hid and saved the lives of Jews in World War II Europe, organized the Civil Rights bus protests in the American South, and assembled in unlawful protest at Tiananmen Square. When the group norm is later deemed immoral, history may call those who refused to follow it heroes.

In 2008, the U.S. real estate industry collapsed, almost taking the global economy with it. The causes of the disaster are complex, but it was in large part brought about by financial institutions and their employees subverting financial systems to their own ends. They wrote mortgages to homeowners who couldn’t afford them, and then repackaged and resold those mortgages in ways that intentionally hid the real risk. Financial analysts, who made money rating these bonds, gave them high ratings to ensure repeat rating business.

This is an example of a failure of trust: a limited number of people were able to use the global financial system for their own personal gain. That sort of thing isn’t supposed to happen. But it did happen. And it will happen again if society doesn’t get better at both trust and security.

Failures in trust have become global problems:

The Internet brings amazing benefits to those who have access to it, but it also brings with it new forms of fraud. Impersonation fraud—now called identity theft—is both easier and more profitable than it was pre-Internet. Spam continues to undermine the usability of e-mail. Social networking sites deliberately make it hard for people to effectively manage their own privacy. And antagonistic behavior threatens almost every Internet community.

Globalization has improved the lives of people in many countries, but with it came an increased threat of global terrorism. The terrorist attacks of 9/11 were a failure of trust, and so were the government overreactions in the decade following.

The financial network allows anyone to do business with anyone else around the world; but easily hacked financial accounts mean there is enormous profit in fraudulent transactions, and easily hacked computer databases mean there is also a global market in (terrifyingly cheap) stolen credit card numbers and personal dossiers to enable those fraudulent transactions.

Goods and services are now supplied worldwide at much lower cost, but with this change comes tainted foods, unsafe children’s toys, and the outsourcing of data processing to countries with different laws.

Global production also means more production, but with it comes environmental pollution. If a company discharges lead into the atmosphere—or chlorofluorocarbons, or nitrogen oxides, or carbon dioxide—that company gets all the benefit of cheaper production costs, but the environmental cost falls on everybody else on the planet.

And it’s not just global problems, of course. Narrower failures in trust are so numerous as to defy listing. Here are just a few examples:

In 2009–2010, officials of Bell, California, effectively looted the city’s treasury, awarding themselves unusually high salaries, often for part-time work.

Some early online games, such as Star Wars Galaxy Quest, collapsed due to internal cheating.

The senior executives at companies such as WorldCom, Enron, and Adelphia inflated their companies’ stock prices through fraudulent accounting practices, awarding themselves huge bonuses but destroying the companies in the process.

What ties all these examples together is that the interest of society was in conflict with the interests of certain individuals within society. Society had some normative behaviors, but failed to ensure that enough people cooperated and followed those behaviors. Instead, the defectors within the group became too numerous, too powerful, or too successful, and ruined it for everyone.

This book is about trust. Specifically, it’s about trust within a group. It’s important that defectors not take advantage of the group, but it’s also important for everyone in the group to trust that defectors won’t take advantage.

“Trust” is a complex concept, and has a lot of flavors of meaning. Sociologist Piotr Sztompka wrote that “trust is a bet about the future contingent actions of others.” Political science professor Russell Hardin wrote: “Trust involves giving discretion to another to affect one’s interests.” These definitions focus on trust between individuals and, by extension, their trustworthiness.1

When we trust people, we can either trust their intentions or their actions. The first is more intimate. When we say we trust a friend, that trust isn’t tied to any particular thing he’s doing. It’s a general reliance that, whatever the situation, he’ll do the right thing: that he’s trustworthy. We trust the friend’s intentions, and know that his actions will be informed by those intentions.2

The second is less intimate, what sociologist Susan Shapiro calls impersonal trust. When we don’t know someone, we don’t know enough about her, or her underlying motivations, to trust her based on character alone. But we can trust her future actions.3 We can trust that she won’t run red lights, or steal from us, or cheat on tests. We don’t know if she has a secret desire to run red lights or take our money, and we really don’t care if she does. Rather, we know that she is likely to follow most social norms of acceptable behavior because the consequences of breaking these norms are high. You can think of this kind of trust—that people will behave in a trustworthy manner even if they are not inherently trustworthy—more as confidence, and the corresponding trustworthiness as compliance.4

In another sense, we’re reducing trust to consistency or predictability. Of course, someone who is consistent isn’t necessarily trustworthy. If someone is a habitual thief, I don’t trust him. But I do believe (and, in another sense of the word, trust) that he will try to steal from me. I’m less interested in that aspect of trust, and more in the positive aspects. In The Naked Corporation, business strategist Don Tapscott described trust, at least in business, as the expectation that the other party will be honest, considerate, accountable, and transparent. When two people are consistent in this way, we call them cooperative.

In today’s complex society, we often trust systems more than people. It’s not so much that I trusted the plumber at my door as that I trusted the systems that produced him and protect me. I trusted the recommendation from my insurance company, the legal system that would protect me if he did rob my house, whatever educational system produces skilled plumbers and whatever insurance system bonds them, and—most of all—the general societal systems that inform how we all treat each other in society. Similarly, I trusted the banking system, the corporate system, the system of police, the system of traffic laws, and the system of social norms that govern most behaviors.5

This book is about trust more in terms of groups than individuals. I’m not really concerned about how specific people come to trust other specific people. I don’t care if my plumber trusts me enough to take my check, or if I trust that driver over there enough to cross the street at the stop sign. I’m concerned with the general level of impersonal trust in society. Francis Fukuyama’s definition nicely captures the term as I want to use it: “Trust is the expectation that arises within a community of regular, honest, and cooperative behavior, based on commonly shared norms, on the part of other members of that community.”

Sociologist Barbara Misztal identified three critical functions performed by trust: 1) it makes social life more predictable, 2) it creates a sense of community, and 3) it makes it easier for people to work together. In some ways, trust in society works like oxygen in the atmosphere. The more customers trust merchants, the easier commerce is. The more drivers trust other drivers, the smoother traffic flows. Trust gives people the confidence to deal with strangers: because they know that the strangers are likely to behave honestly, cooperatively, fairly, and sometimes even altruistically. The more trust is in the air, the healthier society is and the more it can thrive. Conversely, the less trust is in the air, the sicker society is and the more it has to contract. And if the amount of trust gets too low, society withers and dies. A recent example of a systemic breakdown in trust occurred in the Soviet Union under Stalin.

I’m necessarily simplifying here. Trust is relative, fluid, and multidimensional. I trust Alice to return a $10 loan but not a $10,000 loan, Bob to return a $10,000 loan but not to babysit an infant, Carol to babysit but not with my house key, Dave with my house key but not my intimate secrets, and Ellen with my intimate secrets but not to return a $10 loan. I trust Frank if a friend vouches for him, a taxi driver as long as he’s displaying his license, and Gail as long as she hasn’t been drinking. I don’t trust anyone at all with my computer password. I trust my brakes to stop the car, ATM machines to dispense money from my account, and Angie’s List to recommend a qualified plumber—even though I have no idea who designed, built, or maintained those systems. Or even who Angie is. In the language of this book, we all need to trust each other to follow the behavioral norms of our group.

Many other books talk about the value of trust to society. This book explains how society establishes and maintains that trust.6 Specifically, it explains how society enforces, evokes, elicits, compels, encourages—I’ll use the term induces—trustworthiness, or at least compliance, through systems of what I call societal pressures, similar to sociology’s social controls: coercive mechanisms that induce people to cooperate, act in the group interest, and follow group norms. Like physical pressures, they don’t work in all cases on all people. But again, whether the pressures work against a particular person is less important than whether they keep the scope of defection to a manageable level across society as a whole.

A manageable level, but not too low a level. Compliance isn’t always good, and defection isn’t always bad. Sometimes the group norm doesn’t deserve to be followed, and certain kinds of progress and innovation require violating trust. In a police state, everybody is compliant but no one trusts anybody. A too-compliant society is a stagnant society, and defection contains the seeds of social change.

This book is also about security. Security is a type of a societal pressure in that it induces cooperation, but it’s different from the others. It is the only pressure that can act as a physical constraint on behavior regardless of how trustworthy people are. And it is the only pressure that individuals can implement by themselves. In many ways, it obviates the need for intimate trust. In another way, it is how we ultimately induce compliance and, by extension, trust.

It is essential that we learn to think smartly about trust. Philosopher Sissela Bok wrote: “Whatever matters to human beings, trust is the atmosphere in which it thrives.” People, communities, corporations, markets, politics: everything. If we can figure out the optimal societal pressures to induce cooperation, we can reduce murder, terrorism, bank fraud, industrial pollution, and all the rest.

If we get pressures wrong, the murder rate skyrockets, terrorists run amok, employees routinely embezzle from their employers, and corporations lie and cheat at every turn. In extreme cases, an untrusting society breaks down. If we get them wrong in the other direction, no one speaks out about institutional injustice, no one deviates from established corporate procedure, and no one popularizes new inventions that disrupt the status quo—an oppressed society stagnates. The very fact that the most extreme failures rarely happen in the modern industrial world is proof that we’ve largely gotten societal pressures right. The failures that we’ve had show we have a lot further to go.

Also, as we’ll see, evolution has left us with intuitions about trust better suited to life as a savannah-dwelling primate than as a modern human in a global high-tech society. That flawed intuition is vulnerable to exploitation by companies, con men, politicians, and crooks. The only defense is a rational understanding of what trust in society is, how it works, and why it succeeds or fails.

This book is divided into four parts. In Part I, I’ll explore the background sciences of the book. Several fields of research—some closely related—will help us understand these topics: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archaeology, history, political science, law, philosophy, theology, cognitive science, and computer security.

All these fields have something to teach us about trust and security.7 There’s a lot here, and delving into any of these areas of research could easily fill several books. This book attempts to gather and synthesize decades, and sometimes centuries, of thinking, research, and experimentation from a broad swath of academic disciplines. It will, by necessity, be largely a cursory overview; often, the hardest part was figuring out what not to include. My goal is to show where the broad arcs of research are pointing, rather than explain the details—though they’re fascinating—of any individual piece of research.8

In the last chapter of Part I, I will introduce societal dilemmas. I’ll explain a thought experiment called the Prisoner’s Dilemma, and its generalization to societal dilemmas. Societal dilemmas describe the situations that require intra-group trust, and therefore use societal pressures to ensure cooperation: they’re the central paradigm of my model. Societal dilemmas illustrate how society keeps defectors from taking advantage, taking over, and completely ruining society for everyone. They also illustrate how society ensures that its members forsake their own interests when they run counter to society’s interest. Societal dilemmas have many names in the literature: collective action problem, Tragedy of the Commons, free-rider problem, arms race. We’ll use them all.
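
As a quick preview of that thought experiment, here is a minimal sketch in Python of the classic Prisoner’s Dilemma payoff logic (the specific numbers are illustrative, not drawn from the book): whatever the other prisoner does, each is individually better off defecting, yet mutual defection leaves both worse off than mutual cooperation.

# Illustrative Prisoner's Dilemma payoffs: my years in prison, indexed by
# (my move, the other prisoner's move). Lower is better for me.
PAYOFFS = {
    ("cooperate", "cooperate"): 1,   # both stay silent
    ("cooperate", "defect"): 10,     # I stay silent, the other confesses
    ("defect", "cooperate"): 0,      # I confess, the other stays silent
    ("defect", "defect"): 6,         # both confess
}

for mine in ("cooperate", "defect"):
    for theirs in ("cooperate", "defect"):
        years = PAYOFFS[(mine, theirs)]
        print(f"I {mine}, the other prisoner {theirs}s: I serve {years} years")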

Part II fully develops my model. Trust is essential for society to function, and societal pressures are how we achieve it. There are four basic categories of societal pressure that can induce cooperation in societal dilemmas:

Moral pressure. A lot of societal pressure comes from inside our own heads. Most of us don’t steal, and it’s not because there are armed guards and alarms protecting piles of stuff. We don’t steal because we believe it’s wrong, or we’ll feel guilty if we do, or we want to follow the rules.

Reputational pressure. A wholly different, and much stronger, type of pressure comes from how others respond to our actions. Reputational pressure can be very powerful; both individuals and organizations feel a lot of pressure to follow the group norms because they don’t want a bad reputation.

Institutional pressure. Institutions have rules and laws. These are norms that are codified, and whose enactment and enforcement is generally delegated. Institutional pressure induces people to behave according to the group norm by imposing sanctions on those who don’t, and occasionally by rewarding those who do.

Security systems. Security systems are another form of societal pressure. This includes any security mechanism designed to induce cooperation, prevent defection, induce trust, and compel compliance. It includes things that work to prevent defectors, like door locks and tall fences; things that interdict defectors, like alarm systems and guards; things that only work after the fact, like forensic and audit systems; and mitigation systems that help the victim recover faster and care less that the defection occurred.

Part III