Trust in Computer Systems and the Cloud

Mike Bursell
Description

Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity. Trust in Computer Systems and the Cloud delivers an insightful and practical new take on what it means to trust in the context of computer and network security, and the impact on the emerging field of Confidential Computing. Author Mike Bursell's experience, ranging from Chief Security Architect at Red Hat to CEO at a Confidential Computing start-up, grounds the reader in fundamental concepts of trust and related ideas before discussing the more sophisticated applications of these concepts to various areas in computing. The book demonstrates the importance of understanding and quantifying risk and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust, and risk and also describes the emerging concept of trust domains, which provide an alternative to standard layered security. The book covers:

* Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals

* A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs

* A thorough exploration of trust domains, including explorations of communities of practice, the centralization of control and policies, and monitoring

Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master's students in software architecture and security.




Table of Contents

Cover

Praise for Trust in Computer Systems and the Cloud

Title Page

Introduction

Notes

CHAPTER 1: Why Trust?

Analysing Our Trust Statements

What Is Trust?

What Is Agency?

Trust and Security

Trust as a Way for Humans to Manage Risk

Risk, Trust, and Computing

Notes

CHAPTER 2: Humans and Trust

The Role of Monitoring and Reporting in Creating Trust

Game Theory

Institutional Trust

Trust Based on Authority

Trusting Individuals

The Dangers of Anthropomorphism

Identifying the Real Trustee

Notes

CHAPTER 3: Trust Operations and Alternatives

Trust Actors, Operations, and Components

Assurance and Accountability

Notes

CHAPTER 4: Defining Trust in Computing

A Survey of Trust Definitions in Computer Systems

Applying Socio-Philosophical Definitions of Trust to Systems

Notes

CHAPTER 5: The Importance of Systems

System Design

“Trusted” Systems

Hardware Root of Trust

The Importance of Systems

Worked Example: Purchasing Whisky

The Importance of Being Explicit

Notes

CHAPTER 6: Blockchain and Trust

Bitcoin and Other Blockchains

Permissioned Blockchains

Permissionless Blockchains and Cryptocurrencies

Notes

CHAPTER 7: The Importance of Time

Decay of Trust

Trusted Computing Base

Notes

CHAPTER 8: Systems and Trust

System Components

Explicit Behaviour

Time and Systems

Defining System Boundaries

Notes

CHAPTER 9: Open Source and Trust

Distributed Trust

How Open Source Relates to Trust

Notes

CHAPTER 10: Trust, the Cloud, and the Edge

Deployment Model Differences

Mutually Adversarial Computing

Mitigations and Their Efficacy

Notes

CHAPTER 11: Hardware, Trust, and Confidential Computing

Properties of Hardware and Trust

Physical Compromise

Confidential Computing

Notes

CHAPTER 12: Trust Domains

The Composition of Trust Domains

Trust Domain Primitives and Boundaries

Notes

CHAPTER 13: A World of Explicit Trust

Tools for Trust

The Role of the Architect

Coda

Note

References

Index

Copyright

Dedication

About the Author

About the Technical Editor

Acknowledgements

End User License Agreement

List of Tables

Chapter 5

Table 5.1: Trust from Internet layer to Link layer in the IP suite

Table 5.2: Trust from the bash shell to the login program

Table 5.3: Trust from kernel to hypervisor

Table 5.4: Trust from hypervisor to kernel

Table 5.5: Trust relationship from web browser to laptop system

Table 5.6: Trust relationship from laptop to DNS server

Table 5.7: Trust relationship from web browser to web server

Table 5.8: Trust relationships from web browser to laptop system

Table 5.9: Trust relationship from web browser to web server

Table 5.10: Trust relationship from web browser to web server

Table 5.11: Trust relationship from web browser to web server

Table 5.12: Trust relationship from web server to web browser

Table 5.13: Trust relationship from web browser to laptop system

Table 5.14: Trust relationship from web browser to web client

Table 5.15: Trust relationship from web browser to laptop system

Table 5.16: Trust relationship from web browser to web server

Table 5.17: Trust relationship from web server to host system

Table 5.18: Trust relationship from web server to host system

Table 5.19: Trust relationship from web server to acquiring bank

Table 5.20: Trust relationship from web server to web browser

Chapter 6

Table 6.1: Shipping company trust relationship without blockchain system

Table 6.2: Shipping company trust relationship with blockchain system

Chapter 8

Table 8.1: Trust offer from a service provider

Table 8.2: Trust requirements from a service consumer

Table 8.3: Trust from server to logging service regarding time stamps

Chapter 9

Table 9.1: Trust from software consumer to software vendor

Chapter 10

Table 10.1: A comparison of cloud and Edge computing

Table 10.2: Host system criteria for cloud and Edge computing environments

Chapter 11

Table 11.1: Examples of physical system attacks

Table 11.2: Trust and data in transit

Table 11.3: Trust and data at rest

Table 11.4: Trust and data in use

Table 11.5: Comparison of data protection techniques

Chapter 12

Table 12.1: Examples of policies in trust domains

Chapter 13

Table 13.1: Example of a trust table

List of Illustrations

Chapter 3

Figure 3.1a: Transitive trust (direct).

Figure 3.1b: Transitive trust (by referral).

Figure 3.2: Chain of trust.

Figure 3.3: Distributed trust to multiple entities with weak relationships....

Figure 3.4: Distributed trust with a single, stronger relationship. A set of...

Figure 3.5: Trust domains.

Figure 3.6: Reputation: collecting information.

Figure 3.7: Reputation: gathering information from multiple endorsing author...

Figure 3.8: Forming a trust relationship to the trustee, having gathered inf...

Figure 3.9: Deploying a workload to a public or private cloud.

Chapter 4

Figure 4.1a: Trying to establish a new trust context with the same trustee....

Figure 4.1b: A circular trust relationship.

Chapter 5

Figure 5.1: Internet Protocol suite layers.

Figure 5.2: OSI layers.

Figure 5.3: Linux layering.

Figure 5.4: Linux virtualisation stack.

Figure 5.5: Linux container stack.

Figure 5.6: A Simple Cloud Virtualisation Stack.

Figure 5.7: Trust pivot—initial state.

Figure 5.8: Trust pivot—processing.

Figure 5.9: Trust pivot—complete.

Chapter 8

Figure 8.1: External time source.

Figure 8.2: Time as a new trust context.

Figure 8.3: Linux virtualisation stack.

Figure 8.4: Virtualisation stack (complex version).

Figure 8.5: Host and two workloads.

Figure 8.6: Isolation type 1—workload from workload.

Figure 8.7: Isolation type 2—host from workload.

Figure 8.8: Isolation type 3—workload from host.

Chapter 9

Figure 9.1: Package dependencies.

Chapter 11

Figure 11.1: TPM—host usage.

Figure 11.2: TPM—guest usage.

Figure 11.3: TPM—software TPM.

Figure 11.4: TPM—vTPM (based on a TPM).

Figure 11.5: Venn diagram of various technologies used to protect data in use...

Figure 11.6: TEE instance (VM-based).

Figure 11.7: TEE instance (generic).

Figure 11.8: Pre-load attestation.

Figure 11.9: Post-load attestation—full workload.

Figure 11.10: Post-load attestation—TEE runtime.

Figure 11.11: Post-load attestation—runtime loader.

Figure 11.12: TEE instance (VM-based)—BIOS from the CSP.

Figure 11.13: TEE trust relationships (ideal).

Figure 11.14: TEE trust relationships (implicit).

Figure 11.15: A complex trust model.

Chapter 12

Figure 12.1: Trust domains in a bank.

Figure 12.2: Trust domains in a bank—2.

Figure 12.3: Trust domains in a bank—C's view.

Figure 12.4: Trust domains in a bank—trust domain view.

Figure 12.5: Trust domains in a bank—NTP view.

Figure 12.6: Trust domains and the cloud—1.

Figure 12.7: Trust domains and the cloud—2.

Figure 12.8: Trust domains and the cloud—3.

Figure 12.9: Trust domains and the cloud—4.

Figure 12.10: Trust domains and the cloud—5.

Guide

Cover Page

Table of Contents

Title Page

Copyright

Dedication

About the Author

About the Technical Editor

Acknowledgements

Introduction

Begin Reading

References

Index

End User License Agreement


Praise for Trust in Computer Systems and the Cloud

“The problem is that when you use the word trust, people think they know what you mean. It turns out that they almost never do.” With this singular statement, Bursell has defined both the premise and the value he expounds in this insightful treatise spanning the fundamentals and complexities of digital trust. Operationalizing trust is foundational to effective human and machine digital relationships, with Bursell leading the reader on a purposeful journey expressing and consuming elements of digital trust across current and future-relevant data lifecycles.

—Kurt Roemer, Chief Security Strategist and Office of the CTO, Citrix

Trust is a matter of context. Specifically, “context” is one of the words most repeated in this book, and I must say that its use is justified in all cases. Not only is the meaning of trust analysed in all possible contexts, including some essential philosophical and psychological foundations, but the concept is also applied to all possible ICT contexts, from basic processor instructions to cloud and edge infrastructures; and different trust frameworks are explored, from hierarchical (CAs) to distributed (DLTs) approaches. A must-read book to understand how one of the bases of human civilization can and must be applied in the digital world.

—Dr. Diego R. Lopez, Head of Technology Exploration, Telefónica, and Chair of the ETSI blockchain initiative

As we have moved to the digital society, appreciating what and what not to trust is paramount if you use computer systems and/or the cloud. You will be well prepared when you have read this book.

—Professor Peter Landrock, D.Sc. (hon), Founder of Cryptomathic

Trust is a complex and important concept in network security. Bursell neatly unpacks it in this detailed and readable book.

—Bruce Schneier, author of Liars and Outliers: Enabling the Trust that Society Needs to Thrive

This book needs to be on every technologist's and engineer's bookshelf. Combining storytelling and technology, Bursell has shared with all of us the knowledge we need to build trust and security in cloud computing environments.

—Steve Kolombaris, CISO & Cyber Security Leader with 20+ years' experience; formerly Apple, JP Morgan Chase, Bank of America

Trust in Computer Systems and the Cloud

Mike Bursell

Introduction

I am the sort of person who reads EULAs,1 checks the expiry dates on fire extinguishers, examines the licensing notices in lifts (or elevators), and looks at the certificates on websites before I purchase goods from retailers or give away my personal details to sites purporting to be using my information for good in the world. Like many IT security professionals, I have a (hopefully healthy) disrespect for authority—or, maybe more accurately, for the claims made by authorities or those claiming to be authorities in the various fields of interest in which I've found myself involved over the years.

Around 2001, I found myself without a job as my employer restructured, and I was looking for something to do. I had been getting interested in peer-to-peer interactions in computing, based on a project I'd been involved with at a previous company and the question of how trust relationships could be brokered in this sphere. I did a lot of reading in the area and nearly started a doctorate before getting a new job where finding time to do the requisite amount of study was going to be difficult. Not long after, my wife and I started trying for a family, and the advent of children in the household further reduced the amount of time—and concentration—available to study at the level of depth that I felt the subject merited.

Years went by, and I kept an eye on the field as my professional interests moved in a variety of different directions. Around 2013, I joined a group within ETSI (the European Telecommunications Standards Institute) working on network function virtualisation (NFV). I quickly gravitated to the Security Working Group (Sec-WG), where I found several people with similar professional interests. One of those interests was trust, how to express it, how to define it, and how to operate systems that depended on it. We did some interesting work in the group, producing a number of documents that looked at particular aspects of telecommunications and trust, including the place of law enforcement agencies and regulators in the sector. As the telecommunications industry struggled to get its collective head around virtualisation and virtual machines (VMs), it became clear to the members of the security group that the challenges presented by a move to VMs were far bigger—and more complex—than might originally have been expected.

Operators, as telecommunications providers are known in the industry—think Orange, Sprint, or NTT Docomo—have long known that they need to be careful about the hardware they buy and the software they run on it. There were a handful of powerful network equipment providers (NEPs) whose business model was building a monolithic software stack on top of well-defined hardware platforms and then selling it to the operators, sometimes running and monitoring it for them as well. The introduction of VMs offered the promise (to the operators) and the threat (to the NEPs) of a new model, where entrants into the market could provide more modular software components, some of which could run on less-specialised hardware. From the operators' point of view, this was an opportunity to break the NEPs' stranglehold on the industry, so they (the operators) were all for the new NFV world, while the NEPs were engaged in the ETSI process to try to show that they were still relevant.

From the security point of view, we quickly realised that there was a major shift taking place from a starting point where operators were able to manage risk by trusting the one or two NEPs that provided their existing infrastructure. This was beginning to develop into a world where they needed to consider all of the different NFV vendors, the components they supplied, the interactions the components had with each other, and, crucially, the interactions the components had with the underlying infrastructure, which was now not going to be specialised hardware dedicated to particular functions, but generic computing hardware bought pretty much off the shelf. I think the Sec-WG thoroughly exasperated much of the rest of the ETSI NFV consortium with our continuous banging on about the problem, but we were equally exasperated by their inability to understand what a major change was taking place and the impact it could have on their businesses. The trust relationships between the various components were key to that, but trust was a word that was hardly even in the vocabulary of most people outside the Sec-WG.

At about the same time, I noticed a new trend in the IT security vendor market: people were beginning to talk about a new model for building networks, which they called zero trust. I was confused by this: my colleagues and I were spending huge amounts of time and effort trying to convince people that trust was important, and here was a new movement asserting that the best way to improve the security of your networking was to trust nothing. I realised after some research that the underlying message was more sophisticated and nuanced than that, but I also had a concern that the approach ignored a number of important abstractions and trust relationships. That concern has not abated as zero trust has been adopted as a rallying cry in situations where significantly less attention has been paid by those involved.

As virtualisation allowed the growth of cloud computing, and as Linux containers2 and serverless computing have led to public cloud offerings that businesses can deploy simply and quickly, security is becoming more of a concern as organisations move from using cloud computing for the odd application here and there to considering it a key part of their computing infrastructure. The issue of trust, however, has not been addressed. From the (seemingly) simple question, “Do I trust my cloud service provider to run my applications for me?” to more complex considerations around dynamic architectures to protect data in transit, at rest, and in use, trust needs to be central to discussions about risk and security in private and public clouds, telecommunications, finance, government, healthcare, the Edge, IoT, automotive computing, blockchain, and AI.

The subject of trust seems, at first blush, to be simple. As you start delving deeper and examining how to apply the concept—or multiple concepts—to computing, it becomes clear that it is actually a very complex field. As we consider how business A deploys software from software provider B, using libraries from open source community C and proprietary software provider D, for consumption by organisation E and its user group F on hardware supplied by manufacturer G running a BIOS from H, an operating system from I, and a virtualisation stack from J, using storage from K, over a network from L, owned by cloud service provider M, we realise that we are already halfway through the alphabet and have yet to consider any of the humans in the mix. We need, as a security and IT community, to be able to talk about trust—but there is little literature or discussion of the subject aimed at our requirements and the day-to-day decisions we make about how to architect, design, write, deploy, run, monitor, patch, and decommission the systems we manage. This book provides a starting point for those decisions, building on work across multiple disciplines and applying them to the world of computing and the cloud.

Notes

1. End user licenses or license agreements.

2. Popularised by Docker, Inc.

CHAPTER 1: Why Trust?

I trust my brother and my sister with my life. My brother is a doctor, and my sister trained as a diving instructor, so I wouldn't necessarily trust my sister to provide emergency medical aid or my brother to service my scuba gear. I should actually be even more explicit because there are times when I would trust my sister in the context of emergency medical aid: I'm sure she'd be more than capable of performing CPR, for example. On the other hand, my brother is a paediatrician, not a surgeon, so I'd not be very confident about allowing him to perform an appendectomy on me. To go further, my sister has not worked as a diving instructor for several years now, so I might consider whether my trust in her abilities should be impacted by that.

This is not a book about human relationships or trust between humans, but about trust in computer systems. In order to understand what that means—or even can mean—however, we need to understand what we mean by trust. Trust is a word that arises out of human interactions and human relationships. Words are tricky. Words can mean different things to different people in different contexts.

The classic example of words meaning different things depending on context is the names of colours—the light frequencies included in the colours I identify as mauve, beige, and ultramarine are very likely different to yours—but there are other examples that are equally or more extreme. If I discuss “scheduling” with an events coordinator, a DevOps expert, and a kernel developer, each person will almost certainly have a different view of what I mean.

Trust is central to the enterprise of this book, and to discuss it, we must come to some shared understanding of what is meant by the word itself.1 The meaning that we carry forward into our discussion of computer systems must be, as far as is possible, shared. We must, to the extent we can, come to agree on a common referent, impossible as this exercise may seem in a post-modern world.2 Our final destination is firmly within the domain of computing, where domain-specific vocabulary is well-established. But since day-to-day usage of the word trust is rooted in a discussion about relationships between humans, this is where we will start.

The sort of decisions that I have described around trusting my sister and brother are ones that humans make all the time, often without thinking about them. Without giving it undue thought, we understand that multiple contexts are being considered here, including:

My relationship to the other person

Their relationship to me

The different contexts of their expertise

The impact that time can have on trust

This list, simple as it is, already exposes several important points about trust relationships to which we will return time and time again in this book: they are asymmetric (trust may be different in one direction to another), they are contextual (medical expertise and diving equipment expertise are not the same), and they are affected by time. As noted earlier, this book is not about human relationships and trust—though how we consider our relationships will be important to our discussions—but about trust in computing systems. Too often, we do not think much about trust relationships between computing systems (hardware, software, and firmware), and when we do, the sort of statements that tend to emerge are “This component trusts the server” or “We connect to this trusted system”. Of course, in the absence of significantly greater levels of artificial intelligence than are currently in evidence at the time of writing, computing systems cannot make the sort of complex and nuanced decisions about trust relationships that humans make; but it turns out that trust is vitally important in computing systems, unstated and implicit though it usually is.

There is little discussion about trust—that is, computer-to-computer or machine-to-machine trust—within the discipline or professional practice of computing, and very little literature about it except in small, specialised fields. The discussions that exist tend to be academic, and there is little to find in the popular professional literature—again, with the exception of particular specialised fields. When the subject of trust comes up in a professional IT or computing setting, however, people are often very interested in discussing it. The problem is that when you use the word trust, people think they know what you mean. It turns out that they almost never do. What one person's view of trust entails is almost always different—sometimes radically different—from that of those to whom they are speaking. Within computing, we are used to talking about things and having a shared knowledge, at least to some degree of approximation. Some terms are fairly well defined in the industry, at least in general conversation: for example, cryptography, virtualisation, and kernel. Even a discussion on more nebulous concepts such as software or networking or authentication generally starts from a relatively well-defined shared understanding. The same is not true of trust, but trust is a concept that we definitely need to get our heads around to establish a core underpinning and begin to frame an understanding of what shared meaning we hope to convey.

Why is there such a range of views around trust? We have already looked at some of the complexity of trust between humans. Let us try to tease out some of the reasons for people's confusion by starting with four fairly innocuous, simple-looking statements:

I trust my brother and my sister.

I trust my bank.

My bank trusts its IT systems.

My bank's IT systems trust each other.

When you make four statements like this, it quickly becomes clear that something different is going on in each case. Specifically, the word trust signifies something very different in each of the four statements. Our first step is to make the decision to avoid using the word trust as a transitive verb—a word with a simple object, as in these examples—and instead talk about trust relationships to another entity. This is because there is a danger, when using the word trust transitively, that we may confuse a unidirectional relationship with a bidirectional relationship. In the second case, for example, the bank may well have a relationship with me, but it is how I think of the bank, and therefore how I interact with it, which is the relationship that we want to examine. This is not to say that the relationship the bank has with me is irrelevant to the one I have with it—it may well inform my relationship—but that the bank's relationship with me is not the focus. For the same reason, we will generally talk about the “trust relationship to” another entity, rather than the “trust relationship with” another, to avoid implying a bidirectional relationship. The standard word used to describe the entity doing the trusting is trustor, and the entity being trusted is the trustee—though we should not confuse this word with other uses (such as the word trustee as used in the context of prisons or charity boards).

Analysing Our Trust Statements

The four cases of trust relationships that we have noted may look similar, but there are important differences that will shed light on concepts to which we will return throughout the book and that will help us define exactly what our subject matter is.

Case 1: My Trusting My Brother and Sister

   As we have already discussed, this statement is about trust between individual humans—specifically, my trust relationship to my brother, and my trust relationship to my sister. There are two humans involved in each case (both me and whichever sibling we are considering), with all of the complexity that this entails. But we share a set of assumptions about how we react, and we each have tens of thousands of years of genetics plus societal and community expectations to work out how these relationships should work.

Case 2: My Trusting My Bank

   Our second statement is about trust between an individual and an organisation: specifically, my trust relationship to a legal entity with particular services and structure. The basis of the expression of this relationship has changed over the years in many places: the relationship I would have had in the UK with my bank 50 years ago, say, would often have been modelled mainly on the relationship I had with one or more individuals employed by the bank, typically a manager or deputy manager of a particular branch. My trust relationship to the bank now is more likely to be swayed by my views on its perceived security practices and its exercising of fiscal and ethical responsibilities than my views of the manager of my local branch—if I have even met them. There is, however, still a human element associated with my relationship, at least in my experience: I know that I can walk into a branch, or make a call on the phone, and speak to a human.3

Case 3: The Bank Trusting Its IT Systems

   Our third statement is about an organisation trusting its IT systems. When we follow our new resolution to rephrase this as “The bank having a trust relationship to its IT systems”, it suddenly feels like we have moved into a very different type of consideration from the initial two cases. Arguably, for some of the reasons mentioned earlier about interacting with humans in a bank, we realise that there is a large conceptual difference between the first and second cases as well. But we are often lulled into a false sense of equivalence because when we interact with a bank, it is staffed by people, and it also enjoys many of the legal protections afforded to an individual. There are still humans in this case, though, in that we can generally assume that it is the intention of certain humans who represent the bank to have a trust relationship to certain IT systems. The question of what we mean by “represent the bank” is an interesting one when we consider when we might use this phrase in practice. Might it be in a press conference, with a senior executive saying that the bank “trusts its IT systems”? What might that mean? Or it could be in a conversation between a regulator or auditor with the chief information security officer (CISO) of the bank. Who is “the bank” that is being referred to in this situation, and what does this trust mean?

Case 4: The IT Systems Trusting Each Other

   As we move to our fourth case, it is clear that we have transitioned to yet another very different space. There are no humans involved in this set of trust relationships unless we attribute agency to specific systems; and if so, which? What, then, is doing the trusting, and what does the word trust even mean in this context? The question of agency raised earlier—about an entity representing someone else, as a literary agent represents an author or a federal agent represents a branch of government—may allow us to consider what is going on. We will return to this question later in this chapter.

The four cases we have discussed show that we cannot just apply the same word, trust, to all of these different contexts and assume that it means the same thing in each case. We need to differentiate between them: what is going on, who is trusting whom to do what, and what trust in that instance truly means.

What Is Trust?

What, then, is trust? What do we mean, or hope to convey, when we use this word? This question gets a whole chapter to itself; but to start to examine it, its effects, and the impact of thinking about trust within computing systems, we need a definition. Here is the one we will use as the basis for the rest of the book. It is in part derived from a definition by Gambetta4 and refined after looking at multiple uses and contexts.

Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.

This is a good start, but we can go a little further, so let us propose three corollaries to sit alongside this definition. We will go into more detail for each later.

First Corollary

   “Trust is always contextual”.

Second Corollary

   “One of the contexts for trust is always time”.

Third Corollary

   “Trust relationships are not symmetrical”.

This set of statements should come as no surprise: it forms the basis for the initial examination of the trust relationships that I have to my brother and sister, described at the beginning of this chapter. Let us re-examine those relationships and try to define them in terms of our definition of trust and its corollaries. First, we deal with the definition:

The entities identified are a) me and b) my siblings.

The actions ranged from performing an emergency appendectomy to servicing my scuba gear.

The expectation was fairly complex, even in this simple example: it turns out that trusting someone “with my life” can mean a variety of things, from performing specific actions to remedy an emergency medical condition, to performing actions that, if neglected or incorrectly carried out, could cause my death.

We find that we have addressed the first corollary—that trust is always contextual:

The contexts included my having a cardiac arrest, requiring an appendectomy, and planning to go scuba diving.

Time, the second corollary, is also covered:

My sister has not recently renewed her diving instructor training, so I might have less trust in her to service my diving gear than I might have done 10 years ago.

The third corollary about the asymmetry of trust is so obvious in human relationships that we often ignore it, but is very clear in our examples:

I am neither a doctor nor a trained scuba diving instructor, so my brother and sister trust me neither to provide emergency medical care nor to service their scuba gear.

Let us restate one of these relationships in the form of our definition and corollaries about trust:

I hold an assurance that my brother will provide me with emergency medical aid in the event that I require immediate treatment.

This is a good statement of how I view the relationship from me to my brother, but what can we gain with more detail? Let us use the corollaries to move us to a better description of the relationship.

First Corollary

   “The medical aid is within an area of practice in which he has trained or with which he is familiar”.

Second Corollary

   “My brother will only undertake procedures for which his training is still sufficiently recent that he feels confident that he can perform them without further detriment to my health”.

Third Corollary

   “My brother does not expect me to provide him with emergency medical aid”.

This may seem like an immense amount of unpacking to do on what was originally presented as a simple statement. But when we move over to the world of computing systems, we need to consider exactly this level of detail, if not an even greater level.
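To see what this level of detail might look like when made explicit, here is a minimal sketch in Python (the field names are my own illustrative choices, not a scheme the book prescribes) of a record that captures the definition of trust and all three corollaries:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class TrustRelationship:
        """A single, one-directional trust relationship."""
        trustor: str           # the entity holding the assurance
        trustee: str           # the entity expected to act
        actions: str           # the particular actions expected
        context: str           # first corollary: trust is always contextual
        valid_until: datetime  # second corollary: time is always a context

    # Third corollary: relationships are not symmetrical, so each direction
    # must be stated separately; here only one direction exists.
    brother_medical = TrustRelationship(
        trustor="me",
        trustee="my brother",
        actions="provide emergency medical aid",
        context="procedures within his current area of medical practice",
        valid_until=datetime(2030, 1, 1),  # to be revisited as training ages
    )

Even this toy model forces the asymmetry into the open: no relationship exists from my brother to me for medical aid unless one is deliberately created.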

Let us begin moving into the world of computing and see what happens when we start to apply some of these concepts there. We will begin with the concept of a trusted platform: something that is often a requirement for any computation that involves sensitive data or algorithms. Immediately, questions present themselves. When we talk about a trusted platform, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). But the context of what we mean for a trusted platform is likely to be very different between a mobile phone, a military installation, and an Internet of Things (IoT) gateway. That trust may erode over time (are patches applied? Is there also a higher likelihood that an attacker may have compromised the platform a day, a month, or a year after the workload was provisioned to it?). We should also never simply say, following the third corollary (on the lack of trust symmetry), that “these entities trust each other” without further qualification, even if we are referring to the relationships between one trusted system and another trusted system.

One concrete example that we can use to examine some of these questions is when we connect to a web server using a browser to purchase a product or service. Once they connect, the web server and the browser may establish trust relationships, but these are definitely not symmetrical. The browser has probably established that the web server represents the provider of particular products and services with sufficient assurance for the person operating it to give up credit card details. The web server has probably established that the browser currently has permission to access the account of the user operating it. However, we already see some possible confusion arising about what the entities are: what is the web server, exactly? The unique instance of the server's software, the virtual machine in which it runs (if, in fact, it is running in a virtual machine), a broader and more complex computer system, or something entirely different? And what ability can the browser have to establish that the person operating it can perform particular actions?
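To make the asymmetry of this example concrete, here is a sketch using only Python's standard library: the client side verifies the server's certificate before sending anything, while a default TLS handshake gives the server no equivalent assurance about the client. The host name is purely illustrative.

    import socket
    import ssl

    host = "merchant.example.com"  # illustrative host, not a real merchant

    # The browser's side of the trust decision: require a certificate that
    # chains to a trusted CA and matches the host name (the default policy).
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # One-way trust is now established: we have verified the server.
            print(tls.getpeercert()["subject"])
            # Nothing so far proves anything about us to the server; any trust
            # it places in the user must be built separately (logins, cookies).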

These questions—about how trust is represented and to do what—are related to agency and will also help us consider some of the questions that arose around the examples we considered earlier about banks and their IT systems.

What Is Agency?

When you write a computer program that prints out “Hello, world!”, who is “saying” those words: you or the computer? This may sound like an idle philosophical question, but it is more than that: we need to be able to talk about entities as part of our definition of trust, and in order to do that, we need to know what entity we are discussing.

What exactly, then, does agency mean? It means acting for someone: being their agent—think of what actors' agents do, for example. When we engage a lawyer or a builder or an accountant to do something for us, we set very clear boundaries about what they will be doing on our behalf. This is to protect both us and the agent from unintended consequences. There exists a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent for another person or organisation. There are contracts and agreed restitutions—basically, punishments—for when things go wrong. Say that my accountant buys 500 shares in a bank with my money, and then I turn around and say that they never had the authority to do so: if we have set up the relationship correctly, it should be entirely clear whether or not the accountant had that authority and whose responsibility it is to deal with any fallout from that purchase.

The situation is not so clear when we start talking about computer systems and agents. To think a little more about this question, here are two scenarios:

In the classic film WarGames, David Lightman (Matthew Broderick's character) has a computer that goes through a list of telephone numbers, dialling them and then recording the number for later investigation if they are answered by another machine that attempts to perform a handshake. Do we consider that the automatic dialling Lightman's computer performs is carried out as an act with agency? Or is it when the computer connects to another machine? Or when it records the details of that machine? I suspect that most people would not argue that the computer is acting with agency once Lightman gets it to complete a connection and interact with the other machine—that seems very intentional on his part, and he has taken control—but what about before?

Google used to run automated programs against messages received as part of the Gmail service.5 The programs were looking for information and phrases that Google could use to serve ads. The company were absolutely adamant that they, Google, were not doing the reading: it was just the computer programs.6

Quite apart from the ethical concerns that might be raised, many people would (and did) argue that Google, or at least the company's employees, had imbued these automated programs with agency so that philosophically—and probably legally—the programs were performing actions on behalf of Google. The fact that there was no real-time involvement by any employee is arguably unimportant, at least in some contexts.

This all matters because in order to understand trust, we need to identify an entity to trust. One current example of this is self-driving cars: whose fault is it when one goes wrong and injures or kills someone? Equally, when the software in certain Boeing 737 MAX 8 aircraft malfunctioned,7 pilots—who can be said to have trusted the software—and passengers—who equally can be said to have trusted the pilots and their ability to fly the aircraft correctly—lost their lives. What exactly was the entity to which they had a trust relationship, and how was that trust managed?

Another example may help us to consider the question of context. Consider a hypothetical automated defence system for a military base in a war zone. Let us say that, upon identifying intruders via its cameras, the system is programmed to play a recording over loudspeakers, warning them to move away; and, in the case that they do not leave within 30 seconds of a warning, to use physical means up to and including lethal force to stop them proceeding any further. The base commander trusts the system to perform its job and stop intruders: a trust relationship exists between the base commander and the automated defence system. Thus, in the language of our definition of trust:

“The base commander holds an assurance that the automated defence system will identify, warn, and then stop intruders who enter the area within its camera and weapon range”.

We have a fair amount of context already embedded within this example. We stated up front that the base is in a war zone, and we have mentioned the range of the cameras and weapons. A problem arises, however, when the context changes. What if, for instance:

The base is no longer in a war zone, and rules of engagement change

Children enter the coverage area who do not understand the warnings or are unable to leave the area

A surge of refugees enters the area—so many that those at the front are unable to move, despite hearing and understanding the warning

These may seem to be somewhat contrived examples, but they serve to show how brittle trust relationships can be when contexts change. If the entity being trusted with defence of the base were a soldier, we would hope the soldier could be much more flexible in reacting to these sorts of changes, or at least know that the context had changed and protocol dictated contacting a superior or other expert for new orders. The same is not true for computer systems. They operate in specific contexts; and unless they are architected, designed, and programmed to understand not only that other contexts exist but also how to recognise changes in contexts and how their behaviour should change when they find themselves in a new context, then the trust relationships that other entities have with them are at risk. This can be thought of as an example of programmatically encoded bias: only certain contexts were considered in the design of the system, which means inflexibility is inherent in the system when other contexts are introduced or come into play.
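One way to guard against such encoded bias is to make the designed-for contexts explicit in the system and to refuse to act autonomously outside them. The sketch below is entirely hypothetical: the context names and the escalate_to_human hand-off are inventions for illustration, not a real defence system's interface.

    DESIGNED_CONTEXTS = {"war zone"}  # the only context the designers considered

    def escalate_to_human(reason: str) -> str:
        """Hypothetical hand-off: a person revises the trust relationship."""
        return f"await new orders: {reason}"

    def respond_to_intruder(current_context: str, warned_seconds_ago: int) -> str:
        # Act autonomously only inside a context the system was designed for.
        if current_context not in DESIGNED_CONTEXTS:
            return escalate_to_human("context outside design assumptions")
        if warned_seconds_ago < 30:
            return "play warning recording"
        return "apply graduated physical response"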

In our example of the automated defence system, at least the base commander or empowered subordinate has the opportunity to realise that a change in context is possible and to reprogram or switch off the system: the entity who has the relationship to the system can revise the trust relationship. A much bigger problem arises when both entities are actually computing systems and the context in which they are operating changes or, just as likely, they are used in contexts for which they were not designed—or, put another way, in contexts their designers neglected to imagine. How to define such contexts, and the importance of identifying when contexts change, will feature prominently in later chapters.

Trust and Security

Another important topic in our discussion of trust is security. Our core interest, of course, is security in the realm of computing systems, sometimes referred to as cyber-security or IT security. But although security within the electronic and online worlds has its own peculiarities and specialities, it is generally derived from equivalent or similar concepts in “real life”: the non-electronic, human-managed world that still makes up most of our existence and our interactions, even when the interactions we have are “digitally mediated” via computer screens and mobile phones. When we think about humans and security, there is a set of things that we tend to identify as security-related, of which the most obvious and common are probably stopping humans going into places they are not supposed to visit, looking at things they are not supposed to see, changing things they are not supposed to alter, moving things that they are not supposed to shift, and stopping processes that they are not supposed to interrupt. These concepts are mirrored fairly closely in the world of computer systems:

Authorisation: Stopping entities from going into places

Confidentiality: Stopping entities from looking at things

Integrity: Stopping entities from moving and altering things

Availability: Stopping entities from interrupting processes

Exactly what constitutes a core set of security concepts is debatable, but this is a reasonably representative list. Related topics, such as identification and authentication, allow us to decide whether a particular person should be stopped or allowed to perform certain tasks; and categorisation allows us to decide which things particular humans are allowed to alter, and which places they may enter. All of these will be useful as we begin to pick apart in more detail how we define trust.

Let us look at one of these topics in a little more detail, then, to allow us to consider its relationship to trust. Specifically, we will examine it within the context of computing systems.

Confidentiality is a property that is often required for certain components of a computer system. One oft-used example is when I want to pay for some goods over the Web. When I visit a merchant, the data I send over the Internet should be encrypted; the sign that it is encrypted is typically the little green shield or padlock that I see on the browser bar by the address of the merchant. We will look in great detail at this example later on in the book, but the key point here is that the data—typically my order, my address, and my credit card information—is encrypted before it leaves my browser and decrypted only when it reaches the merchant. The merchant, of course, needs the information to complete the order, so I am happy for the encryption to last until it reaches their server.

What exactly is happening, though? Well, a number of steps are involved to get the data encrypted and then decrypted. This is not the place for a detailed description,8 but what happens at a basic level is that my browser and the merchant's server use a well-understood protocol—most likely HTTP + SSL/TLS—to establish enough mutual trust for an encrypted exchange of information to take place. This protocol uses algorithms, which in turn employ cryptography to do the actual work of encryption. What is important to our discussion, however, is that each cryptographic protocol used across the Internet, in data centres, and by governments, banks, hospitals, and the rest, though different, uses the same cryptographic “pieces” as its building blocks. These building blocks are referred to as cryptographic primitives and range from asymmetric and symmetric algorithms through one-way hash functions and beyond. They facilitate the construction of some of the higher-level concepts—in this case, confidentiality—which means that correct usage of these primitives allows for systems to be designed that make assurances about certain properties.

One lesson we can learn from the world of cryptography is that while using it should be easy, designing cryptographic algorithms is often very hard. While it may seem simple to create an algorithm or protocol that obfuscates data—think of a simple shift cipher that moves all characters in a given string “up” one letter in the alphabet—it is extremely difficult to do it well enough that it meets the requirements of real-world systems. An oft-quoted dictum of cryptographers is, “Any fool can create a cryptographic protocol that they can't defeat”; and part of learning to understand and use cryptography well is, in fact, the experience of designing such protocols and seeing how other people more expert than oneself go about taking them apart and compromising them.
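The shift cipher just mentioned makes the point nicely: it can be written in a few lines of Python and broken in even fewer, because the entire key space can be searched by hand.

    def shift(text: str, key: int) -> str:
        """'Encrypt' by moving each lowercase letter key places up the alphabet."""
        return "".join(
            chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.islower() else c
            for c in text
        )

    ciphertext = shift("attack at dawn", 1)  # 'buubdl bu ebxo'

    # The attack: there are only 26 possible keys, so an adversary simply
    # tries them all and reads off the shift that produces sensible text.
    for key in range(26):
        print(key, shift(ciphertext, -key))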

Let us return to the topics we noted earlier: authorisation, integrity, etc. None of them defines trust, but we will think of them as acting as building blocks when we start considering trust relationships in more detail. Like the primitives used in encryption, these concepts can be combined in different ways to allow us to talk about trust of various kinds and build systems to model the various trust relationships we need to manage. Also like cryptographic primitives, it is very easy to use these primitives in ways that do not achieve what we wish to achieve and can cause confusion and error for those using them.

Why is all of this important? Because trust is important to security. We typically use security to try to enforce trust relationships because humans are not, sadly, fundamentally trustworthy. This book argues that computing systems are not fundamentally trustworthy either, but for somewhat different reasons. It would be easy to think that computing systems are neutral with regard to trust, that they just sit there and do what they do; but as we saw when we looked briefly at agency, computers act for somebody or something, even when the actions they take are unintended9 or not as intended. Equally, they may be maliciously or incompetently directed (programmed or operated). But worst, and most common of all, they are often—usually—unconsciously and implicitly placed into trust relationships with other systems, and ultimately humans and organisations, often outside the contexts for which they were designed. The main goal of this book is to encourage people designing, creating, and operating computer systems to be conscious and explicit in their actions around trust.

Trust as a Way for Humans to Manage Risk

Risk is a key concept to be able to consider when we are talking about security. There is a common definition of risk within the computing community, which is also shared within the business community:

risk = probability of occurrence × loss incurred

In other words, the risk associated with an event is the likelihood that it will occur multiplied by the impact to be considered if it were to occur. Probability is expressed as a number between 0 and 1 (0 being no possibility of occurrence, 1 being certainty), and the loss can be explicitly stated either as an amount of money or as another type of impact. The point of the formula is to allow risks to be compared; and as long as the different calculations use the same measure of loss, it is generally unimportant what measure is employed. To give an example, let us say that I am interested in the risk of my new desktop computer failing in the first three years of its life. I do some research and discover that the likelihood of the keyboard failing is 4%, or 0.04, whereas the likelihood of the monitor failing is only 1%, or 0.01. If I were to consider this information on its own, it would seem that I should worry more about the keyboard than the monitor, until I take into account the cost of replacement: the keyboard would cost me $15 to replace, whereas the monitor would cost me $400 to replace. We have the following risk calculations then:

risk (keyboard) = 0.04 × $15 = $0.60
risk (monitor) = 0.01 × $400 = $4.00

It turns out that if I care about risk, I should be more concerned about the monitor than the keyboard. Once we have calculated the risk, we can then consider mitigations: what to do to manage the risk. In the case of my desktop computer, I might decide to take out an extended manufacturer's warranty to cover the monitor but just choose to buy a new keyboard if that breaks.
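The same comparison is trivial to express in code; a sketch using the figures from the example above:

    def risk(probability: float, loss: float) -> float:
        """Risk of an event: likelihood of occurrence multiplied by impact."""
        return probability * loss

    keyboard = risk(0.04, 15.00)  # 0.60: likely to fail, but cheap to replace
    monitor = risk(0.01, 400.00)  # 4.00: unlikely to fail, but expensive

    # The comparison is meaningful because both use the same measure of loss.
    print(f"keyboard: ${keyboard:.2f}, monitor: ${monitor:.2f}")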

Risk is all around us and has been since before humans became truly human, living in groups and inhabiting a social structure. We can think of risk as arising in four categories:

Assessment easy / mitigation easy: If there are predators nearby, they might kill us … so we should run away or hide.

Assessment easy / mitigation difficult: If our leader gets an infection, she may die … but we don't know how to avoid or effectively treat infection.

Assessment difficult / mitigation easy: If the river floods, our possessions may be washed away … but if we camp farther away from the river, we are safer.

Assessment difficult / mitigation difficult: If I eat this fruit, it may poison me … but I have no other foodstuffs nearby and may go hungry or even starve if I do not eat it.

For the easy-to-assess categories, both the probability and the loss are simple to calculate. For the difficult-to-assess categories, either the probability or the loss is hard to calculate. What is not clear from the simple formula we used earlier to calculate risk is that you are usually weighing a risk against something that is generally a benefit. In the case of the risk associated with the river, there are advantages to camping close to it—easy access to water and ability to fish, for example—and in the case of the fruit, the benefit of eating it will be that it may nourish me, and I do not need to trek further afield to find something else to eat, thereby using up valuable energy.

Many of the risks associated with interacting with other humans fit within the last category: difficult to assess and difficult to mitigate. In terms of assessment, humans often act in their own interests rather than those of others, or even of a larger group; and the impact of an individual not cooperating may be small—hurt feelings, for example—or large—inability to catch game—or even retribution towards a member of the group. In terms of mitigation, it is often very difficult to guess what actions to take to encourage an individual, particularly one you do not already know, to ensure that they interact with you in a positive manner. You can, of course, avoid any interactions at all, but that means you lose access to any benefits from such interactions, and those benefits can be very significant: new knowledge, teamwork for hunting, more strength to move objects, safety in numbers, even having access to a larger gene pool, to name just a few.

Humans developed trust to help them mitigate the risks of interacting with each other. Think of how you have grown to know and trust new acquaintances: there is typically a gradual process as you learn more about them and trust them to act in particular ways. As David Clark points out when discussing how we develop trust relationships, this “is not a technical problem, but a social one”.10 We see here both time and various other contexts in which trust relationships can operate. Once you trust an individual to act as a babysitter, for instance, you are managing the risks associated with leaving your children with that person. An alternative might be that you trust somebody to make you a cup of tea in the way that you like it: you are mitigating the chance that they will add sugar to it or, in a more extreme case, poison you and steal all of the loyalty points you have accrued with your local cafe.

Trust is not, of course, the only mitigation technique possible when considering and managing risk. We have already seen that you can avoid interactions altogether,11 but two alternatives that are different sides of the same coin are punishment and reward. I can punish an individual if they do not interact with me as I wish, or I can reward them if they do. Many trust relationships between individuals are arguably built up over time with a combination of these mitigations, even if the punishment is as little as a frown and the reward as little as a smile. What is even more interesting is that the building of the trust relationship is two-way in this case, as the individual being rewarded or punished needs to trust the other individual to be consistent with rewards or punishments based on the behaviour and interactions presented.
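Seen computationally, this gradual, two-way building of trust resembles a score updated after every interaction. The update rule and the numbers below are a toy model of my own, not something the book prescribes, but they capture the common intuition that trust is built slowly and lost quickly.

    def update_trust(score: float, cooperated: bool) -> float:
        """Toy model: reward cooperation a little, punish defection a lot."""
        if cooperated:
            return min(1.0, score + 0.05)  # the reward: as little as a smile
        return max(0.0, score - 0.25)      # the punishment: as little as a frown

    score = 0.5  # a new acquaintance
    for outcome in [True, True, True, False, True]:
        score = update_trust(score, outcome)
    print(f"trust after five interactions: {score:.2f}")  # 0.45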

Risk, Trust, and Computing

Risk is important in the world of IT and computing. Organisations need to know whether their systems will work as expected or if they will fail for any one of many reasons: for example, hardware failure, loss of power, malicious compromise, poor software. Given that trust is a way of mitigating risk, are there opportunities to use trust—to take what humans have learned from creating and maintaining trust relationships—and transfer it to this world? We could say that humans need to “trust” their systems. If we think back to the cases presented earlier in the chapter, this fits our third example, where we discussed the bank trusting its IT systems.

Defining Trust in Systems

The first problem with trusting systems is that the world of trust is not simple when we start talking about computers. We might expect that computers and computer systems, being less complex than humans, would be easier to consider with respect to trust, but we cannot simply apply the concept of trust the same way to interactions with computers as we do to interactions with humans. The second problem is that humans are good at inventing and using metaphors and applying a concept to different contexts to make some sense of them, even when the concept does not map perfectly to the new contexts. Trust is one such concept: we think we know what we mean when we talk about trust, but when we apply it to interactions with computer systems, it turns out that the concepts we think we understand do not map perfectly.

There is a growing corpus of research and writing around how humans build trust relationships to each other and to organisations, and this is beginning to be applied to how humans and organisations trust computer systems. What is missing is often a realisation that interactions between computer systems themselves—case four in our earlier examples—are frequently modelled in terms of trust relationships. But as these models lack the rigour and theoretical underpinnings to allow strong statements to be made about what is really going on, we are left unable to discuss risk and risk mitigation in any detail.

Why does this matter, though? The first answer is that when you are running a business, you need to know that all the pieces are correct and doing the correct thing in relation to each other. This set of behaviours and relationships makes up a system, and the pieces its components, a subject to which we will return in Chapter 5