Rewired - E-Book
Description

Examines the governance challenges of cybersecurity through twelve real-world case studies.

Through twelve detailed case studies, this superb collection provides an overview of the ways in which government officials and corporate leaders across the globe are responding to the challenges of cybersecurity. Drawing perspectives from industry, government, and academia, the book incisively analyzes the actual issues and provides a guide to the continually evolving cybersecurity ecosystem. It charts the role that corporations, policymakers, and technologists are playing in defining the contours of our digital world. Rewired: Cybersecurity Governance places great emphasis on the interconnection of law, policy, and technology in cyberspace. It examines some of the competing organizational efforts and institutions that are attempting to secure cyberspace and considers the broader implications of the in-place and unfolding efforts, tracing how different notions of cybersecurity are deployed and built into stable routines and practices. Ultimately, the book explores the core tensions that sit at the center of cybersecurity efforts, highlighting the ways in which debates about cybersecurity are often inevitably about much more.

* Introduces the legal and policy dimensions of cybersecurity
* Collects contributions from an international group of scholars and practitioners
* Provides a detailed "map" of the emerging cybersecurity ecosystem, covering the role that corporations, policymakers, and technologists play
* Uses accessible case studies to provide a non-technical description of key terms and technologies

Rewired: Cybersecurity Governance is an excellent guide for all policymakers, corporate leaders, academics, students, and IT professionals responding to and engaging with ongoing cybersecurity challenges.


Page count: 650

Publication year: 2019




Table of Contents

Cover

Notes on Contributors

Acknowledgments

Introduction

I.1 Making Sense of Cybersecurity Governance

I.2 Connective Tissue: Common Themes

1 Cybersecurity Information‐Sharing Governance Structures

1.1 Introduction

1.2 Taxonomy of Information‐sharing Governance Structures and Policies

1.3 Discussion and Conclusions

Acknowledgments

2 Cybersecurity Governance in the GCC

2.1 Introduction

2.2 Why the GCC?

2.3 Key Cybersecurity Incidents

2.4 Government Organizations

2.5 Strategies, Laws, and Standards

2.6 The Cybersecurity Industry

2.7 Conclusion

Acknowledgments

3 The United Kingdom's Emerging Internet of Things (IoT) Policy Landscape

3.1 Introduction

3.2 The IoT's Risks and Uncertainties

3.3 Adaptive Policymaking in the Context of IoT

3.4 The UK Policy Landscape

3.5 The IoT and its Governance Challenges

3.6 Conclusion

4 Birds of a Feather

4.1 Introduction: The Challenge of Ecosystem Risk

4.2 Progress So Far

4.3 Aviation's Tools for Cyber Risk Governance

4.4 The Path Forward

4.5 Conclusion

5 An Incident‐Based Conceptualization of Cybersecurity Governance*

5.1 Introduction

5.2 Conceptualizing Cybersecurity Governance

5.3 Case Studies

5.4 Utility and Limitations

5.5 Conclusion

6 Cyber Governance and the Financial Services Sector

6.1 Introduction

6.2 Governance, Security, and Critical Infrastructure Protection

6.3 Financial Services Information Sharing and Analysis Center

6.4 Financial Services Sector Coordinating Council

6.5 Financial Systemic Analysis and Resilience Center

6.6 Lessons for Cybersecurity Governance

6.7 Conclusion

Acknowledgments

7 The Regulation of Botnets

7.1 Introduction

7.2 Cybersecurity

7.3 Botnets

7.4 Governance Theory

7.5 Discussion: Governance Theory Applied to Botnet Mitigation

7.6 Conclusion

Acknowledgment

8 Governing Risk

8.1 Introduction

8.2 Where Did Cyber Insurance Come From?

8.3 Security Standards in the Governance Process

8.4 The Key Role of Risk

8.5 Enforcing Standards: Insurance Becomes Governance

8.6 Conclusion and Implications

9 Containing Conficker

9.1 Introduction

9.2 The Conficker Infection

9.3 A Public Health Alternative

9.4 A Public Health Approach to Conficker

9.5 Conclusion

10 Bug Bounty Programs

10.1 Introduction: Conspicuously Absent

10.2 Scope and Aims

10.3 A Market for Flaws: Bug Bounty Programs

10.4 Conclusion

11 Rethinking Data, Geography, and Jurisdiction

11.1 Introduction

11.2 The Challenge of Extraterritorial Data

11.3 The Threat of Data Localization

11.4 A New Approach to Data Flow Controls

11.5 Recommendations

11.6 Additional Challenges

11.7 Conclusion

Acknowledgments

12 Private Ordering Shaping Cybersecurity Policy

12.1 Introduction

12.2 Are Bug Bounties Operating as a “Private” Safe Harbor? Key Findings of the Legal Terms Survey

12.3 Policy Recommendations: Toward a Private Safe Harbor

12.4 Conclusion

Acknowledgments

Bibliography

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1 Taxonomy of information‐sharing models.

Chapter 5

Table 5.1 Examples of cybersecurity governance efforts.

Table 5.2 Summary of modes of governance.

Chapter 6

Table 6.1 Comparison of FS‐ISAC, FSSCC, and FSARC.

Table 6.2 FBBIC membership.

Chapter 10

Table 10.1 Google's reward programs: an overview.

List of Illustrations

Chapter 5

Figure 5.1 Attack timeline and report findings.

Chapter 8

Figure 8.1 Gross premiums in cyber insurance industry. All values (except 2012)...


Rewired

Cybersecurity Governance

Edited by

Ryan Ellis

Vivek Mohan

This edition first published 2019
© 2019 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Ryan Ellis and Vivek Mohan to be identified as the editors of the editorial material in this work has been asserted in accordance with law.

Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office
111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication data is applied for

Hardback ISBN: 9781118888216

Cover design: Wiley
Cover image: “U.S. Army Photo” of Two Women Operating ENIAC from the archives of the ARL Technical Library, Historic Computer Images is in the Public Domain.

Notes on Contributors

Samantha A. Adams was a political scientist with additional background in gender studies and STS. She was Associate Professor of eHealth Governance and Regulation at the Tilburg Institute for Law, Technology, and Society (TILT), Tilburg University. She worked on medical informatics, medical sociology, qualitative research methods, and external cyberattacks on health systems.

Jason Blackstock is Associate Professor of Science and Global Affairs at University College London (UCL) and cofounder of the UCL Department of Science, Technology, Engineering and Public Policy (STEaPP), which he led as Head of Department from 2013 to 2018. He has a unique background spanning quantum physics research, Silicon Valley technology development, international public policy, and higher education innovation and leadership.

Irina Brass is Lecturer in Regulation, Innovation and Public Policy and Deputy Lead of the MPA Programme in Digital Technologies and Public Policy at University College London (UCL) Department of Science, Technology, Engineering and Public Policy (STEaPP). Her research focuses on the regulation of disruptive technologies, especially digital technologies. She is working closely with policymakers and standards development communities.

Madeline Carr is Associate Professor of International Relations and Cyber Security at University College London (UCL) Department of Science, Technology, Engineering and Public Policy (STEaPP) and Director of its Digital Policy Lab. She has a strong interest in the international policy challenges posed by cybersecurity and is coinvestigator for the Standards, Policy and Governance Stream of the PETRAS IoT Research Hub.

Jim Dempsey is Executive Director of the Berkeley Center for Law & Technology. From 1997 to 2014, he was at the Center for Democracy & Technology, including as Executive Director. He served as a Senate‐confirmed Member of the Privacy and Civil Liberties Oversight Board from 2012 to January 2017. He is coauthor (with David Cole) of Terrorism & the Constitution (New Press 2006) and coeditor (with Fred Cate) of Bulk Collection: Systematic Government Access to Private‐Sector Data (Oxford 2017).

Karine e Silva, LL.M., is a PhD candidate at TILT, Tilburg University, working on the NWO‐funded BotLeg project. She has researched botnets since the launch of the EU Advanced Cyber Defense Centre (ACDC) in early 2013. Her research involves legal issues surrounding botnet mitigation and the role of the public and private sectors.

Jacqueline Eggenschwiler is a doctoral researcher at the University of Oxford. Her research interests include cybersecurity governance and norm‐construction. She holds degrees in International Affairs and Governance, International Management, and Human Rights from the University of St Gallen and the London School of Economics and Political Science.

Amit Elazari Bar On is a Doctoral Law Candidate at UC Berkeley School of Law and a Berkeley Center for Long‐Term Cybersecurity Grantee, as well as a Lecturer at Berkeley's School of Information Master in Cybersecurity Program. She graduated summa cum laude with three prior degrees in law and business (B.A., LL.B., LL.M.). Her research in the field of technology law and policy has been published and featured in leading journals and conferences, as well as the popular press.

Ryan Ellis is an Assistant Professor of Communication Studies at Northeastern University. His research and teaching focus on topics related to communication law and policy, infrastructure politics, and cybersecurity. He is the author of the forthcoming Letters, Power Lines, and Other Dangerous Things: The Politics of Infrastructure Security (MIT Press).

Miles Elsden spent his early academic career in Europe and the next 10 years providing advice to the UK government, most recently as Chief Scientist in Transport. He now works as a consultant at the boundary between policy, technology, and strategy.

Trey Herr is a visiting fellow with the Hoover Institution at Stanford University working on international cybersecurity and risk. His research focuses on the role of nonstate actors in cybersecurity governance, the proliferation of malware, and the evolving character of risk in cyber insurance. He is also a senior security strategist with Microsoft where he handles cloud‐computing security and supply‐chain risk for the Global Security Strategy and Diplomacy team.

Jonah Force Hill is Senior Cyber Policy Advisor at the U.S. Secret Service, where he advises on a range of cybercrime policy and strategy matters. He is also a (non‐resident) Cybersecurity Fellow at New America and a Term Member at the Council on Foreign Relations. He came to the Secret Service after several years at the U.S. Commerce Department, where he focused on global digital economy policy. He holds an MTS and MPP from Harvard University and a BA from UCLA.

Bert‐Jaap Koops is Professor of Regulation & Technology at the Tilburg Institute for Law, Technology, and Society (TILT), Tilburg University. His main research interests are cybercrime, cyber‐investigation, privacy, and data protection. He is also interested in DNA forensics, identity, digital constitutional rights, techno‐regulation, and regulation of human enhancement, genetics, robotics, and neuroscience.

Andreas Kuehn is a Senior Program Associate within the EastWest Institute's Global Cooperation in Cyberspace program. As a Cybersecurity Fellow, Dr. Kuehn conducted research on cybersecurity policy, vulnerability markets and disclosure arrangements at Stanford University's Center for International Security and Cooperation and was an adjunct researcher at the RAND Corporation, where he worked on cyber risk and the cyber insurance industry.

Aaron Martin is a Postdoctoral Research Fellow at the Tilburg Law School in the Netherlands. He was previously a Vice President of Cyber Policy at JPMorgan Chase in New York (2015–2018). He is also an Oxford Martin Associate at the University of Oxford's Global Cyber Security Capacity Centre.

Vivek Mohan is an attorney in private practice based in Northern California. Vivek entered private practice from the Privacy, Data Security, and Information Law group at Sidley Austin LLP, where he counseled clients in the technology, telecommunications, healthcare, and financial services sectors. Vivek is the coeditor and author of the PLI treatise “Cybersecurity: A Practical Guide to the Law of Cyber Risk” (3d. 2018). Vivek has worked as an attorney at Microsoft, at the Internet Bureau of the New York State Attorney General (under a special appointment), and at General Electric's corporate headquarters (on secondment). For five years, Vivek was a resident fellow and later a nonresident associate with the Cybersecurity Project at the Harvard Kennedy School. Vivek holds a JD from Columbia Law School and a BA from the University of California, Berkeley.

Matthew Noyes is the cyber policy & strategy director for the U.S. Secret Service and a Major in the U.S. Army assigned to the Office of Secretary of Defense for Cyber Policy. His work focuses on law enforcement efforts to counter transnational cyber crime and related policy topics. Matt holds a Master in Public Policy from the Harvard Kennedy School and a BS in Computer Science and Applied Computational Mathematics from the University of Washington.

Emilian Papadopoulos is president of Good Harbor, a boutique consultancy advising Boards, CEOs, and government leaders on cyber security. He is an adjunct lecturer at Georgetown University and previously worked for the Government of Canada in Ottawa and Washington. He is a graduate of the University of Toronto and of Harvard University’s Kennedy School, where he also serves as the elected chair of the global Alumni Board.

Valeria San Juan is an Analyst at Fundbox in San Francisco, CA. She was previously a Cyber Policy Analyst at JPMorgan Chase in New York (2017).

Elaine Sedenberg is a PhD Candidate at the UC Berkeley School of Information and Affiliate at the Harvard Berkman Klein Center. She previously served as the codirector of the Center for Technology, Society & Policy (CTSP). Her research examines information‐sharing arrangements for public good uses including security, public health, and research activities.

James Shires is a Research Fellow with the Cybersecurity Project at the Belfer Center for Science and International Affairs, Harvard Kennedy School. His research focuses on cybersecurity issues in the Middle East.

Evan Sills is a Director with Good Harbor, where he advises corporate executives on governance, risk management, cybersecurity incident response, and legislative and regulatory activities. He was a Global Governance Futures 2027 Fellow and is a graduate of The George Washington University Law School and Elliott School of International Affairs.

Leonie Maria Tanczer is Lecturer in International Security and Emerging Technologies at UCL’s Department of Science, Technology, Engineering and Public Policy (STEaPP). She is a member of the Advisory Council of the Open Rights Group, affiliated with UCL’s Academic Centre of Excellence in Cyber Security Research, and a former Fellow at the Alexander von Humboldt Institute for Internet and Society in Berlin. She is interested in the intersection points of technology, security, and gender.

Michael Thornton is a PhD candidate in History and Philosophy of Science at the University of Cambridge. He uses the philosophy of public health to reframe how we think about digital networks and information. Before Cambridge, Michael was a Director of Product Management at Truaxis, a MasterCard company.

Bart van der Sloot specializes in questions revolving around privacy and Big Data. He works as a senior researcher at TILT, Tilburg University, is General Editor of the European Data Protection Law Review, coordinator of the Amsterdam Platform for Privacy Research, and scientific director of the Privacy and Identity Lab.

Acknowledgments

The idea for this book started when we sat at adjoining desks inside the same office at the Harvard Kennedy School; the book was finished many years later while we sat over three thousand miles apart. Along the way a number of individuals and institutions helped make this book possible. First and foremost, Venkatesh (Venky) Narayanamurti served as our academic mentor during our time at the Harvard Kennedy School's Belfer Center for Science and International Affairs. Venky provided unfailing support and encouragement to us when we were new fellows at the Belfer Center and has remained a generous supporter in the succeeding years. Simply put, without Venky this book would not exist. We owe him a significant debt. A number of other faculty members and colleagues at Harvard were instrumental in shaping our thinking about the topics covered in the pages that follow. Joe Nye warmly welcomed us into his cyber seminar series and offered us both the opportunity to hear from a number of policy and academic heavyweights and, perhaps most importantly, catch his sharp and probing questions. Jim Waldo provided an invaluable perspective – what a technologist thinks about policy – and offered enough wisdom to fill at least another book. Michael Sulmeyer encouraged this project and kindly kept us engaged with the Center as its interest in cyber policy continues to grow and thrive. Colleagues associated with the joint MIT and Harvard project, “Explorations in Cyber International Relations,” including Nazli Choucri and Michael Siegel, provided important insight. Early‐stage preparatory work on this volume was funded, in part, by the Office of Naval Research under award number N00014‐09‐1‐0597. Any opinions, findings, and conclusions, or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the Office of Naval Research.

Emily Silk Marie provided expert editorial assistance in preparing the draft manuscript. At Wiley, Beryl Mesiadhas, Michael Leventhal, and Bob Esposito offered patience and care in assembling the volume.

Ryan would also like to thank his current colleagues at Northeastern University's Department of Communication Studies, the Global Resilience Institute, and the School for Public Policy and Urban Affairs. Northeastern has provided a creative and supportive intellectual environment. Previous colleagues at the Naval Postgraduate School and Stanford's Center for International Security and Cooperation also helped lay the seeds for this project. Additionally, staff support at the Belfer Center and Northeastern's Communication Department was vital. Karin Vander Schaaf, Patricia McLaughlin, Sarah Donahue, and Angela Chin assisted with issues both big and small during the preparation of the book. Their efforts were instrumental in making this book a reality. Ryan thanks his family for their love and encouragement.

Vivek would like to thank Jack Goldsmith of Harvard Law School, whose encouragement and mentorship over the years provided needed focus and spurred the curiosity and passion to explore both the practice and the learning of the law. He also thanks Alan Raul and Ed McNicholas of Sidley Austin LLP, and, through their introduction, Peter Lefkowitz and Jane Horvath, for teaching Vivek the law and how to practice it; and all of the above for their continued friendship. Of course, Vivek would like to thank his infinitely patient wife Ariana, who has provided loving support and has acted as a sounding board for many of the editorial comments and perspectives contained herein.

Finally, we would like to dedicate this book to one of the contributing authors – Samantha Adams. Samantha tragically passed away during the production of this book. The chapter included here is one of her last pieces of finished work.

Introduction

I.1 Making Sense of Cybersecurity Governance

On 23 September 1982, Representative Don Edwards, a longtime member of the United States House of Representatives, presided over a congressional hearing to consider a new type of crime – “computer‐related crime.” Edwards set the scene:

As the use of computers expands in our society, the opportunity to use computers to engage in or assist in criminal activities also expands. In response to this perceived problem, a number of States has enacted legislation specifically aimed at computer fraud. The Federal Bureau of Investigation offers its agents specialized training in computer fraud. Private industry is attempting to enhance the security of its computer facilities.1

Edwards' statement would, with slight tweaking here and there, more or less be repeated like boilerplate for the better part of the next three‐and‐a‐half decades. Repeatedly, various policymakers sitting in subcommittee meetings, policy forums, and other public venues would note that computers were increasingly ubiquitous and that their diffusion was, among other things, leading to new types of harm that call for new types of solutions. At times, the claimed harms were speculative or theoretical; equally often, the calls for solutions followed publicized incidents that increasingly resonated in the public consciousness.

A little over 15 years after Edwards introduced the hearing on computer‐related crime, US Senator Fred Thompson introduced a similar hearing to examine the public risks presented by weak computer security. Thompson could have been reading from Edwards' prepared remarks. He noted that, “[c]omputers are changing our lives faster than any other invention in our history. Our society is becoming increasingly dependent on information technologies which are changing at an amazing rate.”2 Thompson would go on to note that these trends create new vulnerabilities that we must now confront.

In time, the lexicon slid into the expansive and ill‐defined catch‐all of “cybersecurity,” a term initially loathed by technical experts but embraced with such vigor within policy circles that it appears to be here to stay. “Cybersecurity” issues are repeatedly framed as an eternally new problem – something that is just peeking over the horizon and must be confronted now. This framing is attractive: it captures the sense that new technologies create new problems and dilemmas; and it freights the problem with a sense of urgency – we must act now before it is too late. This frenetic energy – which has escalated to a fever pitch over the last decade and shows no sign of abating – imbues discussants with cause and reason to reject incrementalism. At times, this provides the necessary fora to be receptive to novel or transformative ideas.

But this presentation obscures as much as it illuminates. Presenting cybersecurity as a fundamentally new and unaddressed problem elides the long history of security interventions. It shoves to the side the lattice of institutions – laws, organizational practices, and formal and informal rules – that have been built over time to address the myriad challenges associated with the rise of networked computers. Some of these practices have been useful; others have been hopeless dead‐ends. But ignoring them and assuming that we are confronting a new problem that requires a new set of tools and approaches denies a stubborn reality: we have been confronting these challenges in various forms for decades.

Cyberspace is not an ungoverned space outside of laws, institutions, and power. As Joe Nye, Laura DeNardis, Jack Goldsmith, Tim Wu, and others have usefully pointed out, there is a rich thicket of organizations and institutions that provide structure, shape, and limits in cyberspace.3 There are vital and enduring analogs between the cyber and physical domains.4 The world of digital devices and networks is dotted with points of control.5 This insight is equally useful when it comes to examining the narrower question of cybersecurity governance. The security space is not a free‐for‐all. Far from it. It is a space defined by competing organizations and institutions that seek to impose some form of control in the name of security. What exactly is meant by security is always an open and contested question. In some settings it might mean the protection of devices, data, or networks; in others, security might be translated into control of forms of speech or expression that are seen as politically unpalatable; in still other arenas, security might mean protection from non‐state actors, but say little about governmental efforts to subvert technical protections over personal data. Questions about cybersecurity – just like questions about security in a broader sense – are always open to multiple interpretations. Two questions always hang in the air either explicitly or implicitly: Security of what? And security from whom?

This collection looks to make sense of the governance of cybersecurity. It explores through various case studies some of the competing organizational efforts and institutions that are attempting to secure cyberspace. The book looks not to the future – to hypothetical new possibilities to confront a new set of previously unknown problems – but to the recent past and the present. It examines some of the in‐place and unfolding institutional and organizational efforts to confront the challenges of cybersecurity. Rather than examining these efforts through a purely narrow normative lens – does it work? – it considers the broader implications of these efforts. It traces how different notions of cybersecurity are deployed and built into stable routines and practices, what can be termed the “bureaucratization of risk.” In doing so, the chapters collected here share a common set of interests: how are fears over cyber‐insecurity being distilled into organizational efforts and institutional frameworks? Importantly, what are the larger implications – for workers, firms, the public, and competing sets of values – of these organizational practices and frameworks? Security is, and has long been, a key axis upon which decisions about communications technologies and networks sit. Looking closely at these efforts as forms of governance – efforts to control and manage something seen as unruly – helps draw into clear relief what is at stake: Cybersecurity efforts are (and have been for quite some time) remaking the digital technologies that are the foundations of contemporary life. Examining more closely the various efforts documented in the chapters that follow offers a partial portrait of some of the ways that these efforts are unfolding and what we are gaining and losing in the process.

In the pages that follow, readers are encouraged to consider the deep engagement of various communities working to define and respond to cybersecurity issues. At the same time, readers may consider the impact and import of the siloed verticals that define many of the case studies. As the number of cybersecurity professionals continues to grow at exponential rates, the risk of failing to learn from not only our recent past, but what is happening right beside us, becomes ever more evident. That is not to say that these silos must in each case be broken down – while enterprising readers may be able to stitch together their own “Grand Unified Field Theorem” for cybersecurity policy, the editors are hopeful (and view it as perhaps far more likely) that these deep dives present useful lenses into different policy, legal, and technical approaches to various facets of the “cybersecurity problem.” The case studies intentionally take different approaches in their commentary, but three shared thematic threads run through the book.

I.2 Connective Tissue: Common Themes

I.2.1 Cybersecurity is Contextual

Cybersecurity does not exist in a vacuum. It is always contextual. Cybersecurity efforts are rooted in the specifics of time and place. These efforts are molded by the preexisting outlines of political organizations and institutions, industrial ecosystems, and larger regional and international political rivalries and alliances. To understand how certain issues are framed as cybersecurity challenges and how certain approaches to these challenges are developed and deployed, it is important to ground these efforts within these larger contexts. Elaine Sedenberg and Jim Dempsey's “Cybersecurity Information Sharing Governance Structures: An Ecosystem of Diversity, Trust, and Trade‐offs” (Chapter 1) offers a sober account of what happens when context is ignored. In their analysis of cybersecurity information sharing efforts and the Cybersecurity Information Sharing Act of 2015 (CISA), Sedenberg and Dempsey argue that policy lacking historical memory is doomed to fail. CISA was an ambitious attempt to kick‐start new information sharing efforts. But it ignored the institutional labyrinth and information‐sharing mechanisms that already existed. Information sharing is much more than a technical problem. As the chapter notes, the failure to account for this broader context limits the efficacy of CISA.

In “Cybersecurity Governance in the GCC” (Chapter 2), James Shires offers a detailed account of cybersecurity in the six states of the Gulf Cooperation Council (GCC). Shires illustrates how national and regional politics shape cybersecurity governance. In drawing an overview of regional incidents, key government organizations and cybersecurity firms, and relevant strategies, laws, and standards, the chapter makes the case that cybersecurity is regionally specific. The contours of cybersecurity are influenced by larger circulating cultural notions and pressures, but national and regional politics play a decisive role in shaping how cybersecurity is both understood and confronted. Shires's work serves as a call for regional and national specialization. This call is ably answered by Leonie Maria Tanczer, Irina Brass, Miles Elsden, Madeline Carr, and Jason Blackstock in “The United Kingdom's Emerging Internet of Things (IoT) Policy Landscape” (Chapter 3). Tanczer and coauthors explore how the United Kingdom is confronting the security challenges of IoT, a sea change in the deployment of sensors and connected technologies that emerged quickly and with little regulatory guidance. They explore how UK IoT efforts are linked to and defined by a dense institutional landscape. They offer a tantalizing note: as the United Kingdom prepares to exit the European Union, it is unclear how this political realignment will upset existing cybersecurity efforts.

Understanding the relationship between context and cybersecurity is not only a matter of mapping existing political institutions. Emilian Papadopoulos and Evan Sills' “Birds of a Feather: Strategies for Collective Cybersecurity in the Aviation Ecosystem” (Chapter 4) examines the interplay between industrial ecology and cybersecurity. Focusing on cybersecurity and aviation, they observe a complex industry that includes thousands of organizations, from global giants, such as Lufthansa and United Airlines, to smaller or more obscure players, such as regional airports, the manufacturers of In‐Flight Entertainment systems, and luggage management organizations. This knot of organizations creates shared cybersecurity risks, collective risks that cannot be adequately addressed by a single firm or organization. Papadopoulos and Sills discover that the unique nature of the aviation industry is leading to new collective approaches to risk management. Their insights offer a useful reminder: cybersecurity cannot be stripped from a larger political, economic, and organizational context. For both practitioners looking to develop workable policies and scholars examining cybersecurity critically, focusing on context is vital.

I.2.2 Cybersecurity is Dynamic

Cybersecurity joins together government and industry in a set of contingent relationships. The interplay between the public and private sector is not easy to pin down. At some moments, they are willing and engaged partners working hand in glove; at others, they are adversaries working at cross‐purposes. The chapters that follow chart all manner of public and private configurations. Jacqueline Eggenschwiler's “An Incidents‐Based Conceptualization of Cybersecurity Governance” (Chapter 5) describes various formal approaches to cybersecurity governance. In looking at three different cases – a 2016 cyberespionage case involving RUAG, a key Swiss defense contractor; the collaborative containment activities of the Conficker Working Group (CWG); and Symantec's cybersecurity practices – Eggenschwiler fleshes out the contours of hierarchical, multi‐stakeholder, and market‐based modes of cybersecurity governance. The chapter concludes that there is no one‐size‐fits‐all approach to cybersecurity governance.

Eggenschwiler's observation echoes across a number of chapters. Valeria San Juan and Aaron Martin's “Cyber Governance and the Financial Services Sector: The Role of Public–Private Partnerships” (Chapter 6) looks at the cooperation challenges within the financial services sector. Calling for public–private partnerships to tackle the thorny problems of cybersecurity is a familiar and evergreen recommendation: Who could possibly argue against cooperation? But such efforts can also be something of an empty promise: a recommendation that shirks defining lines of responsibility and accountability and, in their place, leaves an ill‐defined commitment to work together without thinking through the difficult mechanics of putting it into practice. Looking at the financial services sector, San Juan and Martin provide an up‐close examination of three different public–private partnerships. They find both cause for optimism and caution in the multi‐stakeholder model of public–private cooperation. In their telling, neither industry nor government can confront the challenges of cybersecurity alone. They argue that public–private efforts stumble when attempting to address systemic risk.

The challenges of confronting long‐term and systemic risk reappear in Samantha A. Adams, Karine e Silva, Bert‐Jaap Koops, and Bart van der Sloot's “The Regulation of Botnets: How Does Cybersecurity Governance Theory Work When Everyone is a Stakeholder?” (Chapter 7). Adams and coauthors examine the coordination challenges that emerge when a cross‐national mix of public and private players join together to combat botnets. To work in practice, the type of polycentric governance efforts that Adams and coauthors document call for either a supranational body or a key nation to act as a coordinating mechanism. Transnational criminal justice efforts to date, however, have largely been reactive, focusing on immediate challenges while leaving long‐term issues unaddressed (Tanczer and coauthors also see a similar challenge in the United Kingdom's IoT strategy in Chapter 3).

Trey Herr's investigation of the cybersecurity insurance market, “Governing Risk: The Emergence of Cyber Insurance” (Chapter 8), uncovers another configuration of public and private. Herr finds a useful interplay between the insurance industry's development of cybersecurity policies and the enforcement of standards. While the federal government has largely, though not exclusively, taken a voluntary approach to developing and implementing cybersecurity standards, insurers have the power to transform these standards into binding and enforceable rules. This model of governance skirts the often politically unpalatable prospect of direct regulation, replacing it with a market‐led model that leaves significant space for input from both public and private standards bodies.

Michael Thornton's “Containing Conficker: A Public Health Approach” (Chapter 9) examines the limits of purely private approaches to cybersecurity governance. Thornton examines how “the cabal,” an ad hoc group of experts that would later be renamed the CWG, came together to respond to the Conficker worm. Thornton finds an argument in favor of hierarchy and government. Members of the CWG referenced the informality of the group as a key strength, but, as the chapter notes, this model can significantly diverge from or even thwart larger public goals. Popular accounts framed the CWG as superheroes that swooped in to save the day – the private sector rescuing the public from nasty malware. But, as Thornton wryly remarks, “[t]he problem with the X‐Men is that sometimes they save the planet and sometimes they start a civil war.” Thornton argues that in praising or adopting these informal and ad hoc (and nongovernmental) approaches, we sacrifice accountability and larger ethical considerations. In place of purely private efforts, Thornton argues for the adoption of a public health approach to confronting cybersecurity that carves out a key space for government participation.

The public and private sector are not only willing or even tentative allies, occasionally they are adversaries. Andreas Kuehn and Ryan Ellis examine the rise of the market for software flaws in “Bug Bounty Programs: Institutional Variation and the Different Meanings of Security” (Chapter 10). As Google, Microsoft, Facebook, and hundreds of other companies rush to start purchasing flaws in their software and services, they are drawn into competition with intelligence agencies, militaries, and others that also seek to purchase flaws in order to exploit them for gain. Here, as Kuehn and Ellis show, the private sector is attempting to use the market to improve software security and, to some degree, keep flaws out of the hands of those that want to use them for surveillance, sabotage, or crime. The institutional model of bug bounty programs is still forming. As the authors note, multiple different bounty models are currently being tried and tested. In each case, within these efforts there is a tension between the desire to improve the broader software ecosystem and the desire of governments to use the holes in this ecosystem for law enforcement, intelligence, or military purposes. The public and private sectors are not simply allies: they are at times direct competitors.

I.2.3 Cybersecurity is Never Value‐Free

Cybersecurity is a way of ordering competing values. Cybersecurity efforts explicitly and implicitly arrange different and at times oppositional goals. Security efforts always bump against other important values. Jonah Force Hill and Matthew Noyes examine the tension between state sovereignty and globalized data flows in “Rethinking Data, Geography, and Jurisdiction: A Common Framework for Harmonizing Global Data Flow Controls” (Chapter 11). Modern data storage slices data into fine‐grained portions – “shards” – and distributes them across the globe. As Hill and Noyes detail, the fragments then slosh across legal jurisdictions, moving from one geography to another, as cheaper storage becomes available elsewhere. Here, we see tensions that can emerge within cybersecurity. How do we reconcile globalized data with the needs of law enforcement, local or regional privacy laws, and more generally core questions of national sovereignty? Hill and Noyes argue that it is time to radically rethink the piecemeal approach to solving these sorts of questions. Developing a common framework for global data flows, as they show, requires facing head on the competing values at play.

Amit Elazari Bar On visits the world of bug bounties in “Private Ordering Shaping Cybersecurity Policy: The Case of Bug Bounties” (Chapter 12). Elazari Bar On provides the first comprehensive analysis of bug bounty legal terms. The chapter finds a raw tension between software security and the security of hackers participating in these budding programs. The use of form‐contracts in bounty programs can – and does – leave security researchers in legal jeopardy. While bounty programs prioritize fixing software and improving security, they create legal precarity or insecurity for market participants. As Elazari Bar On argues, a legal regime that hopes to foster ethical hacking must work to offer researchers better legal safeguards.

Conflict and competition among competing values and interests sit at the heart of much of cybersecurity governance. Indeed, this core theme appears repeatedly across the pages of the book. Shires (Chapter 2) sheds light onto how cybersecurity can be reinterpreted for political purposes. Security is elastic; it can be stretched to serve all manner of ends. Even when cybersecurity is not deliberately repurposed to instrumentally serve larger political ends, it cannot help but implicate other values. Tanczer and coauthors (Chapter 3) see the United Kingdom's IoT strategy as a veiled referendum on privacy. Thornton (Chapter 9) shows how security efforts raise vital questions about how we balance security with accountability. In these and many of the other cases that follow, questions about security are always about something larger: They are about the values we hold dear and the difficult work of mapping and acknowledging trade‐offs between competing interests. It is our hope that the cases assembled in the book will help shed some light on the sorts of bargains we are making in the name of cybersecurity and allow interested readers to start sorting out the wise from the foolhardy.

Notes

1 Don Edwards, United States House of Representatives, Committee on the Judiciary, Subcommittee on Civil and Constitutional Rights. “Federal Computer Systems Protection Act of 1981.” September 23, 1982, 1.

2 Fred Thompson, United States Senate Committee on Governmental Affairs. “Weak Computer Security in Government: Is the Public at Risk?” May 19, 1998, 1.

3 Joseph S. Nye, Jr., “The Regime Complex for Managing Global Cyber Activities,” 2014, https://www.cigionline.org/sites/default/files/gcig_paper_no1.pdf; Laura DeNardis, The Global War for Internet Governance (New Haven: Yale University Press, 2014); Jack Goldsmith and Tim Wu, Who Controls the Internet? Illusions of a Borderless World (New York: Oxford University Press, 2006).

4 See e.g. Jack L. Goldsmith, “Against Cyberanarchy,” University of Chicago Law Review 65, no. 4 (1998): 1199–1250; Joseph S. Nye, Jr., “Nuclear Lessons for Cyber Security,” Strategic Studies Quarterly (Winter 2011): 18–38.

5 David Clark, “Control Point Analysis,” 2012 TRPC (September 10, 2012), https://ssrn.com/abstract=2032124 or http://dx.doi.org/10.2139/ssrn.2032124.

1 Cybersecurity Information‐Sharing Governance Structures: An Ecosystem of Diversity, Trust, and Trade‐offs

Elaine Sedenberg1 and Jim Dempsey2

1 School of Information, University of California, Berkeley, CA, USA

2 Berkeley Center for Law & Technology, School of Law, University of California, Berkeley, CA, USA

1.1 Introduction

Policymakers and corporate representatives have frequently discussed cybersecurity information sharing as if it were a panacea. The phrase itself refers to many different activities and types of exchanges, but from about 2009 to the end of 2015, the cybersecurity policy debate in Washington, DC, was dominated by calls for greater information sharing.1 Influenced in part by the post‐9/11 theme of “connecting the dots,” both policymakers and the private sector commonly accepted that improved cybersecurity depended on – and would flow inexorably from – expanded information sharing within the private sector and between the private sector and the federal government.2 This view seemed to rest upon the assumption that with more information, systems could be made more secure through prevention measures or rapid remediation. Policymakers, reluctant to regulate cybersecurity standards, viewed voluntary information sharing as a tangible coordination activity that could be incentivized through policy intervention and sometimes directly facilitated by federal government roles.3 The policy debate culminated with the 2015 passage of the Cybersecurity Information Sharing Act (CISA).4 The law sought to encourage information sharing by the private sector by alleviating concerns about liability for sharing otherwise legally restricted information. It also sought to improve sharing within the federal government and between the government and the private sector.

CISA was debated and adopted after several decades of efforts within law enforcement and national security agencies to coordinate and increase information sharing with and within the private sector. The US Secret Service (USSS) established the New York Electronic Crimes Task Force (ECTF) in 1995 to facilitate information exchanges among the private sector, local and national law enforcement, and academic researchers. In 2001, the USA PATRIOT Act mandated that the USSS create a nationwide network of ECTFs, which eventually consisted of over 39 regional hubs.5 In 1998, Presidential Decision Directive 63 (PDD‐63) authorized the Federal Bureau of Investigation (FBI) to create a National Infrastructure Protection Center (NIPC) as a focal point for gathering and disseminating threat information both within the government and with the private sector.6 PDD‐63 simultaneously directed the national coordinator for infrastructure protection to encourage the private sector to create an Information Sharing and Analysis Center (ISAC).7 The role of the private sector center was to collect and analyze private‐sector information to share with the government through the NIPC, but also to combine both private‐sector information and federal information and relay it back out to industry.8 Although PDD‐63 anticipated that there would be one national ISAC, various sectors ultimately formed their own ISACs focused on industry‐specific security needs.9

Over time, additional federal agencies also developed their own information‐sharing systems and procedures. For instance, US Computer Emergency Readiness Team (US‐CERT) – an organization that took over many of NIPC's functions after it was dissolved following a transfer to the Department of Homeland Security (DHS) – releases vulnerability information and facilitates response to particular incidents. Various other information exchanges and feeds – each with its own scope, access policies, and rules – were established across federal agencies charged with securing aspects of cyberspace. For example, in 2001 the FBI formally announced its “InfraGard” project, designed to expand direct contacts with private‐sector infrastructure owners and operators, as well as to share information about cyber intrusions, exploited vulnerabilities, and infrastructure threats.10

In addition to these piecemeal federal efforts to expand cyber information sharing, private‐sector information‐sharing arrangements also proliferated. Antivirus software companies agreed to share virus signatures with each other, essentially deciding to differentiate themselves on platform usability and support instead of competing for data.11 Additionally, security researchers and individual corporate professionals formed ad hoc arrangements around critical responses to major incidents such as the Conficker worm and the Zeus botnet – threats that required coordination of response as well as exchange of information.12

Consequently, even before CISA was enacted, an ecosystem of information exchanges, platforms, organizations, and ad hoc groups had arisen to respond to increasingly pervasive and complex security threats within all industries. Today, this ecosystem of information‐sharing networks is characterized by a high degree of diversity – the result of years of evolving policies and cooperative models, driven by both the federal government and private sector. Information‐sharing models and structures operate in different niches – working sometimes in silos, occasionally duplicating efforts, and sometimes complementing each other.13

CISA attempted to advance information sharing on four dimensions: within the private sector, within the federal government, from the private sector to the government, and from the government to the private sector. However, the legislation was enacted without first fully mapping the ecosystem that had developed in the preceding years. Little effort was made to identify what was working effectively and why, or to de‐conflict existing federal programs. Instead, the private sector repeatedly stated – and policymakers accepted – that concerns over legal liability (mainly arising, it was asserted, from privacy laws) were inhibiting information sharing. Therefore, one of CISA's major provisions was liability protection for private sector organizations as an incentive for more information sharing.

CISA's usefulness and impact on the information‐sharing ecosystem has yet to be demonstrated. On the contrary, our study suggests that the law did little to improve the state of information sharing. If anything, it only added more hurdles to federal efforts by mandating that the federal portal include unnecessary technical details (free‐field text entry) and cumbersome submission methods (e‐mail). The law lacked specificity on how federal efforts would work with each other and with already existing information‐sharing networks in the private sector. Focusing almost solely on the private sector's liability concerns, it failed to address other key factors associated with sharing, including trust management, incentives, reciprocation, and quality control. In sum, CISA was a policy intervention divorced from existing sharing mechanisms and lacking a nuanced view of important factors that could enable agile exchanges of actionable information.

This chapter focuses on cybersecurity information sharing within the private sector and between the private sector and federal government (leaving to others the issue of sharing within the federal government itself). It examines how governance structures, roles, and associated policies within different cybersecurity information‐sharing organizations impact what information is shared (and with whom) and the usefulness of the information exchanged. This research is based on a qualitative analysis of 16 semi‐structured interviews with cybersecurity practitioners and experts. Using these interviews and other available information on cybersecurity sharing, we have created a taxonomy of governance structures that maps the ecosystem of information‐sharing organizations – each of which fills particular security needs and is enabled by different policy structures. This chapter discusses the implications of these policies and structures for values that directly impact sharing, particularly the trade‐off between trust and scalability. This research illustrates how different governance models may result in different degrees of success within the complex and changing cybersecurity ecosystem. Our findings point to lessons – mainly cautionary ones – for policymakers seeking to encourage improvements in cybersecurity. This chapter focuses on information sharing within the United States, but given the multinational nature of many private sector companies, some findings may be relevant internationally.

The types of cybersecurity‐related information that could be shared to improve cybersecurity defenses and incident response include incidents (including attack methods), best practices, tactical indicators, vulnerabilities, and defensive measures. Generally, the organizations we describe in this chapter are engaged in sharing tactical indicators, often called “indicators of compromise” (IOCs). An IOC can be defined as an artifact that relates to a particular security incident or attack. IOCs may be filenames, hashes, IP addresses, hostnames, or a wide range of other information. Cybersecurity defenders may use IOCs forensically to identify the compromise or defensively to prevent it.14
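To make the idea of tactical indicators concrete, the sketch below (not drawn from the chapter) shows how a defender might check a single log event against a shared IOC feed. All indicator values are hypothetical placeholders; the IP address uses a reserved documentation range.

```python
# Minimal, illustrative IOC matching. A shared feed might distribute
# simple artifacts like these (all values are made up for illustration):
iocs = {
    "ip": {"203.0.113.42"},                 # suspected attacker address
    "sha256": {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    "hostname": {"update.malware-example.test"},
}

def match_iocs(event, iocs):
    """Return the indicator types from the feed that match one log event."""
    hits = []
    for ioc_type, values in iocs.items():
        if event.get(ioc_type) in values:
            hits.append(ioc_type)
    return hits

# A connection record pulled from (hypothetical) firewall logs:
event = {"ip": "203.0.113.42", "hostname": "intranet.local"}
print(match_iocs(event, iocs))  # → ['ip']
```

Real exchanges wrap such artifacts in richer formats (e.g. STIX bundles with context and confidence), but the core defensive use is this kind of lookup: forensically over stored logs, or inline to block matching traffic.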

1.2 Taxonomy of Information‐sharing Governance Structures and Policies

Over time, different cybersecurity information‐sharing structures have arisen to address particular needs or challenges. Given the wide range of information types, federal roles, industry sectors, and information sensitivities at issue, it is perhaps inevitable that an array of information arrangements has formed, each serving particular perceived needs, each with its own priorities and challenges, and each with its own respective membership policies and governance structures. Our research identified at least seven information‐sharing models:

Government‐centric

Government‐prompted, industry‐centric

Corporate‐initiated, peer‐based (organizational level)

Small, highly vetted, individual‐based groups

Open‐source sharing platforms

Proprietary products

Commercialized services

To understand these governance models, our taxonomy articulates different policy and organizational approaches to sharing, as well as their impact on mission, participation, risk/benefit trade‐offs, and efficacy (Table 1.1).

Table 1.1. Taxonomy of information‐sharing models.

Government‐centric
Organizational units: Government operated; private‐sector members can be corporations, private‐sector associations (e.g. ISACs), nonprofits (e.g. universities), or individuals.
Example organizations: DHS AIS; US‐CERT; ECTF; FBI's eGuardian; ECS.
Governance types: Federal laws and policies; voluntary participation; rules range from open sharing subject to the traffic light protocol or FOUO (for official use only) to classified information restrictions (ECS).

Government‐prompted, industry‐centric
Organizational units: Sector or problem specific.
Example organizations: ISACs; ISAOs.
Governance types: Sector or problem specific; voluntary participation; generally organized as nonprofits, using terms of service or other contractual methods to enforce limits on re‐disclosure of information.

Corporate‐initiated, peer‐based (organizational level)
Organizational units: Specific private companies.
Example organizations: Facebook ThreatExchange; Cyber Threat Alliance.
Governance types: Reciprocal sharing; closed membership; information controlled by contract (e.g. ThreatExchange Terms and Conditions).

Small, highly vetted, individual‐based groups
Organizational units: Individuals join and take membership with them through different jobs.
Example organizations: OpSec Trust; secretive, ad hoc groups.
Governance types: Trust based upon personal relationships and vetting of members; membership and conduct rules.

Open‐source sharing platforms
Example organizations: Spamhaus Project.
Governance types: Information published and open to all; no membership, but may form around a community of active contributors and information users; one organization may manage the platform infrastructure.

Proprietary products
Organizational units: Organizations or individuals participate by purchasing the product.
Example organizations: AV and firewall vendors.
Governance types: Information via paid interface; responsibility and security management remain in‐house.

Commercialized services
Organizational units: Organizations purchase the service.
Example organizations: Managed security service providers.
Governance types: Outsourcing of security.

1.2.1 Government‐centric Sharing Models

The cybersecurity policy of the US federal government is simultaneously oriented towards many different goals, ranging from national security, to protecting federal IT systems, to investigating and punishing cybercrime, with the overarching goal of ensuring a healthy and productive US economy through the protection of American critical infrastructures and intellectual property. Each goal results in different information‐sharing priorities.15 Given the number of federal agencies involved in some aspect of cybersecurity, the growth of information‐sharing systems is not surprising – even if it is frustrating to information consumers. Federal information‐sharing programs range from the FBI’s eGuardian and InfraGard, to DHS's Automated Information Sharing (AIS) program and its narrowly tailored Enhanced Cybersecurity Services (ECS) program, USSS ECTF alerts, and US‐CERT alerts and tips.

The role of the federal government in improving cybersecurity may be viewed from a public good perspective, whereby federal investment in cybersecurity would adjust for underinvestment by individuals and the private sector.16 However, for such public investment to be effective would require first an understanding of what the private sector lacks and whether the government has what is lacking or could effectively acquire it and make it available in a timely fashion. In fact, leaving aside the question of whether the private sector really suffers from a lack of cybersecurity information, there are limitations to the federal government's ability to quickly and efficiently share information. Accordingly, there are significant challenges associated with expecting the federal government to fulfill a role as central information collector and disseminator.

Given the network of national security, intelligence, and law‐enforcement entities, some government‐held information becomes trapped within classification restrictions, involving extensive security standards for personnel, IT networks, and physical facilities, and severely limiting recipients and methods of disbursement. The Pentagon's Defense Industrial Base (DIB) cybersecurity program and DHS's ECS program were developed to disseminate such classified data within special security agreements. These programs trade limited access for greatly improved information quality. As implemented, they appear not to be intended to support dissemination to a wide number of recipients. Instead, they disseminate information to just a handful of communications service providers (AT&T, CenturyLink, and Verizon) plus Leidos (formerly SAIC), entities that provide cybersecurity services to a multitude of customers and have the capability to ingest and act upon the information provided. In contrast, the reach of DHS's Cyber Information Sharing and Collaboration Program (CISCP) is broader (although still restricted), but it focuses on sharing analytic products with the private sector and therefore trades speed for context. (CISCP offers a range of products, including indicator bulletins intended to support faster action to thwart attacks and remediate vulnerabilities.17)

At the other end of the spectrum, membership requirements for organizations such as the USSS ECTFs are much less stringent, requiring only a referral from someone already in the organization. The ECTFs disseminate information mainly by e‐mail (and in‐person meetings). Information shared on the listserv is regulated using the traffic light protocol, where each color defines how it may be used and re‐disclosed.18 Only the USSS sends information to the ECTF listservs, although the information may originate from many different sources.
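The traffic light protocol's color labels can be sketched as a simple gate on re-disclosure. The audience descriptions below paraphrase the FIRST TLP standard (which now uses TLP:CLEAR for the old TLP:WHITE); the function and audience names are illustrative, not part of any ECTF tooling.

```python
# Rough sketch of TLP re-disclosure rules. Each label maps to the widest
# audiences a recipient may forward the information to.
TLP_AUDIENCES = {
    "RED":   set(),                                  # named recipients only
    "AMBER": {"own-org"},                            # own organization, need-to-know
    "GREEN": {"own-org", "community"},               # peer community, not public
    "CLEAR": {"own-org", "community", "public"},     # no restriction (ex-WHITE)
}

def may_forward(label, audience):
    """May a message carrying this TLP label be re-disclosed to this audience?"""
    return audience in TLP_AUDIENCES[label]

print(may_forward("AMBER", "public"))     # False
print(may_forward("GREEN", "community"))  # True
```

In practice the labels are carried in message headers or subject lines, and enforcement rests on the trust and conduct rules of the sharing group rather than on software.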

Several interviewees discussed a hesitation after the Snowden revelations to share information with any US government agency, regardless of the formal governance mechanisms. They cited general cultural unease, as well as fear of negative publicity if and when the sharing came to light. One federal employee involved in information sharing commented that “post‐Snowden, and almost certainly now post‐WikiLeaks, [getting the private sector to share] is going to become more difficult for us. We are battling a lot of perception.” Internationally, for any company subject to European regulation, these cultural and reputational concerns are heightened by the assumption that sharing information with the US government would violate the 2016 EU General Data Protection Regulation (GDPR).

The regulatory and law‐enforcement powers of the federal government at times may discourage sharing from the private sector. Yet, the existence of those powers may also incentivize sharing, at least on a case‐by‐case basis, for they represent capabilities to act against cyberthreats in ways not available to the private sector.19 One cybersecurity practitioner commented: “Law enforcement [are] the people who are able to take special action to identify and attribute this information to individuals, who have authority to utilize rule of law, court orders, subpoenas, everything that's required essentially to take authoritative action and prosecute these individuals. Nothing pulls them [attackers] out of the ecosystem quite as well as putting them in jail for their crimes.”

Even when legal barriers and the government's negative reputation are mitigated, sharing with the government can be difficult. Interviewees complained that there is a high barrier to participation in DHS's AIS due to the technical requirements for setting up the sharing interface.

The fact that the US government's role in information sharing remains fractured among many different agencies – each with its own priorities to share inside or outside of the government itself – is not necessarily undesirable. It might be effective to have different agencies play different roles. However, it is not clear that there is a unified policy or strategy for the proliferation of federal information‐sharing programs with broadly defined and overlapping missions. What we see is a failure in both directions: private entities share relatively little information with the government, and what information the government shares is outdated or otherwise not actionable. Outside of specialized sharing arrangements such as the ECS, there are weak incentives for the private sector to take on the reputational risks and the administrative and technical burdens of sending information to the government.

Contrasted with the publicly endorsed but not yet realized goal of large‐scale, large‐volume sharing arrangements, the most effective reciprocal sharing between the private sector and the federal government may occur on an ad hoc