Table of Contents
Title Page
Copyright Page
About the Authors and Contributors. . .
Foreword
Prologue
Chapter 1 - The World of Cyber Security in 2019
General Review of Security Challenges
Cyber Security as the Friction and Latency of Business and Government
Protecting Web 2.0 Data
The Present Models for Cyber Security are Broken
Chapter 2 - The Costs and Impact of Cyber Security
The Economics of Security
The Security Value Life Cycle
Security Costs at the Point of Creation
Security Costs at the Point of Purchase - Service Creation
Security Cost at Point of Service
Impact of Security Costs on Security Decisions and Investments: Network ...
Chapter 3 - Protecting Web 2.0: What Makes it so Challenging?
Defining Web 2.0
The Challenges of Web 2.0 Security
Securing the Web 2.0 Network
The Wireless Data Challenge
Securing the Web 2.0 Applications and Content
Chapter 4 - Limitations of the Present Models
Aftermarket Security - A Broken Model
Standards and Regulations
Regulate Yourself into Good Security?
Silos of Risk
Absence of Metrics to Define Trust
The Current Model is Broken - Now What?
Chapter 5 - Defining the Solution - ITU-T X.805 Standard Explained
The ITU-T X.805 Standard Explained: Building a foundation for the Security ...
Coupling to the ISO/IEC 27000 Series Standard: Complementary Standards that ...
Enterprise Risk and IT Management Frameworks
Chapter 6 - Building the Security Foundation Using the ITU-T X.805 Standard: ...
The standard made operational
Key lesson: Complexity breeds insecurity
Key lesson: The cloud has entered the building
Key lesson: Address common vulnerabilities
Key lesson: Not all vulnerabilities are created equal
Key lesson: What is reportable and when is it reportable?
Key lesson: Security mitigation is also a business risk management decision
Key lesson: Performing the assessment with confidence in the results
Key lesson: Convince the product unit
Closing thoughts on the key lessons
Chapter 7 - The Benefits of a Security Framework Approach
Convincing the CFO
Chapter 8 - Correcting Our Path - What Will it Take?
The Power of the Customer to Transform an Industry
Summary and Conclusions
Appendix A - Building Secure Products and Solutions
Appendix B - Using the Bell Labs Security Framework to Enhance the ISO ...
Appendix C
Glossary
Index
This edition first published 2009
© 2009, John Wiley & Sons, Ltd
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
eISBN : 978-0-470-97108-6
Set in 10/13 Optima by Thomson Digital
About the Authors and Contributors. . .
Taking on the challenge of writing this book, it was clear to me that it would need the contributions of many ideas and many hands. These ideas and concepts, and much of the actual writing, are a composite of those hands and minds. Dr. Mike Schabel, Ty Sagalow, Bob Thornberry, Marco Raposo and Aleksei Resetko were contributors to Chapters 2 and 3. Mike, in particular, lent his expertise to the topic of wireless broadband communications.
Dr. Jim Kennedy wrote Chapter 4. Uma Chandrashekhar, Andrew McGee and Rao Vasireddy, with others at Bell Laboratories, were the developers of the Bell Labs Security Framework that became the ITU-T X.805 Recommendation. Their ideas and writings are central to this book, particularly Chapters 5 and 6.
Bob West of Echelon One, with the support of Eric Green and Kirsten Francissen, contributed throughout, bringing the message to a conclusion in Chapters 7 and 8. A special mention of Rod Beckstrom and Ty Sagalow; their contributions will open a new area of investigation into understanding the economics of cyber security.
There were a number of reviewers; John Reece in particular added great insight.
Leaving Wyatt Starnes to last is intended to single out his particular contribution. He will see his ideas throughout this book; in effect, the central message of this book has been his life’s work. We all owe him a great deal of gratitude for his quiet but forceful campaign to get the message through about metrics, about root of creation, about aftermarket security as an ineffective approach. Thank you Wyatt, and thank you to all who made these important contributions.
To close, we give special acknowledgement to Dan Geer for his foreword. His prose is unmatched - we stand in awe.
—Carlos Solari
Foreword
Perhaps it does not need saying yet again, but security is a means, not an end. For this reason, and because technological advance keeps accelerating, the “means” that comprise security today are likely to be short lived; yet the short-livedness of means is not a free pass to ignore them, to put no effort into evolving them. Ends are not short lived.
Most of us who earn our keep in the security trade are well aware of the essentialness of constant adaptation. This constant adaptation is a prerequisite to getting one’s job done; ironically, constant adaptation applies to both Bad Guys and Good Guys. Our problem is that the Bad Guys enjoy a structural advantage over the Good Guys: where in the physical world it is the crook who must engineer the perfect crime and the police who have all the time they need, in the digital world it is the policeman who has to be perfect and the crook who can be patient.
That the Good Guys are at a disadvantage is not a first-principles deduction by some logician - it is merely an observation. Looking back over the last decade, it is easy to observe that the amount of treasure and labor being expended on security has risen very fast indeed. At the same time, the loss of goods and control engineered by the opposition has risen. We are many. They are few. We are losing. They are winning. The reason is structural.
When you are at a structural disadvantage, the first choice might be to just get out of the game. Who wants to play baccarat against a crooked croupier? Or take a spitball when the umpire works for the other team? Better to play at another casino. Better to stand on another diamond. Sadly or not, getting out of the digital security game is not in the cards.
Something else has to happen.
We are dependent on the kind of networked cooperation made possible such a short time ago with the appearance of Mosaic (March 14, 1993, to be precise). The rate of change, even in the short retrospect of sixteen years, proves that predicting future change is an unlikely business. The one prediction that seems assured is that we may think we are dependent on networked communications today, but we ain’t seen nothin’ yet! Web 2.0 will see to that because, if nothing else, it is already doing so - a kind of proof-by-demonstration that William Gibson’s famous bon mot embraces, “the future is already here, just unevenly distributed.” If we are going to be so dependent on Web 2.0 that society literally could not survive without it, and do that in a world where the opposition has an all-but-permanent structural advantage, it really is time to get serious. As the 44th President said in his Inaugural Address, “In the words of Scripture, the time has come to set aside childish things.”
This book is about setting aside childish things, such as assuming that somehow we’ll muddle through. Marcus Ranum may have sounded cynical to some ears when he said: “Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like today.” But he didn’t sound cynical to my ear. The difference is that the complexity of the Web 2.0 + world and our dependence on it makes the core of Ranum’s remark, “while still continuing to function,” the core of whatever debate there still is.
Look, it is entirely clear that convergence of nearly all communications-based functions in the economy and in society to Internet-based communications is inevitable if not already true. It is entirely unarguable that increasing quantities of data that make all this convenience work are held not on one’s desk but on the Web itself. It is entirely predictable that the more dependent we are on something, the more its vulnerabilities matter and the more our opponents will invest in R&D aimed at it. So, Points #1 and #2: Web 2.0 is irresistible so long as it works, and the only real failure would be a loss of trust after some unignorable security shortcoming - everything else is fungible.
There is a joking restatement of the Three Laws of Thermodynamics that goes like this:
You can’t win
You can’t break even
You can’t get out of the game
That is where we are: we cannot get out of the security game because we cannot get out of the Web 2.0 game, even if we wanted to. (Which we don’t.) That we are at a structural disadvantage is just a restatement that we can’t win. That we can’t break even says that what we do for security will be judged as all risk management is judged: by what did not happen as much as by what did. Them’s the breaks.
Behavioral psychologists will tell you that you begin to change outcomes the minute you begin visibly taking data. If security is a process in its operation and a mindset otherwise, then it is time we took some data. In a structural disadvantage where success is when nothing happens, our aim is to be a less attractive target than someone else, so that the things that must happen, happen to that someone else. This isn’t jaded. This is Realpolitik.
The authors of this book have set out to do a difficult thing, and that is to transmit what they know about how to think. In a complex world addicted to convenience, how to think often seems like an expensive hobby compared to what button to press, what exactly to do. As complexity grows, what button to press may be the only thing all but the few can do. How to think is not so quick, and it is never cut-and-dried. How to think doesn’t tell you what button to press, and knowing what button to press proves nothing except that you can follow instructions. Knowing what button to press is nevertheless good enough when you don’t have sentient opponents, only accidents and stray alpha particles. Knowing what button to press is useless when the opponent is sentient and is gaming you. When sentient opponents are what you are up against, you need to be able to think. You need to be able to out-think.
We all know from long experience that (1) there are never enough experts to go around, and (2) that security must be built-in rather than bolted-on. In our current world situation, it is probably fair to say that the demand for security expertise so outstrips supply that the charlatan fraction is rising. As such, some way to extend the reach of the expertise we do have would be a Very Good Thing. Because we all know that an ounce of built-in security is worth many, many pounds of field upgrades. No rational observer would argue other than that the scarce expertise absolutely must be deployed at the earliest possible stage of development, which is to say where the supply-demand imbalance is least and the leverage on what supply we do have is greatest.
Thus we come to the point of this book. By whatever precise definition you choose, Web 2.0 is the future, it is already here if unevenly distributed, and it needs security built-in, not bolted on. The best expertise we have needs to be in the front end of every Web 2.0 construction. Sure, some constructions have already been done, and, let us hope, done well. But there is a lot more to come and it needs our collective best skill if we are not to create something really bad. But how?
The answer is discipline, and discipline in the form of standards and, even, Standards. Sure, standards (or Standards) are sometimes just so much bureaucracy and self-flattery. That is not the case here. Yes, there are people who are so good at what they do that standards (or Standards) just get in the way.
There are too few of those folks to matter, and they won’t live forever. If there is anything the last six months in finance have shown, it is that we humans are abundantly capable of building systems more complex than we can understand when in operation. As Mike O’Dell used to say, “Left to themselves creative engineers will deliver the most complex system they think they can debug.” Given the stakes in security for Web 2.0, we have to do better, we have to get security right up front, or it is game-over.
Getting it right means using the all-too-rare skills to lay down the path of discipline, using discipline to build security in, and using built-in security to make the world safe for Web 2.0 and all it promises. That’s what this book is about - taking the skill now encoded in a Standard, using that Standard to operationalize discipline, and using that discipline to build some security in.
If you have a better idea, all I can say is “Let’s hear it” and, maybe, “Where have you been?”
—Daniel E. Geer, Jr., ScD
Prologue
We live in an age of great uncertainty - a period of unprecedented technical innovation that is transforming our lives. It is innovation that accelerates even as we harbor an unquiet sense of the unknown destination: where does all this new technology take us, and what becomes of us in the process? Ray Kurzweil, a pre-eminent technology innovator, spoke to this point of accelerating innovation at Harvard University, mindful, he said, of the “intertwined nature of the risks and benefits”. It was February 2005. If only it could be slowed down enough that we could better understand the promise of its benefits and calculate the severity of its risks.
But innovation cannot be slowed; it runs along its own course with a gathering momentum fuelled by competitive global markets and not beholden to any other law than the one that states simply: “technology begets technology at an ever-increasing rate.”
Nowhere is the uncertainty associated with accelerating innovation more pronounced than in the world of cyberspace, where information technology insinuates itself into every nook and corner and then transforms itself with blinding speed. In the world of cyberspace, we are faced with the challenge of trying to secure new territory without having entirely figured out how to protect the present - the cyber security dimension of cyberspace.
It is perhaps easiest to illustrate the challenge we face by recalling the well-known story of the frog in the cauldron of boiling water. A frog that is dropped into a cauldron of boiling water will immediately leap out to save itself. However, if this same frog is placed in a cauldron filled with tepid water that is then only gradually brought to a boil its reaction is very different. Because the increase in temperature is gradual, the frog stays put not realizing its predicament until the water reaches the boiling point and by then it is too late.
Consider how this story parallels Security in a Web 2.0+ World. The present networks remain unprotected; mastery of the security paradigm remains an elusive target. So what is this ill-defined world of Web 2.0?1 What is the risk today, and how can one address the growing risk tomorrow? The temperature is rising, yet complacency rules. It is time to sense the growing danger and make the necessary response.
There is a dilemma, however, in discussing the topic of cyber security - a problem of communication in which policy makers and technologists speak, but in languages that fail to inform one another and fail to impart a sound understanding. Simple questions go unasked and unanswered. How serious is the problem of cyber security? Are the issues correctable, and how much time is there to take corrective measures? While risk assessments are done daily, the metrics for assessing the vulnerability of new technologies are not consistently agreed upon and not well practiced.
“We have not been able to easily discern what threats we would face, what the tools of influence would be, or who would become our opponents. The outcome has been a kind of strategic indecision that puts the United States at risk.”2
There is general agreement on a few points, yet these same points also illustrate why the answers are not easily forthcoming. Security is not intrinsically separate from the business functions; it is a measure of overall business risk expressed in terms of cost. What does it cost the company to lose access to the functions supported by the network, and by this determination how much should be spent on security to protect against this loss? This question, addressed in Chapter 2, needs to be answered in order to better calculate business risk. Security metrics, the science of measuring security, remains undefined and so is not well practiced. There is more to lose in financial terms and in tarnished reputations, but how much, and to what degree of impact, remains a matter of conjecture.
To begin to answer these questions requires putting in place the foundational constructs of technical and process metrics and the economics of loss in the era of “cyber-value”, and communicating the concepts of cyber security from policy to technology clearly. In the absence of these constructs, one can anticipate what is already happening: policy disconnected from reality and bureaucracy that exacerbates rather than remedies. Many are already arguing this point with Sarbanes-Oxley3 and the California Senate Bill 1386 (SB 1386).4 Policy without the metrics to determine its effectiveness often ends up creating a spiral of increasing costs without the intended benefits.
To better understand and communicate the issues of cyber security between policy maker and technologist requires an effort to speak to both in a manner that each can understand. With this intention, each chapter in this book begins with its own executive summary addressed to the policy maker: the business executive, the academician and the government executive. In the body of each chapter, the target audience shifts. It is meant not just for the security professional, but for all makers and developers of information communications technology (ICT) systems, a term applied in this book to encompass traditional “IT” or information technology (associated with data networks) and telecommunications systems (associated with telephony and video systems). Embedding security in ICT systems will require first explaining the principles of good security design practice to the engineers who make the products and systems.
The target audience is thus a broad population, ranging from those who need to know enough about cyber security to make effective policy decisions to the engineers who design the ICT systems. The book does not cover how to encrypt data, but where it should be considered and in what measure it should be applied. In this manner, it aims to lessen the mystery surrounding cyber security and present it as sound engineering principles that need to be applied in the right measure.
Three key points will be stated and reinforced in later chapters. The first is that there is not much time; years cannot be spent to begin the process of embedding security into current and future systems. The second is that there is a need for models that allow one to measure security in the design stage, in deployment and in production. With the use of better security models, one can expect a lessening of the dependency on cyber security experts and transform the practice of security more to the science of metrics, baselines and business-rational remediation. This book proposes two models that can help make this transformation - the X.805 standard5 and the security value life cycle. Both of these models will work toward creating greater transparency as a way to bring a more finely grained trust context into computing transactions.
The final point is that the stakes could not be higher. This will be said repeatedly: Information communications technology is embedded in the whole of technology and becoming more so with each day that we automate to improve operational efficiency and compete in the global markets.
To understand the issue of how much time, one needs to look no further than the convergence of technology and the emergence of Web 2.0 computing. Convergence is the move from separate infrastructures and technologies for voice, video and data to one technology platform - Internet Protocol (IP) - and toward a unified infrastructure, not separate plants.
Convergence is happening around the world - one can recognize it in the marketing speak of triple play6 and IPTV,7 as two examples. When the convergence is done, it will be too late and too expensive to redesign these systems and protect them against a hostile environment of hackers working with organized crime.
There is little time to ensure that security is engineered into these systems - to ensure that the wonderful benefits of convergence and Web 2.0 computing are designed to withstand the rigors of the inherent risk. As an example, “new pay-TV market data indicates that IPTV will grow by an estimated 32 percent annually over the next six years to nearly 79 million subscribers globally by the end of 2014.”8 The dependency is deep and ever more intertwined in everyday life.
Chapter 1
The World of Cyber Security in 2019
“The semantic Web - what is called Web 3.09 - is commonplace in 2019. The start of the Internet and the World Wide Web is the stuff of legacy and lore. Amid the concerns of ICT security is another dimension - the clash of virtual realities, such as that between the Second Life® virtual world and physical lives. Decisions in the virtual world drive material reactions in the real world - as they are now one world with no safeguards in place.”
Executive Summary
It is 2019 AD or 28 AW (after the Web), counting in years after the introduction of the World Wide Web.10 Contrary to some predictions, ICT systems continue to be one of the primary agents of change in our lifetimes and in the history of humankind. The pace of change has been nothing short of spectacular. There have been many winners and losers as the exponential growth of technology gives rise to new and wider social divisions. This change ripples through societies, cultures and nations with unintended consequences that are too numerous to count.
In hindsight, one can see where things went right and where they have gone terribly wrong. Protecting ICT systems has been one of the great challenges. With 12 years of history, Web 2.0 continues to serve, transform and interconnect the world’s cultures. Nothing is left untouched by the Web 2.0 generation as worlds that were once physically and logically separate are now inextricably linked. Generation Y and Generation Z (also known as Millennials), born in the age of computers and the Internet, run the physical and virtual worlds. It is a new world, but is it “brave” or is it “foolhardy”?
The threats to cyber security in 2019 are many. How did things get to this point? In hindsight, the answer is all too clear. It just happened degree by degree, like the slow-rising temperature in the cauldron. The gradual slide happened even though it was clear that we could have, and should have, integrated security into our ICT systems. It is not that the technical know-how was missing, nor did the danger come as a surprise; there was a ripening awareness of the vulnerabilities. By the year 2009, it was understood that security had to be an integral part of system design, yet in the absence of forethought, understanding and leadership, the vulnerabilities in ICT systems were left unaddressed. It is 2019 and it’s time to pay the piper.
It was a sword that cut both ways; the standardization on all-IP systems is what allowed the worlds of data, voice and video to blend in ways that created the value of next-generation systems. Web 2.0 applications would not have achieved their broad appeal without the convergence of IP systems. It also meant that the vulnerabilities were many, both transmuted11 across the different media and infrastructure domains and replicated across the many nodes in the complexity of the Web 2.0 world. Encryption can be broken with powerful computers. Quantum computing is in our midst; even strongly encrypted national systems are at risk.
Figure 1.1 Internet Mapping
Copyright © Lumeta Corporation 2009. All Rights Reserved
It is a situation that could have been avoided; the challenge now is to find a way to fix an installed and complex array of systems that are used for almost every type of business. Unfortunately, the complexity of system management and data stored in a dizzying range of formats cannot be remedied without starting over. Bill Cheswick’s Internet mapping from 2009 shows a picture of this technology galaxy as ganglions interconnected like a constellation of stars (Figure 1.1). Today, with its accelerated growth, it looks more like a round brown blob - the number of nodes so large that one cannot see space between their connecting points.
Security in complex systems, implemented after they are in production, is at best a patchwork fix. However, patchwork security is ill-suited to counter means, motive and opportunity - the deadly triad that law enforcement recognizes as the source of crime. The opportunities are endless with global online access. Gone are the constraints of physical separation. The notion of nation-states means little in the global Internet; even parallel private versions of the Internet can be breached.
Vulnerabilities are so commonplace that in the period from January 1, 2007 to December 31, 2007, the IC3 (Internet Crime Complaint Center) Website received 206,884 complaint submissions.12
People continue to be the weakest link in the chain, the underlying fact in the social engineering schemes. Crime follows money, and with e-commerce and businesses dependent on online transactions, there is plenty of money-motivation.13 Politics and world tensions are also motivating factors. Demonstrations have now moved online. Citizen unrest that used to make itself heard in the streets is now expressed through distributed denial of service (DDoS) attacks.14 It is a very difficult state of affairs. The remedies available are appearing as items on a menu of poor choices dependent upon detecting and responding to a “zero-second” threat. It takes practically no time to form and launch an attack. The average password can be broken in less than ten minutes; the break-in, undetected, is only a prelude to the actual attack.15 How does one detect and respond to “zero-second” attacks?
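The arithmetic behind that last claim is easy to sketch. The sketch below is illustrative only; the guess rate is a hypothetical assumption about attacker hardware, not a figure from this book:

```python
# Illustrative sketch: worst-case time to exhaust a password keyspace.
# The guesses-per-second rate below is a hypothetical assumption, not a
# figure taken from this book.

def seconds_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case seconds to brute-force every password of the given length."""
    return alphabet_size ** length / guesses_per_second

# An 8-character, lowercase-only password: 26**8 (about 2.1e11) combinations.
t = seconds_to_exhaust(26, 8, 1e9)  # assume one billion guesses per second
print(f"{t / 60:.1f} minutes")  # prints "3.5 minutes"
```

Because the keyspace grows exponentially with length and alphabet size, small increases in password complexity buy large increases in attacker cost - which is also why the short, common passwords people actually choose fall in minutes.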
Thankfully, it is not the year 2019 as of this writing. 2019 is still some years in the future, and Web 2.0 is still taking shape, as are the next-generation networks that will be the underpinnings of the latest applications and services. What steps can be taken now that will yield a more positive outcome - one where security is a central part of the system design and applied in a balanced approach to the risk? How much time is there? Is there a tipping point when it becomes too late? How close is that point? Interesting questions indeed, and they need immediate answers.
A recent article in CSO Magazine stated that, “the most risky mobile device is the laptop computer and the number one concern is the inability to properly identify and authenticate remote users.”16
The concern is with what can be done now, using the methods and technologies already available, to set in place the idea that security can be designed into the complex networks being installed now and that will exist in 2019. Web 2.0 is still evolving and it remains the next great technology promise. There is still a chance to correct the path and design in a more secure destiny.
Figure 1.2 The Security Triad
Consider another triad - the security triad of prevent-detect-respond as the context for all security functions (Figure 1.2). The prevent part of security is where the technologies for designing in security fit, and it is the focus of this book. Prevention includes another word, overused perhaps, but still significant to this discussion. The word is trust. Every day people make decisions about whom they should trust. It remains to be seen whether the ICT companies will design in the security needed to achieve trustworthiness as a measurable attribute.
On the question of time, the point of no return after which it will be nearly impossible to achieve a positive outcome for Web 2.0 security is rapidly approaching. IPTV is already gaining a foothold and Voice over IP (VoIP) is already strongly embedded in the corporate world. Video in all its manifestations is being transmitted over IP networks. Separate infrastructures for voice, video and data are collapsing into one flat IP world.
There is also the question of risk. The paradox of Web 2.0 is that many millions of individuals are willing to incur a potential loss of privacy by opting into social networking sites in spite of the apparent risk of identity theft and other abuses that come from sharing personal information on these Web sites. Those who engage in social networking clearly believe that the benefits outweigh the potential risks.
Although this book is indirectly concerned with the question of responsibility, it is directly concerned with the questions of what can be done and how to protect the new Web 2.0 environment, a set of issues that are addressed in Chapter 2. Before embarking on a path that will lead to better security, one must first discover how to measure security and then implement the systems that accomplish this measurement. This process should be based on actual measurements, and be more science than art. “There cannot be a greater mistake than that of looking superciliously upon practical applications of science. The life and soul of science is its practical application.”17 Trust can be measured, given a score, and improvements made on that score, while more informed judgments are made about levels of access on the basis of this score in real time. This is the value of prevention in the security triad and the point of focus.
Product developers and security professionals possess the know-how to achieve more secure environments. This book presents a set of fairly straightforward rules, and introduces a framework for security design developed in 2003 by scientists at Bell Laboratories.10 These scientists began by asking themselves some very basic questions about how to measure, baseline and integrate security into complex ICT networks. Finding the answers unsatisfactory, the scientists decided to develop a framework to solve this problem. The framework measures security, identifies the gaps and implements remedies with consistency, rigor and practicality, focusing on such issues as “just enough” security. It is time to get started - time is of the essence.
General Review of Security Challenges
There are new security challenges each time someone invents a way to automate or integrate human activities with ICT systems. In the world of finance, this point was made clear by the scale and speed of the losses at Société Générale in 2008.18 In ICT systems, unlike the physical world of vaults and walls, the impact occurs much faster and reverberates with much greater damage.
Web 2.0 poses the latest of these challenges. The repercussions of loss in the cyber world are nonetheless physical; people can lose their jobs, and the public is harmed. Consider these challenges as they evolve in the services and applications of Web 2.0.
Content is king
Much attention has been paid recently to content protection. Most of this concern is directed at end-user applications, such as spreadsheets or word-processing files. Content-filtering products have been primarily about “gate-checking” to make sure protected content does not leak outside the network. Still, content is found in all layers of the network, not just in a format recognizable to end-users. In the network infrastructure, content can take the form of account information such as billing. In services applications, it can include profile information used in target marketing. In other applications the content is the data stored in databases and presented in application servers. Yet no matter what form it appears in, it is all content, and it can all be lost, tampered with and subverted to harm people and damage systems.
Consider further the meta-data19 content in the infrastructure and services as one example.
Target marketing makes use of business intelligence to match the right marketing information with the right target population, or even the right individual. Its criminal equivalent is “spear phishing,” which applies “business intelligence” gathered about wealthy people to malicious ends. It is still, relatively speaking, a low-level problem. What if more aggressive criminal organizations or governments were to apply these very same “business intelligence” techniques, using meta-data content to target populations with the purpose of keeping power, gaining power or stifling dissent? Content protection is more than just keeping business files from leaking outside the network perimeter. Consider also the background information (the meta-data) about the data, which can be as simple as the demographics of Web surfing, being used for constructive or criminal purposes. Content, even in the form of meta-data, is king, and it needs to be protected.
Network criminals target another form of content: the network architecture itself, mined for detailed information about operating systems, patch levels and the location of critical assets. By burrowing deeper into the network, the attacker can determine the access controls, break those controls and initiate the final phase of the attack, which can take place in a few seconds. It may involve efforts to steal, modify or even encrypt the content, or to disrupt the service. Using database encryption as a denial-of-service technique, an intruder can keep a business from accessing its own database and disrupt its operations. This can be devastating in the real-time, global online environment, where even seconds of downtime can translate into millions of dollars in lost revenue.
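A back-of-envelope calculation makes the downtime claim concrete. The ten-billion-dollar annual revenue figure below is an assumption chosen for illustration, not a figure from the text; losses at peak trading or shopping hours would be far more concentrated than this average suggests.

```python
# Back-of-envelope sketch of revenue at risk per unit of downtime for a
# real-time online business. The $10B/year figure is an assumption for
# illustration; actual losses cluster at peak hours.

annual_revenue = 10_000_000_000      # assumed: $10 billion per year
seconds_per_year = 365 * 24 * 3600   # 31,536,000 seconds

per_second = annual_revenue / seconds_per_year
per_hour = per_second * 3600

print(f"average revenue at risk: ${per_second:,.0f}/second, ${per_hour:,.0f}/hour")
```

Even spread evenly across the year, a business of this size loses on the order of a million dollars for every hour its systems are unreachable, which is why an encrypted-database outage lasting minutes is a board-level event.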
Broadband wireless security
Fourth-generation (4G)20 broadband wireless communications, with all it promises for creating ubiquitous communications, is under development. A taste of this promise is already present in 3G21 systems. Anyone carrying a 3G wireless card has much to complain about, but try to take that 3G card away and one will find that “stickiness” has already developed. The wait for 4G is filled with great anticipation. One can envision a great range of business activities blossoming from the freedom to connect anywhere with high-capacity bandwidth that will truly enable open (non-walled-garden)15 Web services. Has the security required for 4G systems been considered?
There is, in fact, much to consider. 4G in all its versions seems poised for success and will undoubtedly create a demand that is now only in its beginning stages. 4G will have to be highly available, reliable and secure to meet that demand.
With expanded accessibility and capacity will come expanded use of personal, business and government applications, and these will gain a critical mass that is far reaching. From a security perspective, tens of millions of 4G subscribers added to hundreds of millions of sensors (machine-to-machine accounts) require systems that scale in size and features and that can be assured. Simply put, there is an inherent fragility in the highly shared, highly limited RF channel used for wireless communications. This fragility is not present in the same measure for wireline systems, which can dedicate high bandwidth to the subscriber at the aggregation point.
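The fragility of a shared RF channel can be shown with simple arithmetic; the cell capacity and user counts below are assumed figures chosen only to make the contrast with a dedicated wireline loop concrete.

```python
# Sketch of why a shared RF channel is fragile compared with dedicated
# wireline access. Capacity and user-count figures are assumptions
# chosen for illustration, not measurements of any real system.

cell_capacity_mbps = 100.0    # assumed shared downlink capacity of one cell
active_users = [10, 50, 200]  # concurrent users contending for that cell

for n in active_users:
    share = cell_capacity_mbps / n  # ideal fair share per user
    print(f"{n:>3} users -> {share:5.1f} Mbps each")

# By contrast, a wireline subscriber with a dedicated 20 Mbps loop keeps
# 20 Mbps regardless of how many neighbors are online, because the
# bandwidth is dedicated up to the aggregation point.
```

The per-user share collapses as contention grows, which is exactly the property an attacker, or simply a flash crowd, can exploit on a shared channel.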
Cyber Security as the Friction and Latency of Business and Government
The value of ICT is to enable businesses to compete on the basis of agility and scale, allowing the business to adapt to market conditions faster and with greater efficiency to bring the right products or services to market at the right time. Agility is, in large measure, about a reduction in process latency and friction. Although the world is highly interconnected, the reality is that interconnectivity is still in its early stages.
As rapidly as these new interconnecting technologies are entering the mainstream, cybercrime is growing at an even more alarming rate.
Governments are not immune as the public demands e-government accessibility and efficiency. Yet there are numerous examples of government systems that have been compromised when sensitive data has been lost, and the trust between government and its people breached.
Web 2.0 is the next step in the maturation of the Internet, but is there sufficient understanding of the risks and the impact that can occur when systems operate without the necessary protections?
Will security incidents ultimately choke off this success to the point where outages make customers reluctant to move to more advanced online services? If not the incidents themselves, then the burden of over-compliance is another form of friction: security not in the service of the business, but acting as nothing more than sand in the machinery. There is a need for prudent regulatory requirements; the existing regulations will remain - they are not going away. Additional regulatory requirements can be anticipated in response to the public’s increasing concern that companies are not safeguarding information as they should. Many argue that cumbersome regulations, such as California’s SB 1386, are already in place as regulators respond with legislative instruments and penalties for accountability.22 Taking effect in 2003, SB 1386 was the first legislation enacted to require disclosure of security breaches. Since then most other states in the United States have passed similar laws.
There are unintended consequences of passing this type of legislation, such as diminished business agility. United States businesses subject to the Sarbanes-Oxley regulation are already smarting from the high overhead costs such regulation engenders. It is not just public companies: virtually any company that conducts business in the United States is affected. Many blame over-regulation for the tectonic shift of securities exchange listings from the U.S. to the exchanges of London, Singapore and other major global financial centers.
Impact also comes in the form of losses created by security incidents. This is latency and friction in its worst form. Efforts to quantify losses reveal how difficult a task it is to get companies to collect and report this information. The CSI annual cybercrime survey23 repeatedly discusses the dilemma of too few companies willing to report cybercrime information. This is also friction - the grit that breaks down the ability to clearly express the problem to policy leaders.
Protecting Web 2.0 Data
The information flow in the Web 2.0 model has specific risks beyond the general risks with IP-based systems and the Internet discussed up to this point. These risks go hand in hand with what makes Web 2.0 a more challenging environment to protect. It’s a virtual place where conventional boundaries don’t always apply and where the spirit of open exchange may conflict with privacy concerns. Chapter 3 examines in some detail what makes Web 2.0 security particularly challenging. Three issues are of particular concern: control of the data, control of identity and privacy, and the value of virtual assets.
The discussion first considers content stored on public sites. These may include software-as-a-service (SaaS) sites that, together with consumer-driven sites, carry an implied, if not explicit, expectation that the stores of data will be used for target marketing. A variety of questions, issues and challenges stem from this condition of open exchange, and they begin with the question of control. Data used in a Web 2.0 application gives the online service provider a great advantage when it is provided without strings attached. In some cases this data is the business. Take away this control and the business model of target marketing starts to unravel.
Who owns the data and how should control be handled? In one view, it is the organization providing the service, whether a Web 2.0 company, a hospital, a government agency or a financial company, that controls the data. The opposing view, more closely represented in European countries, is that companies storing the data may use it only in a very narrow and strictly controlled role. The end-user controls the information, and the end-user must expressly authorize any further use of the data.
Despite the privacy statements U.S. companies provide to their customers, the present balance of control tilts almost exclusively to the advantage of the company. In this instance, the end-users have given up their rights to control. Many systems are in fact designed with few, if any, opt-in end-user controls. This becomes clear upon reading the fine print, but few people take the time to do so.
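The design difference between opt-in and opt-out controls can be sketched in a few lines; the preference fields and account structure below are invented for illustration and do not describe any particular system.

```python
# Minimal sketch of an opt-in design: every secondary use of personal
# data defaults to "off" until the end-user expressly enables it.
# Field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PrivacyPreferences:
    # Opt-in means each secondary use defaults to False.
    target_marketing: bool = False
    share_with_partners: bool = False

@dataclass
class Account:
    name: str
    prefs: PrivacyPreferences = field(default_factory=PrivacyPreferences)

def may_use_for_marketing(account: Account) -> bool:
    """Secondary use is permitted only with explicit consent."""
    return account.prefs.target_marketing

alice = Account("alice")                 # has never opted in
assert not may_use_for_marketing(alice)  # default denies the secondary use

alice.prefs.target_marketing = True      # explicit, recorded opt-in
assert may_use_for_marketing(alice)
```

An opt-out system would simply flip those defaults to True, which is precisely why the fine print matters: the default, not the checkbox, decides what happens to the data of users who never read it.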
If information is power, then there is a power base growing in the Web 2.0+ world and in every large organization that is collecting data either directly or indirectly, as with the meta-data discussed earlier. The end-users have given up control. Where is the balance? Is a medical file containing an x-ray taken at a hospital safe from abuse by employees, insurance firms, hospitals and pharmaceutical companies? Should we trust that the company will protect this medical information adequately?
In the law enforcement triad, the means exist in the tools of the criminal world, the motives are many, and the opportunities abound. The opportunities lie in the inherent vulnerabilities of complex systems and the absence of a legitimate basis for trust. Until the many dimensions of the cyber security problem can be measured, the problem cannot be corrected.
Information governance in an enterprise is hard enough. In the Web 2.0+ world, who safeguards the interests of the end-user when the business model is explicitly designed to support the application of information for target marketing or other similar purposes? This question is difficult to answer, because no one has clear governance over the information produced. Trust in the cyber world must be measured, or it is nothing more than marketing and should not be considered a proxy for making governance decisions about finance, health or privacy.
In the Web 2.0+ world, protecting information means protecting the value of a virtual presence, and doing so is paramount for personal and financial risk management. A Web 2.0 company’s value is not in its physical plant but in its Web presence and infrastructure. Insuring physical assets is relatively simple. Insuring a cyber presence is radically different because it is usually far more difficult to quantify. The value of an online company is almost wholly dependent upon brand value, the services offered, how well the information and its technology systems function and how well they are protected. The physical assets have, by comparison, negligible value.
When an e-commerce company has a market capitalization in the tens of billions of dollars, understanding how to protect this virtual world is exceedingly important. In the real world, property is valued in terms of physical assets. In the Second Life world, virtual property is sold with real money.24 How is such a virtual asset to be insured? It is all about protecting Web presence, an ephemeral notion that does not fit the model of insuring physical assets, and one where cyber-value and cyber security are paramount.
The Present Models for Cyber Security are Broken
The current practice of cyber security is lacking in many regards, but the problems cannot be addressed until the root causes are understood. The identification of root causes starts with ICT systems sold to a market that places the responsibility for security on the end-user. This condition holds whether the end-user is a consumer or a company providing services that, because of the size of the market, make up part of the national infrastructure. Individually, consumer-owned personal computers could hardly be considered part of the national infrastructure. Taken in large numbers, however, they are the end-tools used for all forms of online transactions, and in a national emergency they may even serve as the primary means to conduct government business (in a health crisis, government employees will be expected to work from home, connected to government data centers). It is not just home PCs that have to be patched; it is also the hundreds of thousands of computers and servers in government agencies and utility services.