Stopping Losses from Accidental and Malicious Actions

Around the world, users cost organizations billions of dollars through simple errors and malicious actions. Organizations assume there is some deficiency in the users, and in response they believe they have to improve their awareness efforts and make users more secure. This is like saying that coal mines should get healthier canaries. The reality is that it takes a multilayered approach that acknowledges that users will inevitably make mistakes or have malicious intent, and the failure is in not planning for that. It takes a holistic approach to assessing risk, combined with technical defenses and countermeasures, layered with a security culture and continuous improvement. Only with this kind of defense in depth can organizations hope to prevent the worst of the cybersecurity breaches and other user-initiated losses. Using lessons from tested and proven disciplines like military kill-chain analysis, counterterrorism analysis, industrial safety programs, and more, Ira Winkler and Dr. Tracy Celaya's You CAN Stop Stupid provides a methodology to analyze potential losses and determine appropriate countermeasures to implement.

* Minimize business losses associated with user failings
* Proactively plan to prevent and mitigate data breaches
* Optimize your security spending
* Cost justify your security and loss reduction efforts
* Improve your organization's culture

Business technology and security professionals will benefit from the information provided by these two well-known and influential cybersecurity speakers and experts.
Page count: 590
Year of publication: 2020
Cover
Title Page
Introduction
What Is Stupid?
Do You Create Stupidity?
How Smart Organizations Become Smart
Not All Industries Are as Smart
Deserve More
Reader Support for This Book
I: Stopping Stupid Is Your Job
1 Failure: The Most Common Option
History Is Not on the Users’ Side
Today's Common Approach
We Propose a Strategy, Not Tactics
2 Users Are Part of the System
Understanding Users' Role in the System
Users Aren't Perfect
“Users” Refers to Anyone in Any Function
Malice Is an Option
What You Should Expect from Users
3 What Is User-Initiated Loss?
Processes
Culture
Physical Losses
Crime
User Error
Inadequate Training
Technology Implementation
UIL Is Pervasive
II: Foundational Concepts
4 Risk Management
Death by 1,000 Cuts
The Risk Equation
Risk Optimization
Risk and User-Initiated Loss
5 The Problems with Awareness Efforts
Awareness Programs Can Be Extremely Valuable
Check-the-Box Mentality
Training vs. Awareness
The Compliance Budget
Shoulds vs. Musts
When It's Okay to Blame the User
Awareness Programs Do Not Always Translate into Practice
Structural Failings of Awareness Programs
Further Considerations
6 Protection, Detection, and Reaction
Conceptual Overview
Protection
Detection
Reaction
Putting It All Together
7 Lessons from Safety Science
The Limitations of Old-School Safety Science
Most UIL Prevention Programs Are Old-School
The New School of Safety Science
Putting Safety Science to Use
Safety Culture
The Need to Not Remove All Errors
When to Blame Users
We Need to Learn from Safety Science
8 Applied Behavioral Science
The ABCs of Behavioral Science
Engineering Behavior vs. Influencing Behavior
9 Security Culture and Behavior
ABCs of Culture
Types of Cultures
Subcultures
What Is Your Culture?
Improving Culture
Behavioral Change Strategies
Is Culture Your Ally?
10 User Metrics
The Importance of Metrics
The Hidden Cost of Awareness
Types of Awareness Metrics
Day 0 Metrics
Deserve More
11 The Kill Chain
Kill Chain Principles
Deconstructing the Cyber Kill Chain
Other Models and Frameworks
Applying Kill Chains to UIL
12 Total Quality Management Revisited
TQM: In Search of Excellence
Other Frameworks
COVID-19 Remote Workforce Process Activated
Applying Quality Principles
III: Countermeasures
13 Governance
Defining the Scope of Governance for Our Purposes
Traditional Governance
Security and the Business
Analyzing Processes
Grandma's House
14 Technical Countermeasures
Personnel Countermeasures
Physical Countermeasures
Operational Countermeasures
Cybersecurity Countermeasures
Nothing Is Perfect
Putting It All Together
15 Creating Effective Awareness Programs
What Is Effective Awareness?
Governance as the Focus
Where Awareness Strategically Fits in the Organization
The Goal of Awareness Programs
Changing Culture
Defining Subcultures
Interdepartmental Cooperation
The Core of All Awareness Efforts
Metrics
Gamification
Getting Management's Support
Enforcement
Experiment
IV: Applying Boom
16 Start with Boom
What Are the Actions That Initiate UIL?
Metrics
Governance
Awareness
Feeding the Cycle
Stopping Boom
17 Right of Boom
Repeat as Necessary
What Does Loss Initiation Look Like?
What Are the Potential Losses?
Preventing the Loss
Detecting the Loss
Mitigating the Loss
Determining Where to Mitigate
Avoiding Analysis Paralysis
Your Last Line of Defense
18 Preventing Boom
Why Are We Here?
Reverse Engineering
Step-by-Step
19 Determining the Most Effective Countermeasures
Early Prevention vs. Response
Start with Governance
Prioritize Potential Loss
Define Governance Thoroughly
Matrix Technical Countermeasures
Define Awareness
It's Just a Start
20 Implementation Considerations
You've Got Issues
Business Case for a Human Security Officer
It Won't Be Easy
21 If You Have Stupid Users, You Have a Stupid System
A User Should Never Surprise You
Perform Some More Research
Start Somewhere
Take Day Zero Metrics
UIL Mitigation Is a Living Process
Grow from Success
The Users Are Your Canary in the Mine
Index
End User License Agreement
List of Tables
Chapter 8
Table 8.1: E-TIP Table Example
Chapter 11
Table 11.1: Kill Chain Comparison
Chapter 15
Table 15.1: Quarterly Plan
List of Illustrations
Chapter 4
Figure 4.1: The risk equation
Figure 4.2: Cost of countermeasures compared to vulnerabilities
Figure 4.3: The risk optimization point
Chapter 8
Figure 8.1: The relationship between antecedents, behavior, and consequences...
Chapter 9
Figure 9.1: The ABCs of culture
Chapter 12
Figure 12.1: The PDCA cycle of the ISO 9001:2015 clauses
Chapter 17
Figure 17.1: A mind map
Figure 17.2: A mind map for User Clicked Malicious Link
Chapter 19
Figure 19.1: Sample countermeasure matrix
Chapter 20
Figure 20.1: The Kubler-Ross Change Curve
Figure 20.2: The J-Curve of Adoption
Figure 20.3: The chasm in the J-Curve
Figure 20.4: How good change management influences productivity and performa...
Ira Winkler
Dr. Tracy Celaya Brown
We believe that the title of a book is perhaps its most critical characteristic. We acknowledge that the title, You Can Stop Stupid, is controversial. We had considered other possible titles, such as Stopping Human Attacks, but such a title does not convey the essence of this book. Although we do intend to stop attacks that target your users, the same methodology will stop attacks by malicious insiders, as well as accidents.
The underlying problem is not that users are the targets of attacks or that they accidentally or maliciously create damage, but that users have the ability to make decisions or take actions that inevitably lead to damage.
That is the fundamental issue this book addresses, and it makes a critical distinction: The problem lies not necessarily in the user, but in the environment surrounding the people performing operational functions.
Managers, security specialists, IT staff, and other professionals often complain that employees, customers, and users are stupid. But what is “stupid”? The definition of “stupid” is having or showing a great lack of intelligence or common sense.
First, let's examine the attribute of showing a great lack of intelligence. When your organization hires and reviews people, you generally assess whether they have the requisite intelligence to perform the required duties. If you did hire or retain an employee knowing that they lacked the necessary intelligence to do the job, who is actually stupid in this scenario: the employee or the employer?
Regarding a person who shows a great lack of common sense, there is a critical psychological principle regarding common sense: You cannot have common sense without common knowledge. Therefore, someone who is stupid for demonstrating a great lack of common sense is likely suffering from a lack of common knowledge. Who is responsible for ensuring that the person has such common knowledge? That responsibility belongs to the people who place or retain people in positions within the organization.
In general, don't accuse someone in your organization of being stupid. Instead, identify and adjust your own failings in bad employment or training practices, as well as the processes and technologies that enable the “stupidity.”
When people talk about employee, customer, and other user stupidity, they are often thinking of the actions those users take that cause damage to your organization. In this book, we refer to that as user-initiated loss (UIL). The simple fact is that a user can't initiate loss unless an organization creates an environment that puts them in a position to do so. While organizations do have to empower employees, customers, and other users to perform their tasks, in most environments, there is little thought paid to proactively reducing UIL.
It is expected that users will make mistakes, fall for tricks, or purposefully intend to cause damage. An organization needs to consider this in its specification of business practices and technological environments to reduce the potential for user-initiated loss.
Even if you reduce the likelihood for people to cause harm, you cannot eliminate all possibilities. There is no such thing as perfect security, so it is folly to rely completely on prevention. For that reason, wise organizations also embed controls to detect and reduce damage throughout their business processes.
Consider that large retail stores, such as Target, have a great deal to lose from a physical standpoint. Goods can be physically stolen. Cashiers can potentially steal money. These are just a couple of common forms of loss in retail environments.
To account for the theft of goods, extensive security controls are in place. Cameras monitor areas where goods are delivered, stored, and sold. Strict inventory control systems track everything. Store associates are rewarded for reporting potential shoplifters. Security guards, sometimes undercover, patrol the store. High-value goods are outfitted with sensors, and sensor readers are stationed at the exits.
From a cash perspective, cashiers receive and return their cash drawers in a room that is heavily monitored. They have to “count in” the cash and verify the cash under the watchful eyes of the surveillance team. The cash registers keep track of and report all transactions. Accounting teams also verify that all cash receipts are within a reasonable level of expected error. Also, as important, the use of credit cards reduces the opportunity for employees to mishandle or steal cash.
Despite all of these measures, there are still losses. Some loss is due to simple errors. A cashier might accidentally give out the wrong change. There might be a simple accounting error. Employees might figure out how to game the system and embezzle cash. Someone in the self-checkout line might accidentally not scan all items. Criminals may still be able to outright steal goods despite the best controls. Regardless, the controls proactively mitigate and detect large amounts of losses. There are likely further opportunities for mitigating loss, and new studies can always be consulted to determine varying degrees to which they might be practical.
An excellent example of an industry that intelligently mitigates risk is the scuba diving industry. Author Ira Winkler is certified as a Master Scuba Diving Trainer and first heard the expression “you can't stop stupid” during his scuba instructor training. The instructor was telling all the prospective instructors that there will always be some students who do not pay attention to safety rules. It is true that scuba diving provides for an almost infinite number of ways for students to do something potentially dangerous and even deadly.
Despite this, scuba diving is statistically safer than bowling. When you consider how that may be, you have to understand that most scuba instruction involves safety protocols. Reputable dive operators are affiliated with professional associations, such as the Professional Association of Diving Instructors (PADI). PADI examines how dive accidents have occurred and works with members to develop safety protocols that all members must follow.
For example, when Ira would certify new divers, all students had to take course work specifying safe diving practices. They also had to go through a health screening process and demonstrate basic swimming skills and comfort in the water. They then had to demonstrate the required diving skills in a pool.
When it comes to certifying people in open water, all equipment is inspected by the students and instructors prior to diving. The potential dive location is chosen based upon the calmness and clarity of the water and limited depth so that students don't accidentally go too deep. Before the dive, there is a complete dive briefing, so students know what to expect, as well as safety precautions and instructions about what to do if a diver runs into trouble. The instructors are familiar with the location and any potential hazards. The number of students is limited, and dive master assistants accompany the group as available to ensure safety. Additionally, instructors are required to ensure there is a well-equipped first aid kit, an emergency oxygen supply, and information about the nearest hospital and hyperbaric chamber.
To become an instructor, Ira went through hundreds of hours of training, especially including detailed training about how to handle likely and unlikely problems. This training includes extensive first aid training. From a risk mitigation strategy, instructors maintain personal liability insurance. Similarly, the sponsoring school maintains liability insurance while also paying for supplemental insurance to cover potential injuries to students. The dive facilities, be they pools, boats, quarries, or so on, also maintain liability insurance.
Essentially, PADI and other professional associations have proactively examined where potential injuries may occur and determined how to prevent them as best as possible. Although some accidents will inevitably occur, there is extensive preparation for those incidents, and the result is that diving is a comparatively safe activity.
Retail loss prevention and dive instruction have clearly created comprehensive strategies for preventing and mitigating loss that account for human error and malfeasance. Unfortunately, many industries, and ironically even many practices within the same industries that are otherwise relatively secure, are not dealing with human error well. For example, Target, which generally has an outstanding loss prevention practice, failed when it came to a data breach where 110,000,000 credit records were stolen.
When an organization fails to account for human error and malfeasance, and fails to put in sufficient layers of controls, the losses can be devastating. When organizations fail to implement an effective process of risk mitigation to account for user-initiated loss, there is a great deal of blame to go around, but organizations tend to point to the “stupid user” who made a single error.
No case is more notorious for this than the massive Equifax data breach. When Richard Smith, former CEO of Equifax, testified to Congress regarding the infamous data breach, he laid the blame for the data breach squarely on an administrator for not applying a critical patch for a vulnerability in a timely manner. Not immediately applying a patch is not uncommon for organizations the size of Equifax. However, a detailed investigation showed that there was a gross systemic failure of Equifax's security posture.
After all, not only did Equifax allow the criminal in, the criminal was able to explore the network undetected for six weeks, breach dozens of other systems, and download data for another six weeks. The attack was detected only after Equifax renewed a long-expired digital certificate that was required to run a security tool.
This type of scenario is common in computer-related incidents. Whether it is the failing of an individual user or someone on the IT team, a single action, or failure to act, can initiate a major loss. However, for there to be a major loss, there has to be a variety of failures to allow an attack to be successful.
Similar failures happen in all operational units of organizations. Any operational process that does not analyze where and how people can intentionally or unintentionally cause potential loss enables that loss.
The goal of this book is to help the reader identify and mitigate actions where users might initiate loss, and then detect the actions initiating loss and mitigate the potential damage from the harmful acts.
Just as the diving and loss prevention industries have figured out how to effectively mitigate risk arising from human failures, you can do the same within your environment. By adopting the proper sciences and strategies laid out in this book, you can effectively mitigate user-initiated loss.
When we consult with organizations, we find that one of the biggest impediments to adequately addressing user-initiated loss is not getting the required resources to do so. The underlying reason is that all too frequently, people responsible for loss reduction fail to demonstrate a return on investment. In short: You get the budget that you deserve, not the budget that you need. You need to deserve more.
If people believe scuba diving is dangerous, the scuba industry will collapse. If accounting systems fail, public companies can suffer dire consequences. These industries recognize these dangers, and they take steps to demonstrate their value and viability. However, many other professions do not adequately address risk and prove their worth.
The common strategy of dealing with user-initiated loss is to focus on awareness and letting people know how not to initiate a loss. Clearly, this fails all too frequently. Therefore, money put into preventing the loss appears wasted. There is no clear sense of deserving more resources.
It is our goal that you will be able to apply our strategies and show you are deserving of the resources you need to properly mitigate the potential losses that you face.
We appreciate your input and questions about this book. You can contact us at www.YouCanStopStupid.com.
If you believe you've found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but an error may occur even with our best efforts.
To submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line “Possible Book Errata Submission.”
Ira Winkler can be reached through his website at www.irawinkler.com. Dr. Tracy Celaya Brown can be reached through her website at DrTre.com. Additional material will be made available at the book's website, www.youcanstopstupid.com.
While professionals bemoan how users make their job difficult, the problem is that this difficulty should be considered part of the job. No matter how well-meaning or intelligent a user may be, they will inevitably make mistakes. Alternatively, the users might have malicious intent and intend to commit acts that cause loss. Considering the act “stupid” assists a malicious party in getting away with their intent.
Fundamentally, you don't care about an individual action by a user; you care that the action may result in damage. This is where professionals need to focus. Yes, you want to have awareness so users are less likely to initiate damage. However, you have to assume that users will inevitably make a potentially harmful action, and your job is to mitigate that action in a cost-effective way.
Part I lays the groundwork for being able to address the potential damage that users can initiate. The big problem that we perceive regarding the whole concept of securing the user—as some people refer to it, creating the human firewall—is that people think that the solution to stopping losses related to users is awareness. To stop the problem, you have to understand that awareness is just one tactic among many, and the underlying solution is that you need a comprehensive strategy to prevent users from needing to be aware, to create a culture where people behave appropriately through awareness or other methods, and to detect and mitigate loss before it gets out of hand.
Any individual tactic will be ineffective at stopping the problem of user-initiated loss (UIL). As you read the chapters in Part I, you should come away with the holistic nature of the problem and begin to perceive the holistic solutions required to address the problem.
As security professionals, we simultaneously hear platitudes about how users are our best resource, as well as our weakest link. The people contending that users are the best resource state that aware users will not only not fall prey to the attacks, they will also respond to the attacks and stop them in their tracks. They might have an example or two as well. Those contending that the users are the weakest link will point to the plethora of devastating attacks where users failed, despite their organizations’ best efforts. The reality is that regardless of the varying strengths that some users bring to the table in specific circumstances, users generally are still the weakest link.
Study after study of major data breaches and computer incidents shows that users (which can include anyone with access to information or computer assets) are the primary attack vector or perpetrator in an overwhelming percentage of attacks. Starting with the lowest estimate, in 2016, a Computing Technology Industry Association (CompTIA) study found that 52 percent of all attacks begin by targeting users (www.comptia.org/about-us/newsroom/press-releases/2016/07/21/comptia-launches-training-to-stem-biggest-cause-of-data-breaches). In 2018, Kroll compiled the incidents reported to the UK Information Commissioner's Office and determined that human error accounted for 88 percent of all data breaches (www.infosecurity-magazine.com/news/ico-breach-reports-jump-75-human/). Verizon's 2018 Data Breach Investigations Report (DBIR) reported that 28 percent of incidents were perpetrated by malicious insiders (www.documentwereld.nl/files/2018/Verizon-DBIR_2018-Main_report.pdf). Although the remaining 72 percent of incidents were not specifically classified as resulting from an insider mistake or action, their nature indicates that the majority of the attacks perpetrated by outsiders resulted from user actions or mistakes.
Another interesting finding of the 2018 DBIR is that any given phishing message will be clicked on by 4 percent of people. Initially, 4 percent might sound extremely low, but an attack needs to fool only one person to be successful. Four percent means that if an organization or department has 25 people, one person will click on it. In an organization of 1,000 people, 40 people will fall for the attack.
NOTE The field of statistics is a complex one, and real-world probabilities vary compared to percentages provided in studies and reports. Regardless of whether the percentages are slightly better or worse in a given scenario, this user problem obviously needs to be addressed.
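To make the arithmetic concrete, here is a minimal sketch (our illustration, not a calculation from the DBIR) that contrasts the expected number of clicks with the chance that at least one recipient clicks, assuming each recipient clicks independently at the 4 percent rate:

```python
# Back-of-the-envelope exposure math for a 4 percent per-recipient click rate.
# The independence assumption is ours; real click behavior is correlated.

click_rate = 0.04  # per-recipient click rate reported in the 2018 DBIR

for staff in (25, 1_000):
    expected_clicks = staff * click_rate
    # Probability that at least one recipient clicks the message
    p_at_least_one = 1 - (1 - click_rate) ** staff
    print(f"{staff:>5} users: ~{expected_clicks:.0f} expected clicks, "
          f"{p_at_least_one:.1%} chance that at least one person clicks")
```

Under these assumptions, even a 25-person department has roughly a two-in-three chance that someone clicks a single message, which is why prevention alone is never enough.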
Even if there are clear security awareness success stories and a 96 percent success rate with phishing awareness, the resulting failures clearly indicate that the user would normally be considered the weakest link. That doesn't even include the 28 percent of attacks intentionally perpetrated by insiders.
It is critical to note that these are not only failures in security, but failures in overall business operations. Massive loss of data, profit, or operational functionality is not just a security problem. Consider, for example, that the WannaCry virus crippled hospitals throughout the UK. Yes, a virus is traditionally considered a security-related issue, but it impacted the entire operational infrastructure.
Besides traditional security issues, such as viruses, human actions periodically result in loss of varying types and degrees. Improperly maintained equipment will fail. Data entry errors cause a domino effect of troubles for organizational operations. Software programming problems along with poor design and incomplete training caused the devastating crashes of two Boeing 737 Max airplanes in 2018 and 2019 (as is discussed in more detail in Chapter 3, “What Is User-Initiated Loss?”). These are not traditional security problems, but they result in major damage to business operations.
No user is immune from failure, regardless of whether they are individual citizens, corporations, or government agencies. Many anecdotes of user failings exist, and some are quite notable.
The Target hack attracted worldwide attention when 110,000,000 consumers had their personal information compromised and abused. In this case, the attack began when a Target vendor fell for a phishing attack, and then the attacker used the stolen credentials to gain access to the Target vendor network. The attacker was then able to surf the network and eventually accomplish their thefts.
While the infamous Sony hack resulted in disaster for the company, causing immense embarrassment to executives and employees, it also caused more than $150,000,000 in damages. In this case, North Korea obtained its initial foothold on Sony's network with a phishing message sent to the Sony system administrators.
From a political perspective, the Democratic National Committee and related organizations that were key to Hillary Clinton's presidential campaign were hacked in 2016 when a Russian GRU intelligence operative sent a phishing message to John Podesta, then chair of Hillary Clinton's campaign. The resulting leak of the emails was embarrassing, and they were strategically released through WikiLeaks.
In the Office of Personnel Management (OPM) hack, 20,000,000 U.S. government personnel had their sensitive information stolen. It is assumed that Chinese hackers broke into systems where the OPM stored the results of background checks and downloaded all of the data. The data contained not just the standard name, address, Social Security number, and so on, but also information about their health, finances, and mental illnesses, among other highly personal details, as well as information about their relatives. This information was obtained through a sequence of events that began by sending a phishing message to a government contractor.
From a physical perspective, the Hubble Space Telescope was essentially built out of focus, because a testing device was incorrectly assembled with a single lens misaligned by 1.3 mm. The reality is that many contributing errors led to not only the construction of a flawed device but the failure to detect the flaws before it was launched.
In an even more extreme example, the Chernobyl nuclear reactor had a catastrophic failure. It caused the direct deaths of 54 people, approximately 20,000 other people contracted cancer from radiation leaks, and almost 250,000 people were displaced. All of this resulted from supposed human error, where technicians violated protocols to allow the reactor to run at low power.
These are just a handful of well-known examples where users have been the point of entry for attacks. The DBIR also highlights W-2 fraud as a major type of crime involving data breaches. Thousands of businesses fall prey to this crime, which involves criminals pretending to be the CEO or a similar person and sending emails to human resources (HR) departments, requesting that an HR worker send out copies of all employee W-2 statements to a supposedly new accounting firm. The criminals then use those forms to file fraudulent tax refunds and/or perform other forms of identity theft. Again, these attacks are successful because some person makes a mistake.
NOTE If you are unfamiliar with U.S. tax matters, W-2 statements are the year-end tax reports that companies send to employees.
Other human failures can include carelessness, ignorance, lost equipment, leaving doors unlocked, leaving sensitive information insecure, and so on. There are countless ways that users have failed. Consequently, sometimes technology and security professionals speciously condemn users as being irreparably “stupid.” Of course, if technology and security professionals know all of the examples described in this section and don't adequately try to prevent their recurrence, are they any smarter? The following sections will examine the current approach to this problem and then how we can begin to improve on it.
There are a variety of ways to deal with expected human failings. The three most prevalent ways are awareness, technology, and governance.
As the costs of those failings have risen into the billions of dollars and more failings are expected, the security profession has taken notice. The general response has been to implement security awareness programs. This makes sense. If users are going to make mistakes, they should be trained not to make mistakes.
Just about all security standards require that users receive some form of awareness training. These standards are supposed to provide some assurance for third parties that the organizations certified, such as credit card processors and public companies, provide reasonable security protections. Auditors then go in and verify that the organizations have provided the required levels of security awareness.
Unfortunately, audit standards are generally vague. There is usually a requirement that all employees and contractors have to take some form of annual training. This traditionally means that users watch some type of computer-based training (CBT) that is composed of either monthly 3- to 5-minute sessions or a single annual 30- to 45-minute session. CBT learning management systems (LMSs) usually provide the ability to test for comprehension. Reports are then generated to show the auditors to prove the required training has been completed.
As phishing attacks have grown in prominence, auditors started to require that phishing simulations be performed. Organizations also unilaterally decided that they want phishing simulations to better train their users. Phishing simulations do appear to decrease phishing susceptibility over time. These simulations vary greatly in quality and effectiveness. As previously stated, this optimistically results in a 4 percent failure rate.
In general operational settings, training is provided, but there are few standards or requirements for such training. There may or may not be a safety briefing. There are sometimes compliance requirements for how people are to do their jobs, such as the handling of personally identifiable information (PII) in environments covered by regulations like the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). The PCI DSS even requires that programmers receive training in secure programming techniques. NIST SP 800-50, “Building an Information Technology Security Awareness and Training Program,” attempts a more rigorous structure in the context of the Federal Information Security Management Act (FISMA).
Unfortunately, awareness training, security-related or otherwise, is poorly defined and broadly fails at creating the required behaviors.
Independent of awareness efforts, IT or security technology professionals implement their own plans to try to reduce the likelihood of humans falling for attacks or otherwise causing damage. For the most part, these are preventative in nature. For example, a user cannot click on a phishing message if the message never gets to the user. For that reason, organizations acquire software that filters incoming email for potential attacks.
There are also different technologies that can stop attacks from being completed. For example, data leak prevention (DLP) software reviews outgoing data for potentially sensitive information. An example would be if a file attached to an email contains Social Security numbers or other PII, DLP software should catch the email before it goes outside the organization.
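As a minimal sketch of that idea (our illustration, not how any particular DLP product works), consider a rule that flags outgoing text containing something shaped like a U.S. Social Security number:

```python
import re

# One illustrative detection rule of the kind DLP tools apply to outbound
# content. This only flags text that matches the common U.S. SSN format.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_possible_ssn(text: str) -> bool:
    """Return True if the outgoing text appears to contain an SSN."""
    return bool(SSN_PATTERN.search(text))

outgoing = "Please process payroll for Jane Doe, SSN 123-45-6789."
if contains_possible_ssn(outgoing):
    print("Quarantine the message for review before it leaves the organization.")
```

Real DLP products layer many richer techniques, such as keyword proximity rules and document fingerprinting, but the principle is the same: inspect outbound content before it leaves.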
The purchase of these technologies is generally ad hoc and varies by organization. While awareness and phishing simulation programs are generally accepted as a best practice, there are no universally accepted best practices for many specific technologies, with a few notable exceptions such as anti-malware software, which is a staple of security programs.
Cloud providers like Google and Microsoft are becoming increasingly proficient at building effective anti-phishing capabilities into their platforms like Gmail and Office 365. As a result, many organizations are considering whether purchasing third-party solutions is even necessary. Either way, every software solution has its limitations, and no single tool (or collection of tools) is a panacea.
Although we discuss governance in more detail in Chapter 13, “Governance,” for an initial introduction it is sufficient to know that governance is supposed to be guidance or specification of how organizational processes are to be performed. The work of governance professionals involves the specification of policies, procedures, and guidelines, which are embodied in documents.
These documents typically reflect best practices in accordance with established laws, regulations, professional associations, and industry standards. In theory, governance-related documents are expected to be living documents and used for enforcement of security practices, but it is all too common that governance documents only see the light of day during a yearly ritual of auditors reviewing them for completeness in the annual audit.
In an ideal world, governance documents should cover how people are to do their jobs in a way that does not make them susceptible to attacks and in a way that their work processes do not result in losses. This includes how specific actions are to be taken and how specific decisions are to be made in performing job functions.
That ideal world represents the embodiment of a system. A good example of this is McDonald's. Generally, McDonald's expects to hire minimally qualified people to deliver a consistent product anywhere in the world. This involves specifying a process and using technology to consistently implement that process. Although people may be involved in performing a function, such as cooking and food preparation, technology is now driving those processes. A person might put the hamburgers on a grill, but the grill is automated to cook the hamburgers for a specific time at a given temperature. The same is true for french fries. Even the amount of ketchup that goes on a hamburger is controlled by a device. Robots control the drink preparation. McDonald's is now distributing kiosks to potentially eliminate cashiers. Although a fast-food restaurant might not seem to be technology-related, the entire restaurant has become a system, driven by governance that is implemented almost completely through technology.
We described in the book's introduction how the scuba and loss prevention industries look at the concept of mitigating loss as a comprehensive strategy. When organizations fail to do this, they attempt to implement random tactics that are not cohesive and do not support each other. For example, if you think the fact that users create loss is an awareness failing and that the solution is better awareness, you are focusing on a single countermeasure. This approach will fail.
A comprehensive strategy is required to mitigate damage resulting from user actions. This book provides such a strategy. This strategy is something that should be applied to all business functions, at all levels of the organization. Wherever there can be a loss resulting from user actions or inactions, you need to proactively determine whether that loss is worth mitigating and then how to mitigate it.
NOTE Implementing the strategy across the entire business at all levels doesn't mean that every user needs to actively know and apply the depth and the breadth of the entire strategy. (The fry cook doesn't need to know how the accounting department works, and vice versa.) The team that implements the strategy coordinates its efforts in a way that informs, directs, and empowers every user to accomplish the strategy in whichever ways are most relevant for their role.
In an ideal world, you will always look at any user-involved process and determine what damage the user can initiate and how the opportunity to cause damage may be removed, as best as possible. If the opportunity for damage cannot be completely removed, you will then look to specify for the user how to make the right decisions and take the appropriate actions to manage the possibility of damage. You then must consider that some user will inevitably act in a way that leads to damage, so you consider how to detect the damaging actions and mitigate the potential for resulting loss as quickly as possible.
Minimally, when you come across a situation where a user creates damage, you should no longer think, “Just another stupid user.” You should immediately consider why the user was in a position to create damage and why the organization wasn't more effective in preventing it.
Users inevitably make mistakes. That is a given. At the same time, within an environment that supports good user behavior, users behave reasonably well. The same weakest link who creates security problems and damages systems can also be an effective countermeasure that proactively detects, reports, and stops attacks.
While the previous statements are paradoxically true, the reality is that users are inconsistent. They are not computers that you can expect to consistently perform the same function from one occurrence to the next. More important, all users are not alike. There is a continuum across which you can expect a range of user behaviors.
It is a business fact that users are part of the system. Some users might be data entry workers, accountants, factory workers, help desk responders, team members performing functions in a complex process, or other types of employees. Other users might be outside the organization, such as customers on the Internet or vendors performing data entry. Whatever the case, any person who accesses the system must be considered a part of the system.
Clearly, you have varying degrees of authority and responsibility for each type of user, but users remain autonomous, and you never have complete authority over them. Therefore, to consider users to be anything other than a part of the system will overlook their capacity to introduce errors and cause security breaches and thus lead to failure. The security and technology teams must consider the users to be one more part of the system that needs to be facilitated and secured. However, without absolute authority, from a business perspective, you must never consider users to be a resource that can be consistently relied upon.
It is especially critical to note that the technology and security teams rarely have any control over the hiring of users. Depending upon the environment, the end users might not be employees, but potentially customers and vendors over whom there is relatively little control. The technology and security teams have to account for every possible end user of any ability.
Given the limited control that technology and security teams have over users, it is not uncommon for some of these professionals to think of users as the weakest link in the system. However, doing so is one of the biggest copouts in security, if not technology management as a whole.
Users are not a “necessary evil.” They are not an annoyance to be endured when they have questions. Looking down upon users ignores the fact that they are a critical part of the system that security and technology teams are responsible for. In some cases, they might be the reason that these teams have a job in the first place.
It is your job to ensure that you proactively address any expected areas of loss in the system, including users. Users can only be your weakest link if you fail to mitigate expected user-related issues such as user error and malfeasance.
Perhaps one of the more notable examples is that of the B-17 bomber troubles. Clearly, a pilot is a critical part of flying the airplane. They are not just a “user” in the most limited sense of the term. When the B-17 underwent the first test flights in 1935, it was the most complex airplane at that time. The pilots chosen as the test pilots were among the top pilots in the country. Yet, these top test pilots crashed the plane. The reason was that they failed to disengage a locking mechanism on the flight controls.
It was determined that the pilots were overwhelmed by the complexity and made a simple mistake. As the pilots were a critical part of the system, removing them was not an option. They were highly experienced and trained professionals, so the problem was not that they were poorly trained. The government could have sent the pilots for additional training, but retraining top pilots in the basics of how to fly the plane was not going to be an efficient approach. Instead, they recognized that the problem was that the complexity of the airplane was overwhelming.
The solution was the implementation of a checklist to detail every basic step a pilot had to take to ensure the proper functioning of the airplane. Similar problems have since been solved for astronauts and surgeons, among countless other critical “pieces of the system.”
Users can be both a blessing and a curse. For the most part, if the rest of the system is designed appropriately, users will behave accordingly. At the same time, you must understand and anticipate that despite your best efforts, users sometimes do the wrong thing.
For example, in one case that is unfortunately not isolated, author Ira Winkler ran a phishing simulation against a large company. Employees were sent a message that contained a link to a résumé. The sender claimed that one of the company's owners suggested they contact the recipient for help in referring them to potential jobs. If the employees clicked on the fake résumé, they received a message explaining that the email was fake and how they should have recognized it as such. In at least one case, the user replied, still believing it was a real message, saying there was a problem with the attached résumé. This sort of phishing training exercise can improve some user behaviors, but it is certainly far from making users a foolproof part of the system.
Anticipating how people will behave helps you design better systems to capitalize on predictable behaviors, leading to better security. Even though people make mistakes, good systems should anticipate that and not break when they do.
When we use the term users throughout the book, it might seem that we are implying end users or low-level workers. The reality is that we mean anyone with any function. This can be managers, software developers, system administrators, accountants, security team members, and so on.
Anyone who has a job function or access that can result in damage is technically a user. Administrators can accidentally delete data and cause system outages. Security guards can leave doors open. Managers can leave sensitive documents in public places. Auditors can make accounting errors. Everyone is a user at some level.
Our use of the term users can also include outside contractors, customers, suppliers, cloud service providers, or anyone else who interacts with your organization. If they can take an action that can potentially cause harm to your organization, they must be considered in your risk model.
Cloud services and remote workers create additional concerns, where you potentially lose control over your information and users. For example, if a user goes into Starbucks and uses the free WiFi to connect to your network, that user creates a whole new class of users, increasing the risk profile. Cloud services change the profile of your users, given that access control methods change to allow for someone to theoretically log in from anywhere in the world. The risk can be mitigated, but you have to plan for it.
Perhaps some of the more overlooked groups of users are the people who are responsible for mitigating risk. They tend to look at the errors caused by others and believe that they themselves would have never caused the errors. This causes two types of problems.
The first is that if they don't conceive that an error can occur, they cannot proactively mitigate it. We have been on software test teams and found problems with potential uses of the software and told the developers. The developers have often responded that “nobody would ever do that” and fought us on implementing the fixes.
The second issue is that the risk mitigation teams, like information security, IT, physical security, operations, and so on, don't perceive themselves as being the source of errors. They do not believe they will make mistakes. They can have tremendous privileges and access, which provides the capabilities for their errors to create more damage than any normal user would.
Although the natural assumption is that user-initiated loss happens through ignorance or carelessness, a great deal of damage is caused by malicious users. The 2018 DBIR found that 28 percent of incidents result from malicious insiders who have clear intent to either steal something of value or create other forms of damage. That is a staggering number.
More critical is that malicious insiders typically know the best ways to access whatever it is they are trying to steal or destroy. Additionally, if they are intelligent in their planning and execution, they might be able to identify and bypass your protection, detection, and reaction capabilities.
When malice is involved, awareness efforts can sometimes even work against an organization. Awareness efforts typically educate people about how malicious actors accomplish their goals. This provides your malicious employees with information about how they too can commit those types of crimes. It also gives them ideas about how and where you allocate defensive resources and what countermeasures they need to bypass. Clever malicious insiders use this information to improve their own attacks.
As a percentage of overall users, the number who will launch malicious attacks, let alone succeed at them, is fortunately small. Even so, the reality is that malicious users exist, so you must account for them. There have been various studies that have shown that a small percentage of users create the most damage. This is intuitively obvious. Such users will always exist. The best you can do is acknowledge this reality and prepare for them.
Users need to perform their jobs properly in a fundamentally safe and secure manner. You need to ensure that security is embedded in job functions and that people know how to perform those functions properly. This should be well defined, and just like any other job function, you should set the expectation for those users to follow those definitions. We would love to say that you should also expect users to be fundamentally aware of security concerns beyond what is specifically defined, but that will not likely happen on a consistent basis.
Therefore, businesses should factor the users' limited awareness into their risk management calculations and plans. You should provide awareness training and opportunities to further reduce risk. Although we don't want organizations to rely too strongly on awareness, it is a critical component of any security program to reduce risk.
Although user ignorance can be partially improved with training, carelessness is another matter. Assuming you have properly instructed users in how they should perform their functions, if some users still consistently violate policies and cause damage, you may need to take disciplinary action against them.
Beyond ignorance and carelessness, you also must account for malicious actions. We discussed this in the previous section, and we will explore options to address it as we discuss security measures throughout the book.
It is important to follow our recommended strategies to ensure that your systems reduce the opportunities for users to make errors or cause malicious damage and then mitigate any remaining potential harm. Then regardless of whether the harmful actions are due to malice, ignorance, or carelessness, your environment should be far more powerfully positioned to minimize or even stop the resulting damage.
Users are expected to, and do, make mistakes, and some attempt to maliciously cause damage. However, those actions do not have to result in damage. There is a tendency to place all of the blame for mistakes on users. Instead, a better approach is to recognize the relationship between users and loss and work to improve the system in which they exist.
For this reason, we will use the term user-initiated loss (UIL), which we define as loss, in some form, that results from user action or inaction. As Chapter 2, “Users Are Part of the System,” discussed, users are not just employees but anyone who interacts with and can have an effect on your system. These actions can be a mistake, or they can be a deliberate, malicious act. Obviously, sometimes the system is attacked by an external entity, so the attack itself is not user-initiated. But when the user initiates an action that enables the attack to succeed, the user's action has initiated the actual loss.
It is important to also note that not all mistakes or malicious acts result in loss, and not all loss happens when the action takes place.
First, we must consider that some actions might not be sufficient to result in loss, or the loss may be prevented. For example, if a person clicks to open a ransomware program delivered in a phishing message but does not have admin privileges on their system, the ransomware should not be able to encrypt the system.
Then we must consider that should there be loss, the loss may or may not happen immediately. A data entry or similar error may take years to create a problem, if it creates one at all, like the iconic error with the Hubble Space Telescope referred to in Chapter 1, where the error wasn't realized until the telescope was already in orbit and ultimately required $150,000,000 in repairs. This error was years in the making.
The Target, Sony, OPM, and Equifax hacks all happened over a period of time. Each began with some form of user action or inaction as the initial attack vector. However, none of them had to result in massive damage from that single user failing. Yes, an Equifax employee was slow in patching a new vulnerability, but the massive data breach would not have occurred without the systemic technical failings within the Equifax infrastructure, especially given that the thefts took months to complete.
These examples begin to imply some potential solutions for UIL. However, before we begin exploring solutions, we intend to set a foundation of understanding the types of losses that may be initiated through user actions. With this foundation, we can then discuss how to avoid putting users in a position where they might initiate loss, instruct them how to take better actions, and then prevent their actions from resulting in loss. We will also explore how to take the opportunity away from malicious actors, as well as how to detect and mitigate the malicious acts.
Because there are an infinite number of user actions and inactions that can result in loss, it is helpful to categorize those actions. This allows you to identify which categories of user error and malice to consider in your environment and what specific scenarios to plan to mitigate. This chapter will examine some common categories where UIL occurs. We'll begin by considering processes, culture, physical losses, crime, user error, and inadequate training. Then we'll move on to technology implementation. Future chapters will explore ways of mitigating UIL.
Although this might seem to have no direct relationship to the users, how your organization specifies work processes is one of the biggest causes of UIL. Every decision you make about your work processes determines the extent to which you are giving the user the opportunity to initiate loss.
Clearly, the user has to perform a business function. If you can theoretically remove people from processes, you can reduce all UIL associated with those processes. For example, in fast-food restaurants, cashiers have the ability to initiate loss in multiple categories. A cashier can record the order incorrectly. This causes food waste and poor customer satisfaction, which can reduce profit and impede future sales. A cashier can also make mistakes in the handling of cash. They might miscount change, steal money, or be tricked by con artists. These are just a few of the problems. Restaurant chains understand this and implement controls within the process to reduce these losses. McDonald's, however, is going even further to control the process by implementing kiosks where customers place their orders directly into a computer system. This removes all potential loss associated directly with the cashiers.
Obviously, there are a variety of potential losses that are created by removing a human cashier from the process (such as loss of business from customers who find interacting with a kiosk too complicated), but those are ideally accounted for within the revised process. The point is that the process itself can put the user in the position to create UIL, or it can remove the opportunity for the user to initiate loss.
A process can be overly complicated and put well-intentioned users in a position where it is inevitable that they will make mistakes. For example, when you have users implement repetitive tasks in a rapid manner, errors generally happen. Such is the case with social media content reviewers. Facebook, for example, through outside contractors, pays content moderators low wages and has them review up to 1,000 reported posts a day. (See “Underpaid and Overburdened: The Life of a Facebook Monitor,” The Guardian, www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content.) This can mean that legitimate content is deleted, while harmful content remains. The situation is ripe for UIL and also for causing significant harm to the content moderators, who have stress both from the working conditions and from reviewing some of the most troubling content on the Internet.
A process may also be poorly defined and give users access to more functionality and information than they require to perform their jobs. For example, companies used to attach credit card numbers to an entire sales record, and the credit card numbers were available to anyone in the entire fulfillment process, which included people in warehouses. Payment Card Industry Data Security Standard (PCI DSS) requires that only people who need access to the credit card numbers can actually access the information. Removing access to the information from all but those with a specific requirement to access it reduces the potential for those people to initiate a loss, maliciously or accidentally.
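As a hypothetical illustration of that least-privilege idea (the role names and record layout are ours, not part of PCI DSS), a system can simply refuse to reveal the full card number to any role without a documented need:

```python
# Invented roles and data model, for illustration only.
ROLES_WITH_CARD_ACCESS = {"payment_processor", "fraud_analyst"}

def sales_record_view(record: dict, role: str) -> dict:
    """Return a copy of the sales record, masking the card number for roles
    without a need to see it."""
    view = dict(record)
    if role not in ROLES_WITH_CARD_ACCESS:
        view["card_number"] = "****-****-****-" + record["card_number"][-4:]
    return view

record = {"order_id": 1001, "item": "Widget", "card_number": "4111111111111111"}
print(sales_record_view(record, "warehouse_staff"))      # card number masked
print(sales_record_view(record, "payment_processor"))    # full number visible
```

The warehouse role sees only the last four digits, so a mistake or malicious act by that user cannot expose full card numbers.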
Processes can also lack checks and balances that ensure that when a loss is initiated, it is mitigated. For example, well-designed financial processes regularly have audits to ensure transactions are validated. A financial process that does not have sufficient audits is ripe for abuse by insiders and crime from outsiders. In one case, we worked with a nonprofit organization and found that they paid thousands of dollars to criminals who sent the organization invoices that looked real. However, when we asked what the invoices were specifically for, it turned out that nobody knew. They modified the process to ensure that future invoices required internal approval by a stakeholder familiar with the charges. Clearly, establishing proper checks and balances is equally important for anyone who has access to data and information services as well.
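A minimal sketch of that internal-approval control (the data model and names are invented for illustration) might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# No invoice is paid unless a stakeholder familiar with the charge vouched for it.
@dataclass
class Invoice:
    vendor: str
    amount: float
    approved_by: Optional[str] = None  # stakeholder who confirmed the charge

def can_pay(invoice: Invoice) -> bool:
    """Pay only invoices that carry an internal approval."""
    return invoice.approved_by is not None

unsolicited = Invoice(vendor="Totally Real Consulting LLC", amount=4800.00)
print(can_pay(unsolicited))  # False: nobody inside the organization vouched for it
```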
All processes need to be examined to ensure that users are provided with minimum ability to create loss. Additionally, all organizations should have a process in place to prevent, detect, and mitigate the loss should a user initiate it.
Establishing a great process is awesome. However, as stated in the famous Peter Drucker quote, “Culture eats strategy for breakfast.”
Consider all of the security rules that exist in an organization. Then consider how many are usually followed. There are generally security rules that are universally followed and those that are universally ignored.
As consultants, we are frequently issued badges when we arrive at a client's facility. We diligently don the badges, at least until we walk around and determine that we are the only people actually wearing one. While we intend to adhere to security policies, we also have a need to fit in with and relate to the people inside the organization. The abandoned badge is a symptom of a culture where security policies inspire people to ignore them.
Conversely, if many people in an office lock their file cabinets at the end of the day or whenever they leave their desk, most of their colleagues will generally do the same. Culture is essentially peer pressure about how to behave at work. No matter what the defined parameters of official behavior are within the organization, people learn their actual behavior through mirroring the behavior of their peers.
Culture is very powerful, enabling vast amounts of UIL and facilitating losses in other categories. If your culture doesn't adequately support and promote your processes, training, and technology implementation, then crime, physical losses, and user errors all increase as a consequence. Let's consider some examples where culture can be shown to have a direct relationship to UIL.
