The Active Defender: Immersion in the Offensive Security Mindset

Catherine J. Ullman

Description

Immerse yourself in the offensive security mindset to better defend against attacks.

In The Active Defender: Immersion in the Offensive Security Mindset, Principal Technology Architect, Security, Dr. Catherine J. Ullman delivers an expert treatment of the Active Defender approach to information security. In the book, you'll learn to understand and embrace the knowledge you can gain from the offensive security community. You'll become familiar with the hacker mindset, which allows you to gain emergent insight into how attackers operate and better grasp the nature of the risks and threats in your environment. The author immerses you in the hacker mindset and the offensive security culture to better prepare you to defend against threats of all kinds.

You'll also find:

Explanations of what an Active Defender is and how that differs from traditional defense models

Reasons why thinking like a hacker makes you a better defender

Ways to begin your journey as an Active Defender and leverage the hacker mindset

An insightful and original book representing a new and effective approach to cybersecurity, The Active Defender will be of significant benefit to information security professionals, system administrators, network administrators, and other tech professionals with an interest or stake in their organization's information security.




Table of Contents

Cover

Title Page

Foreword

Preface

Who Is This Book For?

Introduction

Defense from a Different Perspective

Where We Are Now

How Did We Get Here?

Active Defense

What Keeps Us Stuck?

The Missing Piece

What Is Covered in This Book?

Notes

CHAPTER 1: What Is an Active Defender?

The Hacker Mindset

Traditional Defender Mindset

Getting from Here to There

Active Defender Activities

Active Defense

Active > Passive

Active Defender > Passive Defender

Return to the Beginning

Summary

Notes

CHAPTER 2: Immersion into the Hacker Mindset

Reluctance

A Leap of Faith

Finding the Community

An Invitation

Summary

Notes

CHAPTER 3: Offensive Security Engagements, Trainings, and Gathering Intel

Offensive Security Engagements

Offensive Security Trainings

Gathering Intel

Summary

Notes

CHAPTER 4: Understanding the Offensive Toolset

Nmap/Zenmap

Burp Suite/ZAP

sqlmap

Wireshark

Metasploit Framework

Shodan

Social‐Engineer Toolkit

Mimikatz

Responder

Cobalt Strike

Impacket

Mitm6

CrackMapExec

evil‐winrm

BloodHound/SharpHound

Summary

Notes

CHAPTER 5: Implementing Defense While Thinking Like a Hacker

OSINT for Organizations

Threat Modeling Revisited

LOLBins

Threat Hunting

Attack Simulations

Summary

Note

CHAPTER 6: Becoming an Advanced Active Defender

The Advanced Active Defender

Automated Attack Emulations

Using Deceptive Technologies

Working with Offensive Security Teams

Purple Teaming – Collaborative Testing

Summary

Notes

CHAPTER 7: Building Effective Detections

Purpose of Detection

Funnel of Fidelity

Building Detections: Identification and Classification

Overall Detection Challenges

The Pyramids Return

Testing

Summary

Notes

CHAPTER 8: Actively Defending Cloud Computing Environments

Cloud Service Models

Cloud Deployment Environments

Fundamental Differences

Cloud Security Implications

Cloud Offensive Security

Defense Strategies

Summary

Note

CHAPTER 9: Future Challenges

Software Supply Chain Attacks

Counterfeit Hardware

UEFI

BYOVD Attacks

Ransomware

Frameworks

Living Off the Land

API Security

Everything Old Is New Again

Summary

Notes

Index

Copyright

Dedication

About the Author

About the Technical Editor

Acknowledgments

End User License Agreement

List of Illustrations

Chapter 4

Figure 4.1: Screenshot of Nmap

Figure 4.2: Screenshot of Burp Suite

Figure 4.3: Screenshot of sqlmap

Figure 4.4: Screenshot of Wireshark

Figure 4.5: Screenshot of Metasploit

Figure 4.6: Screenshot of Shodan

Figure 4.7: Screenshot of Social‐Engineer Toolkit

Figure 4.8: Screenshot of Mimikatz

Figure 4.9: Screenshot of Responder

Figure 4.10: Screenshot of Cobalt Strike

Figure 4.11: Screenshot of Impacket

Figure 4.12: Screenshot of mitm6

Figure 4.13: Screenshot of CrackMapExec

Figure 4.14: Screenshot of evil‐winrm

Figure 4.15: Screenshot of BloodHound

Chapter 7

Figure 7.1: Pyramid of Pain

Figure 7.2: TTP Pyramid

Chapter 8

Figure 8.1: Cloud service types

Figure 8.2: ABAC

Figure 8.3: RBAC

Figure 8.4: AWS AssumeRole

Figure 8.5: Azure managed identity

Figure 8.6: Dangerous implied trust

Figure 8.7: Contributor Access

Figure 8.8: IDMS exploit



The Active Defender

Immersion in the Offensive Security Mindset

 

 

Dr. Catherine J. Ullman

 

 

 

 

 

 

Foreword

When I was a US government hacker, I regularly contemplated how system administrators could take trivial steps to make my job infinitely harder. Later in my career, as an incident responder helping victimized organizations around the world, I saw the same: system admins not taking basic steps to stop attacks. But this time I was in a different position. I could talk to them and understand why they made the choices they did to defend their networks. What I learned completely changed my world view.

Most system admins I talked to weren't simply ignoring security best practices as I had assumed from my position on the other side of the keyboard. The vast majority were simply devoting their limited time and resources to the wrong things. When I talked them through how attackers viewed the steps they were taking, I was met with near universal shock. Some were even disgusted, feeling that they had been misled about the most important steps to take in securing a system. The security of these systems is their charge after all, and many system admins take it personally when their systems are compromised.

In these discussions, it became clear that the root cause of incorrectly prioritizing security controls was their mindset. System administrators were learning security “best practices” from other, more senior system administrators. This is a fine model for many professions but fails spectacularly in the face of a sentient adversary. When you want to learn how to stop a thief, it's true that talking to a locksmith provides some value. But talking to a thief is the only way to truly understand how an adversary views the security controls emplaced by defenders. We see the same with cybersecurity. If you want to know how to stop hackers from breaching your systems, talk to a hacker.

Of course this trite “talk to a hacker” advice has several pitfalls. For one, where does the average system administrator find a hacker to talk to in the first place? Even assuming they can find one, how can they vet the hacker? Listening to bad (e.g., negligent or intentionally harmful) advice is probably worse than doing nothing at all. While some large organizations have the resources to find and vet “good hackers,” most system admins simply lack this luxury.

But there's another critical problem with the “just ask a hacker” trope. Whether it is the deep technical understanding of their craft, that hacking disproportionately draws neuro‐atypical people, or some combination of other factors, getting answers in ordinary terms is often more challenging than locating the hacker in the first place. The fact that many hackers have no experience with enterprise defense makes any recommendations all the more difficult to follow. Junior cybersecurity analysts often experience similar struggles getting their bearings in this community. This book will serve them well too.

What system administrators needed were clear and actionable explanations of where to focus their limited resources, from an attacker perspective, communicated in ways they can understand. This book distills years of experience in thinking like an attacker and communicates it clearly to readers with no background in offensive cyber. To put it bluntly, the information technology and cybersecurity industries have sorely needed this book for years. If my incident response customers mentioned earlier had been reading this book, I'd have had a lot less business.

Cathy is the perfect person to write this book—one the community so dearly needs. As Cathy describes in her Preface, she was a system administrator who pivoted into the role of senior information security analyst. That alone would make her qualified to write this book. But there's another factor that makes Cathy uniquely qualified to discuss securing systems from an attacker's mindset.

With more than 20 years of experience working at the same university, Cathy is now securing some of the same systems that she doubtlessly administered more than a decade ago. Very few people have this type of lens to look through. In fact, I'd be surprised if you can point me to another cybersecurity professional with 10 years' system administration experience at an organization who now has another 13 years' experience as an information security professional at the same organization. This provides the reader with unparalleled perspective, not only of how defenders should think about offensive cybersecurity but also of the specific challenges they may face in operationalizing her recommendations. This type of experience (akin to a longitudinal study) is sorely missing in cybersecurity, and this book also addresses that gap.

But beyond all that, Cathy is just a great human being and I'm a better person for knowing her. She has firmly embedded herself into the cybersecurity community and is a regular fixture at the DEF CON Packet Hacking Village and BSides Rochester (among others). Her volunteer work, presentations, and mentoring of others would be an amazing body of work in its own right.

I was aware of Cathy's contributions in the cybersecurity community prior to formally meeting her a few years ago. We instantly bonded over our common work as emergency first responders (mine prior, hers current). When Cathy told me about her idea for this book, I was certain it would fill an important void. When Wiley signed on to publish the book and Cathy asked me to be the technical editor, I was even more excited.

I'll close by thanking Cathy for giving me the honor of being technical editor and writing the foreword you're reading now. I'm really excited for the value this book will bring to countless system administrators, junior information security analysts, or anyone else who wants more background in offensive security. I hope you have as much fun reading this book as I had watching it come together. I'm certain you'll come away with more secure systems after reading it.

— Jake Williams (aka MalwareJake)

Preface

In the early days of computing, neither the term cybersecurity nor the term information security existed. Instead, the original goals of computer security focused on government risk assessment, policy, and controls. By the mid-1980s, the ARPANET was expanding rapidly, the personal computer boom had begun, and companies were starting to use this thing called the Internet for communications, leading to concerns about security and privacy. At about the same time, some people were trying to gain access to what were then relatively primitive computer networks by nibbling mostly around the edges of equally primitive security measures. Sometimes their goal was just to see where they could go and what they could access. Others had more nefarious goals. Still others, however, had legal access via equipment located in their homes by virtue of a parent's employment responsibilities, providing a fairly unusual early exposure to the technology of the time. I was one of these lucky people, and this book is an indirect consequence of that exposure.

Like many Gen‐Xers, I was fortunate to have access to a personal computer in my home and at school from a very early age. However, unlike most children of that generation, my first foray into computing began not with a personal computer but with a mainframe terminal located in my house. We acquired this terminal as the result of my father taking a job running the college's computing center in a small town in 1977. My earliest tinkering with this terminal involved mostly playing text‐based games, such as Adventure, Zork, and Wumpus, but I also was able to use the early equivalent of instant messaging to chat with my father while he was at work. Uniquely, not only was there no telecom charge for the use of this terminal, there was no need to hear the screech of a modem, tie up the local phone line, or pay for the call, because terminal connectivity to the college was facilitated by a dry copper line. What I never realized was that this early exposure to the technology, which I took for granted, planted seeds that led me into an information security career.

My interest in computer security started long before I was officially a member of a computer security team. My earliest involvement in computer security revolved around updating antivirus software and patching the Windows NT 4.0 machines I was responsible for supporting. Installing intrusion detection/firewall software on servers was still a new concept, but it was a requirement on our systems that I had self‐imposed. I was tenacious enough to prevent an outbreak of Nimda in our offices by returning to work after getting a tip to patch immediately. I also had the opportunity to learn the basics of incident response from a friend who had been attending SANS incident response training and was willing to share his knowledge. Over time, I became one of the people called when a machine was behaving strangely. Eventually, I took some formal training in digital forensics and landed my first job as an information security analyst. Always looking for an opportunity to grow my skills, I got involved with a cyber‐defense course where I met my first offensive security practitioner. Intrigued by the experiences he shared with the group, I asked how I could learn more about offensive security. He recommended attending something called a Security BSides conference. I chose to attend BSidesROC (BSides Rochester), because it was the closest to my location. Not only did it open a whole new world to me, it also helped me realize what I had been missing as a defender. Today, I am one of the organizers of that conference.

Protecting computers, networks, software programs, and data from attack, damage, or unauthorized access is a difficult job, as evidenced by the fact that successful attacks continue to be on the rise. This book introduces the idea of the Active Defender as an alternative approach to the way cybersecurity defense has typically been practiced. The traditional approach is usually passive or reactive, waiting to respond to alerts or other indications of attack. The Active Defender, by comparison, is someone who seeks to understand a hacker mindset and embraces the knowledge gained from the offensive security community in order to be more effective. Offensive security entails testing the defensive mechanisms put in place to determine whether they can prevent attacks or at least detect them once they have occurred. Unfortunately, many defenders are either unaware of offensive security or choose to avoid it. In either case, they are missing half the larger story and are thus at a significant disadvantage. Immersion in the offensive security community gives the defender a broader, more comprehensive view of the effectiveness of their detections and defenses, along with many additional resources to further their understanding, all of which will be covered in the subsequent chapters.

Cybersecurity suffers when defenders don't know what they're missing. However, in the same way that I was unaware of the unique situation I had and the power it granted me as a child, many defenders are unaware of the additional power they already possess to be better at defensive security. Ultimately, being an Active Defender involves using many of the same sorts of tools, skills, and access that the traditional defender already has, but in a unique way. The Active Defender is, in a sense, a defensive hacker and, because I know the term hacker comes with some baggage, we'll be discussing the term further in Chapter 2, “Immersion into the Hacker Mindset.”

The inspiration for this book came from my own eye‐opening experiences while immersing myself in the offensive security world. These experiences helped me realize that not only was there more to effective defense than I previously understood, but, as I also discovered, many other defenders were equally in the dark. As they say, knowledge is power. Therefore, I wanted to share my discoveries so that full‐time defenders like myself as well as those who are responsible for securing their particular environments, such as system administrators and network administrators, might benefit from them as well. Once a defender opens the door to the offensive security world, they too can be Active Defenders and take their defensive skills to the next level.

Who Is This Book For?

This book is for anyone tasked with cybersecurity defense in general, including those in security-specific roles such as information security analysts, SOC analysts, security engineers, security administrators, security architects, security specialists, and security consultants. It is also meant for people whose jobs involve aspects of security, such as system administrators, network administrators, developers, and people interested in transitioning to information security roles. Realistically, all information technology roles, including but not limited to IT support, engineers, analysts, and database administrators, are responsible for some elements of security, regardless of whether it is part of their formal job description. Everyone should be cognizant of the role they play in securing their environment rather than it being only the purview of one group.

Regardless of your security role, this book will help you shift from a traditional passive or reactive defensive mindset to cultivating a hacker mindset and becoming an Active Defender. As a result, you'll gain a more intimate understanding of the threats you're defending against. Ready to get started?

Introduction

Defense from a Different Perspective

A team of security analysts is working diligently around the clock, monitoring for alerts and working to prevent attackers from entering the network. They detect and contain any intrusions that slip by the preventative measures in place. A criminal threat actor sends a successful phishing email with a link that downloads malicious software, bypassing the company's antivirus detections. The attacker has now gained entry, unbeknownst to the security analysts. The attacker goes to work, disabling security software to hide their activities and using built-in operating system tools to blend in with legitimate user activity. As a result, no security alerts fire. The security analysts continue to work, unaware of the malicious activity happening right under their noses. Shortly thereafter, the company receives a notification that a number of enterprise passwords and other company secrets have been compromised and made available for sale to other bad actors—all because the defenders were acting like traditional defenders and not thinking like members of the offensive community. Had they done so, they might have had a chance to avoid this catastrophic and all-too-common outcome. Enter the Active Defender.

The Active Defender is an alternative approach to the way cybersecurity defense has typically been practiced. The traditional approach is usually passive or reactive, waiting to respond to alerts or other indications of attack. The Active Defender, by comparison, is someone who seeks to understand an attacker mindset and embraces the knowledge gained from the offensive security community in order to be more effective. While we'll explore what, exactly, an Active Defender is in Chapter 1, let's first define the notions of defensive and offensive security teams used here.

In the broadest sense, defensive security teams consist of security professionals who are responsible for defending an organization's information systems against security threats and risks in the operating environment. They may work toward identifying security flaws, verifying the effectiveness of security measures put in place, and continuing to monitor the effectiveness of any implemented security measures. They may also provide recommendations to increase the overall cybersecurity readiness posture—in other words, how ready an organization is to identify, prevent, and respond to cybersecurity threats. I will be using defensive security teams here to also include folks who are responsible for securing the services they provide, such as system and network administrators, as well as developers, who are also responsible for operational functions, because it is not unusual for smaller organizations to rely solely on these folks for their cybersecurity needs.

Offensive security teams, on the other hand, consist of security professionals who are responsible for testing the defensive mechanisms put in place to protect an organization's information systems to determine whether they prevent attacks or at least detect them once they have occurred. One team of offensive security professionals might be responsible for penetration testing (pen testing). Pen testing goes only as far as mapping the risk surface of an application or organization to evaluate the potential routes of exploitation for an attacker and then testing those routes to see whether they are in fact exploitable. Other offensive security teams may be responsible for full adversarial emulation, which fully emulates a high-capability and/or well-resourced, goal-driven adversary attempting to compromise the network environment to achieve a set of operational goals. The goal of this activity is to assess threat readiness and response. Historically, these teams have sometimes been referred to as blue and red teams, respectively, but the imprecise nature of this terminology is problematic. Therefore, I will be using some form of defensive and offensive security professionals throughout this book.

Where We Are Now

Defensive security teams continue to be up against some pretty significant challenges. According to research provided by the Identity Theft Resource Center (ITRC), the number of data breaches reported as of the end of 2022 was 1,802, the second highest number of publicly reported data compromise events in a single year.i The average cost of a data breach in 2022 increased by 2.6 percent compared to the previous year, from $4.24 million to an all-time high of $4.35 million.ii Yet, organizations are spending more than ever on cybersecurity: spending rose another 12.1 percent from the previous year and was expected to surpass $219 billion before the end of 2023.iii

Compromised credentials continued to be the most common initial attack vector, costing organizations an average of $4.57 million.iv The average time to discover and contain an attacker in an organization's network in 2022 was 277 days, down only slightly from 287 in 2021.v Furthermore, both the 2021 Verizon Data Breach Investigations Report (DBIR) and the Ransomware Report 2023 found that, overall, attackers continue to exploit older vulnerabilities most often.vi Therefore, it should be no surprise that one area in particular with which organizations continue to struggle is vulnerability management.

A vulnerability is any weakness, such as a hardware or software flaw or a misconfiguration, that a threat actor can use to gain unauthorized access to, or privileged control over, an application, service, endpoint, or server. For example, there may be an unpatched weakness in the operating system on a server that would allow a threat actor to log in without needing the proper credentials. Vulnerability management is generally defined as the process of identifying, categorizing, prioritizing, and resolving vulnerabilities in hardware and software before they can cause harm or loss. It is important to note that vulnerability management is not the same as a vulnerability assessment. The former is a full process, whereas the latter is a point-in-time view of discovered vulnerabilities.
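To make the prioritization step of that process a bit more concrete, here is a minimal, hypothetical sketch in Python. The findings, field names, CVE identifiers, and exposure weighting are illustrative assumptions only, not part of any particular scanner or standard.

    # Hypothetical sketch: prioritizing discovered vulnerabilities.
    # The fields and weights below are illustrative assumptions only.

    findings = [
        {"host": "web01", "cve": "CVE-2021-0001", "cvss": 9.8, "internet_facing": True},
        {"host": "db02",  "cve": "CVE-2020-0002", "cvss": 7.5, "internet_facing": False},
        {"host": "hr03",  "cve": "CVE-2019-0003", "cvss": 5.3, "internet_facing": False},
    ]

    def priority(finding):
        # Weight severity by exposure: an internet-facing asset is assumed
        # to carry more risk than an internal one with the same CVSS score.
        exposure = 1.5 if finding["internet_facing"] else 1.0
        return finding["cvss"] * exposure

    # Resolve the highest-priority findings first.
    for f in sorted(findings, key=priority, reverse=True):
        print(f["host"], f["cve"], round(priority(f), 1))

Real programs weigh far more factors, such as exploit availability and business criticality, but the ordering decision at the heart of prioritization is this simple in principle.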

The good news is that there have been some improvements in certain areas of vulnerability management. For example, the SANS 2022 Vulnerability Management Survey reports that the number of organizations stating that they have vulnerability management programs, whether formal or informal, increased from 92 percent to 94 percent over the previous year.vii Most notably, as of 2021, all organizations reported either having a program in place or plans to create one.viii

The bad news is that several issues continue to plague organizations in this area. For example, many organizations are not budgeting properly for vulnerability management, in terms of either time or resources. In addition, while defensive security teams are typically accountable for the vulnerability management process, they are not actually responsible for the work in many cases. Those who are responsible for addressing vulnerabilities typically have operational roles such as system administrators or network engineers. These operational teams are already overwhelmed with the amount of work they're facing, and they're often not rewarded for the efforts they expend in this area. Furthermore, while the business may expect that vulnerabilities are managed properly, it often does not require anyone to do so and, as a result, does not recognize or reward the work done by operational staff in this area. Perhaps most importantly, because new vulnerabilities can come from anywhere at any time and in any format, vulnerability management is a never‐ending battle.

Another area where defensive security professionals are currently struggling is cloud computing, as evidenced by the fact that in 2021, according to the DBIR report, external cloud assets were more prevalent in both incidents and breaches than on-premises assets.ix As organizations adopt new technology, they often leave established security practices and monitoring tools behind, either because the existing practices and tools will not work in the new environment (and they don't realize they need new ones for this purpose) or because they do not realize that cloud resources need to be secured at all. As a result, the maturity level of security in cloud computing for most organizations is often significantly lower than in their on-premises locations. Unfortunately, that leaves organizations blind to attacks in this space and, in some cases, can put on-premises assets at unrecognized risk. For example, Azure Active Directory and Windows Active Directory are often tied together such that if one becomes compromised, it can lead to a much larger problem. Furthermore, assumptions about who is responsible for managing cloud security and/or accidental misconfigurations can lead to data compromise or loss.

Offensive security teams seem to have a much easier time accomplishing their objectives. The Nuix Black Report, the only industry report to focus on responses from offensive security teams rather than on data from specific incidents or interviews with cybersecurity leadership, offers some insight into their experiences. For example, while 18 percent of respondents stated they could breach the perimeter of a target within an hour, all of them were able to achieve that goal within 15 hours. Once inside the perimeter, more than half were able to move laterally to find their target within five hours, and in certain industries, such as hospitals and healthcare, hospitality, and retail, they could accomplish the same goal within a single hour.

Not only are offensive security teams able to breach and access their targets easily, they are rarely identified by defensive security teams. How good are they at not being caught? Seventy-seven percent of respondents said they were identified by their target's defensive security teams less than 15 percent of the time. When asked whether they thought defenders understood what they were looking for when detecting an attack, 74 percent of them said no.

While these results may appear to be ideal from the offensive security community's perspective, practitioners often report that repeating the same kinds of tests with the same results all the time is boring. More importantly, because the job of the offensive security practitioner is to test defenses, these results reinforce the fact that defenders and their defenses are not what they need to be in most organizations. In the end, the goal of both teams should be the greater security position of the organization for which they are working.

How Did We Get Here?

Clearly, what many defenders are currently doing is ineffective. But how did we get here? To appreciate how our industry arrived at its current state, it's important to understand how cybersecurity has evolved over time.

Antivirus Software

The first significant security product to become mainstream for the PC was antivirus software. Although the first PC virus, Brain, was written to combat copyright infringement and not intended to harm systems, it caused a widespread media response in January 1986 after numerous people flooded the developers' phone line with complaints that Brain had wiped files from their computers.x The only remediation at that time was to format and reinstall the operating system, which was frustrating and time-consuming.

Brain was also the catalyst for John McAfee to enter the antivirus market after he reverse engineered its code in the hopes that he could help individuals remediate their systems. Intending to profit only from corporate customers, he launched McAfee Associates at the end of 1987 and by 1990 was making $5 million a year.xi McAfee wasn't the only successful antivirus vendor in this realm. Symantec and Sophos also made their debuts in the late '80s. Although the prevalence of viruses and other forms of malware continued to slowly increase, by 1989 there were actually more antivirus vendors than viruses.xii

The fear of being hit by a virus infection was also beginning to grip the US government. US government computer security experts were quoted in an article titled “Future bugs Computer world dreading electronic ‘virus’ attack,” published in the Toronto Globe and Mail on August 5, 1986, using phrases such as “potentially devastating weapon” to describe a virus and further stating that “the ‘virus’ is a high technology equivalent of germ warfare.”xiii

To some degree, their anxiety was well founded, as only two years later the Morris Worm rapidly spread to and crashed one-tenth of all the computers on the Internet at that time. It was the first worm to gain significant mainstream media attention and ultimately led to the formation of the first Computer Emergency Response Team (CERT) Coordination Center at Carnegie Mellon University for the purposes of research and responsible disclosure of software vulnerabilities. Moreover, during a subsequent 1989 hearing before Congress about the Morris Worm, John Landry, executive VP of Cullinet Software Inc., stated that "virus attacks can be life threatening. Recently a computer used in real time control of a medical experiment was attacked. If the attack had not been detected, a patient might have been injured, or worse."xiv

By 1990, both corporate reliance on the Internet and the acceleration of new viruses being propagated were inexorably tied together. However, the projected cost to the worldwide microcomputing community to remove malicious software was approximately $1.5 billion per year. At this point, the cost of either purchasing protective software at $5 to $10 per month per machine or hiring two additional staff members to triage infected systems at an estimated $120,000 to $150,000 now seemed reasonable.xv Companies were finally realizing that something to combat the growing problem was absolutely necessary, thus giving rise to the beginnings of the Internet security industry.

Firewalls

As more and more systems connected to the Internet, fear of attacks from external networks grew. Enter the network firewall, which first appeared in the late '80s but became commercially available in the early '90s. The goal of a network firewall, which originally provided only basic packet filtering, was to provide a secure gateway to the Internet for private networks and keep out unwanted traffic from external networks. As Frederick M. Avolio, a well-known early security consultant, observed, "Firewalls were the first big security item, the first successful Internet security product, and the most visible security device."xvi For some time, firewalls were thought to be "virtually fail-safe protection," but that all changed when Kevin Mitnick attacked the San Diego Supercomputer Center (SDSC) in December 1994. Despite the SDSC having a properly configured firewall in place, Mitnick was able to spoof his address and utilize a sequence prediction attack. As a result, his system was able to appear to the firewall as a trusted host, allowing him to get past it.xvii From that point forward, it was understood that what had been viewed as the ultimate protection from external networks was no longer the reassurance it once had been.
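To illustrate why purely address-based trust was so brittle, consider the following minimal Python sketch of an early, stateless packet filter. The rule format, ports, and addresses are hypothetical assumptions for illustration, and real firewalls are considerably more involved.

    # Hypothetical sketch of early, stateless packet filtering.
    # Rules match only on source address and destination port.

    ALLOW_RULES = [
        {"src": "192.0.2.10", "dport": 513},   # a "trusted" host allowed to use a login service
        {"src": "any",        "dport": 25},    # anyone may reach the mail server
    ]

    def permitted(packet):
        for rule in ALLOW_RULES:
            if rule["src"] in ("any", packet["src"]) and rule["dport"] == packet["dport"]:
                return True
        return False

    # The attacker's real address is denied...
    print(permitted({"src": "203.0.113.7", "dport": 513}))   # False
    # ...but a packet whose source address is forged to read 192.0.2.10 looks
    # identical to the filter, which is essentially what the SDSC attack exploited.
    print(permitted({"src": "192.0.2.10", "dport": 513}))    # True

Because the filter's decision rests entirely on fields the attacker controls, address spoofing combined with sequence prediction was enough to defeat what looked like fail-safe protection.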

Secure Sockets Layer

About the same time, the Secure Sockets Layer (SSL) protocol was first introduced by Netscape in response to concerns about security and privacy on the Internet. The goal of this protocol was to enable users to access the Web securely and to perform activities such as online purchases. Unfortunately, the first version of SSL released, SSLv2, was not as secure as originally expected and was deprecated in 2011, as was its successor, SSLv3, in 2015 for comparable reasons. The replacement for SSL, Transport Layer Security (TLS), which was originally released in 1999, also went through a number of revisions in 2006 and again in 2018 as more serious security flaws were uncovered.xviii

Intrusion Detection Systems and Intrusion Prevention Systems

The next significant security product to debut commercially was the intrusion detection system (IDS). Early IDSs were built on technology similar to that used in antivirus software in that they used signatures to continuously scan for known threats. Unfortunately, that meant they could only detect known threats. Furthermore, updating the signature set was a hassle. IDS versions released in the 1990s moved to anomaly detection instead, which attempted to identify unusual behavior patterns that traversed the network.xix Eventually, misuse detection, which detected violations of stated policies, was added as a feature as well.
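As a rough sketch of how signature-based detection works, the following Python example matches known-bad byte patterns against packet payloads. The signature names and patterns are illustrative assumptions, not rules from any actual IDS.

    # Hypothetical sketch of signature-based detection, the approach early IDSs
    # borrowed from antivirus: known-bad byte patterns are matched against traffic.
    SIGNATURES = {
        "example-exploit-1": b"/cgi-bin/phf?",     # illustrative pattern only
        "example-exploit-2": b"\x90\x90\x90\x90",  # a crude NOP-sled indicator
    }

    def match_signatures(payload: bytes):
        # Return the names of any known signatures found in the payload.
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    print(match_signatures(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"))  # ['example-exploit-1']
    print(match_signatures(b"GET /index.html HTTP/1.0"))            # [] -- unknown threats go unseen

Anything not already described by a signature simply passes through, which is exactly the limitation that pushed vendors toward anomaly and misuse detection.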

By the mid '90s, commercial products had become available. The two most popular software packages at the time were WheelGroup's NetRanger and Internet Security Systems' RealSecure.xx After building products that could help detect threats, the next logical step was creating something to prevent them, which became known as an intrusion prevention system (IPS).

While IDSs became best practice by the early 2000s, very few organizations were using IPSs. IPSs were inline solutions that could take automated actions in response to a detected threat. For example, they could block an IP address, limit access, or block an account. Unfortunately, the original implementation of IPS technology was unreliable and cumbersome. It often blocked legitimate traffic, which was a headache for administrators who then had to remove the blocks. Furthermore, it used a single signature for each type of exploit associated with a vulnerability, which could lead to hundreds of signatures slowing down traffic significantly. However, by 2005 the model changed to use a single signature for an entire category of vulnerability rather than for a specific exploit. Once more vendors entered the market, the technology also became more reliable. The combination led to an increase in IPS adoption. Shortly thereafter, systems offering combination IDS/IPS solutions became the standard.xxi

Next Generation Intrusion Prevention Systems

Between 2011 and 2015 came next generation intrusion prevention systems (NGIPSs), which added some additional features to the existing IDS/IPS technologies. User and application control was added in order to address vulnerabilities in specific applications such as Java or Adobe Flash, and sandboxing technology was added to test the behavior of files not already known to be good or bad. While these technologies have come a long way from their origins, human expertise and time are still needed both to tune the solution to prevent an overwhelming number of alerts and to review the alerts that are generated. Furthermore, false positive detections remain problematic.

Data Loss Prevention

One of the more recent security offerings, data loss prevention (DLP), got its start in the early 2000s but did not gain popularity until the late 2010s.xxii The goal of a DLP software solution is to prevent an organization's data from either being accidentally exposed by being leaked to some unauthorized entity, such as when someone sends an email they should not, or being stolen by a bad actor during an attack.

While the significant increase in the use of cloud, mobile, and remote computing has made this an attractive solution, it is often difficult for organizations to implement properly. In order to prevent data loss, an organization has to understand both what data it has and what kinds of data it wishes to prevent from falling into the wrong hands. Each of these goals takes a significant amount of resources beyond the purchase of DLP software. Inventorying what data is available and categorizing it to know what is sensitive can often involve considerable overhead. Many organizations either do not have the resources or choose not to assign the resources they do have to this process.

Security Information and Event Management

One of the disadvantages of all of the previously mentioned security solutions was the necessity of looking at each system's logs and alerts individually. Security information and event management (SIEM) was introduced around 2006 as a product to solve this problem. A SIEM system aggregates log data, security alerts, and events in a centralized repository in order to provide real-time, correlated analysis of this data. The goal of SIEM is both to provide a single place where defenders can look to see alerts from multiple systems and to make connections between the various data sources.

The first SIEM systems introduced around 2006 had significant scaling limitations. In particular, they were able to add resources to only a single system. Furthermore, both data ingestion and output were particularly challenging, causing reports and dashboards to be equally as limited. The next versions of SIEM tools, which came about in roughly 2011, solved the scalability problem almost too well because at least architecturally, they had the potential for unlimited data sources. As a result, defenders were quickly overwhelmed when trying to make sense of what they were seeing. The most recent iteration of SIEM systems, which became available in about 2015, focuses more on analytical data than pre‐built alerts in order to provide better visibility into unknown threats.xxiii

Ultimately, the expectation is that having more data from more sources, with a more powerful means of aggregation, will lead to better security management. However, even with these most recent SIEM systems, there remain inherent challenges. Identifying an unknown threat in an environment requires the creation of a baseline of activity. Creating this baseline helps to determine what is "normal" for the environment in question. This task requires a defender to have specialized knowledge of the SIEM being used as well as advanced knowledge of an organization, its networks, and its data.
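As a minimal sketch of what building such a baseline might look like, the following Python example computes a per-host baseline of daily failed-login counts and flags the latest value when it deviates sharply. The data, field names, and threshold are assumptions for illustration and do not reflect the behavior of any particular SIEM.

    from statistics import mean, stdev

    # Hypothetical daily counts of failed logins per host, as a SIEM might aggregate them.
    history = {
        "web01": [12, 15, 11, 14, 13, 12, 250],   # the last value is unusually high
        "db02":  [3, 4, 2, 3, 5, 4, 3],
    }

    def anomaly(counts, threshold=3.0):
        # Baseline on all but the most recent value, then flag the latest
        # value if it sits far outside that established baseline.
        baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
        latest = counts[-1]
        return latest if spread and (latest - baseline) / spread > threshold else None

    for host, counts in history.items():
        flagged = anomaly(counts)
        if flagged is not None:
            print(f"{host}: {flagged} failed logins is well above its baseline")

The hard part in practice is not the arithmetic but knowing which data sources, hosts, and time windows describe "normal" for a given organization, which is exactly the specialized knowledge described above.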

Active Defense

Each of the previously discussed products was created to address a particular security problem, and although the technology continues to improve over time, unfortunately no solution can be a silver bullet providing perfect protection. Furthermore, it is impossible for a single product to address the problem of cybersecurity holistically. More importantly from a historical perspective, it's critical to note that the solutions described previously have something in common: they are reactive and passive in nature. Most of them require waiting for some alert to trigger or information to appear in a log before acting, whether directly or indirectly. Even the ones that appear to be active, such as IPS and DLP, do not truly meet the definition of “active” that I will be using, because they do not require some form of constant human interaction but instead are automated and can be ignored. Therefore, it is not too surprising that defenders have gotten into the habit of being passive and reactive as well. To understand what I have in mind by the terms active defense and passive defense, let's turn to Robert M. Lee's 2015 white paper, The Sliding Scale of Cybersecurity.

Lee provides some illuminating explanations of these terms as part of a framework that he developed. Within it, he outlines five interconnected categories of actions, competencies, and investments of resources: architecture, passive defense, active defense, intelligence, and offense. We will focus on two for the purposes of this discussion: passive defense and active defense.

Lee notes that traditionally these two forms of defense have always been acknowledged. Taking a cue from the original intended notion of “passive defense” from the Department of Defense, Lee defines it as “systems added to the architecture to provide consistent protection against or insight into threats without constant human interaction.” The key here, he indicates, is the lack of regular human interaction and he cites firewalls, antimalware systems, intrusion prevention systems, antivirus, and intrusion detection systems as examples fitting this category. The intent of “active defense,” on the other hand, is not some form of attacking back as has been previously misinterpreted from its military origins but rather the maneuverability and adaptability of a defender to contain and remediate a threat. Thus, Lee explains that active defense for cybersecurity is “the process of analysts monitoring for, responding to, learning from, and applying their knowledge to threats internal to the network.”

Crucial to this definition is the focus on the analysts themselves and not specific tools. As he maintains, “Systems themselves cannot provide an active defense; systems can only serve as tools to the active defender. Likewise, simply sitting in front of a tool such as a SIEM does not make an analyst an active defender—it is as much about the actions and process as it is about the placement of the person and their training.”xxiv Therefore, it is this definition of active defense that will be used in describing the Active Defender in Chapter 1.

What Keeps Us Stuck?

There are two overarching elements that contribute to keeping us stuck in this reactive, passive defensive space: inertia and organizational culture. Let's dive into each in turn in the following sections.

Inertia

Despite this notion of active defense, most organizations continue to suffer from certain pitfalls that perpetuate a passive and reactive security posture. Perhaps the most common pitfall is simply inertia—“That's how we've always done it.” Change and growth are hard, and people are often resistant, organizations even more so. For example, many organizations are still focusing predominantly on perimeter security, which is a way of thinking that goes back to the 1990s and those early firewalls. This philosophy involves providing robust security for the exterior of the network and assumes that the people and assets inside this network can be trusted.

Concentrating mainly on the perimeter was particularly useful both when employees spent their days working in the office and when all data was centrally managed in the organization's on-premises data center. Now, with employees often working remotely, the potential for data to be stored in the cloud, and email connecting people around the world, perimeter security, while still necessary, is no longer sufficient. Current best practices dictate a far more thorough and layered approach called defense in depth, in which companies put multiple layers of security controls in place to protect their assets, not just perimeter protection. Yet, defenders continue to return to this mindset because it's what they know and have always done.

Inertia impacts a number of areas such as how defenders use their tools, alert fatigue, and financial constraints. Let's next explore each of these in turn.

Tool Usage

Inertia can impact one of the most basic fundamentals of defense: tool usage. The way traditional defenders have always utilized their tools perpetuates a passive or reactive culture. Most of the tools described earlier fit Lee's definition of passive by providing “consistent protection against or insight into threats without constant human interaction.” As a result, defenders often focus on waiting for alerts to come in rather than actively hunting for threats.

The assumption is that antivirus, firewalls, IDS/IPS, and other tools are providing adequate protection for the organization until an alert is generated. This can be particularly problematic in medium to large organizations that have implemented SIEM technology. SIEM systems often lead people to have tunnel vision with respect to defense. In other words, the people responsible for monitoring them wind up focusing exclusively on the alerts, dashboards, and other data this tool provides rather than on the broader picture of security at an organization. As a result, they often wind up trapped in a vicious cycle of investigating false positive alarms, tuning existing rules, or adding new rules, which leaves time for little else.

Alert Fatigue

The hyper focus on SIEM or other tool data can lead to something called alert fatigue. Alert fatigue is what many defenders experience when they are constantly barraged by warnings and alerts from their tools.

There are several reasons for warnings or alerts to be generated that turn out to be false positives. A false positive might be triggered because the information being fed into a tool is problematic. For example, it is not uncommon for the intelligence data being fed into the tools that defenders use for detecting malicious activity to contain IP addresses that host both legitimate and malicious sites. Without having the proper context to make an informed decision, the defender may accidentally block legitimate traffic, generating a number of alerts that each have to be investigated.

In addition, these feeds can provide indicators of compromise that, without the proper context, are misleading. For example, it is not uncommon for legitimate tools to be detected by antivirus as malicious, such as the application netcat. Netcat, while often used by some system administrators for its flexible network capabilities, can also be used by attackers to set up malicious network connections. However, just because there's an indication that it's being used does not mean that something malicious is occurring. We need the proper context to answer that question: Where is it being used? By whom? For what purpose? Without that information, a defender has no way to know whether an alert on this application is legitimate.

Another way these tools can generate a false positive is if oversensitive rules have been deployed within the tools. In other words, a rule that detects brute force attacks might fire as the result of a user having forgotten their password after returning from vacation. The threshold may need to be adjusted to take these cases into account. It is also possible for certain applications to generate what appears to be anomalous traffic but is actually legitimate for that particular application. If the systems generating these alerts are not particularly well tuned, they could create thousands of false positives that could very easily overwhelm defenders. Proper tuning of these systems isn't just about preventing false positives but also involves setting the level of severity for each of the alerts. If all alerts are considered to be the same priority level, really serious incidents will get lost in all the noise.
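To make the tuning problem concrete, here is a small, hypothetical Python sketch of a brute-force detection rule; the event format, thresholds, and suppression logic are illustrative assumptions only. The naive version fires on any burst of failures, while the tuned version raises the threshold and suppresses the common pattern of a few failures followed by a success from the same user.

    # Hypothetical login events for one user within a short window.
    # Illustrative only: real detections would also consider source IP,
    # timing, and whether the account is privileged.
    events = ["fail", "fail", "fail", "fail", "success"]

    def naive_rule(events, max_failures=3):
        # Fires on any burst of failures, so a returning vacationer trips it.
        return events.count("fail") > max_failures

    def tuned_rule(events, max_failures=10):
        # Higher threshold, and a small number of failures ending in a
        # success from the same user is treated as a forgotten password.
        failures = events.count("fail")
        if failures <= 5 and events[-1] == "success":
            return False
        return failures > max_failures

    print(naive_rule(events))   # True  -> a false positive to investigate
    print(tuned_rule(events))   # False -> suppressed

Severity could be assigned in the same place, for example by treating bursts against privileged accounts as higher priority, which addresses the second half of the tuning problem described above.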

There are also some psychological reasons for alert fatigue, such as bright red blinking lights found in some tool user interfaces. These notifications are meant to indicate some level of malicious activity, but they can be extremely misleading, leading to nothing more than fear, uncertainty, and doubt. Seeing these indicators incessantly, regardless of whether they are legitimate, can cause people to tune them out. In addition, continuously reading articles about zero-day vulnerabilities and large breaches can be emotionally draining and mentally detrimental. Reinforcing negative outcomes of this nature can cause defenders to feel as if they are fighting a losing battle. These things can lead to significant stress, burnout, and high turnover.xxv

Financial Constraints

Inertia is further reinforced by financial constraints. If a business has not experienced a reason to make significant change, such as a massive breach, it is not likely to do so in part because cybersecurity is typically seen as a cost center rather than a revenue driver. In other words, unlike a sales department that brings money into the company, the perception is that spending money to have a more secure environment is, instead, a drain on the organization's resources with little or no obvious return on investment. Rather than implementing some form of validation on their existing security controls such as with penetration tests, adversarial emulation, or active threat hunting, which is a topic we will discuss in later chapters, these organizations just become further mired in passivity.

Furthermore, because they are unwilling to commit more money to defend their cybersecurity assets, most organizations are frequently understaffed. These companies view security as a burden and nothing more. In addition, the staff they do have typically are not able to maintain sufficient training for their positions, much less find the time to learn new skills or tools. As a result, they are often missing the level of expertise needed to take defending their organizations from a passive to a more active state.

The unfortunate result of being shorthanded is that those who are working never have any downtime, which means more possibilities for mistakes and the potential for burnout, among other problems. In this particular case, it also means security staff are constantly responding to alerts or other urgent matters. Thus, they have no time to consider, much less implement, any proactive approaches.

It also prevents them from looking into emerging trends both within and external to their environments. Even if they managed to find the time, an environment that views security as a cost center is likely to consider some of these investigations “unproductive” because they turn up nothing of consequence. However, a hypothesis that is researched is still a result regardless of the outcome. As long as this result is documented, communicated, and discussed, it still has value to the organization. Focusing only on productivity severely limits visibility into new potential attacks or other noteworthy risks.

While it may seem counterintuitive, studies have shown that having some flexibility throughout the workday actually increases productivity.xxvi A company that instead views security as something that enables the organization to be more flexible will provide substantial funding to integrate it more fully into its environment, leading to far better outcomes overall. Assuming all the necessary controls are in place for the right reasons, and people are working well together, doing the right things, and communicating appropriately, companies can take better, well-managed risks.

Another way to see this thought process in action is to consider race car driving. The brakes on a race car make it go faster, not slower. How is that possible, you ask? While driving on a track, you must brake into a corner in order to accelerate out of it. If you don't brake and you go around the corner too fast, you either crash or have a lower corner exit speed. Slow is smooth. Smooth is fast. In other words, it's all about the deliberate choices that we make today helping us to move faster tomorrow.

Organizational Culture

The culture of the organization and its organizational processes can cause defenders to remain passive and reactive as well as perpetuate existing inertia. Cultural elements include how resistant to change an organization is; the presence of siloing, outsourcing, shadow IT, or BYOD (bring your own device); and lack of support from leadership.

Resistance to Change

If the environment tends to be extremely conservative in terms of change, it can be very difficult to institute more preemptive security practices. For example, some companies will only allow a firewall rule change during a prescribed window. If the rule change is meant to block an active attack, this window might be too late. These organizations often exhibit considerable hesitancy or pushback about blocking anything because they are afraid that it might interrupt production.

The same is true for network access. In large organizations, it is common for multiple departments to take part in a single change, which slows down the process further and removes the agility that proactive security changes often require. Other organizations, such as those in higher education, resist activities like the manual removal of dangerous messages from their email system for fear that their users might view this as too intrusive or a privacy violation. Furthermore, consider the situation where an existing employee becomes an internal threat and their access needs to be terminated immediately. Organizations often resist deviating from an established process, such as the one for cutting off access to an account. However, in this case, if that process takes any significant amount of time, the employee could use their access to cause malicious damage or steal company data.

Siloing

If siloing is common within the organization, security-related observations from other departments such as networking, QA, or development may never make it to the security team. In some companies, these silos are created because communication between certain groups is blatantly forbidden or strongly discouraged. In others, it may be because these groups do not have a connection within the security team and thus no easy way to share information. The possibility also exists that these other groups want to avoid additional work, so they intentionally do not seek out the security team after making a problematic discovery. As a result of this delay, reactionary measures may be the only options once a problem is brought to light. Furthermore, the security team may not have the necessary insight into what the other teams are doing or what requirements must be met before those requirements become a last-minute problem.

Outsourcing

Whether a company outsources its IT services is another organizational factor that can perpetuate a reactive and passive security posture. Outsourcing is the practice of hiring a third party to manage some or all of the IT services that a company utilizes in its day‐to‐day operations. While a common practice, outsourcing has a number of hidden costs/risks associated with it that are often overlooked.

If taken to extremes, outsourcing can leave your internal team bereft of the ability to effect change in a timely manner, making them little more than vendor managers. In other words, they can cause the in‐house staff to get to a point where they cannot make any changes without asking the vendor how to handle the request. If the vendor is not reasonably responsive, it may take an extremely long time for any changes to be made, including security‐related requests. Furthermore, service level agreements for outsourced IT services often do not align with an organization's security demands.

Selecting the right vendor can make a difference too. A vendor that hires cheap, unqualified, or unethical employees can contribute to an already poor security posture through mistakes, oversights, and potentially malicious activity. They may not even notify the organization if there has been an intrusion or data theft.

Some vendors are genuinely interested in providing good quality service with the intention to improve an organization's security posture. However, if the majority of IT is outsourced and the vendor is paid based on the number of tickets they resolve rather than on real results, they may be less interested in doing the right thing.

Shadow IT

A passive security attitude may also lead companies to allow shadow IT to fulfill user requests, further preventing their ability to cultivate proactive security practices. Shadow IT is the practice of deploying information technology systems outside of an organization's normal central IT structure. It can include the use of file storage solutions or email accounts with online providers such as Microsoft or Google, productivity solutions like Trello or Zoom, and instant messaging apps such as WhatsApp or Signal.