Infuse efficiency into risk mitigation practices by optimizing resource use with the latest best practices in vulnerability management
Organizations spend tremendous time and resources addressing vulnerabilities in their technology, software, and operations. But are those resources well spent? Often, the answer is no, because we rely on outdated practices and inefficient, scattershot approaches. Effective Vulnerability Management takes a fresh look at a core component of cybersecurity, revealing the practices, processes, and tools that enable today's organizations to mitigate risk efficiently and expediently in the era of cloud, DevSecOps, and Zero Trust.
Every organization now relies on third-party software and services, ever-changing cloud technologies, and business practices that introduce tremendous potential for risk, requiring constant vigilance. It's more crucial than ever for organizations to minimize these risks to the organization's success. This book describes the assessment, planning, monitoring, and resource allocation tasks each company must undertake for successful vulnerability management. And it enables readers to do away with unnecessary steps, streamlining the process of securing organizational data and operations. It also covers key emerging domains such as software supply chain security and human factors in cybersecurity.
Effective Vulnerability Management is a new and essential volume for executives, risk program leaders, engineers, systems administrators, and anyone involved in managing systems and software in our modern, digitally driven society.
Page count: 396
Publication year: 2024
Cover
Table of Contents
Title Page
Foreword
Introduction
What Does This Book Cover?
Who Should Read This Book
1 Asset Management
Physical and Mobile Asset Management
Cloud Asset Management
Third-Party Software and Open Source Software (OSS)
On-Premises and Cloud Asset Inventories
Tooling
Asset Management Risk
Recommendations for Asset Management
Summary
2 Patch Management
Foundations of Patch Management
Manual Patch Management
Automated Patch Management
Patch Management for Development Environments
Open Source Patching
Not All Software Is Equal
Who Owns Patch Management?
Building a Patch Management Program
Summary
3 Secure Configuration
Regulations, Frameworks, and Laws
NSA and CISA Top Ten Cybersecurity Misconfigurations
Summary
4 Continuous Vulnerability Management
CIS Control 7—Continuous Vulnerability Management
Continuous Monitoring Practices
Summary
5 Vulnerability Scoring and Software Identification
Common Vulnerability Scoring System
Exploit Prediction Scoring System
Moving Forward
Stakeholder-Specific Vulnerability Categorization
Software Identification Formats
Summary
6 Vulnerability and Exploit Database Management
National Vulnerability Database (NVD)
Sonatype Open Source Software Index
Open Source Vulnerabilities
GitHub Advisory Database
Exploit Databases
Summary
7 Vulnerability Chaining
Vulnerability Chaining Attacks
Vulnerability Chaining and Scoring
Vulnerability Chaining Blindness
The Human Aspect of Vulnerability Chaining
Integration into VMPs
IT and Development Usage
Summary
8 Vulnerability Threat Intelligence
Why Is Threat Intel Important to VMPs?
Where to Start
Threat Hunting
Integrating Threat Intel into VMPs
Summary
9 Cloud, DevSecOps, and Software Supply Chain Security
Cloud Service Models and Shared Responsibility
Hybrid and Multicloud Environments
Summary
10 The Human Element in Vulnerability Management
Human Factors Engineering
Human Factors Security Engineering
Cognition and Metacognition
Vulnerability Cognition
The Art of Decision-Making
Integration of Human Factors into a VMP
Summary
11 Secure-by-Design
Secure-by-Design/Default
Secure-by-Design
Secure-by-Default
Software Product Security Principles
Secure-by-Design Tactics
Secure-by-Default Tactics
Hardening vs. Loosening Guides
Recommendations for Customers
Threat Modeling
Secure Software Development
Security Chaos Engineering and Resilience
Summary
12 Vulnerability Management Maturity Model
Step 1: Asset Management
Step 2: Secure Configuration
Step 3: Continuous Monitoring
Step 4: Automated Vulnerability Management
Step 5: Integrating Human Factors
Step 6: Vulnerability Threat Intelligence
Summary
Acknowledgments
About the Authors
About the Technical Editor
Index
Copyright
Dedication
End User License Agreement
Chapter 1
Figure 1.1: Hybrid vs. multicloud environments
Figure 1.2: IT infrastructure layers
Figure 1.3: Various enterprise layers
Figure 1.4: Physical data centers
Figure 1.5: On-premises vs. cloud environments
Figure 1.6: Complexity of an asset inventory system
Figure 1.7: Progression of organizational management over the years
Figure 1.8: Digital transformation (DX)
Chapter 2
Figure 2.1: Foundations of patch management pyramid
Figure 2.2: Manual patching risks
Figure 2.3: How Ansible works
Figure 2.4: Example of Ansible script for patch management
Figure 2.5: Benefits of automated vs. manual patching solutions
Figure 2.6: Risks of automated patching
Figure 2.7: Example of RACI matrix for infrastructure and operations
Figure 2.8: End-of-life software listing examples
Figure 2.9: Alignment of people-process-tech
Chapter 3
Figure 3.1: CISA KEV flag
Figure 3.2: Ratio of monthly open to closed vulnerabilities
Figure 3.3: Weaponization of vulnerabilities
Chapter 5
Figure 5.1: CVSS metrics
Figure 5.2: CVSS nomenclature
Figure 5.3: How a CVE makes its way into the NVD
Figure 5.4: Base metric group breakdown
Figure 5.5: Threat metric group
Figure 5.6: Environmental metric group
Figure 5.7: Supplemental metric group
Figure 5.8: Qualitative Severity Rating Scale
Figure 5.9: CVE improvements
Figure 5.10: EPSS efficiency
Figure 5.11: SSVC comparison
Figure 5.12: Potential SSVC decisions
Figure 5.13: Potential exploitation decision values
Figure 5.14: Two options of technical impact
Figure 5.15: Lockheed Martin's seven-step Kill Chain
Figure 5.16: CISA's SSVC binary approach to assessment
Figure 5.17: Mission Prevalence potential decision values
Figure 5.18: SSVC Impact Types
Figure 5.19: Three criteria of Mitigation Status
Figure 5.20: An expanded attack tree
Figure 5.21: A table format of an attack tree
Figure 5.22: CPE 2.3's structure
Figure 5.23: 2022 OSSRA Report summary
Figure 5.24: Required CWE elements
Chapter 6
Figure 6.1: OSV data aggregation
Chapter 7
Figure 7.1: Direct vs. indirect chaining
Figure 7.2: Diagram of gaps
Figure 7.3: Combination of terms to create VCB
Figure 7.4: Solutions for VCB
Figure 7.5: Integration into a VMP diagram
Chapter 9
Figure 9.1: The shared responsibility model
Figure 9.2: The four Cs of cloud security
Figure 9.3: Containers vs. virtual machines
Figure 9.4: Chainguard analysis of base images
Figure 9.5: A Kubernetes cluster
Figure 9.6: Inherent OSS risks
Figure 9.7: 2022 OSS security risks
Figure 9.8: OSS in 2021
Chapter 10
Figure 10.1: Vulnerability management life cycle
Figure 10.2: How human factors incorporate psychology, engineering, and design
Figure 10.3: Example of SOC tools and complexity
Figure 10.4: Example of vulnerability dashboard
Figure 10.5: Example of a vulnerability report
Figure 10.6: Funnel of data inputs for patching
Figure 10.7: Roadmap of solutions
Chapter 12
Figure 12.1: A maturity model pyramid
Chris Hughes, M.S., MBA
Nikki Robinson, DSc, PhD
When I helped found Tenable Network Security, in many ways I was trying to get ahead of all the ways that we'd seen bad actors break into networks with our Dragon Network Intrusion Detection System. With Dragon, we saw all sorts of hostile state-of-the-art nation-state attacks and exploitations of unpatched systems as well as ankle-biter hackers. In starting Tenable, my cofounders and I wanted to make cybersecurity an attainable and defensible goal. Continuous monitoring did not exist as a concept in the early 2000s. Annual penetration tests and even quarterly vulnerability scans were the norm. We wanted to make understanding cybersecurity risks easy for individuals and organizations.
As use of the Internet and dependency on it grew, so did nation-state threat actors. Our industry responded with IT regulations and frameworks. By 2020, we had the Payment Card Industry requirements and a wide variety of government standards, culminating in the National Institute of Standards and Technology (NIST) Cybersecurity Framework as well as the MITRE ATT&CK framework. During that same time frame, we saw the SANS organization publish its list of the Top 20 Vulnerabilities. This quickly became hard to manage and was replaced by the SANS Top 20 Controls, which was subsumed by the Center for Internet Security (CIS). We also saw hacking move from denial-of-service attacks on websites in the early 2000s to crippling nation-state attacks that shut down hospitals, shipyards, and grocery stores.
As awareness of the risks of IT grew, new types of tech seemed to grow faster. From 2000 to 2020, we saw the introduction of Wi-Fi networks, mobile devices, virtualization, containers, software-as-a-service (SaaS) services, elastic cloud infrastructure, and embedded devices, and now we are grappling with implementing artificial intelligence (AI).
In the last decade, we have seen an increased role of government in IT. The Trump administration banned Chinese-made technologies such as drones, security cameras, and network devices, and introduced the “defend forward” concept that is still in use by the National Security Agency (NSA). The Biden administration recently added the Office of the National Cyber Director, which quarterbacks much of the U.S. government's cyber strategy. It's very likely there will be more regulation to come that will impact how we defend and use the Internet.
However, as of late 2023, we don't have a consistent recipe or set of rules for securing data. If you are new to vulnerability management, this may seem surprising to you. How you perform vulnerability management is extremely subjective, based on the technology, the sensitivity of the data stored within it, the sophistication of the threat actors you are protecting against, your available budget, your people, and a wide variety of political, regulatory, and legal requirements. What works for a financial institution protecting trillions of dollars of transactions per day simply won't work for protecting the U.S. President's email. Protecting a video game service with millions of users is very different than keeping ransomware actors from stealing credit cards at your favorite coffee shop. Even though we all use the Internet, we all use it differently, with different technologies and tolerances for reliability and potential data loss.
It's because of this that I am very happy to have been asked by Nikki and Chris to write this book's Foreword. No matter what type of network security background you have, this book does an excellent job of covering the many aspects of vulnerability management. It presents the advantages and limitations of technologies for measuring and remediating vulnerabilities across a wide breadth of environments. It also covers the frameworks that can be used to make sense of assets and their vulnerability and compliance data, which can be extremely overwhelming. Whether you are learning vulnerability management concepts for the first time or looking to run an enterprise team focused on securing the network of a major bank, this book covers the right topics.
—Ron Gula, President, Gula Tech Adventures and Co-Founder, Tenable Network Security
We live in a world that is enabled in countless ways by software. Over a decade ago, Marc Andreessen quipped, “Software is eating the world,” and it indeed is. From our personal leisure activities to critical infrastructure and national security, nearly everything uses software. It powers our medical devices, telecommunications networks, water treatment facilities, educational institutions, and countless other examples. This means that software is pervasive, but as software use and integration into every facet of society has grown, so have the vulnerabilities associated with our digital systems. This has manifested in tremendous levels of systemic risk that can, has, and will continue to impact our daily lives.
The World Economic Forum (WEF) stated that at the end of 2022, 60 percent of global gross domestic product (GDP) was dependent on digital technologies. In addition, in a 2023 WEF survey, respondents projected a “catastrophic” cyber incident within the next two years. The threat of vulnerability exploitation grows each year, compounded by the increasing ease of creating and distributing ransomware and malware with readily available malicious tools.
Since the earliest days of computer systems, researchers and practitioners have been trying to address vulnerabilities in digital systems by practicing what is referred to as “vulnerability management.” As defined by the National Institute of Standards and Technology (NIST), a vulnerability is “a weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.”
Digital system vulnerabilities and the ability to exploit them were documented as early as the 1970s, in a report titled “Security Controls for Computer Systems,” also known as the “Ware Report” because a RAND employee named Willis Ware chaired the committee that produced it for the U.S. Department of Defense (DoD). In addition to touching on vulnerabilities in systems, the report discusses the need to design systems with security in mind throughout the software and system development life cycle. In 2023, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued guidance titled “Shifting the Balance of Cybersecurity Risk: Security-by-Design and Default Principles,” which called for technology manufacturers to shift to creating products that are secure-by-design.
Despite the calls for secure-by-design systems and the awareness for over 50 years of the vulnerabilities of digital systems and the ability to exploit them, as an industry we continue to struggle with remediating vulnerabilities in digital systems as well as ensuring that security is a core part of system design and development. As modern digital environments have only gotten more complex and software more pervasive, organizations struggle to keep up with addressing vulnerabilities, now leading to unforeseen levels of systemic risk in our digital ecosystems.
Tremendous growth has occurred in publicly disclosed and tracked vulnerabilities, with notable sources such as the NIST National Vulnerability Database (NVD) seeing Common Vulnerabilities and Exposures (CVEs) grow from merely a few hundred in the 1990s to over 190,000 in 2022. These vulnerabilities are seen across a sprawl of software, hardware, libraries, and tools (in both open source and off-the-shelf solutions). With the complexity of software and applications across organizations, the sheer volume of vulnerabilities is difficult to track and remediate.
As the list of publicly disclosed vulnerabilities has grown each year, so have organizations' backlogs of unresolved vulnerabilities as they struggle to keep pace. A 2022 survey conducted by security vendor Rezilion and the Ponemon Institute found that 66 percent of respondents cited having a backlog of more than 100,000 vulnerabilities, and that they're only able to patch less than half of those vulnerabilities. Another study published in 2022 by security vendor Qualys found that there remains a gap between organizations' mean-time-to-remediate (MTTR) vulnerabilities and malicious actors' abilities to exploit them. In our roles both in organizations and as members of society, we, as cybersecurity practitioners, simply cannot keep up with the growth of vulnerabilities associated with our digital ecosystem, nor the malicious actors who are actively exploiting them.
Contributing to the problem of the growing publication of vulnerabilities and the malicious actors exploiting them is the reality that organizations can't separate the vulnerabilities that matter from the noise. Despite more than 25,000 known vulnerabilities being published in 2022, less than 1 percent of them were exploited by malicious actors. This means that organizations are spending energy, effort, and resources on addressing vulnerabilities that never actually get exploited, while struggling to make sense of and prioritize the ones that have been or are likely to be exploited.
As we will point out throughout the text, in addition to organizations struggling to keep up with patching flaws in software and systems, myriad other factors complicate an organization's ability to address vulnerabilities. These include challenges with proper asset visibility and inventory, ensuring secure configurations are in place to prevent exploitation by malicious actors, the pervasive use of third-party and open source code, and the human factors in vulnerability management.
Malicious actors are growing increasingly efficient at chaining together vulnerabilities and taking advantage of the pervasiveness of software in modern society, driven by widespread digital transformation efforts. Efforts such as DevSecOps that promise to “shift security left” face their own challenges, such as noisy findings from modern vulnerability scanning tools, cognitive overload on often-understaffed security teams, and worldwide shortages of cybersecurity talent.
Given the prevalence of vulnerability chaining, digital transformation, DevSecOps, and software supply chain security concerns, vulnerability management is more important now than ever. Without an updated and modern approach to handling vulnerabilities, organizations will continue to be buried in vulnerabilities with little context. Our approach addresses cloud environments, large and small development programs, and the combination of hybrid and multicloud deployments. This approach focuses on not just the technology and methodologies of vulnerability management, but also the humans and organizations involved in the activities.
So let's begin.
This book covers the following topics:
Chapter 1: Asset Management This chapter addresses fundamental activities such as asset management, which includes physical and mobile asset management, as well as software asset inventory and dealing with complex cloud, hybrid, and multicloud environments. There will also be coverage of tooling to facilitate asset management.
Chapter 2: Patch Management This chapter covers the fundamentals of patch management, including both manual and automated patch management, as well as the benefits and trade-offs between the two. It discusses software patch management, including open source management, and the various roles and responsibilities for patch management between different teams within the organization.
Chapter 3: Secure Configuration While patching known vulnerabilities is a core part of vulnerability management processes, there is also a need for secure configurations. This chapter discusses the role of regulations and frameworks in secure configuration, as well as resources such as the NSA and CISA Top Ten Cybersecurity Misconfigurations publication. It also discusses industry-leading configuration resources such as CIS Benchmarks and DISA STIGs.
Chapter 4: Continuous Vulnerability Management Vulnerability management is far from a snapshot in time or once-and-done activity. This chapter discusses the concept of continuous vulnerability management and continuous monitoring. It discusses resources such as CIS and NIST controls that tie in to continuous vulnerability management and their associated tasks and activities.
Chapter 5: Vulnerability Scoring and Software Identification A major part of vulnerability management is identifying software and properly prioritizing vulnerabilities. In this chapter we cover both, including long-standing vulnerability scoring methodologies, as well as emerging vulnerability intelligence resources to help organizations more effectively prioritize vulnerabilities such as the Exploit Prediction Scoring System (EPSS) and the CISA Known Exploited Vulnerability (KEV) catalog.
Chapter 6: Vulnerability and Exploit Database Management Vulnerabilities are captured and stored in vulnerability databases. In this chapter, we cover widely used vulnerability databases such as the NIST National Vulnerability Database (NVD), as well as emerging databases such as Open Source Vulnerabilities (OSV) and others that address gaps in databases such as NVD. We also cover the role of exploit databases and how they can be used for both good and harm, depending on the user.
Chapter 7: Vulnerability Chaining It's often said that defenders think in lists while attackers think in graphs. This is because attackers are often looking to chain vulnerabilities together to move laterally through environments or make their way toward sensitive resources. In this chapter, we discuss the concept of vulnerability chaining, as well as provide examples and gaps in the industry when it comes to focusing on vulnerability chaining.
Chapter 8: Vulnerability Threat Intelligence This chapter covers the role of vulnerability threat intelligence and advanced techniques such as threat hunting. We also discuss integrating threat intelligence into vulnerability management programs, including not just technologies but also people and process.
Chapter 9: Cloud, DevSecOps, and Software Supply Chain Security The modern threat landscape is complex, including cloud, a push for DevSecOps, and increasing attacks on the software supply chain. In this chapter, we go deep into these topics, including multicloud and hybrid cloud environments and containers, as well as the role of open source software and the systemic risks across the software supply chain.
Chapter 10: The Human Element in Vulnerability Management Most conversations about vulnerability management focus on the technical aspects, such as software and applications. However, behind all that technology are humans, operating in complex socio-technical environments, dealing with psychological stressors and challenges such as decision and alert fatigue. This chapter covers the human element of vulnerability management, including leading research on the topic from one of the authors.
Chapter 11: Secure-by-Design At the heart of vulnerability management is an uncomfortable truth: the process of “patch faster, fix faster” is broken. Organizations continue to struggle with mounting vulnerability backlogs and insecure products. This chapter discusses the push for secure-by-design/default software and products and some of the key players who have advocated for this paradigm shift. It also discusses some of the challenges involved in making this fundamental change in how we operate in the digital world.
Chapter 12: Vulnerability Management Maturity Model We conclude the book with a chapter looking at how to begin down the path of creating a mature vulnerability management model. We discuss key recommendations and steps, from asset management to continuous monitoring and integrating human factors. We hope to empower readers to modernize their vulnerability management programs and ultimately decrease organizational risk.
As the title implies, this book is intended for people who have an interest in vulnerability management, software, and digital and cyber physical systems. It is suited for various professional roles ranging from the C-suite (CISO, CTO, CEO, etc.) to security and software practitioners and aspiring entrants looking to better understand the vulnerability management practice and evolving landscape.
If you believe you have found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts an error may occur.
In order to submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line “Possible Book Errata Submission.”
The authors would appreciate your input and questions about this book! Email Chris Hughes at [email protected] and Dr. Nikki Robinson at [email protected].
Asset management is one of the most critical components of a vulnerability management program (VMP). Of all the fundamental building blocks of a successful VMP, it's crucial to get asset management right and complete before focusing on other aspects of vulnerability management.
Asset management is the listing or inventory of all hardware and software in an environment. Each environment has a different makeup of assets, including everything from mobile devices (e.g., laptops and cell phones) to application libraries and third-party software-as-a-service (SaaS) offerings. Without a comprehensive asset management program, organizations are limited in building mature VMPs with secure configuration, patch management, and continuous monitoring.
Asset management has evolved quite a bit over the last 10 years, with the advent of cloud infrastructure, increased use of SaaS, exponential growth of open source software use, and incredibly large and complex development environments. Years ago, asset management could be as simple as a spreadsheet with a list of asset names, tag numbers, and potentially an asset owner or IP address. Hardware and software inventories were kept separately and often managed by a single IT administrator. Yet with the increased use of cloud infrastructure, whether infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS, traditional asset management methods are simply no longer viable. A spreadsheet cannot feasibly keep information on complex, dynamic assets current enough to be useful for management.
Traditional vulnerability management components can no longer mature on manual or incomplete asset inventories. It's increasingly difficult to manage dynamic assets such as containers, which are meant to come online and be torn down at will. These asset types require a dynamic asset management program, one that can be updated quickly and can scale with large development projects. An asset library can no longer be used solely for managing mobile devices or hardware assets but must be capable of keeping updated information on ephemeral applications and tools.
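As a minimal sketch of what "dynamic" means here (the asset names, time-to-live value, and schema below are hypothetical illustrations, not from this book or any specific tool), an inventory for ephemeral assets can track a last-seen timestamp per asset and expire entries, such as containers, that have not checked in within a defined window:

```python
from datetime import datetime, timedelta

class DynamicInventory:
    """Toy inventory that expires assets not seen within a TTL window."""

    def __init__(self, ttl: timedelta):
        self.ttl = ttl          # how long an asset may go unseen
        self.last_seen = {}     # asset_id -> datetime of last observation

    def observe(self, asset_id: str, when: datetime) -> None:
        """Record that a discovery scan or agent check-in saw this asset."""
        self.last_seen[asset_id] = when

    def active(self, now: datetime) -> set:
        """Assets seen within the TTL window (e.g., live containers)."""
        return {a for a, t in self.last_seen.items() if now - t <= self.ttl}

    def expire(self, now: datetime) -> set:
        """Drop entries unseen for longer than the TTL and return them."""
        stale = {a for a, t in self.last_seen.items() if now - t > self.ttl}
        for a in stale:
            del self.last_seen[a]
        return stale

# Hypothetical example: one live container, one torn down hours ago.
now = datetime(2024, 1, 1, 12, 0)
inv = DynamicInventory(ttl=timedelta(minutes=15))
inv.observe("web-container-1", now - timedelta(minutes=5))
inv.observe("batch-container-7", now - timedelta(hours=2))
print(inv.active(now))   # {'web-container-1'}
print(inv.expire(now))   # {'batch-container-7'}
```

The point of the sketch is the design choice: a static spreadsheet records that an asset exists, while a dynamic inventory records when it was last observed, which is the only way to stay accurate for assets that appear and disappear at will.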
Without a modern approach to asset management, organizations have limited visibility of the hardware and software used by employees, which can have several cascading effects. Without knowledge of a laptop, for example, there is no way to determine whether it has proper monitoring software installed, whether it's still in the employee's possession, whether it's checking for updated patches, or whether it's compliant with organizational policies. And if an organization cannot see what software is installed on which systems, it has no way of knowing how many vulnerabilities that software has, what its potential attack surface is, or what dependencies it might have on other systems.
Other limitations of an immature asset management program are the “unknown unknowns.” If there are hardware or software assets that aren't effectively managed or visible to IT operations staff, organizations do not know the scope of vulnerabilities, inherent risks, or the interconnectivity of devices and applications. These limitations make it impossible to prioritize and remediate vulnerabilities effectively. It also makes it difficult to determine if applications are at the right patch level, if the application's version is at end of life/support, and if there are outstanding vulnerabilities or missing configurations that could lead to cyberattacks like distributed denial-of-service (DDoS) attacks, malware, or ransomware.
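To make the "unknown unknowns" concrete, a simple sketch (the asset names below are hypothetical) compares what a discovery scan actually observes on the network against what the inventory claims to manage. Anything discovered but not inventoried is invisible to the VMP, and anything inventoried but never observed may be a lost or untracked asset:

```python
# Hypothetical data: what the inventory records vs. what a network
# discovery scan actually observed.
inventory = {"laptop-001", "laptop-002", "srv-db-01", "srv-web-01"}
discovered = {"laptop-001", "srv-db-01", "srv-web-01",
              "printer-4f", "iot-thermostat"}

# Unknown unknowns: on the network but not in the inventory, so no
# patching, configuration, or vulnerability data exists for them.
unmanaged = discovered - inventory

# Ghost assets: inventoried but never observed, e.g., a laptop that
# left the building or a server decommissioned without being tracked.
ghosts = inventory - discovered

print(sorted(unmanaged))  # ['iot-thermostat', 'printer-4f']
print(sorted(ghosts))     # ['laptop-002']
```

Real asset management tools perform this reconciliation continuously and at far greater scale, but the underlying operation is the same set difference shown here.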
Asset management can be performed in a variety of ways. Organizations are using IT operations software, vulnerability scanning tools, cloud inventories, and even other configuration management software like ServiceNow (www.servicenow.com). This type of software can not only keep track of assets, but can also tie tickets and ongoing management of those devices with a system owner. Smaller organizations might still be managing assets manually, which limits the maturity and capability of a VMP. In this chapter, we discuss the common limitations of asset management tools and processes, possible impacts of an immature asset management program, and what organizations can do to create a modern approach to asset management.
In traditional data centers, asset management consists of the physical components in server racks—for example, networking devices, servers, power management, and any other physical devices in the organization. However, organizations have moved to a much more digital workforce, utilizing multiple mobile devices per employee. One employee might have a tablet, laptop, and smartphone, and use primarily online applications for collaboration versus solely working on a physical desktop located in an office setting.
Many organizations are moving to hybrid work environments where employees are working between an organization's office and their home or an off-site location. This type of work environment complicates the management of these devices, given that they may or may not be connected to the organization's virtual private network (VPN) or potentially cloud assets and servers. This setup has increased the challenge of managing and tracking mobile devices.
In modern organizations, managing all these mobile devices requires an asset management solution to handle all the operating systems (OSs) and types of applications required for online collaboration. A mobile toolkit includes asset management and inventory software, as well as configuration management, usually performed by a mobile device management (MDM) solution. This tool provides a management console to catalog each mobile device and assigns policies and security configurations as determined by the organization.
Several SaaS solutions are available, as are tools provided by mobile carriers. Apple devices (e.g., iPhones and iPads), for example, can be managed with Apple-focused solutions such as Jamf, while other devices and applications can be managed by MDM solutions like Miradore and Citrix Endpoint Management.
As most organizations move away from on-premises data centers, fewer servers and network devices require asset management. With the advent of the cloud, more organizations are migrating their physical assets to cloud infrastructure and using more ephemeral servers such as containers. Yet on-premises data centers still require an asset management solution to provide full visibility into all systems. And it's not just for security reasons: organizations also must manage systems and ensure they are online and functioning without hardware failures. Physical assets can provide early warning indicators of cyberattacks, and if they are not monitored properly, an organization could miss critical data needed to determine risk.
While physical asset management has typically focused on mobile devices, there has been an increased "return to work" effort across large organizations. This means that physical assets and MDM could grow in complexity, covering a mix of bring-your-own-device (BYOD) and corporate-owned assets. Such complexity might require integration across multiple products, or the use of two separate applications to manage physical assets, plus additional configuration settings on laptops and tablets. Because most organizations use one tool for inventory and a separate tool for configuration management, this adds another layer for system owners to review and manage for consistency.
Another category of assets that has become a major risk for organizations is Internet of Things (IoT) devices. With the interconnectivity of devices, IoT could be anything from a thermostat to a treadmill, home automation devices, or wearable devices like smartwatches. Because many organizations, particularly healthcare and medical organizations, use Wi-Fi or wireless connections, employees may have the option to connect their wearable devices to the local network.
Allowing these potentially vulnerable IoT devices to gain access to the network causes many concerns. The National Institute of Standards and Technology (NIST) has published a consumer's guide on the risks and potential security concerns around IoT devices. The NIST guide, “IoT Cybersecurity Criteria for Consumer Labeling Program,” came out in early 2022 and details a growing need for more consumer cybersecurity information around the risks of IoT devices. The Biden–Harris administration recently released additional guidance around consumer labeling to ensure consumers understand the risks associated with products (see www.whitehouse.gov/briefing-room/statements-releases/2023/07/18/biden-harris-administration-announces-cybersecurity-labeling-program-for-smart-devices-to-protect-american-consumers/).
Based on an article by Mary K. Pratt in TechTarget titled “Top 10 security threats and risks to prioritize” (www.techtarget.com/iotagenda/tip/5-IoT-security-threats-to-prioritize), there are numerous ways that IoT devices can pose risk to organizations. One of the biggest threats to all organizations highlighted in the article is the increased attack surface. As with mobile devices and an increasingly mobile, teleworking workforce, the more devices that connect to the network, the more risks and possible attack vectors there are. Organizations must have a good grasp of what IoT devices may exist on their network, by using either network scanning or sniffing to detect rogue or unexpected IoT devices. Sniffing, the passive capture of network traffic, can reveal unsecured devices or systems that may be exploitable. There are many ways to detect attacks in an environment, and these are covered at length in later chapters.
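The detection idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production scanner: the MAC addresses and inventory entries are made up, and in practice the "discovered" list would come from an ARP table or a network scanning tool such as nmap.

```python
# Sketch: flag potentially rogue IoT devices by diffing hosts discovered on
# the network against an approved asset inventory. All MAC addresses and
# inventory entries below are illustrative.

def find_rogue_devices(discovered_macs, approved_inventory):
    """Return discovered MAC addresses missing from the approved inventory."""
    return sorted(set(discovered_macs) - set(approved_inventory))

approved_inventory = {
    "aa:bb:cc:00:00:01": "conference-room thermostat",
    "aa:bb:cc:00:00:02": "lobby camera",
}

discovered_macs = [
    "aa:bb:cc:00:00:01",
    "aa:bb:cc:00:00:02",
    "de:ad:be:ef:00:99",  # not in inventory: candidate rogue device
]

rogues = find_rogue_devices(discovered_macs, approved_inventory)
print(rogues)  # ['de:ad:be:ef:00:99']
```

The set difference is the whole trick: anything observed on the wire that the inventory cannot account for becomes a candidate for investigation.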
Software inventories have become an increasingly important topic. While this area will be covered in depth in a later chapter, it's important to cover the basics here. Recent attacks and zero-days (previously unknown vulnerabilities in software or hardware that can be readily exploited) against SolarWinds, Log4j, and MOVEit have been big motivators for understanding the software landscape and attack surface. To understand large attack surfaces, organizations need to catalog and inventory their use of software tools, libraries, and dependencies.
Without a proper software inventory, organizations may scramble to find zero-days in their applications, which leaves little time for remediation and more time for attackers to exploit vulnerabilities. With many organizations leveraging larger and more complex development environments, software asset discovery and continuous monitoring become a crucial aspect of risk management.
For example, if an organization has limited visibility into which libraries developers are adding, removing, patching or not patching, their security team will be unable to determine risk and prioritize patching and remediation. If any libraries and dependencies go undetected, or are not reported automatically to an inventory tool, the organization would be unaware of the number and severity of vulnerabilities that do exist.
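To make that gap concrete, here is a minimal Python sketch, assuming dependencies are recorded in a pip-style `name==version` manifest (the manifest text, package names, and inventory contents are all hypothetical). It reports libraries developers are using that never made it into the inventory tool:

```python
# Sketch: find libraries present in a dependency manifest but absent from the
# security team's inventory. Manifest and inventory contents are illustrative;
# a real pipeline would pull them from the build system and the inventory tool.

def parse_manifest(text):
    """Parse 'name==version' lines into a {name: version} dict."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        deps[name] = version
    return deps

manifest = """
# build dependencies
requests==2.31.0
log4j-shim==1.0.2
leftpad==0.9.1
"""

inventory = {"requests": "2.31.0", "log4j-shim": "1.0.2"}

untracked = sorted(set(parse_manifest(manifest)) - set(inventory))
print(untracked)  # ['leftpad']
```

Run on every build, a check like this keeps the inventory honest: any dependency a developer adds shows up as untracked until it is cataloged.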
Another concern is the possibility of using open source software that may not be patched or maintained regularly. And the larger the development environment, the more possibility there is for unknown and undetected vulnerabilities and missing secure configurations.
With digital transformation, agile software development, and an increasing focus on artificial intelligence (AI), moving systems to the cloud has become an integral part of managing infrastructure and complex development environments. More organizations are considering multicloud or hybrid cloud environments, using either two cloud providers or a private and public cloud deployment with the same provider. Multicloud environments allow for more resiliency and scalability, whereas private and public cloud options (i.e., a hybrid cloud) allow organizations to keep specific assets apart from the public cloud infrastructure.
Figure 1.1 provides a simple explanation of the differences between hybrid and multicloud environments. A hybrid cloud setup uses a combination of a private and public cloud option, but typically within the same cloud service provider (CSP). A multicloud solution uses two or more different CSPs to host the infrastructure.
Figure 1.1: Hybrid vs. multicloud environments
Figure 1.1 shows the unique characteristics of multicloud environments compared to hybrid cloud environments. Hybrid cloud is made up of one public cloud and one (or more) private cloud environments while using the same CSP, whereas a multicloud solution uses a combination of private and public cloud environments across multiple CSPs.
Organizations may need multiple cloud providers for several reasons. One example is the need to run production and nonproduction workloads in one cloud environment while using a second cloud for resiliency and quick failover in the event of a network or regional failure at one provider. Another is to run production and nonproduction workloads in one cloud environment while keeping backups and long-term storage in another, for recovery in the event of data loss.
Unfortunately, using multiple cloud providers complicates an asset management strategy. One of the biggest concerns with a multicloud strategy is that collecting, automating, and tracking assets across environments may require multiple tools. Modern organizations pursuing multicloud can use third-party tools to sync data between disparate workloads; tools like CloudSphere aim to solve secure configuration and inventory concerns by collecting and maintaining asset data. But this means each cloud environment may need to open various ports and create service accounts to manage the information, and it would be incredibly easy to lose sight of each environment's ephemeral systems unless they were mirrored.
A hybrid cloud solution could potentially be used for similar reasons, but the architecture is quite different. A hybrid cloud utilizes both public and private cloud environments. Organizations, for example, might use this strategy to store certain high-impact data and assets in the private cloud, while keeping lower-impact items in a lower-cost public cloud environment. This may complicate asset management in a few ways, but it can also be beneficial for organizations looking to strategize spending and budget over time. Hybrid cloud environments can also be a great solution for organizations who want to keep intellectual property (IP), personally identifiable information (PII), or other sensitive data in a private cloud, while keeping other data and workloads that are less critical to the business in the public cloud environment.
Figure 1.2 highlights the various layers on which an organization builds its IT infrastructure. The top layer includes everything from the platform to cloud management and infrastructure, as well as the overall networking architecture. The mid-layer includes the infrastructure operations applications, the development environment, and the major security components that continuously monitor the environment. (A sovereign cloud environment is one in which the provider stores each organization's data within its own country.)
Figure 1.2: IT infrastructure layers
Figure 1.2 illustrates the different layers of platform services in cloud environments. The top layer contains services like cloud and edge computing, management interfaces, and the application platform. The middle layers are composed of development environments, infrastructure operations, and the major security components. The cloud layer is really the platform itself, whether it's Amazon Web Services (AWS) or Oracle.
One of the main concerns in using hybrid cloud solutions is the potential limitations between the private and public cloud environments. These limitations include the sheer complexity of managing two separate cloud environments as well as the security concerns of using two separate cloud environments and manually implementing the same controls. Another possible solution would be to run the same tool in both environments to segment the networks and aggregate the data elsewhere. But allowing access to the private cloud from the public cloud could increase the risk of compromise between both environments.
Many traditional asset management tools did not account for third-party software or open source software (OSS) being used in modern organizations. But the rampant use of OSS has complicated the asset management and software library processes and the ability to calculate risk.
As displayed in Figure 1.3, software assets are used across the various enterprise layers. Starting with the business layer, applications like Java and Log4j (i.e., OSS components) build the foundation for development environments. Additional software in the presentation and service layers may be required to integrate and communicate to build complex applications.
Figure 1.3: Various enterprise layers
Figure 1.3 outlines the various layers that work together across an enterprise. The business layer is the backbone of the rest of the layers, and it has major connectivity between all the other enterprise environments—everything from the data and persistence layer that contains databases, to the infrastructure layer using Red Hat and OS components. Each piece of this matrix works together to create a comprehensive platform to support business functions.
Due to increased OSS use, organizations are discovering just how many dependencies and intricacies OSS introduces in complex, large application environments. Many developers leverage OSS because it shortens mean time to delivery: a developer can spend less time rebuilding code that already exists by using tools other developers have built. Lowering time spent coding and gaining consistency in their applications allows developers to focus on more complex and nuanced development cycles. Yet with the increased use of OSS comes the need to catalog and understand what types of libraries and tools are being used within applications.
One of the more difficult things to collect and maintain within an inventory is the full set of third parties involved: outside companies and their applications, contractors, SaaS products, and any other external software or hardware. For example, an organization might choose to use a firewall service provider rather than running its own firewall appliances and network configuration, due to a lack of skilled personnel or other resources to manage those assets.
Another third-party asset example arises when an organization outsources its accounting or IT helpdesk to an outside firm. These third parties must have access to corporate resources, potentially requiring domain credentials or open ports and access to the organization's SaaS products or infrastructure. A third-party contractor might also have mobile devices that require access to the environment, further expanding the potential attack surface.
Since the early 2020s, malicious actors have been leveraging open account access or infrastructure from third-party applications to gain access to corporate secrets. Cataloging these third-party applications can be performed using a variety of tools and methods but may be discovered by vulnerability scanning tools like Tenable or Qualys. Therefore, it's critical for organizations to determine what method is best for discovering and monitoring these third-party applications in the environment to protect themselves from risk.
Static lists will not capture changing versions, patches, or the removal of OSS within an environment. Organizations must move to dynamic asset discovery and categorization, because a manual process invites human error and missed assets. Every missed asset with exploitable vulnerabilities or misconfigurations is a possible entry point for an attacker. The process should be as automated as possible, allowing developers to change their applications continually without running into major configuration management hurdles.
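One way to approximate dynamic discovery is to diff successive inventory snapshots so that added, removed, and re-versioned assets surface automatically instead of relying on a static list. The Python sketch below uses made-up snapshot data to illustrate the shape of such a check:

```python
# Sketch: diff two inventory snapshots ({asset: version}) taken on successive
# discovery runs. Snapshot contents below are illustrative only.

def diff_snapshots(previous, current):
    """Report assets added, removed, or re-versioned between two scans."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(
        name for name in set(previous) & set(current)
        if previous[name] != current[name]
    )
    return {"added": added, "removed": removed, "changed": changed}

previous = {"openssl": "3.0.7", "tomcat": "9.0.71", "python": "3.10"}
current = {"openssl": "3.0.8", "tomcat": "9.0.71", "curl": "8.4.0"}

report = diff_snapshots(previous, current)
print(report)
# {'added': ['curl'], 'removed': ['python'], 'changed': ['openssl']}
```

Scheduled after each scan, every entry in the report becomes a review task for the asset's owner, rather than a surprise discovered during an incident.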
Using a developer-oriented platform like GitHub or GitLab is one possible solution for dynamic OSS application inventories. The recommendation is to use the repository platform the developers already rely on, whether that's GitHub, GitLab, or another option. The most important component of any of these options is a consistent process known to all developers.
Documentation and a standard operating procedure (SOP) for OSS inventory management are just as important as the tools that perform inventory management. These platforms allow developers to manage OSS, are usually cross-platform, and provide additional functionality beyond standard cloud inventory management systems. There are also several "for purchase" options, and organizations should carefully weigh their own unique needs before selecting a product.
While many small to medium businesses (SMBs) are choosing to build cloud environments from the start, many organizations still run on-premises environments or choose smaller on-premises data centers to manage specific data. This mix of solutions complicates the tooling landscape for managing hardware and software appropriately. Hardware in data centers includes everything from servers to network devices, as well as all the IoT devices that may tie into the corporate network. Software incorporates everything from SaaS products like email services to the actual tools and libraries used by developers, like Python and Tomcat.
In reviewing Figure 1.4, it's easy to see how much more complex physical data centers can be than their cloud counterparts. Physical data centers require power management, servers, racks, cables, and physical storage devices like storage area networks (SANs).
Figure 1.4: Physical data centers
Source: pixelnest/Adobe Stock Photos, khamkula/Adobe Stock and shymar27/Adobe Stock.
In on-premises environments, assets are a mix of hardware and software, in addition to any other SaaS products that the organization is using. Part of the trouble is that many organizations who have on-premises environments are also supporting workloads in the cloud.
It's rare to find a tool to manage all of an organization's systems and applications and parse the information into one spot. But organizations should work toward using as few tools as possible, while also balancing the needs of an ever-changing hardware and software landscape.
Because hardware fails and must be replaced over time, having a tool like Microsoft Configuration Manager may be good for both inventory and patch management. Organizations can benefit from this automation and reduce the overhead of manual patching and remediation activities.
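The payoff of pairing inventory with patch data can be sketched in a few lines. Assuming each asset record carries its installed and approved patch levels (hypothetical field names here, not any particular tool's schema), the overdue assets fall out of a simple filter:

```python
# Sketch: use combined inventory and patch data to list assets running behind
# the approved patch level. Records and field names are illustrative; a real
# tool would compare version ordering rather than simple equality.

def overdue_assets(records):
    """Return names of assets whose installed patch differs from the approved one."""
    return sorted(
        record["name"]
        for record in records
        if record["installed"] != record["approved"]
    )

records = [
    {"name": "web-01", "installed": "KB5034122", "approved": "KB5034122"},
    {"name": "db-02", "installed": "KB5031356", "approved": "KB5034122"},
]

print(overdue_assets(records))  # ['db-02']
```

Even this naive comparison shows why a combined inventory and patch tool reduces overhead: the remediation queue is derived from data already being collected, not assembled by hand.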
Figure 1.5 shows the vast difference between on-premises and cloud environments, based on the types of hardware and software each supports. On-premises environments require appliances and physical devices, such as hardware firewalls, network switches, and physical servers that sit in a server rack. Cloud environments, by contrast, rely on web application firewalls (WAFs), cloud-native tooling, static and dynamic application scanning, and more software-defined assets such as containers and virtual servers.
Figure 1.5: On-premises vs. cloud environments
There are multiple tool options for tracking assets (physical or virtual, hardware or software) across on-premises and cloud environments. For most organizations, inventories from cloud systems and application libraries may need to be consolidated into one platform. A few tools available today will catalog and categorize hardware, software, continuous integration/continuous delivery (CI/CD) pipelines, SaaS, and cloud platform inventories into one dashboard.
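At its core, that consolidation step is a merge of per-source inventories keyed on a common identifier. The Python sketch below illustrates the idea with invented source names and records, keeping the union of attributes for each asset and noting which sources reported it:

```python
# Sketch: merge asset records from several inventory sources into one view,
# keyed by hostname. Source names and records are illustrative; real inputs
# would come from exports or APIs of the individual inventory tools.

def consolidate(sources):
    """Merge {source: {host: attrs}} inventories into {host: merged_attrs}."""
    merged = {}
    for source_name, records in sources.items():
        for host, attrs in records.items():
            entry = merged.setdefault(host, {"sources": []})
            entry["sources"].append(source_name)
            entry.update(attrs)  # assumes attrs never contain a "sources" key
    return merged

sources = {
    "cloud": {"app-01": {"region": "us-east-1"}},
    "scanner": {"app-01": {"os": "ubuntu-22.04"}, "db-01": {"os": "rhel-9"}},
}

merged = consolidate(sources)
print(sorted(merged))               # ['app-01', 'db-01']
print(merged["app-01"]["sources"])  # ['cloud', 'scanner']
```

A useful side effect of tracking the reporting sources is that assets seen by only one tool (here, db-01) stand out, which is often the first sign of a visibility gap.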
There is hope, but asset management is a moving target that must be re-evaluated any time new products or devices are brought on board. Just like patch management, monitoring and logging, and all other cybersecurity activities, asset management must be an iterative and continuous process.
To begin, tools like Salesforce, ServiceNow, Microsoft Configuration Manager, and others have been standard IT asset management tools for many years. Many large organizations leverage ServiceNow or a similar ticketing system because of its ability to catalog assets and assign maintenance and operations tickets to those assets and their respective owners. However, this may not be an option for SMBs. Smaller organizations may need to leverage open source tools or the inventory management systems that come with their CSP. If you're using a small cloud environment, whether private or public, it makes more sense to leverage the CSP's in-house capabilities and compare those results against a vulnerability scanner for validation. Another lightweight option is Asset Panda, which can be used to manage inventory for cloud environments.