Dramatically lower the cyber risk posed by third-party software and vendors in your organization.

In Zero Trust and Third-Party Risk, veteran cybersecurity leader Gregory Rasner delivers an accessible and authoritative walkthrough of the fundamentals and finer points of the zero trust philosophy and its application to the mitigation of third-party cyber risk. In this book, you'll explore how to build a zero trust program and nurture it to maturity. You will also learn how and why zero trust is so effective in reducing third-party cybersecurity risk. The author uses the story of a fictional organization, KC Enterprises, to illustrate the real-world application of zero trust principles. He takes you through a full zero trust implementation cycle, from initial breach to cybersecurity program maintenance and upkeep.

You'll also find:

* Explanations of the processes, controls, and programs that make up the zero trust doctrine
* Descriptions of the five pillars of implementing zero trust with third-party vendors
* Numerous examples, use cases, and stories that highlight the real-world utility of zero trust

An essential resource for board members, executives, managers, and other business leaders, Zero Trust and Third-Party Risk will also earn a place on the bookshelves of technical and cybersecurity practitioners, as well as compliance professionals seeking effective strategies to dramatically lower cyber risk.
Page count: 291
Publication year: 2023
Cover
Table of Contents
Title Page
Foreword
INTRODUCTION: Reduce the Blast Radius
PART I: Zero Trust and Third-Party Risk Explained
CHAPTER 1: Overview of Zero Trust and Third-Party Risk
Zero Trust
Cybersecurity and Third-Party Risk
ZT with CTPR
Notes
CHAPTER 2: Zero Trust and Third-Party Risk Model
Zero Trust and Third-Party Users
Zero Trust and Third-Party Applications
Zero Trust and Third-Party Infrastructure
CHAPTER 3: Zero Trust and Fourth-Party Cloud (SaaS)
Cloud Service Providers and Zero Trust
Vendors and Zero Trust Strategy
Part I: Zero Trust and Third-Party Risk Explained Summary
PART II: Apply the Lessons from Part I
CHAPTER 4: KC Enterprises: Lessons Learned in ZT and CTPR
Kristina Conglomerate Enterprises
KC Enterprises’ Cyber Third-Party Risk Program
A Really Bad Day
Then the Other Shoe Dropped
CHAPTER 5: Plan for a Plan
KC's ZT and CTPR Journey
Define the Protect Surface
Map Transaction Flows
Architecture Environment
Deploy Zero Trust Policies
Written Policy Changes
Monitor and Maintain
Part II: Apply the Lessons from Part I Summary
Acknowledgments
About the Author
Index
Copyright
Dedication
About the Technical Editor
End User License Agreement
Chapter 1
TABLE 1.1 Zero Trust and Third-Party OSI Table
Chapter 5
TABLE 5.1 Zero Trust and Third-Party Risk OSI Table
Chapter 1
FIGURE 1.1 U.S. Presidential Motorcade and Security related to Zero Trust
Chapter 3
FIGURE 3.1 Zero Trust Maturity Model Pillars
Chapter 5
FIGURE 5.1 KC Enterprises’ architecture pre-zero trust
FIGURE 5.2 KC Enterprises’ zero trust and third-party risk architecture
Gregory C. Rasner
Zero trust is like kung fu.
Before we get into a debate about whether Brazilian jiu jitsu or Krav Maga is better, I'm just using kung fu as a general term for the personal discipline involved in mastering a martial art. Zero trust is the discipline of protecting yourself and your community in the cyber world.
In the cyber world, it's illegal to attack back, so our discipline is defensive.
Our adversaries don't have the element of surprise anymore. We know what they're after: money, information, secrets. We also know how they get it. No matter what technology you use, what industry you're in, or what role you play in your organization, the one common denominator that attackers exploit is trust. We've evolved our defense to focus on trust relationships in digital systems, hence the name zero trust.
Our discipline focuses on how to remove the trust relationships in digital systems. In my book, Project Zero Trust (Ascent Audio, 2022), I argue that everyone inside your organization should play a role in your zero trust effort, both inside IT and outside of IT.
But what about people outside of your company? Your partners, your suppliers, your vendors?
Most martial arts can be traced back to a school or group of people who founded it as a school of thought. For zero trust, John Kindervag is the kung fu master.
In 2010, while he was the lead analyst for cybersecurity at Forrester Research, John coined the term zero trust. In these two words, John attempted to distill the most successful strategy for preventing breaches that he had seen deployed in real companies around the world.
John wrote this strategy, not just for security people who were already starting to make a shift in their defensive strategies based on changes to technology and our adversaries’ evolving tactics but for everyone else in information technology as well. Zero trust isn't just for us security nerds. We need everyone in our organizations to help.
In Project Zero Trust, I worked with John to create a fictional case study about a company that uses his repeatable five-step design methodology and his zero trust maturity model to secure their systems after a ransomware incident. I brought my experiences as a chief information security officer to highlight how organizations can apply zero trust to every critical aspect of cybersecurity: from physical security to enterprise resource planning (ERP) or customer relationship management (CRM) software, to identity, to cloud, DevOps, and security operations centers.
In other words, I wrote my book for your internal team.
But even if you get everything right in your own organization, that might not be enough in your zero trust journey. Today, two-thirds of all breaches are caused by vendors. Even if you've gotten everything right in your organization, you may have a blind spot to one of your biggest risks.
If zero trust is like kung fu, then Gregory Rasner is one of the black belts.
Gregory Rasner literally wrote the book on cybersecurity and third-party risk. And now he's applied zero trust to third-party risk to help complete our defense. We are stronger together than we are apart, and ensuring that your vendors or partners are secure is critical to success when it comes to cybersecurity.
Right now, when you think of third-party risk and zero trust, I hope you're picturing one of those kung fu movies where the heroes have to fight side by side, sometimes intertwined, doing incredibly intricate and daring moves to thwart their adversaries. That's a pretty good analogy for how you'll be able to work with your partners to defend yourselves together after reading Gregory Rasner's book.
—George Finney
CEO, Well Aware Security and author of Project Zero Trust
A breach of your third and fourth parties is mathematically inevitable. The Identity Theft Resource Center reported a 14 percent increase in data breaches in 2022 over the preceding year, which follows a 68 percent increase from 2020 to 2021 (and 2020 broke the 2017 record with a 23 percent increase). The concept of zero trust operates on the assumption that a breach will happen, and it produces a strategy designed to reduce the impact (the blast radius) of that inevitable breach or incident. Considering the continued exponential growth of malicious cyber activities and the fact that most organizations have numerous vendors, embracing a zero trust strategy becomes the most reliable way to significantly decrease your vulnerability to third-party cyber risks.
In the past several years, cybersecurity risk in third-party risk management has increased significantly as malicious and criminal cyber activity has surged (up 800 percent since early 2020, according to FBI cyber reporting). In late 2020, the SolarWinds breach came to light: a highly skilled and persistent actor weaponized widely used software to infiltrate its ultimate targets, including large technology companies and many three-letter government agencies. This breach served as a wake-up call for the cybersecurity and third-party risk management communities, a tangible example of a dangerous and capable hacking organization leveraging a vendor to gain access to its intended victims. Since then, the frequency of potential and actual breaches involving third and even higher-level parties has risen substantially. Even before 2020, organizations were struggling with the challenges of cyber and third-party risk management, and the exponential increase in cyber incidents, breaches, and related events within their vendor networks has created additional difficulties, even for companies with mature risk management programs. Considering all of this, how can we reduce the risks in this space when cyber activity is growing exponentially and advanced persistent threat actors are taking advantage of control gaps?
Recently, a new strategy has been gaining headway: zero trust. Zero trust operates on the premise that a breach is inevitable, and its objective is to reduce the impact when one occurs. There is real truth to the idea that breaches are bound to happen, given the growing number of cybersecurity and technology companies that have been breached despite having strong cybersecurity measures in place. This mindset also aligns with the reality that risk is never zero and never completely eliminated: cyber teams work to reduce risk; they cannot remove it entirely. Zero trust means implementing measures to protect assets and adopting a more mature identity and access management process, incorporating features such as multifactor authentication, least privilege, and enhanced network access controls.
Considering that the level of malicious cyber activity is unlikely to decrease anytime soon, if ever, it's unrealistic to expect a reduction in the number of cyber incidents, events, and breaches. Does anyone think the lesson that advanced persistent threat actors took from the SolarWinds breach was to stop doing the same in the future? SolarWinds showed how easy it is for a malicious actor to use a third party to gain access when customers don't hold their vendors to a cybersecurity standard. From the viewpoint of zero trust, a breach is inevitable, especially at your third parties. Therefore, adopting a zero trust strategy becomes crucial to minimizing the blast radius when a third-party breach occurs. Applying zero trust to third-party risk and vendors allows for a far greater reduction of risk because it requires an organization to compartmentalize and cordon off areas with segmentation and access controls. Zero trust can be a challenge to implement, and many organizations struggle to determine where to start. Starting the journey with cyber third-party risk management provides an easily defined area in which to deploy, and it often yields greater risk reduction than other areas of the company.
The book is structured into two main parts: Part I provides an overview of the intersection between zero trust and third-party risk management, and then discusses the implementation of each domain: users, devices, and applications. Because zero trust is not a technology or a product, the emphasis is on processes, programs, and controls. Part I provides detailed insights into the necessary processes, programs, and controls for implementing zero trust in cyber third-party risk management, incorporating relevant examples and use cases whenever possible. Part II centers around the experiences of a fictitious company called KC Enterprises, which was introduced originally in my previous book, Cybersecurity and Third-Party Risk: Third Party Threat Hunting (Wiley, 2021). KC Enterprises suffers a breach caused by a third party, prompting them to begin their journey of zero trust and third-party risk management. Part II also allows you to observe how an organization implements a zero trust strategy to effectively mitigate vendor-related risks. It builds upon the lessons from Part I, offering practical insights into reducing vendor risk via the implementation of zero trust principles.
The intersection of zero trust (ZT) and third-party risk (TPR) can be a challenging one to navigate. Neither is a set of technologies; both are combinations of people, processes, and technologies in service of a strategy. Implementing them isn't as simple as buying and installing a bunch of new stuff and walking away; it requires finding the overlap between the two, making informed decisions about the changes required, and carrying those changes out.
Zero trust can be intimidating for any organization to implement, given that it is not a technology but changes to how specific security controls are accomplished in the enterprise. The next pages briefly cover the history of ZT to enable you to better understand the principles and then see the overlap with TPR.
Zero trust is a strategy—it is not a tool or technology. To better understand the strategy, it is necessary to understand who developed it, why, and how. ZT was borne out of John Kindervag's observation that the previous trust model (perimeter-based security) was the fundamental cause of most data breaches. Kindervag expanded on this concept in “No More Chewy Centers: Introducing the Zero Trust Model of Information Security”1. In 2016, John updated his research with “No More Chewy Centers: The Zero Trust Model for Information Security, Vision: The Security Architecture and Operations Playbook.”2 The term chewy center derives from the previous (old) model in which information security professionals wanted their network to be like M&Ms: hard on the outside but with a soft and chewy center.
The perimeter-based, firewall-focused security models were ineffective against threats. The assumption that we trust all users, applications, and transactions once they've passed the firewall is folly and has been proven time and again to be wrong. Which interface is trusted and which untrusted? How do we know which packets to trust? Many attacks come from malicious insiders who are already inside the chewy center, munching away at the lack of controls past the crunchy outside.
ZT does not seek to gain trust but assumes all traffic is untrusted. The requirement in ZT becomes to ensure that resources are accessed securely wherever they are located, that access follows least privilege, that access controls are strictly enforced, and that all traffic is logged and inspected. This approach eliminates the chewy center by removing trust from the process.
Zero trust is not a project but an updated approach to thinking about information security. As previously mentioned, ZT is a strategy, not a tool or technology. Strategy is defined as “a plan of action or policy designed to achieve a major or overall aim.” A successful strategy requires structure, and one of the most widely used comprises the four levels of warfare: policy, strategy, tactics, and operations. Policy has the overall grand strategy or political outcome as the ultimate goal; for example, the grand strategy in World War II for the Allies was the unconditional surrender of the Axis powers. Under the policy is the strategy; in the same WWII analogy, this would be the European and Asia Theater strategies for defeating the Axis powers in those regions. Tactics are the tools of war (tanks, planes, ships, etc.), and operations are the ways those tools are used (battles, engagements, etc.).
Taking that same outlook on cyber strategy, the grand strategy is to stop all data breaches. That should be borne out through all downstream activities as the outcome of this grand strategy. The strategy at the next level is ZT. To successfully meet the top-level grand strategy, ZT will be the “big idea” deployed down into the tactics and operations. Tactics are the tools and technologies leveraged to achieve the ZT strategy, and operations are the policies and governance that ensure successful execution up the strategy stack.
Connecting the strategy and ultimate goals of ZT drives the definition: a strategy designed to stop data breaches and make other cyberattacks unsuccessful by eliminating trust from digital systems.
Three concepts are crucial to the success of any ZT strategy: secure all resources, strictly enforce access controls, and verify always. These concepts derive from the strategy that you can no longer trust any traffic on your network. The previous model of trusted network internal and untrusted outside your network is over, and everything is untrusted.
One of the best visual examples of ZT was shown to me by John Kindervag himself, leveraging the US presidential motorcade as the visual tool. Much like ZT, the Secret Service trusts no one who approaches the president.
Figure 1.1 shows the presidential motorcade from the 2005 inauguration of President George W. Bush. The protect surface is the oval where the president sits inside the limousine, which is referred to as “the Beast.” The Beast has many security features built into it to protect this asset. This is the area ZT is designed to protect: the most valuable asset. The four circles represent the controls around the dotted line of the microperimeter. The pentagon shapes represent the monitoring happening around the protect surface, always looking for anomalous behavior coming from anywhere, not just internally or externally; hence, they face outward and constantly scan the area. The dotted lines at the top and bottom of the picture are the perimeter and clearly show the “firewall” equivalent of the fence. To further illustrate the concept of the protect surface being the focus of ZT, consider a worst-case scenario in which, as a result of an attack on the president, one of the saluting service members is injured, but the ZT strategy works and the president comes out unharmed. While it would be tragic if the service member were killed or injured, the mission of ZT was successful. Take the analogy to your environment: your ZT strategy will be considered successful if, during a cyber event, your customer database with credit card numbers is unseen and unmolested even though you lose public data that was not inside the protect surface.
FIGURE 1.1 U.S. Presidential Motorcade and Security related to Zero Trust
1. Secure Resources For Zero Trust to work as a strategy, it is critical to ensure that all resources are accessed securely, regardless of where they are located and where the traffic accessing them originates. Treat all traffic as a threat until it has been authorized, inspected, and secured. For example, all traffic should be encrypted, whether internal or external. Insider abuse is often the largest cyber threat organizations face. All traffic, both internal and external, must be inspected for malicious activity and authorized to access resources. However, it isn't just about access; the level of access must be more specific, via a least-privileged strategy with strictly enforced access controls.
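As a small illustration of treating even internal traffic as untrusted, the sketch below (a hypothetical configuration of my own, not taken from the book) uses Python's standard ssl module to build a client context that refuses unencrypted, unverified, or legacy-protocol connections, regardless of whether the peer is inside or outside the network:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """TLS client context for internal and external connections alike."""
    ctx = ssl.create_default_context()            # loads the system CA trust store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1
    ctx.check_hostname = True                     # certificate must match the peer name
    ctx.verify_mode = ssl.CERT_REQUIRED           # unverified peers are rejected
    return ctx
```

In a ZT posture, every internal socket would be wrapped with a context like this, rather than reserving TLS for "external" connections only.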
2. Least Privilege and Access Control The principle of least privilege grants users or systems the smallest amount of access to resources needed to perform their tasks. Nothing more, nada. Using this is a standard ZT practice, and users and systems should be offered permissions only when required to perform their duties. Providing users or systems permissions beyond the scope of their requirements can allow them to gain access or change data. I intentionally used the term users or systems here because although users are typically associated with people, much of the data access is carried out by systems such as computers, software, or code. These accounts often have excessive privileges or access beyond what they actually need for their intended functions.
Consider a nontechnical example of why and how: you ask a neighbor to watch your house while you're away on vacation. The level of work required of the neighbor dictates the level of access provided. If you just want the neighbor to check your mailbox, you give them only the mailbox key, not your house key. However, if you need them to water your indoor plants and walk your dog, you must give them a house key. Perhaps you don't want them to check the mail but only to tend your houseplants and dog; in that case, you give them only the house key, not the mailbox key. Further, when you're not on vacation, the neighbor doesn't keep the keys, because they don't need them.
Here is an example of how this should work with a system. A print server accepts print jobs from the local network and copies the documents into a spool directory, from which they are printed. When the print job finishes, the server should surrender its right to access that file/spool directory because it no longer needs the resource (until the next print request). One of the most infamous violations occurs in Internet mail servers (sendmail is a classic example), which require root access to bind to port 25, the standard Simple Mail Transfer Protocol (SMTP) port. Once the server is bound to port 25, it should relinquish that root-level access. If it does not, because it was not designed or coded to follow least privilege, an attacker can still leverage that root-level access: the server could be tricked into running code as root, and anything the attacker attempts at this level of access will succeed.
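The "use it, then give it back" pattern in the print-server and mail-server examples can be modeled as a scoped grant. The following is a hedged sketch (the PrivilegeSet class and the right names are invented for illustration): a right exists only for the duration of the task and is relinquished automatically afterward, even if the task fails.

```python
from contextlib import contextmanager

class PrivilegeSet:
    """Tracks which rights a process currently holds."""
    def __init__(self):
        self.held = set()

    @contextmanager
    def checkout(self, right: str):
        # Grant the right only for the duration of the task, then drop it,
        # mirroring a mail server dropping root after binding port 25 or a
        # print server releasing the spool directory after the job prints.
        self.held.add(right)
        try:
            yield
        finally:
            self.held.discard(right)

privs = PrivilegeSet()
with privs.checkout("spool-dir-write"):
    assert "spool-dir-write" in privs.held    # held only while printing
assert "spool-dir-write" not in privs.held    # surrendered afterward
```

The `finally` clause is the important design choice: the privilege is dropped even when the work inside the block raises an exception, so a crash never leaves elevated access behind.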
Access controls must be strict and based on minimal privilege. Currently, the best method for implementing such access controls is role-based controls for everyone, employees and third parties alike. Role-based access control (RBAC) is standard best practice, and most software, infrastructure, and IAM systems are designed with it in mind. Roles are defined by the minimum level of access required, and users or systems are placed into these roles as a method of enforcing access control. For example, access to a company's finance system involves many different roles, and thus permissions: the analyst who works on accounts receivable has access only to A/R, whereas the chief financial officer has access to all of the financial records; the system administrator has access only to the system configuration for the finance software but not to any of the financial records themselves. The backup system that takes nightly snapshots of the database has access only to pause the software's processing so it can safely back up the system.
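The finance-system example can be sketched as a deny-by-default role map. The role names and permission strings below are invented for illustration and are not from any real product:

```python
# Minimal RBAC check for the finance-system example. Unknown roles and
# unlisted permissions are denied by default (least privilege).
ROLE_PERMISSIONS = {
    "ar_analyst":   {"read:accounts_receivable"},
    "cfo":          {"read:accounts_receivable", "read:general_ledger",
                     "read:payroll"},
    "sysadmin":     {"configure:finance_app"},    # configuration only, no records
    "backup_agent": {"suspend:finance_app"},      # pause processing to back up
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note that the check never falls through to "allow": an unrecognized role gets an empty permission set, which is the RBAC equivalent of removing the chewy center.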
Privileged users, those with administrator or root-level access, can do a lot of damage, both intentionally and accidentally. Malicious actors always strive to compromise these accounts so that they can more easily steal data, wreck systems, and plant malicious code. These accounts need to be managed by privileged identity management (PIM), which provides visibility into their activities and has these superusers check out much stronger passwords than a human could memorize, reducing the risk.
Last but not least is governance as part of the overall process for access controls. Cyber governance is all the methods and tools used by an organization to respond to cybersecurity risks, including policies, processes, and programs. NIST describes governance as “the policies, procedures, and process to manage and monitor the organization's regulatory, legal, risk, environmental, and operational requirements are understood and inform management of cybersecurity risk.”3 If there is no governance structure over what is being done to secure information systems, then it is not a repeatable process, and failure is inevitable.
Identity and access management (IAM) strategies and tools are part of almost every organization's standard practices. IAM describes everything around user identities, user authentication, and access controls to resources. Privileged access management (PAM) and privileged identity management (PIM) are subsets of the IAM strategy and tools. PIM policies control how elevated, privileged users modify settings, provision and deprovision access, and make other changes to user access. PIM solutions also offer the capability to monitor privileged user behavior and access, preventing users from accumulating permissions that violate least-privilege rules. PAM is the process of controlling and monitoring privileged access to resources. PAM solutions manage credentials, provide just-in-time access, and authenticate users. These tools also provide session monitoring, access logs, and alerts. PAM addresses how to monitor and control access when a user requests a resource, whereas PIM addresses the access a privileged user has already been granted. Understanding the distinction between PIM and PAM is helpful, even though many people confuse them and sometimes use the terms interchangeably.
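The just-in-time access idea at the heart of PAM can be sketched in a few lines. This is a toy model, not how any particular PAM product works; the class and account names are invented. The point is that a privilege is a time-boxed grant that expires on its own rather than a standing right:

```python
import time
from typing import Optional

class JITGrant:
    """A privilege granted for a short window; it expires automatically."""
    def __init__(self, user: str, resource: str, ttl_seconds: float,
                 now: Optional[float] = None):
        start = time.time() if now is None else now
        self.user = user
        self.resource = resource
        self.expires_at = start + ttl_seconds

    def is_valid(self, now: Optional[float] = None) -> bool:
        current = time.time() if now is None else now
        return current < self.expires_at

# A DBA checks out 15 minutes of access to a production database.
grant = JITGrant("dba-jsmith", "prod-db", ttl_seconds=900, now=0.0)
assert grant.is_valid(now=899.0)       # inside the 15-minute window
assert not grant.is_valid(now=900.0)   # expired; access must be re-requested
```

Contrast this with a standing admin account: when the window closes, there is nothing left for an attacker to steal.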
3. Ongoing Monitoring and Validation In the old model of chewy centers, most organizations focused on monitoring traffic primarily at external interfaces. In the ZT model, the requirement is to monitor all traffic, internal and external. Whether the threat is a malicious insider or an attacker who has broken through the crunchy exterior, internal monitoring is the only way to detect and remediate harmful behavior. This monitoring is continuous and ensures the ability to identify suspicious activity by users. Most organizations already log many internal systems, but the major difference in ZT is that these logs are not merely collected and reviewed later. Many tools are available that can consume logs in near real time, enabling a much quicker reaction.
Network analysis and visibility (NAV) is a term coined by Forrester in 2011,4 describing tools that passively analyze network traffic for threats by leveraging behavior- and signature-based algorithms; they analyze traffic flows, packet captures, and relationships between assets, integrate with controls to remediate threats, and enable forensics. NAV products sit at the center of the network to provide visibility into lateral movement, anomalous behavior, application dependencies, and granular reporting. Other names for these tools are network visibility, detection, and response (NVDR) and network traffic analysis (NTA). Regardless of the name, they all use a combination of machine learning, behavior modeling, and rules-based analysis to detect anomalous or malicious activities.
These tools and systems provide key benefits for the ZT strategy. Most importantly, they provide insight into the traffic flow on the network, along with user access and behavior. This is in contrast to the practice of monitoring all applications individually, which in most organizations is not scalable. Given that all applications must work with traffic on the network for access, this approach allows the review of application access and user behavior more holistically and at scale. There is an ability to correlate data for earlier and better breach detection when leveraging NAV, and it sends a message to would-be malicious actors that they are being watched. Think of it as a police car that is following a bad actor down the road: when a driver sees a police car in their rear-view mirror, their driving vastly improves.
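The behavior-modeling idea behind NAV tools can be reduced to a toy baseline-and-deviation check. Real products use far richer models (machine learning, signatures, flow correlation); this sketch, with invented user and host names, only shows the principle of flagging a flow never seen during the learning window:

```python
from collections import Counter

def build_baseline(flows):
    """Learn how often each (user, destination) pair normally appears."""
    return Counter((user, dest) for user, dest in flows)

def anomalous(baseline, flows):
    """Flag flows that never occurred during the baseline window."""
    return [(u, d) for u, d in flows if (u, d) not in baseline]

history = [("alice", "hr-db"), ("alice", "hr-db"), ("bob", "print-srv")]
baseline = build_baseline(history)

# Today bob suddenly touches a point-of-sale host: possible lateral movement.
today = [("alice", "hr-db"), ("bob", "pos-terminal")]
assert anomalous(baseline, today) == [("bob", "pos-terminal")]
```

In the Home Depot breach discussed next, monitoring of this kind on east-west traffic is exactly what was missing while the attackers hunted for the POS system.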
One of the best examples of a third-party breach, and one where ZT would have vastly reduced or eliminated the impact, was the Home Depot breach in 2014. The total bill for Home Depot, as of 2021, was estimated at over $200 million;5 the company was still paying out damages more than 7 years later. The attack began with user credentials stolen from one of Home Depot's vendors, which the attackers used to gain access and then elevate privileges, resulting in the theft of 56 million credit card numbers and 53 million customer email addresses. The attack went on for 5 months, from April 2014 to September 2014, while the hackers moved laterally and undetected as they searched for and found the point-of-sale (POS) system.
The three key concepts laid out by John Kindervag and adopted into the ZT strategy are the foundation for success at the tactics and operations levels. Because readers of this book may bring a variety of cyber skills, we need to spend a few minutes on some key concepts and tactics for implementing ZT.
Multifactor Authentication Multifactor authentication (MFA) refers to using two or more factors in authentication. The types of factors can be something you know, something you have, and something you are. Increasingly, a fourth factor is where you are. MFA is not a username and password (which is considered single-factor authentication).
Something you know:
Something only the user knows (password or PIN)
Something you have:
Any physical object the user has (security token, bank card, key)
Something you are:
A physical characteristic of the user (fingerprint, voice, typing pattern, eye)
MFA enhances security because it greatly reduces the risk that an attacker can get access by using just a single factor, such as username and password. Many passwords are reused and are for sale by criminals on the dark web, making a username and password an easy bar to overcome in most cases. A true deployment of MFA must use two of the three distinct factors of authentication, not just multiple instances of one factor.
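To make the "something you have" factor concrete, here is a minimal time-based one-time password (TOTP) generator in the style of RFC 6238, using only the Python standard library. This is an educational sketch: a real deployment would use a vetted MFA library, a securely provisioned secret, and constant-time comparison when verifying codes.

```python
import hashlib, hmac, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, SHA-1 variant)."""
    counter = unix_time // step                       # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59 seconds the expected 6-digit code is 287082.
assert totp(b"12345678901234567890", 59) == "287082"
```

Because the code is derived from a shared secret held on the user's device plus the current time window, a stolen password alone is no longer enough; the attacker would also need the device.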
Microsegmentation Microsegmentation refers to the concept of breaking up your network into many zones, thereby limiting the damage that can be done when one area becomes compromised. Segmenting your network can prevent lateral movement from one infected area into the rest of the network. Microsegmentation creates secure areas in the environment to isolate work and data by having firewall policies and policy enforcement points that limit east-west traffic and prevent lateral movement in order to contain breaches and strengthen compliance. It works by expressly allowing specific application or user traffic and, by default, denying all other traffic. Creating granular control policies allows enforcement across any workload (virtual machines, containers).
Typically, in the past, networks were segmented using virtual local area networks (VLANs) and access control lists (ACLs), but microsegmentation takes this a step further by having policies that apply to individual workloads for better attack resistance. Intrusion prevention systems (IPSs), traditional firewalls, and data loss prevention (DLP) usually inspect traffic going north-south (vertical) in a network. Microsegmentation reduces the blast area of an attack with east-west (lateral) limits that provide better control of communications between systems (servers), which often bypass perimeter-based security tools. This concept allows the tailoring of security settings to different types of traffic and policies that limit traffic from network and applications to those that are expressly permitted.
The goal of microsegmentation is to decrease the attack surface. Segmenting rules all the way down to the workload or application can greatly reduce the risk of a hacker moving from one compromised area or application to another. Operational efficiencies are also gained in this process, as ACLs, router rules, firewall policies, and other systems can become overwhelmed and produce a lot of churn.
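The default-deny, expressly-allow behavior of microsegmentation policies can be sketched as an allowlist of flows. The zone names, ports, and rules below are invented for illustration; real enforcement happens in firewalls and policy enforcement points at the workload level:

```python
# East-west policy sketch: a flow is permitted only if it matches an explicit
# (source zone, destination zone, port) rule. Everything else is denied.
ALLOW_RULES = {
    ("web-tier", "app-tier", 8443),   # web servers may call the app tier
    ("app-tier", "db-tier", 5432),    # app tier may query the database
}

def flow_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    return (src_zone, dst_zone, port) in ALLOW_RULES

assert flow_allowed("app-tier", "db-tier", 5432)
assert not flow_allowed("web-tier", "db-tier", 5432)   # no skipping tiers
assert not flow_allowed("db-tier", "app-tier", 22)     # lateral SSH denied
```

Under a policy like this, a compromised web server cannot reach the database directly, which is precisely the lateral-movement containment microsegmentation is meant to provide.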
Protect Surface The protect surface is the smallest possible reduction of the attack surface and is where the sensitive resources are located. In the old model, the attack surface was often poorly defined as the whole of your network and all resources and assets on it. Defining what is most sensitive and what requires protection and placing that in the protect surface minimizes the attack surface to an easily defined space. This allows the controls to be moved close to the protect surface where the anomalous or malicious activity must be detected. Each protect surface should contain a single resource (DAAS, discussed in the following section), and each ZT environment will have more than one protect surface.
For the context of this book, the protect surface is focused on third parties and their DAAS in your network. ZT is an iterative process: start with one asset, and add each newly defined DAAS to the ZT strategy. Starting with your third parties, which are mathematically more likely to have an event, is ideal.
Data, Applications, Assets, Services (DAAS) Data, applications, assets, and services, known as DAAS, are the sensitive resources inside the protect surface. Data is any information whose observation or exfiltration would be damaging and would exceed your organization's risk appetite for loss. Applications are those that use confidential data or control critical infrastructure or assets. Assets are devices and operational and information technologies. Services are those your organization depends on to operate and can include all the back-office operations, such as DNS, DHCP, NTP, and APIs.
The implementation of ZT should be broken down into small, manageable components. As with any large and ongoing effort, it is best to start small and add protect surfaces and sensitive assets as they are identified and remediated in risk-based order. The five steps of the process allow the organization to start small: define the protect surface, map transaction flows, build the ZT architecture, create the ZT policy, and then monitor and maintain the network.
Step 1: Define the Protect Surface Identify the DAAS that need a protect surface. Providing a precise definition is almost impossible, because it depends on which critical DAAS elements are in your network and on your risk appetite. Examples include a customer database with Social Security numbers, an application systemically critical to operations, the server running directory services for your domain, or an API that is crucial to the daily close of your financial records. There should be only one DAAS asset per protect surface, and your network will have multiple protect surfaces. This step demands a risk-based approach and will take negotiation, but it must be business-driven, not dictated by information security alone.
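The one-DAAS-asset-per-protect-surface rule lends itself to a simple inventory check. This sketch (surface and asset names invented) flags any protect surface that bundles more than one resource, which usually means it should be split:

```python
from typing import Dict, List

def validate_protect_surfaces(surfaces: Dict[str, List[str]]) -> List[str]:
    """Return names of protect surfaces that violate the one-asset rule."""
    return [name for name, assets in surfaces.items() if len(assets) != 1]

inventory = {
    "customer-pii-db": ["crm_database"],
    "payments":        ["card_vault"],
    "back-office":     ["dns", "ntp"],   # violation: two assets in one surface
}
assert validate_protect_surfaces(inventory) == ["back-office"]
```

A check like this keeps the iterative rollout honest: each surface stays small enough that its controls, flows, and policies remain easy to reason about.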