Understand how Prisma Access, Palo Alto Networks’ firewall as a service (FWaaS) platform, gives mobile users and branch offices secure access to internal and external resources. Written by Palo Alto Networks expert Tom Piens, a renowned mentor instrumental in fostering a dynamic learning environment within the Palo Alto Networks LIVE community, this guide is your roadmap to harnessing the full potential of this platform and its features.
The first set of chapters will introduce you to the concept of cloud-delivered security and the key components of Prisma Access. As you progress, you’ll gain insights into how Prisma Access fits into the larger security landscape and its benefits for organizations seeking a secure and scalable solution for their remote networks and mobile workforce.
From setting up secure connections, implementing advanced firewall policies, harnessing threat prevention capabilities, and securing cloud applications and data, each chapter equips you with essential knowledge and practical skills.
By the end of this book, you will be armed with the necessary guidance and insights to implement and manage a secure cloud network using Prisma Access successfully.
Page count: 273
Publication year: 2024
Implementing Palo Alto Networks Prisma® Access
Learn real-world network protection
Tom Piens aka 'Reaper'
Copyright © 2024 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Pavan Ramchandani
Publishing Product Manager: Neha Sharma
Book Project Managers: Srinidhi Ram and Neil D’mello
Senior Editor: Sujata Tripathi
Technical Editor: Irfa Ansari
Copy Editor: Safis Editing
Proofreader: Sujata Tripathi
Indexer: Rekha Nair
Production Designer: Prashant Ghare
DevRel Marketing Coordinator: Marylou De Mello
First published: May 2024
Production reference: 1190424
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK
ISBN 978-1-83508-100-6
www.packtpub.com
Dedicated to my wife and son, who lovingly ensured that this book took much longer to finish than it otherwise would have.
As a security professional at Palo Alto Networks, I have had the privilege of witnessing the transformative power of Prisma Access firsthand. While my journey didn’t involve formally writing this book, my contributions to research, testing, and real-world implementations have helped shape its content.
This book marks a collaborative effort, drawing upon the combined expertise of myself and my esteemed ex-colleague, Tom Piens. Our years of experience have brought my insights and countless hours of testing to life on these pages.
Within these pages, you’ll find a comprehensive guide to navigating the evolving security landscape with Prisma Access. From foundational concepts to practical applications, the book caters to both seasoned professionals and those new to the SASE revolution.
However, I would be remiss not to acknowledge my limitations as a contributor. While I haven’t personally crafted the words, I stand firmly behind the knowledge and experience embedded within them. I hope that this book empowers you, like me, to harness the power of Prisma Access and confidently navigate the future of network security.
Delve into the world of Prisma Access, uncover its potential, and become an expert in securing your organization’s future.
Remember, the knowledge shared here is a culmination of countless voices, including my own, all seeking to make the SASE journey smoother for everyone.
– Rutger Truyers, SASE Expert
Tom Piens, aka 'Reaper', is a seasoned expert in network security and cybersecurity, boasting over two decades of dedicated experience in the field. His journey has been marked by a 14-year focus on Palo Alto Networks products, during which he made significant contributions over a 12-year tenure at the company itself. Tom distinguished himself as the first international support engineer at Palo Alto Networks, later transitioning to a pivotal role in the LIVE community department. There, he played a crucial role in rebuilding the knowledge base, moderating the forum under his alias “Reaper,” and authoring numerous insightful articles.
For the last three years, Tom has embarked on an entrepreneurial venture, founding PANgurus BV. Under his leadership, the company has specialized in Prisma Access solutions and has been instrumental in enhancing customers’ firewall configurations through meticulous audits and the implementation of best practices. Tom’s commitment to excellence in cybersecurity and network security, coupled with his hands-on approach to solving complex challenges, positions him as a leading authority in the industry.
Dimitri Zuodar graduated in 2000 as an industrial engineer and embarked on a career in data communications. By the end of 2002, his commitment toward specializing in routing and switching resulted in his becoming CCIE certified (#10782). This strong technical foundation, combined with his personal development in architecture, project management, and pre-sales roles, resulted in the decision to become an independent consultant in early 2010. Over the past 14+ years, Dimitri has been hired as a trusted technical consultant by his customers in several industries. His always-learning mentality has since resulted in achieving multiple certifications in public cloud, multi-cloud networking, and OT cybersecurity (IEC 62443 Cybersecurity Expert).
Kim Wens has accumulated over 20 years of experience in the field of network security and has spent the last 13 years working for Palo Alto Networks. In 2011, Kim joined Palo Alto Networks as a TAC engineer, and for the past 8 years, he has been engaged in Palo Alto Networks’ LIVE community providing solutions for customers, adding content, and moderating the forum. The CISSP certification he holds is a testament to his mastery of information security concepts and practices and underscores his commitment to excellence and proficiency in safeguarding critical information assets.
This part covers how to plan for and deploy the base configuration on which all other components in Prisma Access will be built. This part has the following chapters:
Chapter 1, Designing and Planning Prisma Access
Chapter 2, Activating Prisma Access
Chapter 3, Setting Up Service Infrastructure
Chapter 4, Deploying Service Connections

Prisma Access is Palo Alto Networks’ Secure Access Service Edge (SASE) solution that provides Firewall-as-a-Service (FWaaS) functions to secure internet and network access for branch offices and mobile users, leveraging the full functionality of the well-known Next Generation Firewall (NGFW) platform. SASE enables a distributed cloud environment with security processing nodes in different countries and locations so that protection can take place close to the branch office or remote user, versus the traditional method of tunneling all traffic back to the main data center or headquarters location. This approach ensures low latency and in-country internet breakout.
In this chapter, you will be introduced to the basic building blocks of Palo Alto Networks’ Prisma Access. We will review which preparations will need to be made and which steps need to be taken before we can deploy a tenant. We are going to learn how each component has similarities and some profound differences from the other components and how this will help us with our design considerations.
In this chapter, we’re going to cover the following main topics:
Planning for routing
Planning the service infrastructure
Planning for remote network connections and mobile users
Planning for service connections

To complete this chapter, you should have a working knowledge of Border Gateway Protocol (BGP) and cloud networking.
In this section, we’ll learn how the basic building blocks of Prisma Access communicate with one another. This knowledge is critical in the later stages when building, planning, and troubleshooting to understand why certain components act differently from others.
Before we get started, we need to learn about the building blocks. We will dive much deeper into the individual components in the next few chapters, but it is important to gain a good understanding of what each component is for so that you can more easily imagine where each piece fits into the puzzle. We’ll start with a basic outline and gradually build upon what we’ve learned. This is what you need to know so that the following sections make sense.
The cloud infrastructure is the base on which everything is built; it is the embodiment of the tenant that is spun up. Any configuration changes you make here apply to everything: it is the cloud backbone where all other components live.
The Service Connection Corporate Access Node (SC-CAN), or simply service connection, serves several purposes. Its primary task is connectivity toward a data center or public cloud (IaaS) environment, be it virtual or physical, which is achieved by setting up IPSec VPN tunnels. The second role of this type of node is to perform dynamic routing. Lastly, it can serve as a User-ID redistribution node. Service connections do not have access to the internet and do not apply any security enforcement. They are treated as a trusted connection between the data center or public cloud IaaS environment and the infrastructure. A service connection provides unmetered throughput of up to 1 Gbps, and several service connections can be set up to the same data center in a load-sharing configuration. They cannot be set up for load balancing.
The Remote Network Security Processing Node (RN-SPN) is typically used to connect remote offices securely to the internet and to internal resources behind, for example, a service connection. An RN-SPN has a direct internet connection and functions like a firewall virtual machine (VM), with security rules governing traffic from the remote network to the internet or internal resources. The advantage of an RN-SPN is that, because it is a firewall, it can apply deep packet inspection and perform any security check a regular on-premises firewall can. All the same features, that is, Advanced Threat Prevention, DNS Security, Advanced URL Filtering, Advanced WildFire, AntiVirus, AntiMalware, AntiPhishing, File Blocking, DLP, IoT Security, TLS decryption, Remote Browser Isolation, and Authentication, are available on each gateway (the availability of each feature depends on your license model; we will cover this in the next chapter). An important consideration regarding RN-SPNs is that these are metered connections that require a certain bandwidth to be assigned to a node in a region, and any peer connecting to them will need to share that bandwidth. However, some load balancing options, such as Equal Cost Multi-Path (ECMP), are available. The bandwidth for RN-SPNs is purchased as a pool (for example, 5,000 Mbps) that can be distributed across different compute locations, with the minimum allocation being 50 Mbps.
A single node in a compute location can only support up to 1,000 Mbps, so if you assign more capacity, additional nodes will be spun up and the allotted amount will be divided. For example, if a region is assigned 1,500 Mbps of capacity, two nodes will be spun up, one with 1,000 Mbps and another with 500 Mbps of throughput. Each node is a firewall VM that can be assigned VPN connections from remote offices.
The Mobile User Security Processing Node (MU-SPN) functions the same way a traditional GlobalProtect gateway does and is used to terminate user VPN connections. Through these connections, users have secured internet access and access to internal resources. Like the RN-SPN, the MU-SPN is a full next-generation firewall (NGFW): security rules can be applied to control what users can access on the internet or from internal resources, and security profiles provide full Layer 7 inspection. Unlike RN-SPNs, MU-SPNs are unmetered. If there are several thousand users in a single country, it is recommended to reach out to Palo Alto Networks to ensure scaling options are reviewed; similar to how additional RN-SPN nodes are spun up in a compute location for bandwidth purposes, additional MU-SPNs can be spun up in a region to accommodate a large number of users. We’ll go into more detail about this in Chapter 6.
Portal Security Processing Nodes (PT-SPNs) relate to MU-SPNs the same way a regular GlobalProtect portal and gateway form a pair. These portal nodes primarily serve remote agents their configuration, host the GlobalProtect agent download, and provide access to clientless applications.
Explicit Proxy Security Processing Nodes (EP-SPNs) are an alternative to the MU-SPN that allows customers to rely on proxied connectivity over a TLS connection to secure internet access, instead of an IPSec or SSL VPN tunnel. The advantage over the MU-SPN is that legacy proxy setups can easily be replaced, and endpoints that do not support a VPN agent can still be safely connected to the internet. However, clients using the EP-SPN will not be able to reach private apps via the infrastructure.
The aforementioned components are primarily hosted on a private tenant in Google Cloud Platform (GCP), with some backup locations being hosted on Amazon Web Services (AWS).
Let’s move on and see how all these parts tie together.
The first thing that needs to be set up when a fresh Prisma Access tenant is provisioned is the cloud infrastructure. The cloud infrastructure automatically builds a full VPN mesh between all SC-CANs and RN-SPNs across all compute regions with dynamic routing.
MU-SPNs and portals are connected to their geographically closest SC-CANs; dynamic routing is set so that SC-CANs become route reflectors for their connected MU-SPNs.
Finally, the cloud infrastructure enables MU-SPNs and RN-SPNs to access the internet.
The following figure provides a broad overview of what a deployment might look like, with the cloud representing the infrastructure:
Figure 1.1 – The infrastructure mesh
As we can see, all SC-CANs connect to all SC-CANs and all RN-SPNS, and all RN-SPNs connect to all RN-SPNs and all SC-CANs. Each MU-SPN connects to the geographically closest SC-CAN.
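As a toy illustration of these connectivity rules, the topology can be expressed as a set of links. The node names below are purely hypothetical examples, not real Prisma Access identifiers:

```python
from itertools import combinations

# Hypothetical node names, for illustration only
sc_cans = ["SC-US", "SC-EU"]
rn_spns = ["RN-US", "RN-EU", "RN-APAC"]
nearest_sc = {"MU-US": "SC-US", "MU-EU": "SC-EU"}  # each MU-SPN -> closest SC-CAN

# Full VPN mesh between all SC-CANs and RN-SPNs...
links = {frozenset(pair) for pair in combinations(sc_cans + rn_spns, 2)}
# ...plus a single link from every MU-SPN to its nearest SC-CAN
links |= {frozenset((mu, sc)) for mu, sc in nearest_sc.items()}

print(len(links))  # 5 fully meshed nodes give 10 links, plus 2 MU-SPN links = 12
```

Note that MU-SPNs contribute only one link each: there is no MU-SPN-to-MU-SPN or MU-SPN-to-RN-SPN edge in the set, which is exactly why mobile user traffic must traverse an SC-CAN.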
Because there is a full VPN mesh between each RN-SPN and SC-CAN, sessions from one node will always use the fastest path available to a destination RN-SPN or SC-CAN. Dynamic routing is configured between all nodes to advertise which routes are available at each node, which also allows for redundancy with the connected remote peers.
As shown in the following figure, even if an internal path is interrupted, the fastest alternative path will be used to get to the desired destination:
Figure 1.2 – Route redundancy
The full mesh also ensures that geographically distant SC-CANs and RN-SPNs can communicate directly and do not need to rely on complex or long paths to traverse the infrastructure network to the other side of the globe.
MU-SPNs, on the other hand, only connect to the nearest SC-CAN and do not build a full mesh with other MU-SPNs, RN-SPNs, or SC-CANs. This means that any connection to or from a remote RN-SPN or SC-CAN will always need to traverse the SC-CAN that the MU-SPN is connected to.
As shown in the following figure, a user on one MU-SPN setting up a session with a user connected to a different MU-SPN that is also connected to a different SC-CAN would need to traverse two SC-CANs before reaching the remote MU-SPN. In short, an IT admin in the US connecting to a user’s desktop in Europe will need to pass two SC-CANs:
Figure 1.3 – Routing mobile users
The preceding figure also illustrates that any connections between an MU-SPN and an RN-SPN also rely on the existence of an SC-CAN. Since MU-SPNs only connect to an SC-CAN in the infrastructure network, the only way for an RN-SPN to communicate with any given MU-SPN is via an SC-CAN.
If users in a specific region need to be able to set up sessions in a remote office in the region, an SC-CAN should be planned accordingly. An example of this would be employees working from home and needing to print something in their office.
As shown in the following figure, if a user in Europe needs to connect to a remote network in Europe but no service connection is available, the session would need to be routed through the SC-CAN in the US, which may increase latency:
Figure 1.4 – Routing outside of a geographical location
Adding an SC-CAN in Europe will significantly reduce latency on such connections. An SC-CAN deployed for this reason does not require a connection to a data center; Palo Alto Networks refers to this as a dummy service connection in customer engagements. An SC-CAN can simply be provisioned in a region for connectivity purposes, without a VPN being set up.
As we saw earlier, the service infrastructure is what ties everything together. To ensure all the components can communicate with each other, a subnet is needed; this serves as the backbone of everything. IP addresses will be needed to set up internal VPN tunnels and BGP peers.
Since this subnet serves as your backbone network in the cloud, and the cloud is connected to your data centers, a range needs to be selected that does not overlap with any of the production networks. In most cases, an x.x.x.x/23 subnet should suffice to provide enough IP addresses to support the entire infrastructure. However, if the deployment calls for several thousand remote users in many different locations globally, or a large number of remote networks in many different countries, a larger subnet may be required.
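A quick way to sanity-check a candidate infrastructure subnet against your existing production ranges is Python’s standard `ipaddress` module. The address ranges below are made-up examples; substitute your own:

```python
import ipaddress

# Made-up example ranges for illustration
infra = ipaddress.ip_network("172.16.254.0/23")   # candidate infrastructure subnet
production = [
    ipaddress.ip_network("10.0.0.0/8"),           # e.g., data center
    ipaddress.ip_network("192.168.0.0/16"),       # e.g., branch offices
]

# Any overlap here would cause routing conflicts with the backbone
conflicts = [net for net in production if infra.overlaps(net)]
print(infra.num_addresses, conflicts)  # 512 []
```

A /23 yields 512 addresses, which is the basis for the “should suffice in most cases” guidance above; an empty `conflicts` list confirms the candidate range is safe to use.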
As we touched on earlier, remote networks are deployed based on the bandwidth assigned to a specific compute location. Once bandwidth is assigned, one or more RN-SPN nodes are provisioned for a compute location, and remote networks can be connected to them. The first task when planning the Prisma Access deployment is to estimate how many remote offices there will be and what the bandwidth requirements will look like. Once you know how many sites and how much peak throughput is estimated to be required, you can calculate the amount of bandwidth you will need to purchase.
Consider how much bandwidth will be needed; for anything between 50 Mbps and 1,000 Mbps, a single RN-SPN node is spun up in a compute location, but going over 1,000 Mbps will cause a second node to be spun up and the assigned bandwidth to be divided between the two.
At this time, assigning 1,050 Mbps to a compute location will cause two nodes to be spun up; one node will be capable of reaching 1,000 Mbps and the other node will be capable of 500 Mbps of throughput. Prisma Access allows you to oversubscribe up to the maximum capacity of the second node, so traffic will not be limited. Assigning 2,050 Mbps will cause three nodes to be spun up, two with 1,000 Mbps throughput and one with 500 Mbps capacity, with the same oversubscription capabilities. This is due to the sizing of individual nodes as they are currently deployed by Palo Alto Networks. In a future release, the distribution will be applied evenly rather than lopsided as it is today.
Once your compute location has been provisioned with bandwidth and one or more RN-SPN nodes have been spun up, you will need to manually select which node is used to connect each remote office. Make sure you spread the load of remote networks evenly, or strategically, as this is not checked by Prisma Access. You select which RN-SPN terminates each tunnel; no load sharing is available to move a tunnel over to a different RN-SPN (more on load balancing later).
The following figure shows a use case example where 2,500 Mbps has been assigned to the Belgium compute location. This causes three nodes to be spun up: two nodes receive 1,000 Mbps and one node receives 500 Mbps from the pool. Multiple remote offices are connected, with three connected to node1, one connected to node2, and two more connected to node3. In this constellation, three smaller offices share the bandwidth of one node, one remote office has the full bandwidth of a node all to itself, and two more offices share the last node.
The following table provides an overview of various examples:
Table 1.1 – Examples of bandwidth allocation to nodes
The following is an example of how bandwidth allocation could look if we assigned 2,500 Mbps to a single compute location:
Figure 1.5 – RN-SPN provisioning example
The bandwidth that’s allotted to a single RN-SPN node is shared among all of its connected remote networks. If only one remote office is active at any given time, it will have full use of all the bandwidth available. Additionally, there’s a QoS configuration option we’ll cover in Chapter 5.
As mentioned earlier, each RN-SPN is capped at 1,000 Mbps, so if more bandwidth is required for a remote office, tunnels can be bundled to several RN-SPNs from a single remote office. There are a few things to consider when setting this up:
Load balancing needs to be set up in the remote network’s infrastructure. Any technology, such as ECMP on the tunnels or a network load balancer, can be used, but the mechanism needs to be set up in such a way that sessions, sources, or applications are routed symmetrically across the tunnels. Asymmetric session flows are not supported on the RN-SPNs.
Each RN-SPN has a unique public IP address, so sessions originating from the remote office will egress onto the internet using the IP of the associated RN-SPN. If nodes are used across different countries, this will influence the geolocation of the sessions.
For internet access, this allows you to scale to large numbers of tunnels to RN-SPNs (up to 500 per 1,000 Mbps node in the 5.0 Innovation release). These can be connected so long as the previous bullet points are taken into account and the on-premises load balancing supports it (for example, some routing devices may have a limitation on the number of ECMP paths that can be selected).
At the time of writing, there are 116 locations available worldwide to select when deploying mobile user security processing nodes. These locations are spread across multiple major regions: the Americas, Europe, the Middle East, and Asia Pacific. There are also some smaller compute regions, such as the Netherlands, Canada, Ireland, and Hong Kong.
When planning which countries need to be onboarded, consider that each onboarded location will require at least one /24 subnet to serve as an IP pool for the mobile users in that country. For larger countries, multiple /24 subnets may be needed to account for the number of simultaneously connected users. These subnets are assigned to the MU-SPNs at random, out of a larger global or regional pool, by the service infrastructure. You can choose to use one large global pool or several smaller regional pools. If a regional pool is depleted, additional /24 subnets will be drawn from the global pool where needed.
The following figure illustrates that an IP subnet of 10.200.0.0/16 is assigned to the Americas, 10.0.0.0/16 is assigned to EMEA, and a global subnet of 10.100.0.0/16 is set aside as a reserve pool. The US West location in the Americas is assigned two subnets because 350 users are connecting to it. In EMEA, the Netherlands Central location gets one /24 subnet as only 45 users are connecting there, while Belgium received four /24 subnets because 900 users are connecting to this node. The South Korea node receives a /24 subnet from the global pool as no specific subnet was set aside for Asia Pacific:
Figure 1.6 – Illustration of IP pool distribution
The first subnets are assigned somewhat at random when a location or node is activated for the first time; additional subnets are assigned whenever there is a need for more IP space. You should not rely on the source IP of a user to identify which country they are originating from, as a subnet can be reassigned to a different node.
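The allocations in the preceding figure are consistent with a simple sizing estimate. The rule of thumb below, roughly 254 assignable client addresses per /24, is a hedged assumption for planning purposes, not an official Prisma Access formula:

```python
import math

def pools_needed(concurrent_users, usable_per_24=254):
    """Estimate the number of /24 pools a location needs.

    Assumption: ~254 assignable client addresses per /24 subnet.
    """
    return max(1, math.ceil(concurrent_users / usable_per_24))

# User counts taken from the figure's example locations
for location, users in [("US West", 350), ("Netherlands Central", 45), ("Belgium", 900)]:
    print(location, pools_needed(users))  # 2, 1, and 4 pools respectively
```

This reproduces the figure: 350 users need two /24 subnets, 45 users fit in one, and 900 users require four.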
In this chapter, we covered the basic building blocks of what makes up Prisma Access and how these are connected. You should now be able to identify that an RN-SPN is used to connect a remote office, an SC-CAN is used to connect a data center, and an MU-SPN is used as an in-country gateway for mobile users.
In the next chapter, we’ll go over the activation process and how to deploy the infrastructure and Cortex Data Lake.