A clear, comprehensive guide to VMware's latest virtualization solution.
Mastering VMware NSX for vSphere is the ultimate guide to VMware's network security virtualization platform. Written by a rock star in the VMware community, this book offers invaluable guidance and a crucial reference for every facet of NSX, with clear explanations that go far beyond the public documentation. Coverage includes NSX architecture, controllers, and edges; preparation and deployment; logical switches; VLANs and VXLANs; logical routers; virtualization; edge network services; firewall security; and much more to help you take full advantage of the platform's many features.
More and more organizations are recognizing both the need for stronger network security and the powerful solution that is NSX; usage has doubled in the past year alone, and that trend is projected to grow. These organizations need qualified professionals who know how to work effectively with the NSX platform. This book covers everything you need to know to exploit the platform's full functionality so you can:
* Step up security at the application level
* Automate security and networking services
* Streamline infrastructure for better continuity
* Improve compliance by isolating systems that handle sensitive data
VMware's NSX provides advanced security tools at a lower cost than traditional networking. As server virtualization has already become a de facto standard in many circles, network virtualization will follow quickly, and NSX positions VMware to lead that market the way vSphere won the server market. NSX allows you to boost security at a granular level, streamline compliance, and build a more robust defense against the sort of problems that make headlines. Mastering VMware NSX for vSphere helps you get up to speed quickly and put this powerful platform to work for your organization.
Page count: 446
Publication year: 2020
Cover
Introduction
What Does This Book Cover?
Additional Resources
Chapter 1: Abstracting Network and Security
Networks: 1990s
Data Centers Come of Age
VMware
Virtualize Away
The Bottom Line
Chapter 2: NSX Architecture and Requirements
NSX Network Virtualization
Competitive Advantage: IOChain
NSX Role-Based Access Control
The Bottom Line
Chapter 3: Preparing NSX
NSX Manager Prerequisites
Installing NSX Manager
Linking Multiple NSX Managers Together (Cross-vCenter NSX)
Creating a Universal Transport Zone on the Primary NSX Manager
The Bottom Line
Chapter 4: Distributed Logical Switch
vSphere Standard Switch (vSS)
Virtual Distributed Switch (vDS)
Virtual eXtensible LANs (VXLANs)
Employing Logical Switches
Three Tables That Store VNI Information
We Might as Well Talk about ARP Now
Understanding Broadcast, Unknown Unicast, and Multicast
The Bottom Line
Chapter 5: Marrying VLANs and VXLANs
Shotgun Wedding: Layer 2 Bridge
Hardware Switches to the Rescue
The Bottom Line
Chapter 6: Distributed Logical Router
Distributed Logical Router (DLR)
Control Plane Smarts
Let's Get Smart about Routing
Deploying Distributed Logical Routers
The Bottom Line
Chapter 7: NFV: Routing with NSX Edges
Network Function Virtualization: NSX Has It Too
Let's Do Routing Like We Always Do
Routing with the DLR and ESG
The Bottom Line
Chapter 8: More NFV: NSX Edge Services Gateway
ESG Network Placement
Network Address Translation
ESG Load Balancer
Configuring an ESG Load Balancer
Layer 2 VPN (If You Must)
Secure Sockets Layer Virtual Private Network
Internet Protocol Security VPN
Round Up of Other Services
The Bottom Line
Chapter 9: NSX Security, the Money Maker
Traditional Router ACL Firewall
I Told You about the IOChain
Adding DFW Rules
Why Is My Traffic Getting Blocked?
Distributing Firewall Rules to Each ESXi Host: What's Happening?
The Bottom Line
Chapter 10: Service Composer and Third-Party Appliances
Security Groups
Service Insertion
Service Insertion Providers
Security Policies
The Bottom Line
Note
Chapter 11: vRealize Automation and REST APIs
vRealize Automation Features
vRA Editions
Integrating vRA and NSX
vRealize Orchestrator Workflows
Deploying a Blueprint that Consumes NSX Services
REST APIs
The Bottom Line
Appendix: The Bottom Line
Chapter 1: Abstracting Network and Security
Chapter 2: NSX Architecture and Requirements
Chapter 3: Preparing NSX
Chapter 4: Distributed Logical Switch
Chapter 5: Marrying VLANs and VXLANs
Chapter 6: Distributed Logical Router
Chapter 7: NFV: Routing with NSX Edges
Chapter 8: More NFV: NSX Edge Services Gateway
Chapter 9: NSX Security, the Money Maker
Chapter 10: Service Composer and Third-Party Appliances
Chapter 11: vRealize Automation and REST APIs
Index
End User License Agreement
Chapter 2
TABLE 2.1 Sizes of ESGs
Chapter 1
FIGURE 1.1 Simplex, half duplex, and full duplex compared
FIGURE 1.2 Colocated space rented in provider data centers
FIGURE 1.3 Traditional provisioning involves numerous teams and is time-cons...
FIGURE 1.4 The move to company-built data centers
FIGURE 1.5 Manhattan city grid designed in 1811
FIGURE 1.6 The hypervisor is a virtualization layer decoupling software from...
FIGURE 1.7 Virtualization creates an abstraction layer that hides the comple...
FIGURE 1.8 Physically storing data
FIGURE 1.9 VMware vMotion is a benefit that would not be possible without vi...
FIGURE 1.10 In the event of a physical host failing, the workload can be mov...
FIGURE 1.11 Allocating storage
FIGURE 1.12 Virtualization can now go beyond only servers.
FIGURE 1.13 NSX is a hypervisor for the network.
FIGURE 1.14 Traditional security relies on external security.
FIGURE 1.15 NSX microsegmentation moves the security rules to the VMs, secur...
FIGURE 1.16 Workloads are isolated from one another, with security applied t...
Chapter 2
FIGURE 2.1 Network virtualization decoupling network functions from the phys...
FIGURE 2.2 Traditional three planes of operation found in each networking de...
FIGURE 2.3 Network planes of operation compared with company job roles
FIGURE 2.4 Each networking device in traditional networking making decisions...
FIGURE 2.5 NSX architecture
FIGURE 2.6 Accessing NSX Manager using the same web client used for vSphere...
FIGURE 2.7 Deploying and controlling NSX components through automation
FIGURE 2.8 Primary NSX Manager at Site A managing secondary NSX Manager at S...
FIGURE 2.9 Compute clusters separated from management clusters, increasing a...
FIGURE 2.10 NSX Manager can only be registered to one vCenter Server.
FIGURE 2.11 Individual vSphere Standard Switches
FIGURE 2.12 A vSphere Distributed Switch spanning several ESXi hosts
FIGURE 2.13 The data plane for the overlay network, provided by the NSX Virt...
FIGURE 2.14 VMs on separate hosts but members of the same 10.1.1.0 subnet
FIGURE 2.15 vSphere Installation Bundles used to create the NSX virtual envi...
FIGURE 2.16 IOChain with customizable slots for adding third-party functions...
FIGURE 2.17 The NSX controller Cluster controls routing and switching for ea...
FIGURE 2.18 NSX Controllers deployed three to a cluster, each on a different...
FIGURE 2.19 Dividing the workload into slices/shards, assigned across the th...
FIGURE 2.20 Job of two roles, VXLAN Logical Switches and Logical Router, div...
FIGURE 2.21 NSX Edge routes traffic between your virtual and physical networ...
FIGURE 2.22 NSX Edge services
FIGURE 2.23 RBAC pre-built roles for assigning access to NSX
FIGURE 2.24 VXLANs tunnel through the physical network to allow traffic to o...
FIGURE 2.25 A 50-byte header containing VXLAN information is added to the fr...
FIGURE 2.26 VTEPs are the tunnel endpoint IP addresses assigned to each host...
FIGURE 2.27 Multicast mode relies on multicast routing to be configured in t...
FIGURE 2.28 Unicast mode does not require the physical network to be configu...
FIGURE 2.29 Hybrid mode multicasting to members of the same group, unicastin...
Chapter 3
FIGURE 3.1 TCP and UDP ports needed for NSX communication
FIGURE 3.2 Major NSX components requiring resources
FIGURE 3.3 Cross-vCenter design spanning multiple sites
FIGURE 3.4 Separate compute and management clusters
FIGURE 3.5 Home ➢ Networking
FIGURE 3.6 Deploy OVF Template
FIGURE 3.7 Manage vCenter Registration
FIGURE 3.8 vCenter Server name and login
FIGURE 3.9 vSphere Web Client
FIGURE 3.10 Manage Appliance Settings
FIGURE 3.11 NTP Time Settings
FIGURE 3.12 Syslog Server name or IP address
FIGURE 3.13 Installation and Upgrade
FIGURE 3.14 NSX Controller Nodes
FIGURE 3.15 Add a Controller Node
FIGURE 3.16 Cross-vCenter design
FIGURE 3.17 Cross-vCenter with universal objects
FIGURE 3.18 Selecting the primary NSX Manager
FIGURE 3.19 Host Preparation
FIGURE 3.20 Transport Zones
FIGURE 3.21 Logical Switches
FIGURE 3.22 Logical Switch naming and replication mode
FIGURE 3.23 Add Secondary Manager
Chapter 4
FIGURE 4.1 Switch dynamically learning MAC addresses and ports
FIGURE 4.2 A vSS can only span a single ESXi host.
FIGURE 4.3 Traffic shaping is a mechanism for controlling a VM's network ban...
FIGURE 4.4 Port group C has been created on a separate vSS for each ESXi hos...
FIGURE 4.5 vMotion not only migrates the virtual machine, but the VM's port ...
FIGURE 4.6 Connecting the virtual switch to two physical NICs on the ESXi ho...
FIGURE 4.7 The vDS is created and managed centrally but spans across multipl...
FIGURE 4.8 Traffic from ESXi-1 to ESXi-2 in this topology must now traverse ...
FIGURE 4.9 Two VMs on the same subnet but located on different hosts communi...
FIGURE 4.10 The original frame generated from the source VM is encapsulated ...
FIGURE 4.11 Using the analogy of a subway stop, for VM1 to send traffic to V...
FIGURE 4.12 Each Logical Switch created receives an ID from the segment pool...
FIGURE 4.13 The VLAN ID field is 12 bits long, which indicates that over 400...
FIGURE 4.14 Each VTEP, indicated here with an IP address starting with 192.1...
FIGURE 4.15 VM-A is attached to VXLAN 5001. When it is powered on, informati...
FIGURE 4.16 VTEPs send their local VNI information to the Controller cluster...
FIGURE 4.17 Walkthrough to find the VNI information needed to send traffic t...
FIGURE 4.18 VNI information added when VM-B is powered up
FIGURE 4.19 VNI information added when VM-D is powered up
FIGURE 4.20 Powering on VM-F adds VNI information to a new table.
FIGURE 4.21 The three Controller nodes divide the workload of tracking VNI i...
FIGURE 4.22 The PC, 10.1.1.3/24, wants to send packets to the server, 30.1.1...
FIGURE 4.23 The PC knows the default gateway IP address 10.1.1.4 through DHC...
FIGURE 4.24 Once R1 (the PC's default gateway) receives the packet, it check...
FIGURE 4.25 The destination network, 30.1.1.0, is directly attached.
FIGURE 4.26 VM-A needs to send packets to VM-E.
FIGURE 4.27 Selecting a Replication Mode (Multicast, Unicast, Hybrid) when c...
FIGURE 4.28 Troubleshooting connectivity
Chapter 5
FIGURE 5.1 Adding a Logical Switch
FIGURE 5.2 Configuring a Logical Switch
FIGURE 5.3 Adding a DLR
FIGURE 5.4 Configuring DLR Basic Details
FIGURE 5.5 Configuring DLR Settings
FIGURE 5.6 DLR Deployment Configuration
FIGURE 5.7 Configuration options for the Edge Appliance VM
FIGURE 5.8 DLR Management Interface configuration
FIGURE 5.9 Selecting the Distributed Virtual Port Group
FIGURE 5.10 Verifying the DLR configured options
FIGURE 5.11 DLR option to Configure Interfaces
FIGURE 5.12 DLR option to specify Default Gateway
FIGURE 5.13 DLR review of Deployment Configuration
FIGURE 5.14 DLR Deployment Status
FIGURE 5.15 vSphere Flex Web Client
FIGURE 5.16 Selecting the new DLR
FIGURE 5.17 DLR Manage Bridging
FIGURE 5.18 Naming the Bridge
FIGURE 5.19 Selecting the Logical Switch
FIGURE 5.20 Browsing to the Distributed Virtual Port Group
FIGURE 5.21 Selecting the Distributed Virtual Port Group
FIGURE 5.22 Publishing the changes
FIGURE 5.23 Verification that the bridge has been deployed
FIGURE 5.24 Verifying that the dvPortGroup is backed by VLAN 10
FIGURE 5.25 Selecting the Logical Switch created previously
FIGURE 5.26 Verifying which VMs are attached to the Logical Switch
FIGURE 5.27 Confirming that Server1 is not connected to LogicalSwitch, but i...
FIGURE 5.28 Verifying that App-VM can ping Server1 via the L2 Bridge
FIGURE 5.29 Selecting the Logical Switch
FIGURE 5.30 Choices in the Actions drop-down
FIGURE 5.31 Network And Security (NSX) menu options
Chapter 6
FIGURE 6.1 ESXi host learning routes
FIGURE 6.2 Following the routing path with an analogy
FIGURE 6.3 Active/standby LR Control VMs
FIGURE 6.4 Two VMs on same ESXi host and an external router
FIGURE 6.5 Two VMs on same ESXi host with NSX
FIGURE 6.6 The DLR is distributed to the kernel of each ESXi host.
FIGURE 6.7 Two VMs on different ESXi hosts and an external router
FIGURE 6.8 Two VMs on different ESXi hosts with NSX
FIGURE 6.9 Representing a DLR in NSX diagrams
FIGURE 6.10 Packet walk from end user to web server VM on ESXi Host8
FIGURE 6.11 Transit segment shared by DLR, NSX Edge, and LR Control VM
FIGURE 6.12 Edge participating in external and internal routing
FIGURE 6.13 For scalability, Area 0 is designated as the Backbone with all o...
FIGURE 6.14 Four organizations, each assigned its own AS number
FIGURE 6.15 Simple Cisco router BGP example for R1 to peer with R2
FIGURE 6.16 New DLR step 1: Basic Details
FIGURE 6.17 Within NSX, navigate to NSX Edges to create a DLR.
FIGURE 6.18 Providing basic details for the DLR
FIGURE 6.19 New DLR step 2: Settings
FIGURE 6.20 New DLR step 3: Deployment Configuration
FIGURE 6.21 Assigning resources to the Control VM
FIGURE 6.22 New DLR returning to step 3: Deployment Configuration
FIGURE 6.23 Selecting a network for LR Control VM connectivity
FIGURE 6.24 New DLR step 4: Interface
FIGURE 6.25 Adding an Uplink interface for the DLR to communicate with the E...
FIGURE 6.26 Selecting a network shared by the DLR, ESG, and Control VM
FIGURE 6.27 Configuring an Uplink IP address on the DLR
FIGURE 6.28 Checking the Deployment Status column for the DLR
FIGURE 6.29 Viewing the IP addresses of the Uplink and Internal interfaces
FIGURE 6.30 Verifying connectivity from a web server VM to its gateway, the ...
Chapter 7
FIGURE 7.1 Network functions virtualized and offered by an NSX Edge
FIGURE 7.2 With HA enabled, a copy of the Edge VM is created.
FIGURE 7.3 The DLR routes E-W traffic, and the ESG routes N-S.
FIGURE 7.4 Distributed vs. centralized routing
FIGURE 7.5 Distributed and centralized routing approach host failure differe...
FIGURE 7.6 Routing between different transport zones
FIGURE 7.7 Routing options available for the DLR and the NSX Edge
FIGURE 7.8 Identifying ESG and DLR Internal LIFs and Uplink LIFs
FIGURE 7.9 NSX Edge name and description
FIGURE 7.10 NSX Edge Settings
FIGURE 7.11 NSX Edge Configure Deployment
FIGURE 7.12 NSX Edge Cluster and Datastore selection
FIGURE 7.13 NSX Edge Configure Interfaces
FIGURE 7.14 Adding an NSX Edge interface
FIGURE 7.15 NSX Edge Default Gateway Settings
FIGURE 7.16 NSX Edge Firewall and High Availability
FIGURE 7.17 DLR Routing Global Configuration
FIGURE 7.18 A Router ID is required when configuring OSPF or BGP.
FIGURE 7.19 Configuring BGP
FIGURE 7.20 Enabling BGP and configuring the Autonomous System number
FIGURE 7.21 Adding the ESG as the DLR's BGP neighbor
FIGURE 7.22 OSPF Configuration
FIGURE 7.23 Configuring the Protocol and Forwarding Addresses
FIGURE 7.24 Adding a Static Route
FIGURE 7.25 DLR with pathways to the physical router via two ESGs
FIGURE 7.26 ECMP routing between the ESGs and the physical network
Chapter 8
FIGURE 8.1 Separating edge services from compute
FIGURE 8.2 ESG providing Network Address Translation
FIGURE 8.3 Configuring NAT on the NSX ESG
FIGURE 8.4 Adding a Source NAT rule
FIGURE 8.5 Configuring the translated address
FIGURE 8.6 Publishing the changes
FIGURE 8.7 Adding a Destination NAT rule
FIGURE 8.8 Configuring the translated address
FIGURE 8.9 One‐armed load‐balancing design
FIGURE 8.10 Inline load‐balancing design
FIGURE 8.11 Selecting the ESG
FIGURE 8.12 Load Balancer tab
FIGURE 8.13 Enabling the load‐balancer service
FIGURE 8.14 Creating a pool of servers to load balance across
FIGURE 8.15 Selecting the load‐balancing algorithm
FIGURE 8.16 Adding members to the server pool
FIGURE 8.17 Configuring each member of the pool
FIGURE 8.18 Selecting the default http monitor for the pool
FIGURE 8.19 Verifying the status of the pool
FIGURE 8.20 Load balancer application profiles
FIGURE 8.21 Selecting a preconfigured application profile
FIGURE 8.22 Creating a virtual server to be the front end
FIGURE 8.23 Selecting an IP address and port number for the virtual server
FIGURE 8.24 Implementing an L2VPN as a temporary solution to connect two sit...
FIGURE 8.25 SSL VPNs allowing remote mobile users to connect
FIGURE 8.26 Configuring the SSL VPN service on the ESG
FIGURE 8.27 Choosing the IP address and port users will VPN to
FIGURE 8.28 Creating an IP pool
FIGURE 8.29 Configuring a range of addresses to assign to VPN users
FIGURE 8.30 Selecting which networks the user can access
FIGURE 8.31 Enabling TCP optimization when using the tunnel
FIGURE 8.32 Authenticating VPN users
FIGURE 8.33 Configuring password rules for VPN access
FIGURE 8.34 Creating VPN client installation packages for download
FIGURE 8.35 Creating and customizing packages for different platforms
FIGURE 8.36 Creating credentials for VPN users
FIGURE 8.37 Adding a VPN user
FIGURE 8.38 Starting the SSL VPN service
FIGURE 8.39 Site‐to‐site IPsec VPN
FIGURE 8.40 Deploying a site‐to‐site IPsec VPN
FIGURE 8.41 Configuring tunnel endpoints and peer subnets
FIGURE 8.42 Publishing the changes
FIGURE 8.43 Starting the IPsec tunnel service
FIGURE 8.44 The exchange of DHCP messages
FIGURE 8.45 Adding a DHCP pool of addresses
FIGURE 8.46 Defining the address range of the DHCP pool
FIGURE 8.47 Starting the DHCP server service
FIGURE 8.48 Configuring a DHCP reservation
FIGURE 8.49 Ensuring a server is always allocated the same IP based on MAC a...
FIGURE 8.50 VM‐A powered off and without an assigned DHCP address
FIGURE 8.51 DHCP Relay to forward IP requests to a DHCP server on a differen...
FIGURE 8.52 Pointing the DHCP Relay Agent to the DHCP server
FIGURE 8.53 Adding the DHCP Relay Agent
FIGURE 8.54 Selecting which interface will be the DHCP Relay Agent
FIGURE 8.55 Relaying DNS requests to a DNS server on a different segment
FIGURE 8.56 Configuring DNS forwarding on the ESG
FIGURE 8.57 Directing the ESG to forward DNS requests to 8.8.8.8
Chapter 9
FIGURE 9.1 Traditional firewall design is best suited for N-S traffic.
FIGURE 9.2 NSX IOChain slots 0–2
FIGURE 9.3 Native DFW rules are based on Layers 2 through 4.
FIGURE 9.4 Firewall services distributed to the kernel of every ESXi host
FIGURE 9.5 Three-tier application separated by a traditional router ACL fire...
FIGURE 9.6 Single Layer 2 domain microsegmented by the DFW
FIGURE 9.7 Matching ACL rules based on datagram header fields
FIGURE 9.8 Adding a new rule to the DFW
FIGURE 9.9 Three rules present by default
FIGURE 9.10 Final default rule action can be modified, not deleted.
FIGURE 9.11 Creating a new rule
FIGURE 9.12 Choosing the destination object to match on
FIGURE 9.13 Selecting a specific server from the list of VMs
FIGURE 9.14 Selecting the specific service to match on
FIGURE 9.15 Adding sections to your DFW rule set
FIGURE 9.16 Adding a previous rule to the new section
FIGURE 9.17 After publishing the change, the section is now lockable.
FIGURE 9.18 Comparison of a regular ARP and a gratuitous ARP
FIGURE 9.19 ARP poisoning example
FIGURE 9.20 DFW firewall timeout settings
FIGURE 9.21 Icon to add or remove columns
FIGURE 9.22 AMQP protocol is built for exchanging messages reliably.
FIGURE 9.23 NSX Distributed Firewall architecture
Chapter 10
FIGURE 10.1 Defining group membership dynamically
FIGURE 10.2 Adding a new security group with Service Composer
FIGURE 10.3 Naming the security group
FIGURE 10.4 Listing Available Objects by type
FIGURE 10.5 Selecting an object from the available list
FIGURE 10.6 Service Composer security group created
FIGURE 10.7 Dynamic membership criteria
FIGURE 10.8 Customizing and combining multiple criteria groups
FIGURE 10.9 Using AND between criteria blocks
FIGURE 10.10 Creating exceptions by excluding objects
FIGURE 10.11 Applying the created security group
FIGURE 10.12 Navigating to the Security Tags menu
FIGURE 10.13 Moving VMs to the Selected Objects pane
FIGURE 10.14 Verifying group membership
FIGURE 10.15 Adding a DFW rule to the WebDevs section
FIGURE 10.16 Specifying source IP addresses
FIGURE 10.17 Security group object available for DFW rule
FIGURE 10.18 Adding a new firewall rule
FIGURE 10.19 Rules added but in wrong order
FIGURE 10.20 IOChain service insertion within the kernel
FIGURE 10.21 IDS and IPS are examples of third-party network introspection s...
FIGURE 10.22 Partial list of NSX partners for best-of-breed integration
FIGURE 10.23 Creating a security policy in Service Composer
FIGURE 10.24 Steps within Service Composer to create a security policy
FIGURE 10.25 Options to change weight and inheritance when naming policy
FIGURE 10.26 Adding a firewall rule to the security policy
FIGURE 10.27 Choosing ICMP Echo as the service to match
FIGURE 10.28 Firewall rule created within the security policy
FIGURE 10.29 Applying the policy (finally!)
FIGURE 10.30 Security policy created and applied
Chapter 11
FIGURE 11.1 vRealize Easy Installer components
FIGURE 11.2 vRA Advanced and Enterprise editions compared
FIGURE 11.3 Accessing the vRealize Automation console
FIGURE 11.4 Selecting vSphere as the endpoint
FIGURE 11.5 Entering general details for the endpoint
FIGURE 11.6 Testing the connection
FIGURE 11.7 Associating NSX with the vCenter endpoint
FIGURE 11.8 Three subnets in blocks of 16 for the three-tiered application
FIGURE 11.9 Selecting the network profile type
FIGURE 11.10 Configuring an external network profile
FIGURE 11.11 Configuring supporting DNS information
FIGURE 11.12 Configuring the start and end IP range within the existing netw...
FIGURE 11.13 Under the General tab of a NAT network profile
FIGURE 11.14 Under the General tab of a routed network profile
FIGURE 11.15 Creating a reservation policy
FIGURE 11.16 Creating a vRA reservation
FIGURE 11.17 Configuring the General tab of a new reservation
FIGURE 11.18 Selecting compute, memory, and resources for the reservation
FIGURE 11.19 Creating a blueprint
FIGURE 11.20 Blueprint general configuration settings
FIGURE 11.21 Configuring NSX settings within the blueprint
FIGURE 11.22 Drag objects from the left to the design canvas grid on the rig...
FIGURE 11.23 vSphere (vCenter) component placed on the design canvas
FIGURE 11.24 Configuring the design canvas component
FIGURE 11.25 Adding the build information
FIGURE 11.26 Configuring CPU, memory, and storage for the VMs
FIGURE 11.27 Clicking the component brings up the configuration panel.
FIGURE 11.28 Creating a new service for the catalog
FIGURE 11.29 Configuring a new service for the catalog
FIGURE 11.30 Using search to find matching users and groups
FIGURE 11.31 Enabling the Manage Catalog Items option
FIGURE 11.32 Catalog item published after clicking OK
FIGURE 11.33 Creating a new entitlement
FIGURE 11.34 Users & Groups pane
FIGURE 11.35 Using the search function to find the SED general users group
FIGURE 11.36 Defining entitled services, items, and actions
FIGURE 11.37 Defining what actions can be taken by the end user
FIGURE 11.38 Verifying the catalog with a non-admin account
FIGURE 11.39 Catalog view using a vanilla user account
FIGURE 11.40 Entering the request form to provision
FIGURE 11.41 Tracking the progress of requests
FIGURE 11.42 Postman desktop REST API client
FIGURE 11.43 Adding NSX Manager credentials to the REST API header
FIGURE 11.44 Issuing a GET request to retrieve controller status
FIGURE 11.45 Additional header required for POST request
Appendix
FIGURE 4.28 Troubleshooting connectivity
FIGURE 5.31 Network And Security (NSX) menu options
Elver Sena Sosa
The advantages of server virtualization in data centers are well established. From the beginning, VMware has led the charge with vSphere. Organizations migrating physical servers to virtual immediately see the benefits of lower operational costs, the ability to pool CPU and memory resources, server consolidation, and simplified management.
VMware had mastered compute virtualization and thought, “Why not do the same for the entire data center?” Routers, switches, load balancers, firewalls … essentially all key physical networking components could be implemented in software, creating a Software-Defined Data Center (SDDC). That product, VMware NSX, is the subject of this book.
In 1962, Sir Arthur C. Clarke published an essay asserting three laws. His third law stated, “Any sufficiently advanced technology is indistinguishable from magic.” If you're not familiar with NSX, the abilities you gain as a network administrator almost seem like magic at first, but we'll dive into the details to explain how it all works. It doesn't matter if you don't have a background in vSphere. There are plenty of analogies and examples throughout, breaking down the underlying concepts to make it easy to understand the capabilities of NSX and how to configure it.
The way NSX provides network virtualization is to overlay software on top of your existing physical network, all without having to make changes to what you have in place. This is much like what happens with server virtualization. When virtualizing servers, a hypervisor separates and hides the underlying complexities of physical CPU and memory resources from the software components (operating system and application), which exist in a virtual machine. With this separation, the server itself just becomes a collection of files, easily cloned or moved. An immediate benefit gained is the time and effort saved when deploying a server. Instead of waiting for your physical server order to arrive by truck, then waiting for someone to rack and stack it, then waiting for someone else to install an operating system, then waiting again for network connectivity, security, and the installation and configuration of the application … you get the picture. Instead of waiting on each of those teams, the server can be deployed with the click of a button.
NSX can do the same and much more for your entire data center. The agility NSX provides opens new possibilities. For instance, a developer comes to you needing a temporary test server and a NAT router to provide Internet connectivity. The admin can use NSX to deploy a virtual machine (VM) and a virtual NAT router. The developer completes the test, the VM and NAT router are deleted, and all of this occurs before lunch. NSX can do the same thing for entire networks.
The same developer comes to you in the afternoon requesting a large test environment that mimics the production network while being completely isolated. She needs routers, multiple subnets, a firewall, load balancers, some servers running Windows, others running Linux: all set up with proper addressing, default gateways, DNS, DHCP, and her favorite dev tools installed and ready to go. It's a good bet that setting this up in a physical lab would take a lot of time and may involve several teams.
With NSX, that same network could be deployed by an administrator with a few clicks, or even better, it can be automated completely, without having to involve an administrator at all. VMware has a product that works with NSX called vRealize Automation (vRA) that does just that. It provides our developer with a catalog portal, allowing her to customize and initiate the deployment herself, all without her needing to have a background in networking.
If you're a security admin, it might seem as if chaos would ensue, with anyone able to deploy whatever they want on the network. NSX has that covered as well. As a security administrator, you still hold the keys and assign who can do what, but those keys just got a lot more powerful with NSX.
Imagine if you had an unlimited budget and were able to attach a separate firewall to every server in the entire network, making it impossible to bypass security while significantly reducing latency. Additionally, what if you didn't have to manage each of those firewalls individually? What if you could enter the rules once and have them propagate instantly to every firewall, increasing security dramatically while making your job a lot easier and improving performance? It's not magic; that's the S in NSX.
The N in NSX is for networking, and the S is for security. The X? Some say it stands for eXtensibility or eXtended, but it could just as well be a way to make the product sound cool. Either way, the point is that both networking and security get equal treatment in NSX: two products in one. At the same time, instead of these additions adding more complexity to your job, you'll find just the opposite. With the firewall example or the example of the developer deploying the large test network, as a security administrator, you set the rules and permissions and you're done. Automation takes care of the tedious legwork, while avoiding the typical mistakes that arise when trying to deploy something before having your morning coffee. Those mistakes often lead to even more legwork, with more of your time drained by troubleshooting.
Wait, the title of the book says NSX-V. What does the V stand for? Since NSX is tightly integrated with vSphere, its legal name is NSX for vSphere, but we'll just refer to it as NSX for short. NSX-V has a cousin, NSX-T, with the T standing for transformers. In a nutshell, that product is made to easily integrate with environments using multiple hypervisors, Kubernetes, Docker, KVM, and OpenStack. If all that sounds like a lot to take in, not to worry, we'll save that for another book.
Welcome to NSX.
Chapter 1: Abstracting Network and Security
We often learn how to configure something new without really understanding why it exists in the first place. You should always be asking, “What problem does this solve?” The people armed with these details are often positioned to engineer around new problems when they arise. This chapter is a quick read to help you understand why NSX was created in the first place, the problems it solves, and where NSX fits in the evolution of networking, setting the stage for the rest of the book's discussions on virtualization.
Chapter 2: NSX Architecture and Requirements
This chapter is an overview of NSX operations. It details the components that make up NSX, their functions, and how they communicate. Equally important, it introduces NSX terminology used throughout the book, as well as virtualization logic.
Chapter 3: Preparing NSX
In this chapter, you will find out everything you need to have in place before you can deploy NSX. This includes not only resources like CPU, RAM, and disk space, but also the ports necessary for NSX components to communicate and the steps to prepare your ESXi hosts for NSX.
Chapter 4: Distributed Logical Switch
It's helpful if you are already familiar with how a physical switch works before getting into the details of a Distributed Logical Switch. Don't worry if you're not. In this chapter, we'll look at how all switches learn, and why being distributed and logical is a dramatic improvement over centralized and physical. You'll also find out how NSX uses tunnels as a solution to bypass limitations of your physical network.
Chapter 5: Marrying VLANs and VXLANs
On the virtual side, we have VMs living on VXLANs. On the physical side, we have servers living on VLANs. Rather than configuring lots of little subnets and routing traffic between logical and physical environments, this chapter goes into how to connect the two (physical and logical), making it easy to exchange information without having to re-IP everything.
Chapter 6: Distributed Logical Router
In Chapter 4, we compared a physical switch and a Distributed Logical Switch. We do the same in this chapter for physical routers vs. Distributed Logical Routers, covering how they work, how they improve performance while making your job easier, and the protocols they use to communicate.
Chapter 7: NFV: Routing with NSX Edges
In this chapter, we talk about network services beyond routing and switching that are often provided by proprietary dedicated physical devices, such as firewalls, load balancers, NAT routers, and DNS servers. We'll see how these network functions can be virtualized (Network Function Virtualization, or NFV) in NSX.
Chapter 8: More NFV: NSX Edge Services Gateway
This chapter focuses on the Edge Services Gateway, the Swiss Army knife of NSX devices, which can do load balancing, Network Address Translation (NAT), DHCP, DHCP Relay, DNS Relay, several flavors of VPNs, and most importantly, route traffic in and out of your NSX environment.
Chapter 9: NSX Security, the Money Maker
When it's said that NSX provides better security, you'll find out why in this chapter. Rather than funneling traffic through a single-point physical firewall, it's as if a police officer were stationed just outside the door of every home. The NSX Distributed Firewall provides security that is enforced just outside the VM, making it impossible to bypass the inspection of traffic in or out. We also look at how you can extend NSX functionality to incorporate firewall solutions from other vendors.
Chapter 10: Service Composer and Third-Party Appliances
This chapter introduces Service Composer. This built-in NSX tool allows you to daisy-chain security policies based on what is happening in real time. You'll see an example of a virus scan triggering a series of security policies automatically applied, eventually leading to a virus-free VM. You'll also learn how to tie in services from other vendors and see the differences between guest introspection and network introspection.
Chapter 11: vRealize Automation and REST APIs
Saving the best time-saving tool for last, this chapter covers vRealize Automation (vRA), a self-service portal containing a catalog of what can be provisioned. If a non-admin needs a VM, they can deploy it. If it needs to be a cluster of VMs running Linux with a load balancer and NAT, they can deploy it. As an admin, you can even time bomb it, so that after the time expires, vRA will keep your network clean and tidy by removing what was deployed, automatically. You will also see how administrative tasks can be done without going through a GUI, using REST APIs.
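To give a taste of what Chapter 11 covers, the sketch below shows the general shape of a call to the NSX Manager REST API using Python's requests library. This is only an illustrative example: the manager address, credentials, and endpoint path are placeholders rather than values from the book, and you should confirm the exact resource paths against the NSX-V API guide for your version.

```python
# A minimal sketch only: querying the NSX Manager REST API with Python.
# Hostname, credentials, and the controller endpoint are assumptions.
import requests

NSX_MANAGER = "https://nsxmgr.lab.local"   # hypothetical NSX Manager address
AUTH = ("admin", "VMware1!")               # hypothetical credentials

response = requests.get(
    f"{NSX_MANAGER}/api/2.0/vdn/controller",  # controller status resource (check your API guide)
    auth=AUTH,
    verify=False,  # lab shortcut: skip certificate validation for a self-signed cert
)
response.raise_for_status()
print(response.text)  # NSX-V responds with XML describing each controller node
```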
Here's a list of supporting resources that augment what is covered in this book, including the authorized VCP6-NV NSX exam guide, online videos, free practice labs, helpful blogs, and supporting documentation.
VCP6-NV Official Cert Guide (NSX exam #2V0-642) by Elver Sena Sosa:
www.amazon.com/VCP6-NV-Official-Cert-Guide-2V0-641/dp/9332582750/ref=sr_1_1?keywords=elver+sena+sosa&qid=1577768162&sr=8-1
YouTube vSAN Architecture 100 Series by Elver Sena Sosa:
www.youtube.com/results?search_query=vsan+architecture+100+series
Weekly data center virtualization blog posts from the Hydra 1303 team:
www.hydra1303.com
Practice with free VMware NSX Hands-on Labs (HOL):
www.vmware.com/products/nsx/nsx-hol.html
VMUG – VMware User Group:
www.vmug.com
VMware NSX-V Design Guide:
www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf
VMware authorized NSX classes (classroom and online):
mylearn.vmware.com/mgrReg/courses.cfm?ui=www_edu&a=one&id_subject=83185
If you believe you've found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts, an error may occur.
In order to submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line “Possible Book Errata Submission.”
In this chapter, we will examine the evolution of data center networking and security from the 1990s to the present in order to better understand how network virtualization in today's data centers provides solutions that reduce costs, greatly improve manageability, and increase security.
Most IT professionals are familiar with server virtualization using virtual machines (VMs). A virtual machine is purely software. An abstraction layer creates a way to decouple the physical hardware resources from that software. In doing so, the VM becomes a collection of files that can be backed up, moved, or allocated more resources without having to make changes to the physical environment.
We will delve into how VMware NSX is the next step in data center evolution, allowing virtualization to extend beyond servers. Routers, switches, firewalls, load balancers, and other networking components can all be virtualized through NSX. NSX provides an abstraction layer that decouples these components from the underlying physical hardware, which provides administrators with new solutions that further reduce costs, improve manageability, and increase security across the entire data center.
The evolution of the modern data center
How early networks created a need for data centers
Colocation: the sharing of provider data centers
Challenges in cost, resource allocation, and provisioning
VMware server virtualization
VMware storage virtualization
VMware NSX: virtual networking and security
The 1990s brought about changes to networking that we take for granted today. We shifted from the original Ethernet design of half-duplex communication, where devices take turns sending data, to full duplex. With full duplex, each device had a dedicated connection to the network that allowed us to send and receive simultaneously, while at the same time reducing collisions on the wire to zero (see Figure 1.1). The move to full duplex effectively doubled our throughput.
FIGURE 1.1 Simplex, half duplex, and full duplex compared
100 Mbps Ethernet connections became possible and the technology was given the unfortunate name Fast Ethernet, a label that has not aged well considering that the 100 Gbps ports available today are 1,000 times faster than the '90s version of “Fast.”
The '90s also ushered in our first cable modems, converging data and voice with VoIP, and of course, the Internet's explosion in popularity. As Internet businesses started to boom, a demand was created for a place to host business servers. They needed reliable connectivity and an environment that provided the necessary power and cooling along with physical security. They needed a data center. Although it was possible for an organization to build its own dedicated data centers, it was both costly and time-consuming, especially for online startups booming in the '90s.
An attractive solution, especially for startups, was colocation. Many providers offered relatively inexpensive hosting plans, allowing businesses to move their physical servers and networking devices to the provider's ready-made data center. With colocation, organizations were essentially renting space, but they still maintained complete control over their physical devices (see Figure 1.2). The organization was still responsible for installing the operating system, upgrades, and backups. The only real difference was that the location of their compute resources had changed from locally hosted to the provider site.
FIGURE 1.2 Colocated space rented in provider data centers
The Internet boom of the '90s meant that web computing generated a massive amount of data, which created a need for storage solutions such as Fibre Channel, iSCSI, and NFS. One major benefit of having these resources together in a data center was centralized management.
Not all data centers looked as impressive in the '90s as they do today. Google's first data center was created in 1998, and was just a 7 × 4 foot cage with only enough space for 30 servers on shelves.
The general design choice at the time was that each server would handle a single workload in a 1:1 ratio. To support a new workload, you bought another server, installed an operating system, and deployed it. There were numerous issues with this plan.
There was no centralized management of CPU and memory. Each server was independent and had its own dedicated resources that could not be shared. This led to one of two choices:
The simplistic approach was to allocate servers with a fixed amount of CPU and RAM, leaving generous headroom for future growth. This strategy meant that resources were largely underutilized. For example, servers on average used less than 20 percent of their CPU.
The alternative was to micromanage the resources per machine. Although compute resources were better utilized, the administrator's time was not. Spikes in usage sometimes created emergency situations, with applications failing due to a lack of CPU or memory.
Rolling out a new server involved numerous teams: the infrastructure team would install the server; the network team would allocate an IP subnet, create a new VLAN, and configure routing; the server team would install the operating system and update it; the database team would establish a database for the workload; the team of developers would load their applications; and finally, the security team would modify the firewall configuration to control access to and from the server (see Figure 1.3). This process repeated for every new workload.
FIGURE 1.3 Traditional provisioning involves numerous teams and is time-consuming.
The time to fully provision a new workload, from the moment the server was purchased to the point where the application was ready to use, could often take months, greatly impacting business agility. Hand in hand with the slow rollouts was cost. When dealing entirely in the physical realm with hardware, it's almost impossible to automate the process. Many teams had to be directly involved and, typically, the process could not move forward until the tasks of the previous team had been completed.
As companies grew, many reached a point where colocation was no longer cost-effective due to the amount of rented space required, and they built out their own data centers. Some organizations were unable to take advantage of colocation at all due to compliance regulations, and they built their own data centers for this reason (see Figure 1.4).
FIGURE 1.4 The move to company-built data centers
A typical data center would consist of the physical servers, each with its own operating system connected to network services.
Rather than relying on lots of local disks for permanent storage, most enterprises liked having their data all in one place, managed by a storage team. Centralized storage services made it easier to increase data durability through replication and to enhance reliability with backup and restore options that did not rely solely on tape backups. The storage team would carve out a logical unit of space, a LUN, for each operating system.
To control access, firewall services would protect the applications and data.
Having centralized resources and services only solved part of the problem. Although being in one place made them easier to control, so much still could only be accomplished manually by personnel. Automation allows an organization to be much more agile, able to react quickly when conditions change. Data centers during the '90s lacked that agility.
Consider this analogy. Imagine you are the civil engineer for what will someday be Manhattan, New York. You design the layout for the roads. Going from design to a fully functional road will take considerable time and resources, but an even greater issue is looming in the future. The grid design for Manhattan was developed in 1811 (see Figure 1.5). The design supported the 95,000 residents of the time and took into consideration growth, but not enough to cover the 3.1 million people who work and live there now. The point is that trying to alleviate traffic congestion in New York is very difficult because we lack the ability to move the roads or to move the traffic without making the problem worse. Any time we are dealing with the physical world, we lack agility.
FIGURE 1.5 Manhattan city grid designed in 1811
The data centers of the '90s were heavily reliant on dedicated physical devices. If congestion occurred in one part of the data center, it was possible that a given workload could be moved to an area of less contention, but it was about as easy as trying to move that city road. These data center management tasks had to be done manually, and during the transition, traffic was negatively impacted.
In 2005, VMware launched VMware Infrastructure 3, which became the catalyst for VMware's move into the data center. It changed the paradigm for how physical servers and operating systems coexist. Prior to 2005, there was a 1:1 relationship: one server, one operating system.
VMware created a hypervisor (what we now refer to as ESXi) that enabled installing multiple operating systems on a single physical server (see Figure 1.6). By creating an abstraction layer, the operating systems no longer had to have direct knowledge of the underlying compute services, the CPU and memory.
FIGURE 1.6 The hypervisor is a virtualization layer decoupling software from the underlying hardware.
The separate operating systems are what we now call virtual machines. The problem of trying to decide between provisioning simplicity and micromanaging resources immediately disappeared. Each virtual machine has access to a pool of CPU and memory resources via the abstraction layer, and each is given a reserved slice. Making changes to the amounts allocated to a virtual machine is something configured in software.
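To make the “configured in software” point concrete, here is a hedged pyvmomi sketch that changes a VM's CPU and memory allocation through the vSphere API. The vCenter address, credentials, and VM name are hypothetical placeholders, not values from the book.

```python
# Minimal sketch, assuming a reachable vCenter and the pyvmomi library.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab shortcut for self-signed certs
si = SmartConnect(host="vcenter.lab.local",     # hypothetical vCenter address
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# Walk the inventory with a container view and pick a VM by name.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "App-VM")   # hypothetical VM name

# Adjusting the allocated resources is just a reconfigure task in software.
# (Hot-add must be enabled, or the VM powered off, for this to succeed.)
spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```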
Virtualization decoupled the software from the hardware. On the software side, you had the operating system and the application; on the hardware side, you had the compute resources. This bifurcation of physical and software meant that on the software side, we finally had agility instead of being tied to the railroad tracks of the physical environment (see Figure 1.7).
Consider the analogy of a physical three-drawer metal filing cabinet vs. a Documents folder on your laptop (Figure 1.8). They may both contain the same data, but if the task is to send all your records to your lawyer, sending or copying the contents of the papers within the metal filing cabinet is a giant chore. In comparison, sending the files from your Windows Documents folder may take a few clicks and 45 seconds out of your day.
FIGURE 1.7 Virtualization creates an abstraction layer that hides the complexities of the physical layer from the software.
FIGURE 1.8 Physically storing data
The point is we can easily move things in software. It is a virtual space. Moving things in physical space takes vastly more effort, time, and almost always, more money. VMware's decoupling of the two opened a whole new world of possibilities.
A key VMware feature that really leveraged the decoupling of physical and software is vMotion (see Figure 1.9). With vMotion, we can move a running workload to a different physical host. But portability and the option to move workloads are only the first step. Once an entity is wholly contained in software, you can automate it.
FIGURE 1.9 VMware vMotion is a benefit that would not be possible without virtualization.
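As an illustration of that automation, the sketch below uses pyvmomi to relocate a running VM to another host through the vSphere API. It is a minimal example with hypothetical names (vCenter address, VM name, destination host), not a procedure taken from the book.

```python
# Minimal sketch: triggering a live migration through the vSphere API with pyvmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)  # hypothetical connection details
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "App-VM")                   # hypothetical VM name
target_host = find(vim.HostSystem, "esxi-02.lab.local")   # hypothetical destination host

# RelocateVM_Task moves the running VM; with shared storage this is a compute-only vMotion.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=target_host))

Disconnect(si)
```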
vMotion works in concert with the Distributed Resource Scheduler (DRS). DRS actively monitors the CPU and memory utilization of each virtual machine. If multiple virtual machines located on the same physical server spike in CPU or memory usage to the point where there is contention for these resources, DRS can detect the issue and automatically leverage vMotion to migrate a virtual machine to a different server with less contention (see Figure 1.10).
FIGURE 1.10 In the event of a physical host failing, the workload can be moved to other hosts in the cluster.
Another way VMware takes advantage of portability is to provide a means for disaster recovery. It does so with the VMware High Availability (HA) feature. Think of HA as a primary and backup relationship, or active and passive. For example, suppose you have a physical server with eight virtual machines and HA is enabled. If the server loses all power, those virtual machines would be automatically powered up on a different physical server. A physical server with an ESXi hypervisor is referred to as a host.
These key VMware features—HA, DRS, and vMotion—are the building blocks of VMware's Software Defined Data Center solution.
Virtualizing compute was a game changer in data centers, but VMware realized that it didn't have to stop there. Traditional storage could be virtualized as well. VMware took the same idea used to abstract compute and applied it to storage, making storage available across all physical servers running the ESXi hypervisor.
The traditional way of allocating storage involved having the storage team create a LUN and configure RAID. VMware's alternative is the vSAN product (see Figure 1.11). Instead of manually carving out a LUN and choosing a RAID type, the administrator configures a policy. The policy is then used to determine the amount of storage needed for a given application.
FIGURE 1.11 Allocating storage
From the perspective of the application and virtual machine, the complexities of dealing with the physical network to access the storage are factored out. It is as simple as accessing local storage on a laptop.
Recall the diagram of the general data center architecture from the '90s that we started with at the beginning of the chapter (Figure 1.1). We've discussed how VMware has virtualized the operating systems so that they can share a single physical server, and we just mentioned how VMware extended the virtualization concept to storage (see Figure 1.12).
VMware recognized the value in the Software Defined Data Center strategy and decided to apply it to networking and security services as well, giving us even more flexibility and new ways of doing things that previously were impossible.
FIGURE 1.12 Virtualization can now go beyond only servers.
Since virtual machines have been around longer than these concepts of virtual storage, virtual networking, and virtual firewalls, let's use VMs as an example of what is now possible. You can create a VM, delete a VM, move a VM, back up a VM, and restore a VM.
VMware knew it had a winner with virtualized servers and started to question what else could benefit from virtualization. What if the actual networking components were virtualized as well? Their answer was VMware NSX. With NSX, you can create a virtual network, delete it, move it, back it up, and restore it. You can do the same with virtual firewalls and load balancers.
NSX is essentially a network hypervisor. It abstracts the complexity of the underlying physical network and overlays on top of it (see Figure 1.13).
