Master SDDC Operations with proven best practices
This book is primarily for any system administrator or cloud infrastructure specialist who is interested in performance management and capacity management using VMware technologies. It will also help IT professionals whose area of responsibility is not VMware but who work with the VMware team: you may be on the Windows, Linux, storage, or network team, or be an application architect. Note that prior exposure to the VMware platform of data-center and cloud-based solutions is expected.
Performance management and capacity management are the two foremost issues faced by enterprise IT when virtualizing. Until the first edition of this book, there was no in-depth coverage of the topic that tackled these issues systematically. The second edition expands on the first, adding new information and reorganizing the book into three logical parts.
The first part provides the technical foundation of SDDC Management. It explains the difference between a software-defined data center and a classic physical data center, and how it impacts both architecture and operations. From this strategic view, it zooms into the most common challenges—performance management and capacity management. It introduces a new concept called Performance SLA and also a new way of doing capacity management.
The next part provides the actual solutions that you can implement in your environment. It puts the theories together and provides real-life examples created together with customers. It explains the reasoning behind each dashboard, so that you understand why it is required and what problem it solves.
The last part acts as a reference section. It provides a complete reference to vSphere and vRealize Operations counters, explaining their dependencies and providing practical guidance on the values you should expect in a healthy environment.
This book covers the complex topic of managing performance and capacity in an easy-to-follow style. It relates real-world scenarios to topics in order to help you implement the book's teachings on the go.
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: March 2016
Production reference: 1230316
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78588-031-5
www.packtpub.com
Author
Iwan 'e1' Rahabok
Reviewers
Mark Achtemichuk
Sunny Dua
Commissioning Editor
Karthikey Pandey
Acquisition Editor
Vinay Argekar
Content Development Editor
Viranchi Shetty
Technical Editor
Vishal Mewada
Copy Editor
Madhusudan Uchil
Project Coordinator
Izzat Contractor
Proofreader
Safis Editing
Indexer
Hemangini Bari
Graphics
Kirk D'Penha
Production Coordinator
Shantanu N. Zagade
Cover Work
Shantanu N. Zagade
I first ran across Iwan Rahabok, the author of this book, in late 2013, when my company, Blue Medora, began shipping our first software solutions built on top of vRealize Operations. By that point, Iwan had established himself as one of the top two or three authorities on the planet on vRealize Operations: its capabilities, its strengths and still-maturing areas, and the key role the product was playing in the Cloud Systems Management journey that its early adopters were embarking on. Blue Medora's first encounter with Iwan was via a series of VMware internal training classes that he had developed based on his early experiences with the product, with an emphasis on how to optimize its configuration to solve real-world customer challenges. In short, in those early days of vRealize Operations, Iwan was playing an integral role in educating others at VMware on the product, its capabilities, and how to relate them to real-world customer environments.
vRealize Operations has become the foundational component of what IDC ranks, by market share, as the #1 Cloud Systems Management platform available today. Over the past two years, VMware has continued to invest heavily in expanding vRealize Operations, in terms of scalability, usability, and consumability, as well as by adding or deeply enhancing core features, including anomaly detection, predictive analytics, capacity planning, right-sizing, and workload placement. These new capabilities have been rolled out over a short 15-month period via three significant updates: version 6.0 in December 2014, version 6.1 in August 2015, and version 6.2 in January 2016. VMware has successfully evolved vRealize Operations from a vSphere-centric tool into a broad-based SDDC management tool, well suited to monitoring and managing mixed-hypervisor environments; compute, storage, network, and converged infrastructure; and Tier-1 business-critical applications.
Given those changes, Iwan rightly decided that in order to keep this book relevant and up to date with the incredible pace of change within the vRealize Operations platform since the first edition was published, he needed to go back and rewrite it from the ground up. This edition is the result of those efforts.
This book is required reading for all vRealize Operations admins and users, whether you are a first-time user of vRealize Operations or a seasoned professional managing large-scale hybrid enterprise environments. Iwan is one of the most recognized vRealize Operations experts in the world, and the content of this book reflects his deep personal experience with the product as well as his ongoing interactions with VMware staff, partners, and customers who are architecting, configuring, installing, and using it.
Nathan Owen
CEO, Blue Medora
Bridging the gap between R&D (the people who build products), the field (the people who sell solutions that include those products), and the customers who consume those solutions is critical to VMware's success. All three must be connected so that each can be effective. Customer intimacy and feedback into R&D drive more relevant innovation and compelling products. A clear channel from R&D and the CTO office to our customers ensures that our customers understand the broad context for VMware's solutions and are equipped to take maximum advantage of them. In between is the field, and in the field are the CTO Ambassadors, whose specific mission is to provide that bridge.
Iwan is a great example of a CTO Ambassador: passionate and knowledgeable about technology, committed to our customers' success, and always going above and beyond. He was elected to the CTO Ambassador program in 2014, one of 100 ambassadors. The program is run by the VMware Office of the CTO, and the individual CTO Ambassadors are members of a small group of our most experienced and talented customer-facing, individual-contributor technologists. They are presales Systems Engineers (SEs), Technical Account Managers (TAMs), professional services consultants, architects, and global support services engineers.
The ambassadors are able to articulate VMware strategy and have a keen understanding of the big picture. They typically specialize in certain technology or business areas and are subject matter experts in their chosen fields. These ambassadors help to facilitate an effective collaboration between R&D and our customers so that we can address current customer issues and future needs as effectively as possible.
There are many tangible results of the program, and this book is a good example. Iwan took advantage of the bridge made possible through the program, collaborating with R&D and his peers. I supported his first edition, and it's my pleasure to write a foreword for this second edition also. This book demonstrates that breadth and depth of knowledge. It covers the overall Software-Defined Data Center (SDDC) architecture, which is relevant for everyone interested in virtualization and cloud computing, before diving into a number of performance and capacity management topics, providing the depth and detail needed by engineers and architects.
A non-negotiable requirement for acceptance into the CTO Ambassador program is direct customer relationships. A deep understanding of customers' requirements, and of how they expect VMware to be their partner, is expected of our Ambassadors. As you read this book, it will be clear to you that it is written from the customers' viewpoint, not from the product's perspective. It looks at what it takes to operationalize performance and capacity management in your SDDC.
I hope that you find this book immensely valuable in your IT transformation.
Paul Strong
CTO, Global Field, VMware
Iwan 'e1' Rahabok was the first VMware SE for strategic accounts in ASEAN. Joining VMware in 2008 from Sun Microsystems, he has seen how enterprises adopt virtualization and cloud computing, reaping the benefits while overcoming the challenges. It is a journey that is very much ongoing, and the book reflects a subset of that undertaking. Iwan was one of the first in the world to achieve the VCAP-DCD certification and has since helped others achieve the same via his participation in the community. He started the user community in ASEAN, and today the group is one of the largest VMware communities on Facebook. Iwan has been a member of the VMware CTO Ambassador program since 2014, representing the Asia Pacific region at the global level and representing the product team and CTO office to Asia Pacific customers. He has been a vExpert since 2013 and helps others achieve this global recognition for their contribution to the VMware community. After graduating from Bond University, Australia, Iwan moved to Singapore in 1994, where he has lived ever since.
Behind every author, there are many people who make a book possible. I am grateful for the feedback, help, and encouragement provided by the following individuals.
VMware vRealize Operations & Log Insight product team:
VMware Education team:
VMware Asia Pacific team:
VMware ASEAN team:
VMware Office of the CTO:
Members of CTO Ambassador program and vRealize Operations Curators:
Mark Achtemichuk currently works as a staff engineer within VMware's Central Engineering Performance team, focusing on education, benchmarking, collateral, and performance architectures. He has also held various performance-focused field, specialist, and technical marketing positions within VMware over the last six years. Mark is recognized as an industry expert and holds a VMware Certified Design Expert certification (VCDX#50), one of fewer than 250 worldwide. He has worked on engagements with Fortune 50 companies, served as a technical editor for many books and publications, and is a sought-after speaker at numerous industry events.
Mark is a blogger and has been recognized as a VMware vExpert from 2013 to 2016. He is active on Twitter at @vmMarkA, where he shares his knowledge of performance with the virtualization community. His experience and expertise, from infrastructure to application, help customers ensure that performance is no longer a barrier, perceived or real, to virtualizing and operating an organization's software-defined assets.
Sunny Dua works as a senior consultant for VMware's professional services organization, which is focused on ASEAN countries. In addition to his PSO role in VMware, he is also an ambassador of the VMware CTO Office. He is a four-time vExpert (2013, 2014, 2015, and 2016) and an active member of the VMware community.
With more than 12 years of industry experience, he has worked on large-scale virtualization and cloud deployments in various roles at VMware, Hewlett Packard, and Capgemini. In his current role, he focuses on providing IT transformation roadmaps to large enterprise customers on their journey toward the adoption of cloud computing. He also helps enterprise shops by giving them direction on virtualizing business-critical applications on the VMware virtualization platform.
Operations management in the virtual infrastructure is one of his core competencies, and he has been sharing his experience of the transformation of IT operations through his personal blog, http://www.vxpresss.blogspot.com/. He is a guest blogger on VMware's management and consulting blogs as well. The industry and vCommunity have recognized his work: his blog ranks in the top 50 in the virtualization and cloud industry. He is also a coauthor of vSphere Design Pocketbook, published by CreateSpace Independent Publishing Platform and written by highly respected members of the VMware virtualization community.
I would like to thank Iwan, who gave me the opportunity to review his work. I would also like to thank my parents, my wonderful wife Roomi, and my son Samar for supporting me and allowing me to spend my personal time on projects like this.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Get notified! Find out when new books are published by following @PacktEnterprise on Twitter or the Packt Enterprise Facebook page.
First of all, thank you for the feedback on the first edition. You will see that this second edition goes above and beyond that feedback. It actually contains more new content than retained content, enabling us to cover the topic in greater depth and breadth.
The strongest feedback Packt Publishing and I received was to make the book more pleasant to read, as infrastructure is a dry topic. The topics we cover are complex in nature, and the book goes deep into operationalizing performance and capacity management.
Another common piece of feedback was to give more examples of the dashboards. You want practical solutions that you can implement. You want the book to guide you on your journey to operationalize your IaaS platform.
These points, together with other feedback and goals, led the publisher and me to take a fresh look at the topic. You will find the second edition more complete, yet easier to read.
Content-wise, the book is now distinctly split into three main parts. Each part happens to have five chapters:
Chapter 1, VM – It Is Not What You Think!, aims to clear up the misunderstandings that customers have about virtualization. It explains why a VM is radically different from a physical server.
Chapter 2, Software-Defined Data Centers, takes the concept further and explains why a virtual data center is fundamentally different from a physical data center. You will see how it differs architecturally in almost all aspects.
Chapter 3, SDDC Management, covers the aspects of management that are affected with the new architecture.
Chapter 4, Performance Monitoring, takes the topic of the previous chapter deeper by discussing how performance management should be done in a virtual data center. It introduces a new paradigm that redefines the word Performance.
Chapter 5, Capacity Monitoring, complements Chapter 4 by explaining why capacity management needs to take into account performance before utilization. This chapter wraps up Part 1 of the book.
Chapter 6, Performance-Monitoring Dashboards, kicks off Part 2, where we cover the practical aspects of this book, as they show how sample solutions are implemented. We start by showing the steps to implement dashboards to monitor performance.
Chapter 7, Capacity-Monitoring Dashboards, takes the dashboards in Chapter 6 further by adding capacity-monitoring requirements. You will see how closely the two are related.
Chapter 8, Specific-Purpose Dashboards, complements those dashboards by covering specific use cases. These are often used by specific roles, such as the network team, the storage team, and senior management.
Chapter 9, Infrastructure Monitoring Using Blue Medora, takes the dashboards beyond VMware. It covers non-VMware components of your IaaS. Blue Medora is contributing their expertise here.
Chapter 10, Application Monitoring Using Blue Medora, completes our scope by going above the infrastructure layer. It covers commonly used applications in your VMware-based SDDC. This chapter also wraps up Part 2 of the book.
Chapter 11, SDDC Key Counters, sets the technical foundations of performance and capacity management by giving you a tour of the four infrastructure elements (CPU, RAM, network, and storage). It also maps these four elements into all the vSphere objects, so you know what is available at each level.
Chapter 12, CPU Counters, covers CPU counters in detail. It is the first of four chapters that cover the core infrastructure element (CPU, RAM, network, and storage). If you do not fully understand the various counters in vSphere and vRealize Operations, how they impact one another, and what values you consider healthy, then these four chapters are good for you. They dive deep into the counters, comparing the counters in vCenter and vRealize Operations. Knowing the counters is critical, as choosing the wrong counters or interpreting the values wrongly will lead to a wrong conclusion.
Chapter 13, Memory Counters, continues the deep dive by covering memory counters. It explains why memory is one of the most complex areas to monitor and troubleshoot in the SDDC.
Chapter 14, Storage Counters, continues the deep dive by covering storage counters. It explains the multiple layers of storage that result from virtualization. It also explains why distributed storage requires a different monitoring approach.
Chapter 15, Network Counters, completes the deep dive by covering network counters and wraps up the book.
We assume that you have the products installed and configured. VMware vSphere, vRealize Operations, and Log Insight are the products used in this book. There are many blog articles and YouTube videos on design, installation, configuration, and product overviews. Some bloggers, such as Sunny Dua, have many other materials that will complement your learning. At a personal level, and as a father of two young kids, I'm not keen on killing trees unless it's really necessary.
The book takes advantage of all relevant new features in the latest releases: vSphere 6.0 Update 2, vRealize Operations 6.2, and Log Insight 3.3. As this is not a product book, almost all of its content can be implemented using earlier releases. To assure you that you can do that, we've kept screenshots from older versions whenever possible.
The detailed implementation steps will certainly vary if you are using an older release. For example, instead of using the View widget in vRealize Operations 6, you will have to use the Metric Graph and XML in vRealize Operations 5.8.
If a solution cannot be implemented with a previous release, we highlight it. For example, the data transformation feature in the View widget is hard to replicate in vRealize Operations 5.8.
This book is for VMware professionals. This can be a VMware administrator, architect, consultant, engineer, or technical support. You may be working for VMware customers, partners, or VMware itself. You may be an individual contributor or a technical leader.
This book is an intermediate-level book. It assumes that you have hands-on experience of vSphere, vRealize Operations, and Log Insight, and you are capable of performing some level of performance troubleshooting. You also have good overall knowledge of vCloud Suite, Virtual SAN, Horizon View, and NSX. Beyond VMware, you should also have intermediate knowledge of operating systems, storage, network, disaster recovery, and data center.
This book is also for IT professionals who deal with VMware professionals. As such, there is a wide range of roles, as virtualization and VMware cover many aspects of IT. Depending on your role, certain chapters will be more useful to you.
This book is a solution book, not a product book. It uses vRealize Operations and Log Insight to apply the solution. You can probably use other products to implement the dashboards.
Because it is not a product book, it does not cover all modules of vRealize Operations Suite. vCenter Infrastructure Navigator and VMware Configuration Manager are not covered. If you need a product book, Scott Norris and Christopher Slater have published one. There are also many blogs that cover installation and configuration.
The book focuses on the management of the SDDC. It does not cover the architecture. So no vCloud Suite design best practices are present in this book. It also does not cover all aspects of operation. For example, it does not cover process innovation, organizational structure, financial management, and audit. Specific to management, the book only focuses on the following most fundamental areas:
This book does not cover other areas of management, such as configuration, compliance, and availability management.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: This includes the drive where the OS resides (the C:\ drive in Windows systems).
A block of code is set as follows:
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: A popular technology often branded under virtualization is hardware partitioning.
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from http://www.packtpub.com/sites/default/files/downloads/VMwarePerformanceAndCapacityManagementSecondEdition_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
Part 1 provides the technical foundation of SDDC Performance and Capacity Management. It aims to correct deep-rooted misunderstandings of many terms that are considered basic. Terms such as VM and SDDC will be redefined, and I hope you will gain a new perspective.
It consists of five chapters.
Chapters 1 and 2 redefine what we know as the VM and the SDDC. Once these core entities are redefined, how we manage them changes drastically. Chapter 3 explains what exactly needs to change. Chapters 4 and 5 dive deeper into the two most fundamental aspects of SDDC management: performance and capacity.
In this chapter, we will dive into why a seemingly simple technology, a virtualized x86 machine, has huge ramifications for the IT industry. In fact, it is turning a lot of things upside down and breaking down silos that have existed for decades in large IT organizations. We will cover the following topics:
Virtual Machines, or simply, VMs—who doesn't know what they are? Even a business user who has never seen one knows what it is. It is just a physical server, virtualized—nothing more.
Wise men say that small leaks sink the ship. I think that's a good way to explain why IT departments that manage physical servers well struggle when the same servers are virtualized.
We can also use the Pareto principle (80/20 rule): 80 percent of a VM is identical to a physical server. But it's the 20 percent of differences that hits you. We will highlight some of this 20 percent portion, focusing on areas that impact data center management.
The change caused by virtualization is much larger than the changes brought about by previous technologies. In the past two or more decades, we transitioned from mainframes to the client/server model and then to the web-based model. These are commonly agreed to be the main evolutions in IT architecture. However, they were only technology changes. They changed the architecture, yes, but they did not change operations in a fundamental way. Neither the client/server shift nor the web shift was described as a "journey"; there was no journey to the client/server model. With virtualization, however, we talk about the virtualization journey, because the changes are massive and involve a lot of people.
Gartner correctly predicted the impact of virtualization in 2007 (http://www.gartner.com/newsroom/id/505040). More than 8 years later, we are still in the midst of the journey. Proving how pervasive the change is, here is the summary of the article from Gartner:
Notice how Gartner talks about a change in culture. So, virtualization has a cultural impact too. In fact, if your virtualization journey is not fast enough, look at your organization's structure and culture. Have you broken the silos? Do you empower your people to take risks and do things that have never been done before? Are you willing to flatten the organizational chart?
The silos that have served you well are likely your number one barrier to a hybrid cloud.
So why exactly is virtualization causing such a fundamental shift? To understand this, we need to go back to basics and ask what virtualization really is. It is pretty common for chief information officers (CIOs) to have a misconception about what it is.
Take a look at the following comments. Have you seen them in your organization?
If only life were that simple; we would all be 100-percent virtualized and have no headaches! Virtualization has been around for years, and yet most organizations have not mastered it. The proof of mastery is completing the journey and reaching the highest level of the virtualization maturity model.
There are plenty of misconceptions about the topic of virtualization, especially among IT folks who are not familiar with virtualization. CIOs who have not felt the strategic impact of virtualization (be it a good or bad experience) tend to carry these misconceptions. Although virtualization looks similar to a physical system from the outside, it is completely re-architected under the hood.
So let's take a look at the first misconception: what exactly is virtualization?
Because it is an industry trend, virtualization is often generalized to include other technologies that are not actually virtualization. This is a typical strategy of IT vendors that have similar technology. A popular technology often branded under virtualization is hardware partitioning; once it is parked under the umbrella of virtualization, both are expected to be managed in the same way. Since the two are actually different, customers who try to manage both with a single piece of management software struggle to do well.
Partitioning and virtualization are two different architectures in computer engineering, resulting in major differences between their functionalities. They are shown in the following screenshot:
Virtualization versus partitioning
With partitioning, there is no hypervisor that virtualizes the underlying hardware. There is no software layer separating the VM and the physical motherboard. There is, in fact, no VM. This is why some technical manuals about partitioning technology do not even use the term "VM". They use the terms "domain", "partition", or "container" instead.
There are two variants of partitioning technology, hardware-level and OS-level partitioning, which are covered in the following bullet points:
We covered the difference from an engineering point of view. However, does it translate into different data center architectures and operations? We will focus on hardware partitioning as there are fundamental differences between hardware partitioning and software partitioning. The use cases for both are also different. Software partitioning is typically used in native cloud applications.
With that, let's do a comparison between hardware partitioning and virtualization. Let's take availability as a start.
With virtualization, all VMs become protected by vSphere High Availability (vSphere HA): 100 percent protection, achieved without any awareness inside the VM. Nothing needs to be done at the VM layer. No shared or quorum disk and no heartbeat network are required to protect a VM with basic HA.
With hardware partitioning, protection has to be configured manually, one by one for each Logical Partition (LPAR) or Logical Domain (LDOM). The underlying platform does not provide it.
With virtualization, you can even go beyond five nines, that is, 99.999 percent, and move to 100 percent with vSphere Fault Tolerance. This is not possible in the partitioning approach as there is no hypervisor that replays CPU instructions. Also, because it is virtualized and transparent to the VM, you can turn on and off the Fault Tolerance capability on demand. Fault Tolerance is fully defined in the software.
Another area of difference between partitioning and virtualization is Disaster Recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can perform a Storage Area Network (SAN) boot, but that means another Logical Unit Number (LUN) to manage, zone, replicate, and so on. This approach to DR does not scale to thousands of servers; to make DR scalable, it has to be simpler.
Compared to partitioning, virtualization takes a different approach. The entire VM fits inside a folder; it becomes like a document and we migrate the entire folder as if it were one object. This is what vSphere Replication in vSphere or Site Recovery Manager does. It performs a replication per VM; there is no need to configure SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated and has audit logs automatically generated. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for their entire LPAR, LDOM, or container.
In the previous paragraph, we're not implying LUN-based or hardware-based replication to be inferior solutions. We're merely driving the point that virtualization enables you to do things differently.
We're also not saying that hardware partitioning is an inferior technology. Every technology has its advantages and disadvantages and addresses different use cases. Before I joined VMware, I was a Sun Microsystems sales engineer for 5 years, so I'm aware of the benefit of UNIX partitioning. This book is merely trying to dispel the misunderstanding that hardware partitioning equals virtualization.
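Coming back to the point that the entire VM fits inside a folder: this is easy to verify programmatically. The following is a minimal sketch, not a definitive implementation, using the pyVmomi SDK; the vCenter address, credentials, and VM name are hypothetical placeholders. It lists the files that make up one VM, which is the same per-VM unit that vSphere Replication copies.

    # Minimal pyVmomi sketch: list the files that make up one VM.
    # Assumes "pip install pyvmomi"; host, credentials, and VM name are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
    si = SmartConnect(host='vcenter.lab.local', user='[email protected]',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == 'demo-vm':  # hypothetical VM name
            # layoutEx.file enumerates every file behind the VM:
            # the .vmx, the .vmdk disks, NVRAM, logs, snapshot files, and so on.
            for f in vm.layoutEx.file:
                print(f.type, f.name, f.size)
    Disconnect(si)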
We've covered the differences between hardware partitioning and virtualization.
Let's switch gears to software partitioning. In 2016, the adoption of Linux containers will continue its rapid rise. You can actually use both containers and virtualization, and they complement each other in some use cases. There are two main approaches to deploying containers:
As both technologies evolve, the gap gets wider. As a result, managing a software partition is different from managing a VM, and securing a container is different from securing a VM. Be careful when opting for a management solution that claims to manage both; you will probably end up with the lowest common denominator. This is one reason why VMware is working on vSphere Integrated Containers and the Photon platform. Now, that's a separate topic by itself!
A VM is not just a physical server that has been virtualized. Yes, there is a Physical-to-Virtual (P2V) process; however, once it is virtualized, it takes on a new shape. This shape has many new and changed properties, and some old properties are no longer applicable or available. My apologies if the following is not the best analogy:
We P2V the soul, not the body.
On the surface, a VM looks like a physical server. So, let's actually look at VM properties. The following screenshot shows a VM's settings in vSphere 5.5. It looks familiar as it has a CPU, memory, hard disk, network adapter, and so on. However, look at it closely. Do you see any properties that you don't see in a physical server?
VM properties in vSphere 5.5
Let's highlight some of the virtual server properties that do not exist in a physical server. I'll focus on the properties that have an impact on management, as management is the topic of this book.
At the top of the dialog box, there are four tabs:
The Virtual Hardware tab is the only tab whose properties resemble those of a physical server. The other three tabs have no physical-server counterparts. For example, SDRS Rules pertains to Storage DRS: the VM's storage can be moved automatically by vCenter, so its location in the data center is not static. This includes the drive where the OS resides (the C:\ drive in Windows systems). This directly impacts your server management tool. It has to be aware of Storage DRS and can no longer assume that a VM is always located in the same datastore or Logical Unit Number (LUN). Compare this with a physical server, whose OS typically resides on a local disk that is part of the physical server. You don't want your physical server's OS drive being moved around the data center, do you?
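To illustrate, here is a small sketch that asks vCenter where each VM currently lives. It reuses the pyVmomi session (si, content) from the earlier sketch and is only an illustration under those assumptions; a Storage DRS-aware tool queries this value instead of caching it.

    # Reusing the pyVmomi session (si, content) from the earlier sketch.
    from pyVmomi import vim

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip inaccessible VMs
            continue
        # vmPathName is the current home of the .vmx file, for example
        # "[datastore-07] demo-vm/demo-vm.vmx". With Storage DRS, this can
        # change at any time, so query it rather than assuming it is static.
        print(vm.name, vm.config.files.vmPathName,
              [ds.name for ds in vm.datastore])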
In the Virtual Hardware tab, notice the New device option at the bottom of the screen. Yes, you can add devices, some of them on the fly, while an OS such as Windows or Linux is running. All of a VM's devices are defined in software. This is a major difference from a physical server, where the physical hardware defines the devices and you cannot change them. With virtualization, you can have a VM with five sockets on an ESXi host with two sockets: Windows or Linux sees five CPUs even though the underlying ESXi host actually only has two physical CPUs.
Your server management tool needs to be aware of this and recognize that the new Configuration Management Database (CMDB) is vCenter. vCenter is certainly not a CMDB product. We're only saying that in a situation when there is a conflict between vCenter and a CMDB product, the one you trust is vCenter. In a Software-Defined Data Center (SDDC), the need for a CMDB is further reduced.
The following screenshot shows a bit more detail. Look at the CPU device. Again, what do you see that does not exist in a physical server?
VM CPU and network properties in vSphere 5.5
Let's highlight some of the options.
Look at the Reservation, Limit, and Shares options under CPU. None of them exist in a physical server, as a physical server is standalone by default. It does not share any resource on the motherboard (such as CPU or RAM) with another server. With these three levers, you can perform Quality of Service (QoS) on a virtual data center. So, QoS is actually built into the platform. This has an impact on management, as the platform is able to do some of the management by itself. There is no need to get another console to do what the platform provides you out of the box.
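As a rough sketch of how software-defined this QoS is, the following applies a CPU reservation, limit, and shares to a VM through the standard reconfigure API. It again reuses the assumed pyVmomi session; the VM name and the MHz and share values are made-up examples, not recommendations.

    # Reusing the pyVmomi session (si, content) from the first sketch.
    from pyVmomi import vim
    from pyVim.task import WaitForTask

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'demo-vm')  # hypothetical VM name

    alloc = vim.ResourceAllocationInfo()
    alloc.reservation = 1000  # guarantee 1,000 MHz to this VM
    alloc.limit = 2000        # cap it at 2,000 MHz; -1 would mean unlimited
    alloc.shares = vim.SharesInfo(level='custom', shares=4000)  # weight under contention

    spec = vim.vm.ConfigSpec(cpuAllocation=alloc)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))  # applied online, without VM downtime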
Other properties in the previous screenshot, such as Hardware virtualization, Performance counters, HT Sharing, and CPU/MMU Virtualization, also do not exist in a physical server. It is beyond the scope of this book to explain every feature, and there are many blogs and technical papers freely available on the Internet that explain them. Two of my favorites are http://blogs.vmware.com/performance/ and http://www.vmware.com/vmtn/resources/.
The next screenshot shows the VM Options tab. Again, which properties do you see that do not exist in a physical server?
VM Options in vSphere 5.5
I'd like to highlight a few of the properties present in the VM Options tab. VMware Tools is a key component; it provides drivers and improves manageability. There is no VMware Tools on a physical server. A physical server has drivers, but none of them come from VMware. A VM is different: its motherboard (a virtual motherboard, naturally) is defined and supplied by VMware, and hence the drivers are supplied by VMware. VMware Tools is the mechanism for supplying those drivers. It comes in different versions, so it is now something you need to be aware of and manage.
We've just covered a few VM properties from the VM settings dialog box. There are literally hundreds of properties in VMs that do not exist in physical systems. Even the same properties are implemented differently. For example, although vSphere supports N_Port ID Virtualization (NPIV), the Guest OS does not see the World Wide Name (WWN). This means that data center management tools have to be aware of the specific implementation of vSphere. And these properties change with every vSphere release. Notice the line right at the bottom of the screenshot. It says Compatibility: ESXi 5.5 and later (VM version 10). This is your VM motherboard. It has a dependency on the ESXi version and yes, this becomes another new thing to manage too.
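Since VMware Tools and the virtual hardware version are now things you manage, a simple inventory report is a natural first step. Here is a hedged sketch, reusing the same assumed pyVmomi session, that lists both for every VM; the properties shown are standard vSphere API fields.

    # Reusing the pyVmomi session (si, content) from the first sketch.
    from pyVmomi import vim

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip inaccessible VMs
            continue
        print(vm.name,
              vm.config.version,             # virtual hardware, e.g. 'vmx-10'
              vm.guest.toolsRunningStatus,   # e.g. 'guestToolsRunning'
              vm.guest.toolsVersionStatus2)  # e.g. 'guestToolsCurrent'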
Every vSphere release typically adds new properties too, making a VM more manageable than a physical machine and differentiating a VM further from a physical server.
Hopefully, I've driven home the point that a VM is different from a physical server. I'll now list the differences from a management point of view. The following table shows the differences that impact how you manage your infrastructure. Let's begin with the core properties:
BIOS
Physical server: Every brand and model has a unique BIOS. Even the same model (for example, HP DL 380 Generation 9) can have multiple BIOS versions. The BIOS needs updates and management, often with physical access to a data center. This requires downtime.
Virtual Machine: This is standardized in a VM. There is only one type, which is the VMware motherboard. This is independent from the ESXi motherboard. The VM BIOS needs far fewer updates and management. The inventory management system no longer needs the BIOS management module.

Virtual HW
Physical server: Not applicable.
Virtual Machine: This is a new layer below the BIOS. It needs an update after every vSphere release. A data center management system needs to be aware of this as it requires a deep knowledge of vSphere. For example, to upgrade the virtual hardware, the VM has to be in the powered-off state.

Drivers
Physical server: Many drivers are loaded and bundled with the OS. Often, you need to get the latest drivers from their respective hardware vendors. All these drivers need to be managed. This can be a complex operation, as they vary from model to model and brand to brand. The management tool has rich functionalities, such as being able to check compatibility, roll out drivers, roll them back if there is an issue, and so on.
Virtual Machine: Relatively fewer drivers are loaded with the Guest OS; some drivers are replaced by the ones provided by VMware Tools. Even with NPIV, the VM does not need the FC HBA driver. VMware Tools needs to be managed, with vCenter being the most common management tool.
How do all these differences impact the hardware upgrade process? Let's take a look:
Physical server: Downtime is required. The upgrade is done offline and is complex. OS reinstallation and updates are required, hence it is a complex project in physical systems. Sometimes, a hardware upgrade is not even possible without upgrading the application.
Virtual Machine: The upgrade is done online and is simple. Virtualization decouples the application from hardware dependencies. A VM can be upgraded from 5-year-old hardware to new hardware, moving from a local SCSI disk to 10 Gigabit Fibre Channel over Ethernet (FCoE), from a dual-core to an 18-core CPU. So yes, MS-DOS can run on 10 Gigabit Ethernet, accessing SSD storage via the PCIe lane. You just need to migrate to the new hardware with vMotion. As a result, the operation is drastically simplified.
In the preceding table, we compared the core properties of a physical server with a VM. Every server needs storage, so let's compare their storage properties:
Physical server: Servers connected to a SAN can see the SAN and FC fabric. They need HBA drivers, have FC PCI cards, and have multipathing software installed. They normally need an advanced file system or volume manager to provide Redundant Array of Inexpensive Disks (RAID) on local disks.
Virtual Machine: No VM is connected to the FC fabric or SAN. The VM only sees a local disk. Even with N_Port ID Virtualization (NPIV) and physical Raw Device Mapping (RDM), the VM does not send FC frames. Multipathing is provided by vSphere, transparent to the VM. There is no need for a RAID local disk: it is one virtual disk, not two. Availability is provided at the hardware layer.

Physical server: A backup agent and a backup LAN are required in the majority of cases.
Virtual Machine: These are not needed in the majority of cases, as backup is done via VADP, the VMware vStorage API for Data Protection that backs up and restores vSphere VMs. An agent is only required for application-level backup.
There's a big difference in storage. How about network and security? Let's see:
Physical server: NIC teaming is common. This typically requires two cables per server.
Virtual Machine: NIC teaming is provided by ESXi. The VM is not aware of it and only sees one vNIC.

Physical server: The Guest OS is VLAN-aware. It is configured inside the OS. Moving the VLAN requires reconfiguration.
Virtual Machine: The VLAN is generally provided by vSphere and not done inside the Guest OS. This means the VM can be moved from one VLAN to another with no downtime. With network virtualization, the VM moves from a VLAN to a VXLAN.

Physical server: The AV agent is installed on the Guest and can be seen by an attacker.
Virtual Machine: An AV agent runs on the ESXi host as a VM (one per ESXi). It cannot be seen by the attacker from inside the Guest OS.

Physical server: The AV consumes OS resources. AV signature updates cause high storage usage.
Virtual Machine: The AV consumes minimal Guest OS resources as it is offloaded to the ESXi agent VM. AV signature updates do not require high Input/Output Operations Per Second (IOPS) inside the Guest OS. The total IOPS is also lower at the ESXi host level as it is not done per VM.
Finally, let's take a look at the impact on management. As can be seen here, even the way we manage a server changes once it is converted into a VM:
Monitoring approach
Physical server: An agent is commonly deployed. It is typical for a server to have multiple agents. In-Guest counters are accurate, as the OS can see the physical hardware. A physical server has an average of 5 percent CPU utilization due to multicore chips. As a result, there is no need to monitor it closely.
Virtual Machine: An agent is typically not deployed. Certain areas, such as application and Guest OS monitoring, are still best served by an agent. The key in-Guest counters are not accurate, as the Guest OS does not see the physical hardware. A VM has an average of 50 percent CPU utilization, as it is rightsized. This is 10 times higher than a physical server, so there is a need to monitor it closely, especially when physical resources are oversubscribed. Capacity management becomes a discipline in itself. A sketch of how to query these counters programmatically follows this table.

Availability approach
Physical server: HA is provided by clusterware, such as Microsoft Windows Server Failover Clustering (WSFC) and Veritas Cluster Server (VCS). Clusterware tends to be complex and expensive. Cloning a physical server is a complex task and requires the boot drive to be on the SAN or LAN, which is not typical. A snapshot is rarely made, due to cost and complexity; only very large IT departments perform physical server snapshots.
Virtual Machine: HA is a built-in core component of vSphere. From what I see, most clustered physical servers end up as just a single VM, since vSphere HA is good enough. Cloning can be done easily; it can even be done live. The drawback is that the clone becomes a new area of management. Snapshots can be made easily; in fact, one is taken every time as part of the backup process. Snapshots also become a new area of management.

Company asset
Physical server: The physical server is a company asset and has book value in the accounting system. It needs proper asset management, as components vary among servers. An annual stock-take process is required.
Virtual Machine: A VM is not an asset, as it has no accounting value. It is like a document: technically, it is a folder with files in it. A stock-take process is no longer required, as a VM cannot exist outside vSphere.
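As promised in the monitoring row above, here is a sketch of what monitoring a VM closely can look like in code. It pulls a VM's recent CPU usage from the vCenter performance manager, reusing the assumed pyVmomi session; the VM name is hypothetical, and the 20-second interval is vCenter's real-time sampling rate.

    # Reusing the pyVmomi session (si, content) from the first sketch.
    from pyVmomi import vim

    perf = content.perfManager

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'demo-vm')  # hypothetical VM name

    # Find the counter ID for cpu.usage.average (reported per VM, in percent).
    usage_id = next(c.key for c in perf.perfCounter
                    if c.groupInfo.key == 'cpu'
                    and c.nameInfo.key == 'usage'
                    and c.rollupType == 'average')

    query = vim.PerformanceManager.QuerySpec(
        entity=vm,
        metricId=[vim.PerformanceManager.MetricId(counterId=usage_id, instance='')],
        intervalId=20,  # 20-second real-time samples
        maxSample=15)   # roughly the last five minutes

    for result in perf.QueryPerf(querySpec=[query]):
        for series in result.value:
            # Raw values are reported in hundredths of a percent.
            print('cpu.usage.average (%):', [v / 100.0 for v in series.value])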
I hope you enjoyed the comparison and found it useful. We covered, to a great extent, the impact caused by virtualization and the changes it introduces. We started by clarifying that virtualization is a different technology compared to partitioning. We then explained that once a physical server is converted into a Virtual Machine, it takes on a different form and has radically different properties. The changes range from the core property of the server itself to how we manage it.
The changes create a ripple effect in the bigger picture. The entire data center changes once we virtualize it, and this is the topic of our next chapter.
In this chapter, we will take the point introduced in Chapter 1, VM – It Is Not What You Think!, further. We will explain why the software-defined data center (SDDC) is much more than a virtualized data center.
We will cover the following topics:
In Chapter 1, VM – It Is Not What You Think!, we covered how a VM differs drastically from a physical server. Now, let's take a look at the big picture, which is at the data-center level. A data center consists of three major functions—compute, network, and storage. Security is not a function on its own, but a key property that each function has to deliver. We use the term "compute" to represent processing power, namely, CPU and memory. In today's data centers, compute is also used when referencing converged infrastructure, where the server and storage have physically converged into one box. The industry term for this is Hyper-Converged Infrastructure (HCI). You will see later in the book that this convergence impacts how you architect and operate an SDDC.
VMware has moved to virtualizing the network and storage functions as well, resulting in a data center that is fully virtualized and thus defined in the software. The software is the data center. This has resulted in the term "SDDC". This book will make extensive comparisons with the physical data center. For ease of reference, let's call the physical data center the hardware-defined data center (HDDC).
In SDDC, we no longer define the architecture in the physical layer. The physical layer is just there to provide resources. These resources are not aware of one another. The stickiness is reduced and they become a commodity. In many cases, the hardware can even be replaced without incurring downtime on the VMs running on top.
The next diagram shows one possibility of a data center defined in software. I have drawn the diagram to state a point, so don't take this as the best practice for SDDC architecture. In the diagram, there are many virtual data centers (I have drawn three due to space constraints). Each virtual data center has its own set of virtual infrastructure (server, storage, network, and security). They are independent of one another.
A virtual data center is no longer contained in a single building bound by a physical boundary. Although bandwidth and latency are still limiting factors in 2016, the main thing here is that you can architect your physical data centers as one or more logical data centers. You should be able to, with just a few clicks in VMware Site Recovery Manager (SRM
