Virtualization has become a "megatrend"--and for good reason. Implementing virtualization allows for more efficient utilization of network server capacity, simpler storage administration, reduced energy costs, and better use of corporate capital. In other words: virtualization helps you save money, energy, and space. Not bad, huh? If you're thinking about "going virtual" but have the feeling everyone else in the world understands exactly what that means while you're still virtually in the dark, take heart. Virtualization for Dummies gives you a thorough introduction to this hot topic and helps you evaluate whether making the switch to a virtual environment is right for you. This fun and friendly guide starts with a detailed overview of exactly what virtualization is and exactly how it works, and then takes you on a tour of the benefits of a virtualized environment, such as added space in overcrowded data centers, lower operations costs through more efficient infrastructure administration, and reduced energy costs through server consolidation. Next, you'll get step-by-step guidance on how to:
* Perform a server virtualization cost-benefit analysis
* Weigh server virtualization options
* Choose hardware for your server virtualization project
* Create a virtualized software environment
* Migrate to--and manage--your new virtualized environment
Whether you're an IT manager looking to sell the idea to your boss, or you just want to learn more about how to create, migrate to, and successfully manage a virtualized environment, Virtualization for Dummies is your go-to guide for virtually everything you need to know.
by Bernard Golden
Virtualization For Dummies®
Published byWiley Publishing, Inc.111 River St.Hoboken, NJ 07030-5774www.wiley.com
Copyright © 2008 by Wiley Publishing, Inc., Indianapolis, Indiana
Published by Wiley Publishing, Inc., Indianapolis, Indiana
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4355, or online at http://www.wiley.com/go/permissions
Trademarks: Wiley, the Wiley Publishing logo, For Dummies, the Dummies Man logo, A Reference for the Rest of Us!, The Dummies Way, Dummies Daily, The Fun and Easy Way, Dummies.com, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. Wiley Publishing, Inc., is not associated with any product or vendor mentioned in this book.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services, please contact our Customer Care Department within the U.S. at 800-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002.
For technical support, please visit www.wiley.com/techsupport
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Library of Congress Control Number: 2007940109
ISBN: 978-0-470-14831-0
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
When Bernard invited me to write an introduction to this book, I found myself reminded of a frequently repeated conversation with my father, who is a retired engineer. Typically, it goes like this: “Simon, what does virtualization do?” –– followed by a lengthy reply from me and then a long pause from my father –– “And why is that useful?” Now, I certainly don’t think that my father really has much use for server virtualization, but a lot more people do need it –– and need to understand it –– than currently use it.
Although virtualization is all the rage in the tech industry press, and savvy market watchers have observed the exciting IPO of VMware, and Citrix’s acquisition of my own company, XenSource, the market for virtualization software is largely unaddressed. Depending on whose research you read, only 7 percent or so of x86 servers are virtualized, and only a tiny fraction of desktop or mobile PCs are virtualized. But the virtualization market is white hot, and every day new announcements in storage, server, and network virtualization make the picture more complex and harder to understand.
Virtualization For Dummies is the perfect way to develop a complete understanding of both the technology and the benefits of virtualization. Arguably, virtualization is simply a consequence of Moore’s Law –– the guideline developed by Intel cofounder Gordon Moore that predicts a doubling in the number of transistors per unit area on a CPU every couple of years. With PCs and servers becoming so incredibly powerful, the typical software suites that most users would install on a single physical server a few years ago now consume only a few percent of the resources of a modern machine. Virtualization is simply a consequence of this obvious waste of resources –– allowing a machine to run multiple virtualized servers or client operating systems simultaneously. But if that were all that were needed, there wouldn’t be such a fuss about virtualization. Instead, virtualization is having a profound impact on data center architectures and growth, on software lifecycle management, on the security and manageability of software, and on the agility of IT departments to meet new challenges. And it is these opportunities and challenges that urgently need to be articulated to technologists and business leaders alike in an accessible and understandable way.
Having spent many enjoyable hours with Bernard Golden, a recognized open source guru, President and CEO of Navica, and self-taught virtualization expert, I cannot think of a better-qualified author for a book whose objective is to cut through the hype and clearly and succinctly deal with virtualization and its effects on IT and users alike. I always look forward to reading Bernard’s frequently published commentaries on Xen, VMware, and Linux, which combine his hands-on experience with those products and a rare depth of insight into industry dynamics. I know firsthand that Bernard is a master of the subject of virtualization because he is one of the most persistent and demanding beta testers of XenEnterprise, XenSource’s server virtualization product, where his feedback has provided us with terrific guidance on how to improve the product overall. This, together with Bernard’s incisive, clear, and articulate style, makes this book a pleasure to read and a terrific contribution to the virtualization industry –– a concise categorization of virtualization that will further the understanding of the technology and its benefits, driving uptake of virtualization generally. It is with great pleasure that I strongly recommend that you read this book.
Simon Crosby
CTO, XenSource, Inc.
Bernard Golden has been called “a renowned open source expert” (IT Business Edge) and “an open source guru” (SearchCRM.com) and is regularly featured in magazines like Computerworld, InformationWeek, and Inc. His blog “The Open Source” is one of the most popular features of CIO Magazine’s Web site. Bernard is a frequent speaker at industry conferences like LinuxWorld, the Open Source Business Conference, and the Red Hat Summit. He is the author of Succeeding with Open Source (Addison-Wesley, 2005; published in four languages), which is used in over a dozen university open source programs throughout the world. Bernard is the CEO of Navica, a Silicon Valley IT management consulting firm.
To Sebastian and Oliver, the bright stars Kocab and Pherkad, Guardians of the Pole of the Golden family constellation. May your lives be blessed in the ways you’ve blessed mine.
So many people have helped in the writing of this book that it could justifiably be called a collaboration of the willing. Their enthusiasm in sharing information and perspective has been invaluable. I’d like to thank everyone who offered help and encouragement and to especially thank the following:
Kyle Looper and Paul Levesque of Wiley Publishing, Inc., who gently yet irresistibly pushed me toward finishing the book. Kyle generously contracted for the book, and Paul affably helped shape it in the direct and comprehensible For Dummies style.
David Marshall, who performed the key duty of technical reviewer for the book, providing much valuable feedback. David is a real virtualization guru who writes the weekly virtualization newsletter for InfoWorld and also works at the virtualization startup Inovawave.
From HP: Andy Scholl helped me comprehend HP’s myriad of virtualization technologies and products.
From IBM: Chris Almond, Greg Kelleher, Jerone Young, and Bob Zuber helped me comprehend this very large organization’s various virtualization initiatives, and I appreciate their assistance.
From Novell: Jonathan Ervine, Kerry Kim, and Justin Steinman provided insight about Novell’s virtualization objectives and technology.
From Platespin: Richard Azevedo and Bojan Dusevic were very generous and helpful with their time, much appreciated in helping me sort out the complex topic of P2V migration.
From Red Hat: Joel Berman, Nick Carr, Jan Mark Holzer, Rob Kenna, and Brian Stevens all very generously shared their time and expertise, especially aiding with the Fedora hands-on chapter.
From Sun: Joanne Kisling, Chris Ratcliffe, Paul Steeves, Joost Pronk van Hoogeveen, and Bob Wientzen described Sun’s virtualization efforts and clarified Sun’s future plans.
From VMware: Joe Andrews, Bogomil Balkansky, and Melinda Wilken were extremely helpful in understanding the different components and products that incorporate the VMware technology.
From XenSource: John Bara, Christof Berlin, Peter Blum, Simon Crosby, and Roger Klorese helped me describe the Xen architecture and technology.
We’re proud of this book; please send us your comments through our online registration form located at www.dummies.com/register/.
Some of the people who helped bring this book to market include the following:
Acquisitions and Editorial
Senior Project Editor: Paul Levesque
Acquisitions Editor: Kyle Looper
Copy Editor: Virginia Sanders
Technical Editor: David Marshall
Editorial Manager: Leah Cameron
Editorial Assistant: Amanda Foxworth
Sr. Editorial Assistant: Cherie Case
Cartoons: Rich Tennant (www.the5thwave.com)
Composition Services
Project Coordinator: Kristie Rees
Layout and Graphics: Reuben W. Davis, Alissa Ellet, Melissa K. Jester, Barbara Moore, Christine Williams
Proofreaders: Laura L. Bowman, John Greenough
Indexer: Ty Koontz
Anniversary Logo Design: Richard Pacifico
Publishing and Editorial for Technology Dummies
Richard Swadley, Vice President and Executive Group Publisher
Andy Cummings, Vice President and Publisher
Mary Bednarek, Executive Acquisitions Director
Mary C. Corder, Editorial Director
Publishing for Consumer Dummies
Diane Graves Steele, Vice President and Publisher
Joyce Pepple, Acquisitions Director
Composition Services
Gerry Fahey, Vice President of Production Services
Debbie Stailey, Director of Composition Services
Title
Introduction
Why Buy This Book?
Foolish Assumptions
How This Book Is Organized
Icons Used in This Book
Where to Go from Here
Part I : Getting Started with a Virtualization Project
Chapter 1: Wrapping Your Head around Virtualization
Virtualization: A Definition
Why Virtualization Is Hot, Hot, Hot — The Four Drivers of Virtualization
Sorting Out the Types of Virtualization
Creating the Virtualized Enterprise
Chapter 2: Making a Business Case for Virtualization
Virtualization Lowers Hardware Costs
Virtualization Increases IT Operational Flexibility
Virtualization Reduces IT Operations Costs
Virtualization Lowers Energy Costs
Software Licensing Costs: A Challenge for Virtualization
Chapter 3: Understanding Virtualization: Technologies and Applications
Virtualization Technologies
Virtualization Applications
Chapter 4: Peeking at the Future of Virtualization
Virtualization Gets Integrated into Operating Systems
Virtualized Software: Delivered to Your Door Preinstalled
Virtualization Diffusing into the Internet
The Changing Skill Set of IT Personnel
Software Pricing: How Will It Respond to Virtualization?
Part II : Server Virtualization
Chapter 5: Deciding Whether Server Virtualization Is Right for You
How to Decide Whether You Should Use Server Virtualization
When Not to Use Virtualization
Chapter 6: Performing a Server Virtualization Cost-Benefit Analysis
Getting Your Cost-Benefit Ducks in a Row
The Cost-Benefit Bottom Line
Chapter 7: Managing a Virtualization Project
Understanding the Virtualization Life Cycle
Creating Your Virtualization Plan
Implementing Your Virtualization Solution
Chapter 8: Choosing Hardware for Your Server Virtualization Project
Taking Hardware Seriously
Choosing Servers
Making the Hard Hardware Choices
But Wait, There’s More: Future Virtualization Hardware Development
Part III : Server Virtualization Software Options
Chapter 9: Migrating to Your New Virtualized Environment
Moving from Physical to Virtual: An Overview
Getting Ready to Move: Preparing the Virtualized Environment
Migrating Your Physical Servers
Moving to Production
Chapter 10: Managing Your Virtualized Environment
Managing Virtualization: The Next Challenge
Managing Free Virtualization
Virtualization Management: The Two Philosophies
Making Sense of Virtualization Management
Deciding on Your Virtualization Management Approach
Chapter 11: Creating a Virtualized Storage Environment
Storage Overview
Choosing Storage for Virtualization
Storage and the Different Types of Virtualization
Storage and the Virtualization Journey
Part IV : Implementing Virtualization
Chapter 12: Implementing VMware Server
Understanding VMware Server Architecture: Pros and Cons
Getting Your (Free) Copy of VMware Server
Creating a Guest Virtual Machine
Installing an Operating System
Can I Skip the Boring OS Installation Process?
Can I Skip the Boring Application Installation Process?
Chapter 13: Implementing Fedora Virtualization
Obtaining Fedora 7
Installing Fedora 7
Creating a Guest Virtual Machine
Installing a Guest Operating System
Chapter 14: Implementing XenExpress
What Is XenSource, Anyway?
Obtaining XenSource XenExpress
Installing XenExpress
Installing XenConsole
Working with XenConsole
Creating a Guest Virtual Machine
Installing Paravirtualized Drivers
Accessing a Windows Guest VM with an RDP Client
Part V : The Part of Tens
Chapter 15: Ten Steps to Your First Virtualization Project
Recite After Me: Virtualization Is a Journey, Not a Product
Evaluate Your Use Cases
Review Your Operations Organizational Structure
Define Your Virtualization Architecture
Select Your Virtualization Product(s)
Select Your Virtualization Hardware
Perform a Pilot Implementation
Implement Your Production Environment
Migrate Your Physical Servers
Manage Your Virtualized Infrastructure
Chapter 16: Ten Virtualization Pitfalls to Avoid
Don’t Wait for All the Kinks to Be Worked Out
Don’t Skimp on Training
Don’t Apply Virtualization in Areas That Are Not Appropriate
Don’t Imagine That Virtualization Is Static
Don’t Skip the “Boring” Stuff
Don’t Overlook a Business Case
Don’t Overlook the Importance of Organization
Don’t Forget to Research Your Software Vendor Support Policies
Don’t Overlook the Importance of Hardware
Don’t Forget to Have a Project Party
Chapter 17: Ten Great Resources on Virtualization
Get Free Virtualization Software
Get Great Content about Virtualization
Get the Latest News about Virtualization
Read Blogs about Virtualization
Keep Up with Hardware Developments Relating to Virtualization
Find Out More about Virtualization
Attend Virtualization Events
Take Advantage of Vendor Information
Keep Up with Storage Virtualization
Get the Latest and Last Word on Virtualization
Further Reading
If you work in tech, there’s no way you haven’t heard the term virtualization. Even if you don’t work in tech, you might have been exposed to virtualization. In August 2007, virtualization’s leading company, VMware, went public with the year’s most highly anticipated IPO. Even people who confuse virtualization with visualization sit up and pay attention when a blockbuster IPO comes to market. To show how hot the sector is, VMware was bought by the storage company EMC for $625 million in 2004, but it has, as of this writing, a market capitalization of $25.6 billion.
The excitement and big dollars illustrate a fundamental reality about virtualization: It’s transforming the way computing works. Virtualization is going to fundamentally change the way you implement and manage data centers, the way you obtain and install software, and the way you think about the speed with which you can respond to changing business conditions. The changes that virtualization will cause in your work environment will be so profound that, in ten years’ time, you’ll look back on the traditional ways of managing hardware and software the way your grandparents looked back on operator-assisted telephone dialing after the introduction of direct dialing.
I wrote this book because I’m convinced that the world is on the cusp of an enormous change in the use of information technology, also known as IT. In the past, IT was expensive, so it was limited to must-have applications such as accounting and order tracking. In the past, IT was complex, so it had to be managed by a group of wizards with their own special language and incantations. That’s all changing.
In the future, IT will be cheap, so applications will be ubiquitous, and low-priority applications will finally get their day in the sun. In the future, implementing IT will be simple, so groups outside of IT will shun the wizards’ robes and arcane language and implement their own applications, which will, of course, make central IT’s role even more important because it will have to create a robust yet malleable infrastructure.
Instead of IT being this special thing that supports only certain aspects of a business, it will become pervasive, suffusing throughout every business operation and every business interaction. It’s an incredibly exciting time for IT; I compare it to the rise of mass production made possible by Henry Ford. Because of Henry Ford, automobiles went from playthings for the wealthy to everyday belongings of the masses, and society was transformed by mobility and speed. Virtualization is the mass production of IT.
Just as the automobile industry underwent rapid transformation after Ford invented mass production in 1913, the virtualization marketplace is transforming the IT industry today. One of the biggest challenges for this book is to present a coherent and unified view of the topic even though virtualization is evolving at an incredible pace. At times, I felt that writing this book was like trying to nail Jell-O to the wall. During just one week of the writing of this book, the IPO of VMware went from an event no one had even considered to the technology financial event of the year; in the same week, XenSource, the commercial sponsor of the open source Xen virtualization project, was purchased by Citrix for $500 million. Furthermore, myriads of virtualization technology and product announcements occurred, making me, at times, wish I could push a Pause button on the market so that I could have a hope of completing an up-to-date book. Alas, virtualization’s fevered evolution shows no sign of diminishing — good for the virtualization user, challenging for the virtualization writer.
Even though virtualization is changing the face of technology, it is, unfortunately, still riddled with the complexities and — especially — the arcane language of tech. Two seconds into a conversation about virtualization, you’ll start hearing terms like hypervisor and bare metal, which sound, respectively, like something from Star Wars and an auto shop class.
It’s unfortunate that virtualization can be difficult to approach because of this specialized terminology. It’s especially unfortunate because understanding and applying virtualization will be, in the near future, a fundamental skill for everyone in IT — and for many people working in other disciplines like marketing and finance. Consequently, having a strong grounding in virtualization is critical for people wanting to participate in the IT world of the future.
This book is designed to provide a thorough introduction to the subject. It assumes that you have no knowledge of virtualization and will leave you (I hope) with a good grasp of the topic. My objective is for you to be completely comfortable with virtualization and its many elements and to be able to participate and contribute to your organization’s virtualization initiatives. The book also serves as a jumping-off point for deeper research and work in virtualization.
This book doesn’t assume you know much about virtualization beyond having heard the term. You don’t have to know any of the technical details of the topic, and you certainly don’t need to have done hands-on work with virtualization. (The book provides the opportunity to do hands-on work with virtualization, with three chapters devoted to installing and implementing different virtualization products.)
I define every virtualization term you encounter. I also make it a point to thoroughly explain complex topics so that you can understand the connections between different virtualization elements.
The book does assume that you have a basic understanding of computers, operating systems, and applications and how they work together to enable computers to do useful work. Because virtualization shuffles the placement and interaction of existing system software and hardware layers, it’s important to have a grasp of how things are traditionally done. However, if you’ve worked with computers, used an operating system, and installed applications, you should have the knowledge base to make use of the book’s content.
As is the case with other For Dummies books, this book doesn’t assume that you’ll begin on page one and read straight through to the end. Each chapter is written to stand alone, with enough contextual information provided so that you can understand the chapter’s content. Cross-references are provided to other chapters that go into more detail about topics lightly touched on in a given chapter.
You’ll soon notice, though, that individual chapters are grouped together in a somewhat-less-than-random order. The organizing principle here is the part, and this book has five of them.
Getting a good grounding in a subject is critical to understanding it and, more important, to recognizing how you can best take advantage of it. Part I provides a whirlwind tour of the world of virtualization — from where it is today to beyond where it will be tomorrow.
Chapter 1 is where you get an overview of virtualization, including an introduction to why it’s such a hot topic. Chapter 1 also discusses the basic philosophy of virtualization — the abstraction of computer functionality from physical resources. Chapter 2 describes the business reasons that are driving virtualization’s explosive growth, and it discusses how you can make a business case for your virtualization project. If you want a deeper understanding of the different technologies that make up virtualization as well as the different ways virtualization is applied in everyday use, Chapter 3 is for you. Finally, if you want to get a sense of where virtualization is heading, Chapter 4 provides a glimpse of the exciting initiatives that are being made possible by virtualization.
Server virtualization is where the hottest action is in today’s virtualization world. The most obvious use cases and the most immediate payoffs are available with server virtualization, and this part covers it all.
Chapter 5 gives you a litmus test to determine whether server virtualization makes sense for you. The chapter lets you do a bit of self-testing to see whether your organization is ready to implement virtualization. Just as important, the chapter gives you the tools you’ll need to find out whether it doesn’t make sense for you to implement virtualization. Chapter 6 provides in-depth information on how to make a financial assessment of your virtualization project, including how to create a spreadsheet to calculate all the costs and benefits of a potential virtualization project. Chapter 7 discusses the all-important topic of how to manage a virtualization project; there’s far more involved here than just installing a virtualization hypervisor. Finally, Chapter 8 discusses a very important topic — the hardware you’ll use to run your virtualization software. There are many exciting developments in hardware with significant influence on the operational and financial benefits of virtualization.
Sometimes I see a movie with a happy ending and wonder “Yeah, but how did the rest of their lives turn out?” Virtualization can be something like that. If you listen to vendors, you just install their software and — presto! — instant virtualized infrastructure. Just like real life isn’t like the movies, real infrastructure isn’t like that, either, and Part III helps you have a true happy ending.
Chapter 9 deals with the critical issue of how to migrate an existing physical infrastructure to a virtualized one. (Hint: It’s more complex than the vendors claim.) Chapter 10 addresses managing a virtualized infrastructure; there are a plethora of options, and this chapter provides help in deciding which option is a good choice for you. Chapter 11 addresses a topic that’s often an afterthought in virtualization: storage. For many organizations, virtualization provides the impetus to move to shared (also known as virtualized) storage. It’s important to know what your storage options are and how to select one.
If you’re like me, theoretical understanding goes just so far. Then I want to roll up my sleeves and get my hands dirty. If you’re like that as well, rejoice! Part IV can feed your hands-on hunger. In this part, I present three different examples of how to install and use virtualization. Best of all, each of the products used for the examples is available at no cost, making it possible for you to work along with them with no financial commitment.
Chapter 12 illustrates how to implement VMware Server as well as how to install a guest virtual machine. Chapter 13 works through Xen virtualization via the open source Linux distribution Fedora. Chapter 14 also illustrates a Xen-based virtualization, but the chapter uses the free XenExpress product from XenSource to share a different way of applying Xen virtualization.
Every For Dummies book concludes with a few chapters that provide a final burst of valuable information delivered in a sleek, stripped-down format — the time-honored ten-point list.
In Chapter 15, you get a list of the ten must-do steps for your first virtualization project. Chapter 16 shares ten no-no’s to avoid in a virtualization project. And Chapter 17 gives you ten great virtualization resources for you to use after you finish this book.
This icon flags useful tips and shortcuts.
This icon marks something that might be good to store away for future reference.
Pay attention. The bother you save might be your own.
This icon highlights tidbits for the more technically inclined that I hope augment their understanding — but I won’t be offended if less-technically inclined readers hurry through with eyes averted.
Pick a page and start reading! You can use the Table of Contents as a guide, or my parts description in this introduction, or you can leaf through the index for a particular topic. If you’re an accounting type, you might jump right into Chapter 6 — the chapter with all the lovely spreadsheets. If you’re a hardcore techie type, you might want to check out Chapter 8 — yes, yes, the hardware chapter. Wherever you start, you’ll soon find yourself immersed in one of the more exciting stories to come down the tech pipe in a long time.
In this part . . .
Virtualization is a hot topic. But what if you don’t know enough to even fake it around the water cooler? Not to worry. This part gives even the virtualization-challenged a thorough introduction to the subject of virtualization.
It begins with an overview of virtualization –– what it is and what all the different types of virtualization are. You didn’t know there are different types? Well, this part is perfect for you.
For good measure, this part also includes an in-depth look at the technologies and applications of virtualization so that you can form an opinion of how it might be usefully applied to your own environment.
I conclude this part with a look at the future of virtualization. The area is evolving rapidly, and looking forward to future developments is exciting and, perhaps, sobering. Certainly I expect that virtualization will transform the way hardware and software are sold, so a peek into the future is well worth your time.
Finding out what virtualization is
Figuring out what has everyone so excited
Seeing how virtualization is applied
Working through the challenges of virtualization
It seems like everywhere you go these days, someone is talking about virtualization. Technical magazines trumpet the technology on their covers. Virtualization sessions are featured prominently at technology conferences. And, predictably enough, technology vendors are describing how their product is the latest word in virtualization.
If you have the feeling that everyone else in the world understands virtualization perfectly while you’re still trying to understand just what it is and how you might take advantage of it, take heart. Virtualization is a new technology. (Actually, it’s a pretty well-established technology, but a confluence of conditions happening just now has brought it into new prominence — more on that later in this chapter.) Virtualization is a technology being widely applied today with excellent operational and financial results, but it’s by no means universally used or understood. That’s the purpose of this book: to provide you with an introduction to the subject so that you can understand its promise and perils and create an action plan to decide whether virtualization is right for you, as well as move forward with implementing it should you decide it is right for you.
Sadly, not even this book can protect you from the overblown claims of vendors; there is no vaccine strong enough for that disease. This book helps you sort out the hope from the hype and gives you tools to feel confident in making your virtualization decisions.
Virtualization refers to a concept in which access to a single underlying piece of hardware, like a server, is coordinated so that multiple guest operating systems can share that single piece of hardware, with no guest operating system being aware that it is actually sharing anything at all. (A guest operating system is an operating system that’s hosted by the underlying virtualization software layer, which is often, you guessed it, called the host system.) A guest operating system appears to the applications running on it as a complete operating system (OS), and the guest OS itself is completely unaware that it’s running on top of a layer of virtualization software rather than directly on the physical hardware.
Actually, you’ve had experience with something like this when you used a computer. When you interact with a particular application, the operating system “virtualizes” access to the underlying hardware so that only the application you’re using has access to it — only your program is able to manipulate the files it accesses, write to the screen, and so on. Although this description oversimplifies the reality of how operating systems work, it captures a central reality: The operating system takes care of controlling how applications access the hardware so that each application can do its work without worrying about the state of the hardware. The operating system encapsulates the hardware, allowing multiple applications to use it.
In server virtualization — the most common type of virtualization — you can think of virtualization as inserting another layer of encapsulation so that multiple operating systems can operate on a single piece of hardware. In this scenario, each operating system believes it has sole control of the underlying hardware, but in reality, the virtualization software controls access to it in such a way that a number of operating systems can work without colliding with one another. The genius of virtualization is that it provides new capability without imposing the need for significant product or process change.
Actually, that last statement is a bit overbroad. A type of virtualization called paravirtualization does require some modification to the software that uses it. However, the resulting excellent performance can make up for the fact that it’s a little less convenient to use. Get used to this exception business; the subject of virtualization is riddled with general truths that have specific exceptions. Although you have to take account of those exceptions in your particular project plans, don’t let these exceptions deter you from the overarching circumstances. The big picture is what you need to focus on to understand how virtualization can help you.
Virtualization is actually a simple concept made complex by all the exceptions that arise in particular circumstances. It can be frustrating to find yourself stymied by what seems to be a niggling detail, but unfortunately, that’s the reality of virtualization. If you stop to think about it, the complexity makes sense — you’re moving multiple operating systems and applications onto a new piece of software called a hypervisor, which in turn talks to underlying hardware. Of course it’s complex! But don’t worry, if you hang in there, it usually comes out right in the end. Chapters 12 through 14 offer real examples of how to install several flavors of virtualization software and successfully put guest OSes onto the software. Work through the examples, and you’ll be an expert in no time!
So, if you take nothing more away from this section than the fact that virtualization enables you to share a hardware resource among a number of other software systems, that’s enough for you to understand the next topic — what’s making virtualization so important now.
Despite all the recent buzz about it, virtualization is by no means a new technology. Mainframe computers have offered the ability to host multiple operating systems for over 30 years. In fact, if you begin to discuss it, you might suffer the misfortune of having someone begin to regale you with tales of how he did virtualization in the old days.
The truth is that your old gaffer is right. Yeah, virtualization as a technology is nothing new, and yeah, it’s been around for many years, but it was confined to “big iron” (that is, mainframes). Four trends have come together in just the past couple of years that have moved virtualization from the dusty mainframe backroom to a front-and-center position in today’s computing environment.
When you take a look at these trends, you can immediately recognize why virtualization is much more than the latest technology fad from an industry that has brought you more fads than the fashion industry.
I recently had an opportunity to visit the Intel Museum located at the company’s headquarters in Santa Clara, California. The museum contains a treasure trove of computing — from a giant design-it-yourself chip game to a set of sterile clothes (called a bunny suit) you can put on while viewing a live camera feed from a chip-manufacturing plant. It’s well worth a visit. But tucked near the back of the museum, in a rather undistinguished case, is ensconced one of the critical documents of computing. This document, despite its humble presentation, contains an idea that has been key to the development of computing in our time.
I refer to the article by Intel cofounder Gordon Moore in the April 1965 issue of Electronics Magazine, in which he first offered his observation about the compounding growth of processor power, an observation that has come to be known as “Moore’s Law.”
In describing the growth of computing power, Moore stated: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” Clearly, Moore wasn’t in charge of marketing at Intel, but if you translate this into something the average human can understand, he means that roughly every year (actually, most people now estimate the timeframe at around 18 months), twice as many individual components can be squeezed onto the same-sized piece of silicon. Put another way, every new generation of chip delivers twice as much processing power as the previous generation — at the same price.
This rapid doubling of processing power has had a tremendous impact on daily life, to an extent that’s difficult for most people to comprehend. Just over ten years ago, I ran the engineering group of a large enterprise software vendor. Everybody in the group knew that a major part of the hassles involved in getting a new release out the door was trying to get acceptable performance out of the product on the then-latest generation of hardware. The hardware just wasn’t that capable. Today’s hardware, based upon the inexorable march of Moore’s Law, is roughly a thousand times as powerful — in ten short years!
What you need to keep in mind to understand Moore’s Law is that the numbers that are continuing to double are themselves getting larger. So, if you take year one as a base, with, say, processing power of 100 million instructions per second (MIPS) available, then in year two, there will be 200; in year three, 400; and so on. Impressive, eh? When you get out to year seven or eight, the increase is from something like 6,400 to 12,800 in one generation. It has grown by 6,400. And the next year, it will grow by 12,800. It’s mind boggling, really.
Moore’s Law demonstrates increasing returns — the amount of improvement itself grows over time because there’s an exponential increase in capacity for every generation of processor improvement. It’s that exponential increase that’s responsible for the mind-boggling improvements in computing — and the increasing need for virtualization.
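If you like to see the arithmetic spelled out, here is a minimal sketch in Python of the doubling just described. The 100 MIPS starting point and the strict yearly doubling are the illustrative assumptions from the example above, not real chip data:

```python
# Illustrative only: start from an assumed 100 MIPS and double each generation.
base_mips = 100   # hypothetical year-one capacity from the example above
years = 8

capacity = base_mips
for year in range(1, years + 1):
    # The absolute gain in the next generation equals everything you have today,
    # which is why the later jumps (6,400 to 12,800) dwarf the early ones.
    print(f"Year {year}: {capacity:,} MIPS (next generation adds {capacity:,} more)")
    capacity *= 2
```

Run it and you see the same pattern as the example: modest gains in the early years, then jumps of thousands of MIPS per generation.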
And that brings me to the meaning of this trend. Unlike ten years ago, when folks had to sweat to get software to run on the puny hardware that was available at that time, today the hardware is so powerful that software typically uses only a small portion of the available processing power. And this causes a different type of problem.
Today, many data centers have machines running at only 10 or 15 percent of total processing capacity. In other words, 85 or 90 percent of the machine’s power is unused. In a way, Moore’s Law is no longer relevant to most companies because they aren’t able to take advantage of the increased power available to them. After all, if you’re hauling a 50-pound bag of cement, having a truck come out that can carry 20,000 pounds instead of this year’s model that can only carry 10,000 pounds is pretty irrelevant for your purposes. However, a lightly loaded machine still takes up room and draws electricity, so the cost of today’s underutilized machine is nearly the same as if it was running at full capacity.
It doesn’t take a rocket scientist to recognize that this situation is a waste of computing resources. And, guess what? With the steady march of Moore’s Law, next year’s machine will have twice as much spare capacity as this year’s — and so on, for the foreseeable future. Obviously, there ought to be a better way to match computing capacity with load. And that’s what virtualization does — by enabling a single piece of hardware to seamlessly support multiple systems. By applying virtualization, organizations can raise their hardware use rates from 10 or 15 percent to 70 or 80 percent, thereby making much more efficient use of corporate capital.
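To put rough numbers on that utilization jump, here is a back-of-the-envelope sketch in Python. The server count and utilization figures are assumptions chosen only to echo the percentages in the text, not measurements from a real data center:

```python
import math

# Assumed figures for illustration, echoing the utilization rates discussed above.
physical_servers = 100        # hypothetical count of lightly loaded servers
current_utilization = 0.12    # roughly 10 to 15 percent busy today
target_utilization = 0.75     # roughly 70 to 80 percent busy after consolidation

# The total useful work stays constant; only the number of hosts changes.
useful_work = physical_servers * current_utilization
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"{physical_servers} servers at {current_utilization:.0%} utilization "
      f"could consolidate onto roughly {hosts_needed} hosts at {target_utilization:.0%}.")
```

A rough six-to-one consolidation ratio like this one is what turns underused hardware into reclaimed capital, floor space, and power.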
Moore’s Law not only enables virtualization, but effectively makes it mandatory. Otherwise, increasing amounts of computing power will go to waste each year.
So, the first trend that’s causing virtualization to be a mainstream concern is the unending growth of computing power brought to you by the friendly folks of the chip industry. By the way, the same trend that’s described in chips by Moore’s Law can be observed in the data storage and networking arenas as well. They just don’t have a fancy name for their exponential growth — so maybe Gordon Moore was a marketing genius after all. However, the rapid improvement in these other technology areas means that virtualization is being explored for them as well — and because I like being complete, I work in coverage of these areas later on in this book.
The business world has undergone an enormous transformation over the past 20 years. In 1985, the vast majority of business processes were paper based. Computerized systems were confined to so-called backroom automation: payroll, accounting, and the like.
That has all changed, thanks to the steady march of Moore’s Law. Business process after business process has been captured in software and automated, moving from paper to computers.
The rise of the Internet has exponentially increased this transformation. Companies want to communicate with customers and partners in real time, using the worldwide connectivity of the Internet. Naturally, this has accelerated the move to computerized business processes.
To offer a dramatic example, Boeing’s latest airliner, the 787 Dreamliner, is being designed and built in a radically new way. Boeing and each of its suppliers use Computer-Aided Design (CAD) software to design their respective parts of the plane. All communication about the project uses these CAD designs as the basis for discussion. Use of CAD software enables testing to be done in computer models rather than the traditional method of building physical prototypes, thereby speeding completion of the plane by a year or more.
As you might imagine, the Dreamliner project generates enormous amounts of data. Just one piece of the project — a data warehouse containing project plans — runs to 19 terabytes of data.
Boeing’s experience is common across all companies and all industries. How big is the explosion of data? In 2003, the world’s computer users created and stored 5 exabytes (each exabyte is 1 million terabytes) of new data. A recent study by the Enterprise Strategy Group predicted that governments and corporations will store over 25 exabytes of data by the year 2010. Certainly, the trend of data growth within organizations is accelerating. The growth of data can be easily seen in one key statistic: In 2006, the storage industry shipped as much storage in one month as it did in the entire year of 2000. The research firm IDC estimates that total storage shipped will increase 50 percent per year for the next five years.
The net effect of all this is that huge numbers of servers have been put into use over the past decade, which is causing a real-estate problem for companies: They’re running out of space in their data centers. And, by the way, that explosion of data calls for new methods of data storage, which I also address in this book. These methods go by the common moniker of storage virtualization, which, as you might predict, encapsulates storage and abstracts it from underlying network storage devices.
Virtualization, by offering the ability to host multiple guest systems on a single physical server, helps organizations to reclaim data center territory, thereby avoiding the expense of building out more data center space. This is an enormous benefit of virtualization because data centers cost in the tens of millions of dollars to construct. You can find out more about this in Chapter 4, where I discuss this trend, which is usually referred to as consolidation and is one of the major drivers for organizations to turn to virtualization.
Take a look at your data center to understand any capacity constraints you’re operating with. If you’re near capacity, you need virtualization — stat!
In most companies’ strategic thinking, budgeting power costs used to rank somewhere below deciding what brand of soda to keep in the vending machines. Companies could assume that electrical power was cheap and endlessly available.
Several events over the past few years have changed that mindset dramatically:
The increasing march of computerization discussed in Trend #2, earlier in this chapter, means that every company is using more power as their computing processes expand.
The assumption regarding availability of reliable power was challenged during the California power scares of a few years ago. Although later evidence caused some reevaluation about whether there was a true power shortage (can you say “Enron”?), the events caused companies to consider whether they should look for ways to be less power dependent.
As a result of the power scares and Pacific Gas & Electric’s resulting bankruptcy, power costs in California, home to Silicon Valley, have skyrocketed, making power a more significant part of every company’s budget. In fact, for many companies, electricity now ranks as one of the top five costs in their operating budgets.
The cost of running computers, coupled with the fact that many of the machines filling up data centers are running at low utilization rates, means that virtualization’s ability to reduce the total number of physical servers can significantly reduce the overall cost of energy for companies.
Data center power is such an issue that energy companies are putting virtualization programs into place to address it. See Chapter 5 to find out about an innovative virtualization rebate program Pacific Gas & Electric has put into place.
Computers don’t operate on their own. Every server requires care and feeding by system administrators who, as part of the operations group, ensure that the server runs properly. Common system administration tasks include monitoring hardware status; replacing defective hardware components; installing operating system (OS) and application software; installing OS and application patches; monitoring critical server resources such as memory and disk use; and backing up server data to other storage mediums for security and redundancy purposes.
As you might imagine, this job is pretty labor intensive. System administrators don’t come cheap. And, unlike programmers, who can be located in less expensive offshore locales, system administrators are usually located with the servers due to their need to access the physical hardware.
The steady increase in server numbers has meant that the job market for system administrators has been good — very good.
As part of an effort to rein in operations cost increases, virtualization offers the opportunity to reduce overall system administration costs by reducing the overall number of machines that need to be taken care of. Although many of the tasks associated with system administration (OS and application patching, doing backups) continue even in a virtualized environment, some of them disappear as physical servers are migrated to virtual instances. Overall, virtualization can reduce system administration requirements by 30 to 50 percent per virtualized server, making virtualization an excellent option to address the increasing cost of operations personnel.
Virtualization reduces the amount of system administration work necessary for hardware, but it doesn’t reduce the amount of system administration required for guest OSes. Therefore, virtualization improves system administration, but doesn’t make it vanish.
Looking at these four trends, you can see why virtualization is a technology whose time has come. The exponential power growth of computers, the substitution of automated processes for manual work, the increasing cost to power the multitude of computers, and the high personnel cost to manage that multitude all cry out for a less expensive way to run data centers. In fact, a newer, more efficient method of running data centers is critical because, given the four trends, the traditional methods of delivering computing are becoming cost prohibitive. Virtualization is the solution to the problems caused by the four trends I outline here.
If you’ve made it this far in this chapter, you (hopefully) have a rough idea of virtualization and why it’s an important development. Your next step involves determining what your options are when it comes to virtualization. In other words, what are some common applications of the technology?
Virtualization has a number of common uses, all centered around the concept that virtualization represents an abstraction from physical resources. In fact, enough kinds of virtualization exist to make it a bit confusing to sort out how you might apply it in your organization.
I do what I can to sort out the virtualization mare’s nest. If you’re okay with gross generalizations, I can tell you that there are three main types of virtualization: client, server, and storage. Within each main type are different approaches or flavors, each of which has its benefits and drawbacks. The next few sections give brief descriptions of each of the three types of virtualization, along with examples of common implementations of them.
Client virtualization refers to virtualization capabilities residing on a client (a desktop or laptop PC). Given that much of the earlier discussion of the driving forces behind virtualization focuses on the problems of the data center, you might wonder why virtualization is necessary for client machines at all.
The primary reason organizations are interested in pursuing client virtualization solutions has to do with the challenges they face in managing large numbers of computers controlled by end users. Although machines located in data centers typically have strict procedures about what software is loaded on them and when they’re updated with new software releases, end user machines are a whole different story.
Because loading software is as easy as sticking a disc into the machine’s CD drive (or a thumb drive into a USB slot), client machines can have endless amounts of non-IT-approved software installed. Each application can potentially cause problems with the machine’s operating system as well as other approved applications. Beyond that, other nefarious software can get onto client machines in endless ways: via e-mail viruses, accidental spyware downloads, and so on. And, the hard truth is that Microsoft Windows, the dominant client operating system, is notorious for attracting attacks in the form of malware applications.
Added to the end user–caused problems are the problems inherent to client machines in general: keeping approved software applications up to date, ensuring the latest operating system patches are installed, and getting recent virus definitions downloaded to the machine’s antivirus software.
Mixed together, this stew is a miserable recipe for IT. Anything that makes the management of client machines easier and more secure is of definite interest to IT. Client virtualization offers the potential to accomplish this.
Three main types — or flavors, if you will — of client virtualization exist: application packaging, application streaming, and hardware emulation.
Although the specifics of how application packaging is accomplished vary from one vendor to another, all the methods share a common approach: isolating an application that runs on a client machine from the underlying operating system. By isolating the application from the operating system, the application is unable to modify underlying critical operating system resources, making it much less likely that the OS will end up compromised by malware or viruses.
You can accomplish this application-packaging approach by executing the application on top of a software product that gives each application its own virtual set of system resources — stuff like files and registry entries. Another way to accomplish application packaging is by bundling the application and the virtualization software into a single executable program that is downloaded or installed; when the executable program is run, the application and the virtualization software cooperate and run in an isolated (or sandboxed) fashion, thereby separating the application from the underlying operating system.
Application packaging is a great way to isolate programs from one another and reduce virus transmission, but it doesn’t solve the problem of end users installing nonpackaged software on client machines.
One thing to keep in mind with this approach is that it causes additional work as the IT folks prepare the application packages that are needed and then distribute them to client machines. And, of course, this approach does nothing to solve the problem of end users installing other software on the machine that bypasses the application packaging approach altogether. If you’re loading a game onto your business laptop, you’re hardly likely to go to IT and request that someone create a new application package so that you can run your game securely, are you?
Products that provide application packaging include SVS from Altiris, Thinstall’s Virtualization Suite, and Microsoft’s SoftGrid.
Application streaming solves the problem of how to keep client machines loaded with up-to-date software in a completely different fashion than application packaging. Because it’s so difficult to keep the proper versions of applications installed on client machines, this approach avoids installing them altogether. Instead, it stores the proper versions of applications on servers in the data center, and when an end user wants to use a particular application, it’s downloaded on the fly to the end user’s machine, whereupon he or she uses it as though it were natively installed on the machine.
This approach to client virtualization can reduce the amount of IT work necessary to keep machines updated. Furthermore, it happens transparently to the end user because the updated application is delivered automatically, without any physical software installation on the client. It also has the virtue of possibly allowing less-capable client machines to be deployed, because less disk space is required to permanently store applications on the client hard drive. And if this approach is taken to its logical conclusion, with client machines that have no hard drive at all, less memory may be required as well, because the only programs the end user can execute are the official IT applications delivered from the central server.
Although at first glance, this approach might seem like a useful form of virtualization, it is really appropriate only in certain circumstances — primarily situations in which end users have constant connectivity to enable application downloads when required. Examples of these situations include call centers and office environments where workers rarely leave the premises to perform work duties. In today’s increasingly mobile workforce world, these circumstances apply to a small percentage of the total workforce. Perhaps the best way to think about this form of virtualization is as one that can be very useful in a restricted number of work environments.
This type of virtualization is offered by AppStream’s Virtual Image Distribution, Softricity’s SoftGrid for Desktops, and Citrix’s Presentation Server. Softricity has recently been acquired by Microsoft, and its SoftGrid product will soon be available as part of the Windows Server platform. SoftGrid will offer the capability of streaming applications to remote desktops.
Application streaming is best suited for static work environments where people don’t move around much, such as call centers and form-processing centers, although some organizations are exploring using it for remote employees who have consistent network connectivity to ensure that applications can be streamed as necessary.
Hardware emulation is a very well-established form of virtualization in which the virtualization software presents a software representation of the underlying hardware that an operating system would typically interact with. (I discuss hardware emulation in more detail in the “Server virtualization” section, later in this chapter.) This is a very common type of virtualization used in data centers as part of a strategy to get higher utilization from the expensive servers that reside in them.
Because of the spread of commodity hardware (that is to say, hardware based on Intel’s x86 chip architecture; these chips power everything from basic desktop machines to huge servers), the same hardware emulation type of virtualization that can be used in data centers can also be used on client machines. (The term commodity refers to the fact that the huge volumes of x86 processors sold make them so ubiquitous and inexpensive that they’re almost like any other mass-produced, unspecialized product — almost as common as the canned goods you can get in any grocery store.)
In this form of client virtualization, the virtualization software is loaded onto a client machine that has a base operating system already loaded — typically Windows, but client hardware emulation virtualization is also available for systems running Mac and Linux operating systems.
After the hardware emulation software is loaded onto the machine, it’s ready to support guest operating systems. Guest OSes are installed via the virtualization software; that is, rather than just sticking a CD into the machine’s drive and rebooting it to install the operating system directly onto the hardware, you use the virtualization software’s control panel to indicate your desire to install a guest OS (which can be either Windows or Linux). The virtualization software sets up the container (often called the virtual machine, or VM for short) for the guest operating system and then directs you to put the CD in the drive, whereupon the normal installation procedure occurs.
After the installation completes, you control the virtual machine (which is a normal Windows or Linux system) through the virtualization software’s control panel. You can start, stop, suspend, and destroy a VM from the control panel.
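Most people drive these operations from a graphical control panel, but the same lifecycle can be scripted. The following sketch assumes the open source libvirt Python bindings and a guest named test-vm, neither of which is part of the hosted products named in this chapter; it simply illustrates the start, suspend, resume, and shutdown cycle that any such control panel exposes.

```python
import libvirt  # assumes the libvirt-python bindings and a running libvirt daemon

# Hypothetical guest name; "qemu:///system" is one common hypervisor URI.
conn = libvirt.open("qemu:///system")
vm = conn.lookupByName("test-vm")

vm.create()     # start (boot) the virtual machine
vm.suspend()    # pause it in memory
vm.resume()     # pick up exactly where it left off
vm.shutdown()   # ask the guest OS to shut down cleanly
# vm.destroy()  # hard power-off: the virtual equivalent of pulling the plug

conn.close()
```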
Interacting with a guest OS in its VM is just like interacting with it if it were the only OS on the machine: The guest OS displays graphics on the screen, responds to keyboard and mouse commands, and so on. That’s why it’s called virtualization!
Products offering this type of virtualization are VMware’s VMware Server and Microsoft’s Virtual Server. On the Macintosh, SWsoft’s Parallels product provides hardware emulation virtualization.
When discussing trends driving virtualization, you’ll soon discover that most of the examples that come up are focused on issues of the data center — the server farms that contain vast arrays of machines dedicated to running enterprise applications, databases, and Web sites.
That’s not an accident. Most of the action in the virtualization world right now focuses on server virtualization — no surprise, then, if you see me spending most of my time in this book on precisely that topic.
IT organizations are avidly pursuing virtualization to gain more control of their sprawling server farms. Although client virtualization is interesting to them, server virtualization is critical because many IT organizations are running out of room in their data centers. Their inability to add more machines means they can’t respond to important business initiatives or deliver the resources that other parts of the business need to implement the company’s strategy. Obviously, this inability to provide IT resources is unacceptable, and many, many IT organizations are turning to server virtualization to solve the problem.
Three main types of server virtualization exist:
Operating system virtualization: Often referred to as containers
Hardware emulation: Similar to the same type of virtualization described in the client virtualization section, earlier in the chapter
Paravirtualization: A relatively new concept designed to deliver a lighter-weight (in terms of virtualization application size), higher-performance approach to virtualization
Check out the next few sections for an in-depth treatment of each of these three types.
Each type of server virtualization has its pros and cons. It’s important to evaluate your likely use of virtualization to understand which virtualization technology is best suited for your needs. See Chapter 7 for a discussion of how to evaluate virtualization use.
In the preceding “Client virtualization” section, I talk about hardware emulation as a virtualization architecture in which a piece of virtualization software is installed onto a machine and guest operating systems are subsequently installed via that hardware emulation software. This approach, in which the virtualization software is installed directly onto the machine, is often described as a bare-metal approach, meaning there is no software between the virtualization software and the underlying hardware.
Operating system virtualization, by contrast, is installed on top of an existing operating system. It doesn’t enable installation of separate virtual machines, each isolated from every other virtual machine. Rather, operating system virtualization runs on top of an existing host operating system and provides a set of libraries that applications interact with, giving each application the illusion that it is running on a machine dedicated to its use.
If this seems a bit confusing, take a look at Figure 1-1, which illustrates the concept. Here you can see a server running a host operating system. That operating system is running software that provides operating system virtualization, and a number of virtual OSes are running within the operating system virtualization software. Each of the virtual OSes has one or more applications running within it. The key thing to understand is that, from the application’s execution perspective, it sees and interacts only with those applications running within its virtual OS, and it interacts with its virtual OS as though it has sole control of the resources of the virtual OS. Crucially, it can’t see the applications or the OS resources located in another virtual OS. It’s as though multiple operating systems are running on top of the real host OS. You can see why this approach to virtualization is often referred to as containers: Each set of applications is contained within its assigned virtual OS and cannot interact with other virtual OSes or the applications running in those virtual OSes.
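To see the containment idea in miniature, here’s a bare-bones Unix sketch in Python. It assumes root privileges and a hypothetical, pre-built root filesystem at /srv/containers/webapp1, and it isolates only the filesystem view; real operating system virtualization products also wall off process tables, network settings, and resource limits.

```python
import os

# Conceptual sketch (Unix only, run as root): give one application a private
# filesystem view so it can't see files belonging to other "virtual OSes".
CONTAINER_ROOT = "/srv/containers/webapp1"  # hypothetical pre-built root filesystem

pid = os.fork()
if pid == 0:                      # child process: becomes the contained application
    os.chroot(CONTAINER_ROOT)     # from now on, "/" means the container directory
    os.chdir("/")
    os.execv("/bin/sh", ["/bin/sh", "-c", "echo running inside the container"])
else:                             # parent: the host operating system carries on as normal
    os.waitpid(pid, 0)
```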