Virtualization Essentials, Second Edition

Matthew Portnoy

Description

Learn virtualization skills by building your own virtual machine. Virtualization Essentials, Second Edition provides new and aspiring IT professionals with immersive training in working with virtualization environments. Clear, straightforward discussion simplifies complex concepts, and the hands-on tutorial approach helps you quickly get up to speed on the fundamentals. You'll begin by learning what virtualization is and how it works within the computing environment, and then you'll dive right into building your own virtual machine. You'll learn how to set up the CPU, memory, storage, networking, and more as you master the skills that put you in demand on the job market. Each chapter focuses on a specific goal and concludes with review questions that test your understanding, as well as suggested exercises that help you reinforce what you've learned.

As more and more companies leverage virtualization, it's imperative that IT professionals have the skills and knowledge to interface with virtualization-centric infrastructures. This book takes a learning-by-doing approach to give you hands-on training and a core understanding of virtualization:

Understand how virtualization works

Create a virtual machine from scratch or by migrating a physical machine

Configure and manage basic components and supporting devices

Develop the necessary skill set to work in today's virtual world

Virtualization was initially used to build test labs, but its use has expanded to become best practice for a tremendous variety of IT solutions, including high availability, business continuity, dynamic IT, and more. Cloud computing and DevOps rely on virtualization technologies, and the exponential spread of these and similar applications makes virtualization proficiency a major value-add for any IT professional. Virtualization Essentials, Second Edition provides accessible, user-friendly, informative virtualization training for the forward-looking pro.




Table of Contents

Acknowledgments

About the Author

Introduction

Who Should Read This Book

What Is Covered in This Book

How to Contact the Author

Chapter 1: Understanding Virtualization

Describing Virtualization

Understanding the Importance of Virtualization

Understanding Virtualization Software Operation

Chapter 2: Understanding Hypervisors

Describing a Hypervisor

Understanding the Role of a Hypervisor

Comparing Today's Hypervisors

Chapter 3: Understanding Virtual Machines

Describing a Virtual Machine

Understanding How a Virtual Machine Works

Working with Virtual Machines

Chapter 4: Creating a Virtual Machine

Performing P2V Conversions

Loading Your Environment

Building a New Virtual Machine

Chapter 5: Installing Windows on a Virtual Machine

Loading Windows into a Virtual Machine

Understanding Configuration Options

Optimizing a New Virtual Machine

Chapter 6: Installing Linux on a Virtual Machine

Loading Linux into a Virtual Machine

Understanding Configuration Options

Optimizing a New Linux Virtual Machine

Chapter 7: Managing CPUs for a Virtual Machine

Understanding CPU Virtualization

Configuring VM CPU Options

Tuning Practices for VM CPUs

Chapter 8: Managing Memory for a Virtual Machine

Understanding Memory Virtualization

Configuring VM Memory Options

Tuning Practices for VM Memory

Chapter 9: Managing Storage for a Virtual Machine

Understanding Storage Virtualization

Configuring VM Storage Options

Tuning Practices for VM Storage

Chapter 10: Managing Networking for a Virtual Machine

Understanding Network Virtualization

Configuring VM Network Options

Tuning Practices for Virtual Networks

Chapter 11: Copying a Virtual Machine

Cloning a Virtual Machine

Working with Templates

Saving a Virtual Machine State

Chapter 12: Managing Additional Devices in Virtual Machines

Using Virtual Machine Tools

Understanding Virtual Devices

Configuring a CD/DVD Drive

Configuring a Floppy Disk Drive

Configuring a Sound Card

Configuring USB Devices

Configuring Graphic Displays

Configuring Other Devices

Chapter 13: Understanding Availability

Increasing Availability

Protecting a Virtual Machine

Protecting Multiple Virtual Machines

Protecting Data Centers

Chapter 14: Understanding Applications in a Virtual Machine

Examining Virtual Infrastructure Performance Capabilities

Deploying Applications in a Virtual Environment

Understanding Virtual Appliances and vApps

Open Stack and Containers

Appendix: Answers to Additional Exercises

Chapter 1

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Chapter 6

Chapter 7

Chapter 8

Chapter 9

Chapter 10

Chapter 11

Chapter 12

Chapter 13

Chapter 14

Glossary

End User License Agreement


List of Illustrations

Chapter 1: Understanding Virtualization

Figure 1.1 A basic virtual machine monitor (VMM)

Figure 1.2 Moore's Law: transistor count and processor speed

Figure 1.3 Server consolidation

Chapter 2: Understanding Hypervisors

Figure 2.1 Where the hypervisor resides

Figure 2.2 A virtual machine monitor

Figure 2.3 A Type 1 hypervisor

Figure 2.4 A guest failure

Figure 2.5 A Type 2 hypervisor

Figure 2.6 Abstracting hardware from the guests

Figure 2.7 Processing a guest I/O

Figure 2.8 The ESXi architecture

Figure 2.9 The Xen hypervisor architecture

Figure 2.10 Microsoft Hyper-V architecture

Chapter 3: Understanding Virtual Machines

Figure 3.1 A virtual machine

Figure 3.2 Windows Device Manager in a VM

Figure 3.3 CPU settings in a VM

Figure 3.4 Memory settings in a VM

Figure 3.5 A simple virtual network

Figure 3.6 Network resources in a VM

Figure 3.7 Virtual machine storage

Figure 3.8 Storage resources in a VM

Figure 3.9 A simplified data request

Figure 3.10 A simplified data request in a virtual environment

Figure 3.11 Cloning a VM

Figure 3.12 Creating a VM from a template

Figure 3.13 A snapshot disk chain

Chapter 4: Creating a Virtual Machine

Figure 4.1 Downloading VMware Workstation Player

Figure 4.2 The VMware Workstation Player package

Figure 4.3 The Player Setup window

Figure 4.4 The License Agreement window

Figure 4.5 The Custom Setup window

Figure 4.6 The User Experience Settings window

Figure 4.7 The Shortcuts window

Figure 4.8 The Ready to Install window

Figure 4.9 The installation progress screen

Figure 4.10 Installation complete

Figure 4.11 The VMware Workstation Player main window

Figure 4.12 Player preferences

Figure 4.13 Downloading VirtualBox

Figure 4.14 The VirtualBox installation package

Figure 4.15 The VirtualBox Setup window

Figure 4.16 The Custom Setup screen

Figure 4.17 Another Custom Setup window

Figure 4.18 The Network Interfaces warning

Figure 4.19 The Ready to Install window

Figure 4.20 Installation progress

Figure 4.21 The VirtualBox installation is completed.

Figure 4.22 The Oracle VM VirtualBox Manager

Figure 4.23 The New Virtual Machine Wizard

Figure 4.24 The Select a Guest Operating System screen

Figure 4.25 The Name the Virtual Machine screen

Figure 4.26 The Specify Disk Capacity screen

Figure 4.27 Customize the hardware.

Figure 4.28 Create the virtual machine.

Chapter 5: Installing Windows on a Virtual Machine

Figure 5.1 The Windows image

Figure 5.2 Select the VM.

Figure 5.3 Edit the virtual machine settings.

Figure 5.4 Using the ISO image to connect

Figure 5.5 Removable devices

Figure 5.6 Windows installation

Figure 5.7 Select Install Now.

Figure 5.8 The license terms

Figure 5.9 The installation type

Figure 5.10 Disk choice and options

Figure 5.11 Installation progress

Figure 5.12 The Express Settings screen

Figure 5.13 Connection choices

Figure 5.14 Create the username, password, and hint.

Figure 5.15 Network sharing

Figure 5.16 The completed Windows 10 installation

Figure 5.17 Install VMware Tools.

Figure 5.18 VMware Tools DVD drive options

Figure 5.19 The VMware Tools Welcome screen

Figure 5.20 Setup type

Figure 5.21 Ready to install

Figure 5.22 The installation is complete.

Figure 5.23 Restart the system.

Figure 5.24 A running Windows 10 VM

Figure 5.25 The About VMware Tools screen

Figure 5.26 Windows 10 devices

Figure 5.27 System properties

Figure 5.28 Disk sizes

Figure 5.29 Memory sizes

Figure 5.30 Adjusting the memory in a VM

Chapter 6: Installing Linux on a Virtual Machine

Figure 6.1 The VirtualBox main window

Figure 6.2 The VirtualBox Preferences screen

Figure 6.3 The Ubuntu Linux ISO image

Figure 6.4 The Create Virtual Machine Wizard

Figure 6.5 The Memory Size screen

Figure 6.6 Creating a hard disk

Figure 6.7 The Hard Disk File Type screen

Figure 6.8 Hard disk storage type

Figure 6.9 Hard disk location and size

Figure 6.10 The Ubuntu virtual machine

Figure 6.11 Choose Disk Image

Figure 6.12 Selecting the Ubuntu ISO image

Figure 6.13 The Ubuntu Welcome screen

Figure 6.14 Preparing to Install Ubuntu screen

Figure 6.15 Installation types

Figure 6.16 The Write the Changes to Disks screen

Figure 6.17 The Where Are You screen

Figure 6.18 The Who Are You screen

Figure 6.19 Rebooting the VM

Figure 6.20 The initial Ubuntu desktop

Figure 6.21 The Insert Guest Additions CD Image option

Figure 6.22 Automatically run the disk image.

Figure 6.23 The Authenticate screen

Figure 6.24 Installing the Guest Additions

Figure 6.25 Unmounting the disk image

Figure 6.26 The Ubuntu Displays utility

Figure 6.27 Processes in the System Monitor

Figure 6.28 Resources in the System Monitor

Figure 6.29 File Systems in the System Monitor

Chapter 7: Managing CPUs for a Virtual Machine

Figure 7.1 VMs using a host CPU

Figure 7.2 Processors in a virtual machine

Figure 7.3 CPU in the Task Manager

Figure 7.4 Physical and logical CPU information

Chapter 8: Managing Memory for a Virtual Machine

Figure 8.1 Memory in a virtual machine

Figure 8.2 Moving memory pages

Figure 8.3 Memory management in a virtual machine

Figure 8.4 Memory in virtual machines and their host

Figure 8.5 Ballooning memory

Figure 8.6 Memory overcommitment

Figure 8.7 Page sharing

Chapter 9: Managing Storage for a Virtual Machine

Figure 9.1 Virtual storage pathway

Figure 9.2 Virtual storage pathway in the Xen model

Figure 9.3 Pooled storage without a storage array

Figure 9.4 Virtual hard disk options

Figure 9.5 The Add Hardware Wizard

Figure 9.6 Select a disk type.

Figure 9.7 Select a disk.

Figure 9.8 Specify the disk capacity.

Figure 9.9 Specify the disk file.

Figure 9.10 A new hard disk

Figure 9.11 Initialize the new disk.

Figure 9.12 The New Simple Volume option

Figure 9.13 The new drive is ready.

Figure 9.14 Both hard drives

Figure 9.15 Deduplication

Figure 9.16 Thin provisioning

Figure 9.17 Storage I/O control

Chapter 10: Managing Networking for a Virtual Machine

Figure 10.1 A simple virtual network path

Figure 10.2 Networking in a VMware host

Figure 10.3 Multiple external switches

Figure 10.4 Networking in a Xen or Hyper-V host

Figure 10.5 A storage virtual switch

Figure 10.6 Determining an IP address

Figure 10.7 Network adapter properties in a VM

Figure 10.8 Virtual network adapter properties

Figure 10.9 Virtual machine network-adapter connection types

Figure 10.10 The Virtual Network Editor

Figure 10.11 A simple bridged network

Figure 10.12 Automatic bridging settings

Figure 10.13 Host-only network settings

Figure 10.14 A simple NAT configuration

Figure 10.15 NAT network configuration settings

Chapter 11: Copying a Virtual Machine

Figure 11.1 The VM Copy directory

Figure 11.2 A virtual machine's files

Figure 11.3 Editing the configuration file

Figure 11.4 Renamed virtual machine files

Figure 11.5 Moved or copied notice

Figure 11.6 Examining the network configuration

Figure 11.7 The Machine menu

Figure 11.8 The New Machine Name screen

Figure 11.9 The completed virtual machine clone

Figure 11.10 The virtual machine clone

Figure 11.11 Template creation choices

Figure 11.12 Manage a virtual machine.

Figure 11.13 The Clone Virtual Machine Wizard

Figure 11.14 The Clone Source screen

Figure 11.15 The Clone Type screen

Figure 11.16 Naming the clone

Figure 11.17 A first snapshot

Figure 11.18 A second snapshot

Figure 11.19 Physical files of a snapshot

Figure 11.20 The Workstation Pro Snapshot Manager

Figure 11.21 Changing the virtual machine

Figure 11.22 Physical files of a second snapshot

Figure 11.23 A second snapshot

Figure 11.24 Reverting to a previous snapshot

Figure 11.25 Deleting the second snapshot

Figure 11.26 Deleting the first snapshot

Chapter 12: Managing Additional Devices in Virtual Machines

Figure 12.1 The CD/DVD device configuration

Figure 12.2 Floppy disk configuration

Figure 12.3 Floppy disk management options

Figure 12.4 The floppy disk image file

Figure 12.5 Sound card options

Figure 12.6 USB management options

Figure 12.7 Connecting a USB device from a host

Figure 12.8 Display device options

Figure 12.9 The Serial Port Type screen

Figure 12.10 The Parallel Port Type screen

Figure 12.11 Generic SCSI device options

Chapter 13: Understanding Availability

Figure 13.1 NIC teaming

Figure 13.2 A virtual platform cluster

Figure 13.3 A fault-tolerant VM

Figure 13.4 VM migration during maintenance

Figure 13.5 Storage migration

Figure 13.6 Site Recovery Manager

Chapter 14: Understanding Applications in a Virtual Machine

Figure 14.1 Virtual machine resource settings

Figure 14.2 Resource pools

Figure 14.3 Three-tier architecture—physical

Figure 14.4 Three-tier architecture—virtual

Figure 14.5 Saving the jar file

Figure 14.6 Executing the benchmark test

Figure 14.7 The System Monitor

Figure 14.8 Benchmark effects

Figure 14.9 Examining the virtualization host

Figure 14.10 Performance Monitor on the host

List of Tables

Chapter 1: Understanding Virtualization

Table 1.1 Byte Sizes

Table 1.2 Processor Speed Increases Over Six Years

Chapter 7: Managing CPUs for a Virtual Machine

Table 7.1 Cores Available in Various Processor Configurations

Chapter 8: Managing Memory for a Virtual Machine

Table 8.1 Memory Optimization Techniques

Chapter 13: Understanding Availability

Table 13.1 Availability Percentages

VIRTUALIZATION ESSENTIALS

SECOND EDITION

Matthew Portnoy

Executive Editor: Jody Lefevere

Development Editor: Kelly Talbot

Technical Editor: Van Van Noy

Production Editor: Barath Kumar Rajasekaran

Copy Editor: Kathy Grider-Carlyle

Editorial Manager: Mary Beth Wakefield

Production Manager: Kathleen Wisor

Proofreader: Nancy Bell

Indexer: Johnna VanHoose Dinse

Project Coordinator, Cover: Brent Savage

Cover Designer: Wiley

Cover Image: ©DrHitch/Shutterstock

Copyright © 2016 by John Wiley & Sons, Inc., Indianapolis, Indiana

Published simultaneously in Canada

ISBN: 978-1-119-26772-0

ISBN: 978-1-119-26774-4 (ebk.)

ISBN: 978-1-119-26773-7 (ebk.)

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Web site may provide or recommendations it may make. Further, readers should be aware that Internet Web sites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (877) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2016944315

TRADEMARKS: Wiley, the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

To my friends and family, near and far.

Acknowledgments

A project is rarely a solo affair, and this one depended on a large crew for it to arrive. I need to thank Scott Lowe for shoveling the path and aiming me at the correct door. My deepest gratitude goes to Mark Milow for helping me climb aboard this rocket, to Mike Szfranski for your always open book of knowledge, to Nick Gamache for the insights, and to Tony Damiano for keeping our vehicle in the fast lane.

My heartfelt thanks also go to the virtual team at Sybex: Kelly Talbot, Stephanie McComb, Van Van Noy, Kathy Grider-Carlyle, and Barath Kumar Rajasekaran for their steadfast support, forcing me to improve with each chapter and keeping it all neat and clean. Special thanks go to Agatha Kim for getting this whole adventure rolling.

I need to thank my family beginning with my parents, teachers both, who instilled me with a love of reading and writing and set me on a path that somehow led here. Thank you to my boys, Lucas and Noah, who fill our days with laughter and music. And finally, a huge hug to my wife, Elizabeth, who encouraged me even when she had no idea what I was writing about. I love you.

About the Author

Matthew Portnoy has been an information technology professional for more than 30 years, working in organizations such as NCR, Sperry/Unisys, Stratus Computer, Oracle, and VMware. He has been in the center of many of the core technological trends during this period, including the birth of the PC, client-server computing, fault tolerance and availability, the rise of the Internet, and now virtualization, which is the foundation for cloud computing. As both a presales and post-sales analyst, he has worked with all of the disciplines computing offers, including innumerable programming languages, operating systems, application design and development, database operations, networking, security, availability, and virtualization. He has spoken at the industry's largest virtualization conference, VMworld, and is a frequent speaker at user group meetings. He also has been teaching virtualization and database classes as an adjunct professor at Wake Tech Community College in Raleigh, North Carolina, since 2007.

Introduction

We live in an exciting time. The information age is exploding around us, giving us access to dizzying amounts of data the instant it becomes available. Smart phones and tablets provide an untethered experience that offers streaming video, audio, and other media formats to just about any place on the planet. Even people who are not “computer literate” use Facebook to catch up with friends and family, use Google to research a new restaurant choice and print directions to get there, or Tweet their reactions once they have sampled the fare. The budding Internet-of-things will only catalyze this data eruption. The infrastructure supporting these services is also growing exponentially, and the technology that facilitates this rapid growth is virtualization.

On one hand, virtualization is nothing more than an increasingly efficient use of existing resources that delivers huge cost savings in a brief amount of time. On the other, virtualization also offers organizations new models of application deployment for greater uptime to meet user expectations, modular packages to provide new services in minutes instead of weeks, and advanced features that bring automatic load balancing, scalability without downtime, self-healing, self-service provisioning, and many other capabilities to support business-critical applications that improve on traditional architecture. Large companies have been using this technology for 10 to 15 years, while smaller and medium-sized businesses are just getting there now. Some of them might miss the movement altogether and jump directly to cloud computing, the next evolution of application deployment. Virtualization is the foundation for cloud computing as well.

This quantum change in our world echoes similar trends from our recent history as electrical power and telephony capabilities spread and then changed our day-to-day lives. During those periods, whole industries sprang up out of nothing, providing employment and opportunity to people who had the foresight and chutzpah to seize the moment. That same spirit and opportunity is available today as this area is still being defined and created right before our eyes. Beyond the virtualization vendors themselves, hardware partners provide servers, networking vendors provide connectivity, storage partners provide data storage, and all of them offer services. Software vendors are designing and deploying new applications specifically for these new architectures. Third parties are creating tools to monitor and manage these applications and infrastructure areas. As cloud computing becomes the de facto model for developing, deploying, and maintaining application services, this area will expand even further.

The first generation of virtualization specialists acquired their knowledge out of necessity: They were server administrators who needed to understand the new infrastructure being deployed in their data centers. Along the way, they picked up some networking knowledge to manage the virtual networks, storage knowledge to connect to storage arrays, and application information to better interface with the application teams. Few people have experience in all of those areas. Whether you have some virtualization experience or none at all, this text will give you the foundation to understand what virtualization is and why it is a crucial part of today's and tomorrow's information technology infrastructure, as well as the opportunity to explore and experience one of the most exciting and fastest-growing topics in technology today.

Good reading and happy virtualizing!

Who Should Read This Book

This text is designed to provide the basics of virtualization technology to someone who has little or no prior knowledge of the subject. This book will be of interest to you if you are an IT student looking for information about virtualization or if you are an IT manager who needs a better understanding of virtualization fundamentals as part of your role. This book might also be of interest if you are an IT professional who specializes in a particular discipline (such as server administration, networking, or storage) and are looking for an introduction into virtualization or cloud computing as a way to advance inside your organization.

The expectation is that you have:

Some basic PC experience

An understanding of what an operating system is and does

Conceptual knowledge of computing resources (CPU, memory, storage, and network)

A high-level understanding of how programs use resources

This text would not be of interest if you are already a virtualization professional and you are looking for a guidebook or reference.

What You Need

The exercises and illustrations used in this text were created on a system with Windows 10 as the operating system. VMware Workstation Player version 12 is used as the virtualization platform. It is available as a free download from http://downloads.vmware.com/d/. It is recommended that you have at least 2 GB of memory, though more is better. The installation requires 150 MB of disk storage. Also used is Oracle VirtualBox version 5, which is available as a free download from http://www.virtualbox.org. Again, at least 2 GB of memory is recommended. VirtualBox itself requires only about 30 MB of disk storage, but virtual machines will require more.

The examples demonstrate the creation and use of two virtual machines: one running Windows 10, the other running Ubuntu Linux. You will need the installation media for those as well. Each of the virtual machines requires about 30 GB of disk space.

What Is Covered in This Book

Here's a glance at what is in each chapter.

Chapter 1

: Understanding Virtualization

Introduces the basic concepts of computer virtualization, beginning with mainframes and continuing with the computing trends that have led to current technologies.

Chapter 2

: Understanding Hypervisors

Focuses on hypervisors, the software that provides the virtualization layer, and compares some of the current offerings in today's marketplace.

Chapter 3

: Understanding Virtual Machines

Describes what a virtual machine is composed of, explains how it interacts with the hypervisor that supports its existence, and provides an overview of managing virtual machine resources.

Chapter 4

: Creating a Virtual Machine

Begins with the topic of converting existing physical servers into virtual machines, then provides a walkthrough of installing VMware Workstation Player and Oracle VirtualBox, the virtualization platforms used in this text, and a walkthrough of creating a virtual machine.

Chapter 5

: Installing Windows on a Virtual Machine

Provides a guide for loading Microsoft Windows in the created virtual machine and then describes configuration and tuning options.

Chapter 6

: Installing Linux on a Virtual Machine

Provides a guide for loading Ubuntu Linux in a virtual machine and then walks through a number of configuration and optimization options.

Chapter 7

: Managing CPUs for a Virtual Machine

Discusses how CPU resources are virtualized and then describes various tuning options and optimizations. Included topics are hyper-threading and Intel versus AMD.

Chapter 8

: Managing Memory for a Virtual Machine

Covers how memory is managed in a virtual environment and the configuration options available. It concludes with a discussion of various memory optimization technologies that are available and how they work.

Chapter 9

: Managing Storage for a Virtual Machine

Examines how virtual machines access storage arrays and the different connection options they can utilize. Included are virtual machine storage options and storage optimization technologies such as deduplication.

Chapter 10

: Managing Networking for a Virtual Machine

Begins with a discussion of virtual networking and how virtual machines use virtual switches to communicate with each other and the outside world. It concludes with virtual network configuration options and optimization practices.

Chapter 11

: Copying a Virtual Machine

Discusses how virtual machines are backed up and provisioned through techniques such as cloning and using templates. It finishes with a powerful feature called snapshots that can preserve a virtual machine state.

Chapter 12

: Managing Additional Devices in Virtual Machines

Begins by discussing virtual machine tools, vendor-provided application packages that optimize a virtual machine's performance, and concludes with individual discussions of virtual support for other peripheral devices like CD/DVD drives and USB devices.

Chapter 13

: Understanding Availability

Positions the importance of availability in the virtual environment and then discusses various availability technologies that protect individual virtual machines, virtualization servers, and entire data centers from planned and unplanned downtime.

Chapter 14

: Understanding Applications in a Virtual Machine

Focuses on the methodology and practices for deploying applications in a virtual environment. Topics include application performance, using resource pools, and deploying virtual appliances.

Appendix

: Answers to Additional Exercises

Contains all of the answers to the additional exercises found at the end of every chapter.

Glossary

Lists the most commonly used terms throughout the book.

How to Contact the Author

I welcome feedback from you about this book or about books you'd like to see from me in the future. You can reach me by writing to [email protected].

Sybex strives to keep you supplied with the latest tools and information you need for your work. Please check their website at www.wiley.com/go/virtualizationess2e, where we'll post additional content and updates that supplement this book if the need arises.

Chapter 1Understanding Virtualization

We are in the midst of a substantial change in the way computing services are provided. As a consumer, you surf the Web on your cell phone, get directions from a GPS device, and stream movies and music from the cloud. At the heart of these services is virtualization—the ability to abstract a physical server into a virtual machine.

In this chapter, you will explore some of the basic concepts of virtualization, review how the need for virtualization came about, and learn why virtualization is a key building block to the future of computing.

Describing virtualization

Understanding the importance of virtualization

Understanding virtualization software operation

Describing Virtualization

Over the last 50 years, certain key trends created fundamental changes in how computing services are provided. Mainframe processing drove the sixties and seventies. Personal computers, the digitization of the physical desktop, and client/server technology headlined the eighties and nineties. The Internet, boom and bubble, spanned the last and current centuries and continues today. We are, though, in the midst of another of those model-changing trends: virtualization.

Virtualization is a disruptive technology, shattering the status quo of how physical computers are handled, services are delivered, and budgets are allocated. To understand why virtualization has had such a profound effect on today's computing environment, you need to have a better understanding of what has gone on in the past.

The word virtual has undergone a change in recent years. Not the word itself, of course, but its usage has been expanded in conjunction with the expansion of computing, especially with the widespread use of the Internet and smart phones. Online applications have allowed us to shop in virtual stores, examine potential vacation spots through virtual tours, and even keep our virtual books in virtual libraries. Many people invest considerable time and actual dollars as they explore and adventure through entire worlds that exist only in someone's imagination and on a gaming server.

Virtualization in computing often refers to the abstraction of some physical component into a logical object. By virtualizing an object, you can obtain some greater measure of utility from the resource the object provides. For example, virtual LANs (local area networks), or VLANs, provide greater network performance and improved manageability by being separated from the physical hardware. Likewise, storage area networks (SANs) provide greater flexibility, improved availability, and more efficient use of storage resources by abstracting the physical devices into logical objects that can be quickly and easily manipulated. Our focus, however, will be on the virtualization of entire computers.

Some examples of virtual reality in popular culture are the file retrieval interface in Michael Crichton's Disclosure, The Matrix, Tron, and Star Trek: The Next Generation's holodeck.

If you are not yet familiar with the idea of computer virtualization, your initial thoughts might be along the lines of virtual reality—the technology that, through the use of sophisticated visual projection and sensory feedback, can give a person the experience of actually being in that created environment. At a fundamental level, this is exactly what computer virtualization is all about: it is how a computer application experiences its created environment.

The first mainstream virtualization was done on IBM mainframes in the 1960s, but Gerald J. Popek and Robert P. Goldberg codified the framework that describes the requirements for a computer system to support virtualization. Their 1974 article “Formal Requirements for Virtualizable Third Generation Architectures” describes the roles and properties of virtual machines and virtual machine monitors that we still use today. The article is available for purchase or rent at http://dl.acm.org/citation.cfm?doid=361011.361073. By their definition, a virtual machine (VM) can virtualize all of the hardware resources, including processors, memory, storage, and network connectivity. A virtual machine monitor (VMM), which today is commonly called a hypervisor, is the software that provides the environment in which the VMs operate. Figure 1.1 shows a simple illustration of a VMM.

Figure 1.1 A basic virtual machine monitor (VMM)

According to Popek and Goldberg, a VMM needs to exhibit three properties in order to correctly satisfy their definition:

Fidelity

The environment it creates for the VM is essentially identical to the original (hardware) physical machine.

Isolation or Safety

The VMM must have complete control of the system resources.

Performance

There should be little or no difference in performance between the VM and a physical equivalent.

Because most VMMs have the first two properties, VMMs that also meet the final criterion are considered efficient VMMs. We will go into these properties in much more depth as we examine hypervisors in Chapter 2, “Understanding Hypervisors,” and virtual machines in Chapter 3, “Understanding Virtual Machines.”
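To make the division of labor concrete, here is a minimal Python sketch of the idea. The class and method names are invented for illustration and are not taken from any real hypervisor's API; the point is only that the VMM owns the physical capacity while each VM sees just the virtual resources presented to it, with the isolation property appearing as the VMM's refusal to hand out more than the host actually has.

```python
# Illustrative sketch only (invented names, not a real hypervisor API).

class VirtualMachine:
    """The guest's view: a bundle of virtualized hardware resources."""
    def __init__(self, name, vcpus, memory_gb):
        self.name = name
        self.vcpus = vcpus          # virtual processors seen by the guest
        self.memory_gb = memory_gb  # memory seen by the guest

class VirtualMachineMonitor:
    """The host's view: physical capacity plus the VMs drawing on it."""
    def __init__(self, host_cpus, host_memory_gb):
        self.host_cpus = host_cpus
        self.host_memory_gb = host_memory_gb
        self.vms = []

    def create_vm(self, name, vcpus, memory_gb):
        # Isolation/safety: the VMM keeps complete control of system
        # resources and rejects a VM it cannot back with real memory.
        allocated = sum(vm.memory_gb for vm in self.vms)
        if allocated + memory_gb > self.host_memory_gb:
            raise RuntimeError("not enough host memory for " + name)
        vm = VirtualMachine(name, vcpus, memory_gb)
        self.vms.append(vm)
        return vm

vmm = VirtualMachineMonitor(host_cpus=8, host_memory_gb=32)
vmm.create_vm("web01", vcpus=2, memory_gb=4)
vmm.create_vm("db01", vcpus=4, memory_gb=16)
print([vm.name for vm in vmm.vms])  # ['web01', 'db01']
```

Real hypervisors do far more than this bookkeeping, of course (they also satisfy the fidelity and performance properties by running guest instructions on the physical hardware), but the sketch captures the resource-ownership relationship shown in Figure 1.1.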

Let's go back to the virtual reality analogy. Why would you want to give a computer program a virtual world to work in, anyway? It turns out that it was very necessary. To help explain that necessity, let's review a little history. It would be outside the scope of this text to cover all the details about how server-based computing evolved, but for our purposes, we can compress it to a number of key occurrences.

Between the late 1970s and mid-1980s, there were more than 70 different personal computer operating systems.

Microsoft Windows Drives Server Growth

Microsoft Windows was developed during the 1980s primarily as a personal computer operating system. Others existed, CP/M and OS/2 for example, but as you know, Windows eventually dominated the market, and today it is still the primary operating system deployed on PCs. During that same time frame, businesses were depending more and more on computers for their operations. Companies moved from paper-based records to running their accounting, human resources, and many other industry-specific and custom-built applications on mainframes or minicomputers. These computers usually ran vendor-specific operating systems, making it difficult, if not impossible, for companies and IT professionals to easily transfer information among incompatible systems. This led to the need for standards, agreed-upon methods for exchanging information, and also to the idea that the same, or similar, operating systems and programs should be able to run on many different vendors' hardware. The first of these was Bell Laboratories' commercially available UNIX operating system.

Companies had both Windows-based PCs and other operating systems in-house, managed and maintained by their IT staffs, but it wasn't cost-effective to train IT staffs on multiple platforms. With increasing amounts of memory, faster processors, and larger and faster storage subsystems, the hardware that Windows could run on became capable of hosting more powerful applications that had in the past primarily run on minicomputers and mainframes. These applications were being migrated to, or being designed to run on, Windows servers. This worked well for companies because they already had Windows expertise in-house and no longer required multiple teams to support their IT infrastructure. This move, however, also led to a number of challenges. Because Windows was originally designed to be a single-user operating system, a single application on a single Windows server ran fine, but often when a second program was introduced, the requirements of each program caused various types of resource contention and even outright operating system failures. This behavior drove many companies, application designers, developers, IT professionals, and vendors to adopt a "one server, one application" best practice; so for every application that was deployed, one or more servers needed to be acquired, provisioned, and managed.

Current versions of Microsoft Windows run concurrent applications much more efficiently than their predecessors.

Another factor that drove the growing server population was corporate politics. The various organizations within a single company did not want any common infrastructure. Human Resources and Payroll departments declared their data was too sensitive to allow the potential of another group using their systems. Marketing, Finance, and Sales all believed the same thing and wanted to protect their fiscal information. Research and Development also had dedicated servers to ensure the safety of their corporate intellectual property. Because of this proprietary ownership attitude, companies sometimes had redundant applications: four or more email systems, for example, perhaps from different vendors. By demanding solitary control of their application infrastructure, departments felt that they could control their data, but this type of control also increased their capital costs.

Aiding the effects of these politics was the fact that business demand, competition, Moore's Law, and improvements in server and storage technologies all drastically drove down the cost of hardware. This made the entry point for a department to build and manage its own IT infrastructure much more affordable. The processing power and storage that in the past had cost hundreds of thousands of dollars could be had for a fraction of that cost in the form of even more Windows servers.

Business computers initially had specialized rooms in which to operate. These computer rooms were anything from oversized closets to specially constructed areas for housing a company's technology infrastructure. They typically had raised floors under which the cables and sometimes air-conditioning conduits were run. They held the computers, network equipment, and often telecom equipment. They needed to be outfitted with enough power to service all of that equipment. Because all of those electronics in a contained space generated considerable heat, commensurate cooling through huge air-conditioning handlers was mandatory as well. Cables to interconnect all of these devices, fire-suppression systems in case of emergency, and separate security systems to protect the room itself all added to the considerable and ever-rising costs of doing business in a modern corporation. As companies depended more and more on technology to drive their business, they added many more servers to support that need. Eventually, this expansion created data centers. A data center could be anything from a larger computer room, to an entire floor in a building, to a separate building constructed and dedicated to the health and well-being of a company's computing infrastructure. Entire buildings existed solely to support servers, and then at the end of the twentieth century, the Internet blossomed into existence.

“E-business or out of business” was the cry that went up as businesses tried to stake out their territories in this new online world. To keep up with their competition, existing companies deployed even more servers as they web-enabled old applications to be more customer-facing and customer-serving. Innovative companies, such as Amazon and Google, appeared from nowhere, creating disruptive business models that depended on large farms of servers to rapidly deliver millions of web pages populated with petabytes of information (see Table 1.1). IT infrastructure was mushrooming at an alarming rate, and it was only going to get worse. New consumer services were delivered not just through traditional online channels but also through newer devices such as mobile phones, further compounding data centers' growth. Between 2000 and 2006, the Environmental Protection Agency (EPA) reported that energy use by United States data centers doubled, and that over the next five years they expected it to double again. Not only that, but servers were consuming about 2 percent of the total electricity produced in the country, and cooling them consumed about the same amount again. Recent studies show that energy use by data centers continues to increase with no sign of decreasing any time soon.

Table 1.1 Byte Sizes

Name        Abbreviation   Size
Byte        B              8 bits (a single character)
Kilobyte    KB             1,024 B
Megabyte    MB             1,024 KB
Gigabyte    GB             1,024 MB
Terabyte    TB             1,024 GB
Petabyte    PB             1,024 TB
Exabyte     EB             1,024 PB
Zettabyte   ZB             1,024 EB
Yottabyte   YB             1,024 ZB
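Each step in the table multiplies the previous unit by 1,024 (that is, 2 to the 10th power), so the scale grows quickly. A few lines of Python, included here only to make the sizes concrete, print the full progression:

```python
# Each unit in Table 1.1 is 1,024 (2**10) times the previous one.
units = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]
for power, unit in enumerate(units):
    print(f"1 {unit} = {1024 ** power:,} bytes")
# e.g., 1 GB = 1,073,741,824 bytes; 1 PB = 1,125,899,906,842,624 bytes
```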

Let's take a closer look at these data centers. Many were reaching their physical limits on several levels. They were running out of actual square footage for the servers they needed to contain, and companies were searching for alternatives. Often the building that housed a data center could not get more electrical power or additional cooling capacity. Building larger or additional data centers was and still is an expensive proposition. In addition to running out of room, the data centers often had grown faster than the people managing them could maintain them. It was common to hear tales of lost servers. (A lost server is a server that is running, but no one actually knows which line of business owns it or what it is doing.) These lost servers couldn't be interrupted for fear of inadvertently disrupting some crucial part of the business. In some data centers, cabling was so thick and intertwined that when nonfunctioning cables needed to be replaced, or old cables were no longer needed, it was easier to just leave them where they were rather than try to unthread them from the mass. Of course, these are the more extreme examples, but most data centers had challenges to some degree in one or more of these areas.

Explaining Moore's Law

So far you have seen how a combination of events—the rise of Windows, corporations increasing their reliance on server technology, and the appearance and mushrooming of the Internet and other content-driven channels—all contributed to accelerated growth of the worldwide server population. One 2006 study estimated that the 16 million servers in use in 2000 had grown to almost 30 million by 2005. This trend continues today. Companies like Microsoft, Amazon, and Google each have hundreds of thousands of servers to run their businesses. Think about all of the many ways you can pull information from the world around you; computers, mobile devices, gaming platforms, and television set-top boxes are only some of the methods, and new ones appear every day. Each of them has a wide and deep infrastructure to support those services, but this is only part of the story. The other piece of the tale has to do with how efficient those computers were becoming.

If you are reading an electronic copy of this text on a traditional computer, or maybe on a smart phone or even a tablet, you probably have already gone through the process of replacing that device at least once. Phone companies typically give their customers the ability to swap out older smart phones every couple of years for newer, more up-to-date models, assuming you opt for another contract extension. A computer that you bought in 2010 has probably been supplanted by one you purchased in the last three to five years, and if it is closer to five years, you are probably thinking about replacing that one as well. This has little to do with obsolescence, although electronic devices today are rarely engineered to outlive their useful lifespan. It has more to do with the incredible advances that technology constantly makes, packing more and more capability into faster, smaller, and newer packages. For example, digital cameras first captured images at resolutions of less than 1 megapixel and now routinely provide more than 12 megapixels. PCs, and now smart phones, initially offered memory (RAM) measured in kilobytes; today the standard is gigabytes, an increase of six orders of magnitude. Not surprisingly, there is a rule of thumb that governs how fast these increases take place. It is called Moore's Law, and it deals with the rate at which certain technologies improve (see Figure 1.2).

Figure 1.2 Moore's Law: transistor count and processor speed

Gordon Moore, one of the founders of Intel, gets credit for recognizing and describing the phenomenon that bears his name. His original thought was publicized back in 1965, and although it has been refined a few times along the way, it is still very true today. Simply stated, Moore's Law says that processing power roughly doubles every 18 months. That means a computer you buy 18 months from now will be twice as powerful as one you buy today. As it turns out, Moore's Law applies not just to processing power (the speed and capacity of computer chips) but to many other related technologies as well (such as memory capacity and the megapixel count in digital cameras). You might think that after almost 50 years, we would be hitting some type of technological barrier that would prevent this exponential growth from continuing, but scientists believe that it will hold true for somewhere between 20 years on the low side and centuries on the high. But what does this have to do with straining data centers and ballooning server growth?

Servers are routinely replaced. There are two main models for this process. Companies buy servers and then buy newer models in three to five years when those assets are depreciated. Other corporations lease servers, and when that lease runs its course, they lease newer servers, also in three-to-five-year intervals. The servers that were initially purchased for use were probably sized to do a certain job; in other words, they were bought, for example, to run a database. The model and size of the server were determined with help from an application vendor who provided a recommended server configuration based on the company's specific need. That configuration reflected not the company's requirement on the day the server was purchased, but the company's projected need for the future and for emergencies. This extra capacity is also known as headroom. To use the server for three to five years, it had to be large enough to handle growth until the end of the server's life, whether it actually ever used that extra capacity or not. When the server was replaced, it was often replaced with a similarly configured model (with the same number of processors and the same amount of memory or more) for the next term, but the newer server was not the same.

Let's take six years as an example span of time and examine the effect of Moore's Law on the change in a server (see Table 1.2). A company that is on a three-year model has replaced the initial server twice—once at the end of year three and again at the end of year six. According to Moore's Law, the processing power of the server has doubled four times, and the server is 16 times more powerful than the original computer! Even a company on the five-year model, which has swapped servers only once, now owns a machine that is eight times faster than the first server.

Table 1.2 Processor Speed Increases Over Six Years

Year              2015       2016   2017   2018       2019   2020
Processor Speed   1x         2x     4x     4x         8x     16x
Three-year plan   purchase                 purchase
Five-year plan    purchase
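The rule of thumb behind the table is easy to state as arithmetic: relative processing power after m months is roughly 2 raised to (m / 18). A short Python sketch, simplified to count only whole 18-month doublings as the text does, reproduces the numbers above:

```python
# Moore's Law as a rule of thumb: processing power doubles every 18 months.
def relative_speed(months):
    return 2 ** (months // 18)  # count whole doublings only

print(relative_speed(36))  # end of year 3: 4x (first three-year replacement)
print(relative_speed(60))  # end of year 5: 8x (the five-year swap)
print(relative_speed(72))  # end of year 6: 16x, four doublings, as in the text
```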

In addition to faster CPUs and faster processing, newer servers usually have more memory, another benefit of Moore's Law. The bottom line is that the replacement servers are considerably larger and much more powerful than the original server, which was already oversized for the workload it was handling.