Windows Server 2012 Hyper-V Installation and Configuration Guide

Aidan Finn


Go-to guide for using Microsoft's updated Hyper-V as a virtualization solution

Windows Server 2012 Hyper-V offers greater scalability, new components, and more options than ever before for large enterprise systems and small/medium businesses. Windows Server 2012 Hyper-V Installation and Configuration Guide is the place to start learning about this new cloud operating system. You'll get up to speed on the architecture, basic deployment and upgrading, creating virtual workloads, designing and implementing advanced network architectures, creating multitenant clouds, backup, disaster recovery, and more. The international team of expert authors offers deep technical detail, as well as hands-on exercises and plenty of real-world scenarios, so you thoroughly understand all features and how best to use them.

Explains how to deploy, use, manage, and maintain the Windows Server 2012 Hyper-V virtualization solution in large enterprises and small- to medium-sized businesses

Provides deep technical detail and plenty of exercises showing you how to work with Hyper-V in real-world settings

Shows you how to quickly configure Hyper-V from the GUI and use PowerShell to script and automate common tasks

Covers deploying Hyper-V hosts, managing virtual machines, network fabrics, cloud computing, and using file servers

Also explores virtual SAN storage, creating guest clusters, backup and disaster recovery, using Hyper-V for Virtual Desktop Infrastructure (VDI), and other topics

Help make your Hyper-V virtualization solution a success with Windows Server 2012 Hyper-V Installation and Configuration Guide.




Table of Contents

Cover

Acknowledgments

About the Authors

Introduction

Who Should Read This Book

What’s Inside

How to Contact the Authors

Part 1: The Basics

Chapter 1: Introducing Windows Server 2012 Hyper-V

Virtualization and Cloud Computing

Windows Server 2012 Hyper-V

Licensing Windows Server 2012 in Virtualization

VMware

Other Essential Knowledge

Chapter 2: Deploying Hyper-V

Preparing a Hyper-V Deployment

Building the First Hyper-V Host

Managing Hyper-V

Upgrading Hyper-V

Real World Solutions

Chapter 3: Managing Virtual Machines

Creating Virtual Machines

Designing Virtual Machines

Performing Virtual Machine Operations

Installing Operating Systems and Applications

Real World Solutions

Part 2: Advanced Networking and Cloud Computing

Chapter 4: Networking

Basic Hyper-V Networking

Networking Hardware Enhancements

Advanced Networking

Real World Solutions

Chapter 5: Cloud Computing

Clouds, Tenants, and Segregation

Microsoft Network Virtualization

PVLANs

Port Access Control Lists

Hyper-V Virtual Machine Metrics

Real World Solutions

Part 3: Storage and High Availability

Chapter 6: Microsoft iSCSI Software Target

Introducing the Microsoft iSCSI Software Target

Building the iSCSI Target

Managing the iSCSI Target Server

Migrating

Chapter 7: Using File Servers

Introducing Scale-Out File Servers

Installing and Configuring Scale-Out File Servers

Windows Server 2012 SMB PowerShell

Windows Server 2012 Hyper-V over SMB 3.0

Troubleshooting Scale-Out File Servers

Real World Solutions

Chapter 8: Building Hyper-V Clusters

Introduction to Building Hyper-V Clusters

Active Directory Integration

Failover Clustering Installation

Cluster Shared Volumes

BitLocker

Cluster-Aware Updating

Highly Available Virtual Machine

Virtual Machine Mobility

Real World Solutions

Chapter 9: Virtual SAN Storage and Guest Clustering

Introduction to Virtual SAN Storage

Guest Clustering

Virtual Machine Monitoring

Real World Solutions

Part 4: Advanced Hyper-V

Chapter 10: Backup and Recovery

How Backup Works with Hyper-V

Improvements in Windows Server 2012 Hyper-V Backup

Using Windows Server Backup

The Impact of Backup on the Network

Real World Solutions

Chapter 11: Disaster Recovery

Introducing Disaster Recovery

DR Architecture for Windows Server 2012 Hyper-V

Implementation of a Hyper-V Multi-site Cluster

Real World Solutions

Chapter 12: Hyper-V Replica

Introducing Hyper-V Replica

Enabling Hyper-V Replica between Nonclustered Hosts

Enabling Virtual Machine Replication

Using Authentication with Certificates

Using Advanced Authorization and Storage

Using Hyper-V Replica with Clusters

Exploring Hyper-V Replica in Greater Detail

Managing Hyper-V Replica

Setting Up Failover Networking

Failing Over Virtual Machines

Real World Solutions

Chapter 13: Using Hyper-V for Virtual Desktop Infrastructure

Using Virtual Desktops, the Modern Work Style

Building a Microsoft VDI Environment

Real World Solutions

Index

Acquisitions Editor: Mariann Barsolo

Development Editor: David Clark

Technical Editor: Hans Vredevoort

Production Editor: Eric Charbonneau

Copy Editor: Sharon Wilkey

Editorial Manager: Pete Gaughan

Production Manager: Tim Tate

Vice President and Executive Group Publisher: Richard Swadley

Vice President and Publisher: Neil Edde

Book Designers: Judy Fung and Maureen Forys, Happenstance Type-O-Rama

Compositor: Cody Gates, Happenstance Type-O-Rama

Proofreader: Rebecca Rider

Indexer: Ted Laux

Project Coordinator, Cover: Katherine Crocker

Cover Designer: Ryan Sneed

Cover Image: © Michael Knight / iStockphoto

Copyright © 2013 by John Wiley & Sons, Inc., Indianapolis, Indiana

Published simultaneously in Canada

ISBN: 978-1-118-48649-8

ISBN: 978-1-118-67701-8 (ebk.)

ISBN: 978-1-118-65143-8 (ebk.)

ISBN: 978-1-118-65149-0 (ebk.)

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Web site may provide or recommendations it may make. Further, readers should be aware that Internet Web sites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (877) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2012956397

TRADEMARKS: Wiley, the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. Windows Server and Hyper-V are registered trademarks of Microsoft Corporation. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

10 9 8 7 6 5 4 3 2 1

Dear Reader,

Thank you for choosing Windows Server 2012 Hyper-V Installation and Configuration Guide. This book is part of a family of premium-quality Sybex books, all of which are written by outstanding authors who combine practical experience with a gift for teaching.

Sybex was founded in 1976. More than 30 years later, we’re still committed to producing consistently exceptional books. With each of our titles, we’re working hard to set a new standard for the industry. From the paper we print on to the authors we work with, our goal is to bring you the best books available.

I hope you see all that reflected in these pages. I’d be very interested to hear your comments and get your feedback on how we’re doing. Feel free to let me know what you think about this or any other Sybex book by sending me an email at [email protected]. If you think you’ve found a technical error in this book, please visit http://sybex.custhelp.com. Customer feedback is critical to our efforts at Sybex.

Best regards,

Neil Edde

Vice President and Publisher

Sybex, an Imprint of Wiley

To my family and friends, who have made this possible by helping and supporting me over the years. —Aidan Finn

I would like to dedicate this book to my family, friends, colleagues, and most of all to my wife, Lisa, and our precious children. —Patrick Lownds

For my family, friends, and colleagues who have been supporting and inspiring me all the time. —Michel Luescher

This book is dedicated to my brilliant and beautiful wife, Breege. She has been my inspiration, my motivation, and my rock. —Damian Flynn

Acknowledgments

When I first thought about writing this book back in 2011, I thought it might be something that I could do alone over a short period. But then we started to learn how much had changed in Windows Server 2012, and how much bigger Hyper-V had become. I knew that I would need a team of experts to work with on this project. Patrick Lownds, Michel Luescher, Damian Flynn, and Hans Vredevoort were the best people for the job. Luckily, they were willing to sign up for the months of hard work that would be required to learn this new version of Windows Server 2012 and Hyper-V, do the research, annoy the Microsoft project managers, and reach out to other members of the community. Thank you to my coauthors, Patrick, Michel, and Damian, for the hard work that you have done over the past few months; I have learned a lot from each of you during this endeavor. When it came to picking a technical reviewer, there was one unanimous choice, and that was Hans, a respected expert in Hyper-V and System Center. Hans’ name might not be on the cover, but his input can be found in every chapter. Thank you (again) Hans, for taking the time to minimize our mistakes.

Patrick, Damian, and Hans are Microsoft Most Valuable Professionals (MVPs) like myself. The MVP program is a network of experts in various technologies. There are many benefits to achieving this award from Microsoft, but one of the best is the opportunity to meet those experts. Many of these people helped with this project and you’ll see just some of their names in these acknowledgments.

Starting to write a book on a product that is still being developed is quite a challenge. There is little documentation, and the target keeps moving. Many people helped me during this endeavor. Who would think that a person who barely passed lower-grade English when he finished school could go on to have his name on the covers of five technical books? Mark Minasi (MVP) is the man I have to thank (or is it blame?) for getting me into writing books. Mark once again was there to help when I needed some information on BitLocker. Jeff Wouters, a consultant in the Netherlands, loves a PowerShell challenge. Jeff got a nice challenge when a PowerShell “noob” asked for help. Thanks to Jeff, I figured out some things and was able to give the reader some better real-world solutions to common problems. If you’re searching for information on Windows Server 2012 storage, there’s a good chance that you will come across Didier Van Hoye (aka Workinghardinit). Didier is a fellow Virtual Machine (Hyper-V) MVP and has been there to answer quick or complex questions. Brian Ehlert (MVP) is an important contributor on the TechNet Hyper-V forum and is an interesting person to talk to for alternative points of view. Brian helped me see the forest for the trees a number of times. We have a great Hyper-V MVP community in Europe; Carsten Rachfahl found some functionality that we weren’t aware of and helped us understand it. A new guy on the MVP scene is Thomas Maurer, and his blog posts were useful in understanding some features.

Thanks to the MVP program, we gain access to some of the people who make the products we work with and write about. Numerous Microsoft program managers answered questions or explained features to me. Ben Armstrong (aka the Virtual PC Guy) leads the way in Virtual Machine expertise, has answered many questions for us as a group, provides great information on his blog, and has been a huge resource for us. Thanks too to Senthil Rajaram for doing his best to explain 4K sector support to me; any mistakes here are mine! Charley Wen, John Howard, and Don Stanwyck all helped me come to grips with the massive amount of change in Windows Server networking. Joydeep Buragohain also provided me with great information on Windows Server Backup. We Hyper-V folks rely on Failover Clustering, and we also had great help from their program managers, with Rob Hindman and Elden Christensen leading the way. Thanks to all for your patience, and I hope I have reproduced your information correctly.

I would also like to thank MicroWarehouse, my employer, for the flexibility to allow me to work on projects like this book. The opportunity that I have to learn and to share in my job is quite unique. I work with some of the best customer-focused experts around, and I’ve learned quite a bit from them.

Of course, the book wouldn’t be possible at all without the Sybex team. This book kept growing, and there was a lot more work than originally estimated. Pete Gaughan, the acquisitions and developmental editor, David Clark, Eric Charbonneau, and a whole team of editors made this possible. In particular, I want to pay special thanks to Mariann Barsolo, who believed in this project from day 1, and made a huge effort to get things moving.

My family are the ones who made everything possible. Thank you to my mom, dad, and sister for the encouragement and help, in good times and bad. From the first moment, I was encouraged to learn, to question why and how, to think independently, and to eventually become a pain in the backside for some! Without my family, I would not be writing these acknowledgments.

—Aidan Finn

Third time lucky! It takes personal commitment and dedication to write a book, but it takes a lot of support as well. It would not be possible without help from family, friends, and colleagues. I would like to thank my wife, Lisa, for helping to keep everything together, and my children for being especially patient. A special thanks to the editors at Sybex for taking on this book project and for making the dream a reality; my coauthors, Aidan, Damian, and Michel; plus our technical reviewer, Hans. Finally, I would like to thank a number of people for helping me along the way: Ben Armstrong, Patrick Lang, Rob Hindman, Mallikarjun Chadalapaka, Subhasish Bhattacharya, Jose Barreto, and Allison Hope.

—Patrick Lownds

I never thought that I would write a book, as I’m not a big fan of reading books. But when Aidan and Patrick asked me in early 2012 if I would think about providing a few chapters on a Windows Server 2012 Hyper-V book, I couldn’t resist. Working with this excellent team of knowledgeable experts was a great experience that I didn’t want to miss, and it was also an honor to be part of it. Thank you guys for this great opportunity!

It was quite a challenge writing a book on a product that is still under development. Therefore, I would like to express my special thanks to the great people who took time out from their busy schedules to share their experience, discuss features, or give me very good advice for this book. A big thank you goes to the following people: Nigel Cain, Paul Despe, Ronny Frehner, Florian Frommherz, Michael Gray, Asaf Kuper, Thomas Roettinger, Cristian Edwards Sabathe, Jian Yan, and Joel Yoker.

Hans Vredevoort deserves a very special thanks for all the great feedback provided and the interesting discussions we had. Of course I also would like to thank the Sybex team for their support and patience. Even though I squirmed when I received your status mails telling me I missed another deadline, you helped me keep pushing to make this all happen.

And last but certainly not least, thanks a lot, Carmen, for supporting me with all my crazy ideas and projects. This all wouldn’t be possible without you.

—Michel Luescher

During the process of writing my first book, I promised myself that I would never do it again. So, what changed? As the project progressed, and the products continued to be revised through their release milestones, somewhere along the path to publishing the challenge of writing also changed to become enjoyable. When Aidan then suggested the idea for this book while we were walking around Seattle one cold night in February, I was surprised to hear myself agreeing to the idea and feeling the excitement of being involved! It was not many weeks after that when we had the pleasure of meeting our representative from Sybex in Las Vegas to sell the plan; thanks to Aidan we were on a roll.

Collecting, selecting, and validating all the details that go into the chapters of a technical book clearly requires a lot of input from many different people, especially my respected expert co-authors, Aidan, Patrick, and Michel, alongside whom it has been an honor to work. Our technical editor, Hans, deserves very special consideration. It was his job to read our work in its earliest format, dissect our content to ensure its accuracy, and create labs to reproduce our implementation guides and recommendations. This was no minor achievement, yet he continued to excel at finding and squashing the bugs, and forcing us to rethink all the time. Thank you, Hans.

In addition, a very special thanks to my work colleagues at Lionbridge, especially Oyvind, Steve, Benny, and the “Corp IT” Team for supporting and encouraging me, and my infamous “Lab.” I would also like to acknowledge the fantastic team at Microsoft, who has, over the years, put up with my “constructive” criticism (of products) and helped me out of many complex road blocks, especially Pat Fetty, Nigel Cain, and Travis Wright. The reality is that there are many people who helped along the way, too many to list individually; I offer my sincere appreciation to you all.

I would like to thank my amazing wife for always providing direction to my life; my parents for their enduring support and encouragement; my family—immediate, extended, and acquired by marriage! Their constant support and belief in me are the best gifts they could ever give.

—Damian Flynn

About the Authors

Aidan Finn, MVP, has been working in IT since 1996. He is employed as the Technical Sales Lead by MicroWarehouse, a distributor (and Microsoft Value Added Distributor) in Dublin, Ireland. In this role, he works with Microsoft partners in the Republic of Ireland and Northern Ireland, evangelizing Microsoft products such as Windows Server, Hyper-V, Windows client operating systems, Microsoft System Center, and cloud computing. Previously, Aidan worked as a consultant and administrator for the likes of Amdahl DMR, Fujitsu, Barclays, and Hypo Real Estate Bank International, where he dealt with large and complex IT infrastructures. Aidan has worked in the server hosting and outsourcing industry in Ireland, where he focused on server management, including VMware VI3, Hyper-V, and System Center.

Aidan was given the Microsoft Most Valuable Professional (MVP) award in 2008 in the Configuration Manager expertise. He switched to the Virtual Machine expertise in 2009 and has been renewed annually since then. Aidan has worked closely with Microsoft in Ireland and the United Kingdom, including presentations, road shows, online content, podcasts, and launch events. He has also worked in the community around the world, presenting at conferences and participating in podcasts.

When Aidan isn’t at work, he’s out and about with camera in hand, lying in a ditch, wading through a bog, or sitting in a hide, trying to be a wildlife photographer. Aidan was the lead author of Mastering Hyper-V Deployment (Sybex, 2010). He is one of the contributing authors of Microsoft Private Cloud Computing (Sybex, 2012), Mastering Windows Server 2008 R2 (Sybex, 2009), and Mastering Windows 7 Deployment (Sybex, 2011).

Aidan runs a blog at www.aidanfinn.com, where he covers Windows Server, Hyper-V, System Center, desktop management, and associated technologies. Aidan is also on Twitter as @joe_elway.

Patrick Lownds is a senior solution architect at Hewlett Packard’s TS Consulting, EMEA in the Data Center Consulting practice and is based out of London. Patrick is a current Virtual Machine Most Valuable Professional (MVP) and a Microsoft Virtual Technology Solution Professional (v-TSP). Patrick has worked in the IT industry since 1988 and has worked with a number of technologies, including Windows Server Hyper-V and System Center.

In his current role, he works mainly with the most recent versions of Windows Server and System Center and has participated in both the Windows Server 2012 and System Center 2012 SP1 Technology Adoption Programs.

Patrick has also contributed to Mastering Hyper-V Deployment (Sybex 2010) and Microsoft Private Cloud Computing (Sybex, 2012). He blogs and tweets in his spare time and can be found on Twitter as @patricklownds.

Michel Luescher is a senior consultant in the Consulting Services division at Microsoft Switzerland. Primarily, Michel is focused on datacenter architectures and works with Microsoft’s enterprise customers. In this role, he works mainly with the latest versions of Windows Server and System Center to build datacenter solutions, also known as the Microsoft private cloud. He joined Microsoft in January 2009 and has since been working very closely with the different divisions and communities, including several product groups at Microsoft. Michel has worked with Windows Server 2012 since the first release back in September 2011 and is involved in various rapid deployment programs (RDPs) and technology adoption programs (TAPs), helping Microsoft customers with the early adoption of the pre-released software.

Michel is a well-known virtualization and datacenter specialist and regularly presents at events. On his blog at www.server-talk.eu, Michel writes about Microsoft virtualization and private cloud. On Twitter you will find him as @michelluescher.

Damian Flynn, Cloud and Datacenter Management MVP, is an infrastructure architect at Lionbridge Technology, a Microsoft Gold Certified Partner. Damian, based in Ireland, has over 18 years of IT experience and is responsible for incubating new projects, architecting business infrastructure and services, and sharing knowledge, while leveraging his continuous active participation in multiple Microsoft TAPs. He blogs at www.damianflynn.com and tweets from time to time as @damian_flynn. He has published numerous technical articles, coauthored Microsoft Private Cloud Computing (Sybex, 2012), presented at various conferences including Microsoft TechEd, and contributes code on CodePlex.

Introduction

Windows Server 2012 Hyper-V brings something new to the market. Microsoft marketing materials claim that this release goes “beyond virtualization.” That might seem like hyperbole at first, but take some time to look at how you can change the way IT works by building a private, public, or hybrid cloud with Hyper-V as the engine of the compute cluster. Then you’ll understand how much work Microsoft put into this release.

The original release of Hyper-V was the butt of many jokes in the IT industry. The second release, Windows Server 2008 R2, brought respectability to Hyper-V, and combined with the System Center suite, was a unique offering. It was clear that Microsoft was focusing on services, not servers, recognizing what businesses value, and empowering IT staff to focus on engineering rather than on monotonous mouse-click administration. Then came the Windows Server 2012 announcements at the Build conference in Anaheim, California, in 2011. Even Microsoft’s rivals were staggered by the scale of the improvements, choosing to believe that the final release would include just a fraction of them.

We now know that Microsoft took an entire year after the release of Windows Server 2008 R2 to talk to customers, gather requirements and desires, and plan the new release. They listened; pain points such as the lack of supported NIC teaming were addressed, difficulties with backup in Hyper-V clusters were fixed, and little niggles that caused administration annoyance had their corners rounded. More important, Microsoft had a vision: Windows Server 2012 would be “built from the cloud up” (another line from Microsoft’s marketing). This is the first hypervisor designed to be used in a cloud rather than trying to build wrappers around something that focuses on servers first. Many features were added and improved to enable a business to deploy a private cloud, or a service provider to build a flexible, secure, and measured multi-tenant public cloud. Much of this release is ready to go now, but Microsoft built for the future too, with support for emerging technologies and scalability that is not yet achievable in the real world.

Usually with a Microsoft release, you’ll hear headlines that make you think that the product is designed just for massive enterprises with hundreds of thousands of employees. Windows Server 2012 Hyper-V includes features that honestly are intended for the upper end of the market, but some of the headline features, such as SMB3.0 storage or Hyper-V Replica, were designed to deal with the complexities that small/medium enterprises have to deal with too.

This book is intended to be your reference for all things Windows Server 2012 Hyper-V. The book was written by three MVPs and a Microsoft consultant who give you their insight on this product. Every chapter aims to give you as much information as possible. Starting from the basics, each chapter will bring you through concepts, showing you how to use and configure features, and lead you to the most complex designs. Most chapters include scenarios that show you how to use Windows Server 2012 Hyper-V in production, in customer sites or your own.

A PowerShell module for Hyper-V was added in Windows Server 2012, and you’ll find lots of PowerShell examples in this book. This was a deliberate strategy. Most IT pros who have not used PowerShell are scared of this administration and scripting language, because it is different from how they normally work. Pardon the pun, but it is powerful, enabling simple tasks to be completed more quickly, and enabling complex tasks (such as building a cluster) to be done with a single script. You don’t need to be a programmer to get to a point where you use PowerShell. None of this book’s authors are programmers, and we use the language to make our jobs easier. If you read this book, you will find yourself wanting to use and understand the examples, and hopefully you’ll start writing and sharing some scripts of your own.
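To give you a flavor of what those examples look like, here is a minimal sketch of the kind of one-liner you will meet throughout the book. The virtual machine name, switch name, and VHDX path are purely illustrative, and the cmdlets assume a Windows Server 2012 host with the Hyper-V PowerShell module installed.

# A hypothetical example: create a virtual machine with a new 60 GB VHDX and start it.
New-VM -Name "Demo01" -MemoryStartupBytes 1GB -SwitchName "External1" -NewVHDPath "D:\VMs\Demo01.vhdx" -NewVHDSizeBytes 60GB
Start-VM -Name "Demo01"

Those two lines replace a long run of mouse clicks in Hyper-V Manager, and they can just as easily be placed inside a loop to build many virtual machines at once.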

The book starts with the basics, such as explaining why virtualization exists. It then moves through the foundations of Hyper-V that are common to small or large enterprises; gets into the fun, deep, technical complexities; and returns to common solutions once again, such as disaster recovery, backup, and virtual desktop infrastructure.

Who Should Read This Book

We are making certain assumptions regarding the reader here. You are

Experienced in working with IT

Familiar with terminology such as VLAN, LAN, and so on

Comfortable with installing Windows Server

This book is not intended to be read by a person starting out in the IT industry. You should be comfortable with the basics of server administration and engineering concepts.

The intended audience includes administrators, engineers, and consultants who are working, or starting to work, with virtualization. If you are a Hyper-V veteran, you should know that this release includes more new functionality than was in previous releases combined. If you have experience with another virtualization product, don’t assume that your knowledge transfers directly across; every hypervisor does things differently, and Windows Server 2012 Hyper-V includes functionality not yet seen in any of its rivals.

You don’t have to work for a Fortune 500 company to get value from this book. Let’s face it; that would be a rather small market for a publisher to sell to! This book is aimed at people working in all parts of the market. Whether you are a field engineer providing managed services to small businesses or an architect working for a huge corporation, we have something for you here. We’ll teach you the theory and then show you different ways to apply that knowledge.

What’s Inside

Here is a glance at what’s in each chapter:

Chapter 1: Introducing Windows Server 2012 Hyper-V presents you with the newest version of Microsoft’s hypervisor. The chapter starts with a brief history of the evolution of IT, up to the present with virtualization, and introduces you to where businesses are going with cloud computing. The chapter also deals with the thorny issues of licensing Windows Server 2012 and licensing for various virtualization scenarios.
Chapter 2: Deploying Hyper-V Hosts is where you will learn how to get Hyper-V up and running. This is the starting point for all deployments, large or small. The chapter also covers the host settings of Hyper-V.
Chapter 3: Managing Virtual Machines is a long chapter where you will learn how to deploy and configure virtual machines by using the wizards and PowerShell. This chapter also discusses how Dynamic Memory works in Windows Server 2012 and the all-new and bigger Live Migration.
Chapter 4: Networking is the chapter that discusses how to connect the services in your virtual machines to a network. The chapter starts with the basics, such as how to create virtual switches and understand extensibility, and moves on to more advanced topics such as supporting hardware offloads/enhancements, Quality of Service (QoS), and converged fabric design. This is also the chapter where you will find NIC teaming.
Chapter 5: Cloud Computing is a logical extension of the Networking chapter, building on many of the concepts there to create clouds. You will learn about private VLANs (PVLANs), network virtualization, resource pools, and resource metering, which will give you all the components to start building the compute cluster of your very own cloud.
Chapter 6: Microsoft iSCSI Software Target will be a popular subject for many readers. Windows Server 2012 has a built-in iSCSI target, allowing you to provide storage over this well-known and trusted storage protocol. Whether you are a small business that wants iSCSI storage on a budget, or you are building a lab where you need to simulate a SAN, this chapter will give you the material you need.
Chapter 7: Using File Servers explains how storing your virtual machines on file shares is now supported. This is made possible thanks to technologies such as SMB Multichannel and SMB Direct, which, when combined, can match or even beat legacy storage protocols. You’ll learn how to use this new tier of storage, as well as how to build the new scalable and continuously available Scale-Out File Server architecture.
Chapter 8: Building Hyper-V Clusters gives you the knowledge of how to build highly available Hyper-V virtualization or cloud infrastructures. You’ll learn about the architecture, the roles of the networks, and best practices for building these clusters. Other subjects include host maintenance and Cluster-Aware Updating.
Chapter 9: Virtual SAN Storage and Guest Clustering reminds us that high availability is not limited to just hosts. The reason we have IT is to have services, and those services often require high availability. This chapter shows you how to build guest clusters, as well as how to take advantage of the new ability to virtualize Fibre Channel SANs.
Chapter 10: Backup and Recovery covers this critical task for IT in any business. Virtualization should make this easier. This chapter discusses how the Volume Shadow Copy Service (VSS) works with Hyper-V virtual machines, and how Windows Server 2012 has improved to support better backup of highly available virtual machines, as well as virtual machines that are stored on SMB3 file shares. This chapter also shows you how small businesses and lab environments can use Windows Server Backup to back up running virtual machines with application consistency.
Chapter 11: Disaster Recovery has great value to businesses. Being able to keep the business operating in the face of a disaster is something that all IT pros and businesses know should be done, but often has proven to be too difficult or expensive. This chapter discusses the theory of disaster recovery (DR) and business continuity planning (BCP), and how Hyper-V can make this achievable.
Chapter 12: Hyper-V Replica is a feature that has gotten a lot of attention since it was first announced; this is built-in disaster recovery replication that is designed to scale for large clouds and to deal with the complexities of the small business. This chapter explains how Hyper-V Replica works, how to deploy it, how to survive a disaster, and how to get your business back to a production site afterward.
Chapter 13: Using Hyper-V for Virtual Desktop Infrastructure gives you a free and scalable solution. Here you will learn how to engineer Hyper-V in this scenario and see how to deal with the unique demands of virtual machines that replace PCs instead of servers.

How to Contact the Authors

We welcome feedback from you about this book or about books you’d like to see from us in the future.

Aidan Finn can be reached by writing to [email protected]. For more information about his work, visit his website at www.aidanfinn.com. You can also follow Aidan on Twitter at @joe_elway.

Patrick Lownds can be contacted via email at [email protected]; you can also follow him on Twitter at @PatrickLownds.

Michel can be contacted by email at [email protected] or on Twitter at @michelluescher. For more information, read his blog at www.server-talk.eu.

Damian Flynn can be reached via email at [email protected]; you can follow him on Twitter at @damian_flynn and read his technology blog at www.damianflynn.com.

Sybex strives to keep you supplied with the latest tools and information you need for your work. Please check their website at www.sybex.com/go/winserver2012hypervguide, where we’ll post additional content and updates that supplement this book should the need arise.

Part 1

The Basics

Chapter 1: Introducing Windows Server 2012 Hyper-V

Chapter 2: Deploying Hyper-V

Chapter 3: Managing Virtual Machines

Chapter 1

Introducing Windows Server 2012 Hyper-V

One thing has remained constant in IT since the invention of the computer: change. Our industry has moved from highly centralized mainframes with distributed terminals, through distributed servers and PCs, and is moving back to a highly centralized model based on virtualization technologies such as Hyper-V. In this chapter, you will look at the shift that has been happening and will learn what has started to happen with cloud computing. That will lead you to Windows Server 2012 Hyper-V.

With the high-level and business stuff out of the way, you’ll move on to technology, looking at the requirements for Hyper-V, the scalability, and the supported guest operating systems.

You cannot successfully design, implement, manage, or troubleshoot Hyper-V without understanding the underlying architecture. This will help with understanding why you need to install or update some special software in virtual machines, why some features of virtual machines perform better than others, and why some advanced technologies such as Single-Root I/O Virtualization exist.

One subject that all techies love to hate is licensing, but it’s an important subject. Correctly licensing virtualization means that you keep the company legal, but it also can save the organization money. Licensing is like a sand dune, constantly changing and moving, but in this chapter you’ll look at how it works, no matter what virtualization platform you use.

We cannot pretend that VMware, the company that had uncontested domination of the virtualization market, does not exist. So this chapter presents a quick comparison of their solution and Microsoft’s products. This chapter also gives those who are experienced with VMware a quick introduction to Hyper-V.

We wrap up the chapter by talking about some other important things for you to learn. The most important step of the entire project is the assessment; it’s almost impossible to be successful without correct sizing and planning. Microsoft makes this possible via the free Microsoft Assessment and Planning Toolkit. One of the most important improvements in Windows Server 2012 is its PowerShell support. This might not be a PowerShell book, but you will see a lot of PowerShell in these pages. We introduce you to PowerShell, explain why you will want to learn it, and show you how to get started.
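If you want a head start, a quick and harmless way to explore is to ask PowerShell what the Hyper-V module contains. This is a minimal sketch; it assumes a Windows Server 2012 machine with the Hyper-V role and its PowerShell module already installed.

# Count the cmdlets that ship in the Hyper-V module.
Get-Command -Module Hyper-V | Measure-Object

# Read the built-in examples for a cmdlet before you try it.
Get-Help New-VM -Examples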

In this chapter, you’ll learn about

Virtualization and cloud computing

Hyper-V architecture, requirements, and supported guest operating systems

Sizing a Hyper-V project and using PowerShell

Virtualization and Cloud Computing

You have to understand where you have come from in order to know where you are going. In this section, you are going to look at how the IT world started in the mainframe era and is now moving toward cloud computing. You’ll also learn why this is relevant to Windows Server 2012 Hyper-V.

Computing of the Past: Client/Server

How computing has been done has changed—and in some ways, almost gone full circle—over the past few decades. Huge and expensive mainframes dominated the early days, providing a highly contended compute resource that a relatively small number of people used from dumb terminals. Those mainframes were a single and very expensive point of failure. Their inflexibility and cost became their downfall when the era of client/server computing started.

Cheap PCs that eventually settled mostly on the Windows operating system replaced the green-screen terminal. This gave users a more powerful device that enabled them to run many tasks locally. The lower cost and distributed computing power also enabled every office worker to use a PC, and PCs appeared in lots of unusual places in various forms, such as a touch-screen device on a factory floor, a handheld device that could be sterilized in a hospital, or a toughened and secure laptop in a military forward operating base.

The lower cost of servers allowed a few things to happen. Mainframes require lots of change control and are inflexible because of the risk of mistakes impacting all business operations. A server, or group of servers, typically runs a single application. That meant that a business could be more flexible. Need a new application? Get a new server. Need to upgrade that application? Go ahead, after the prerequisites are there on the server. Servers started to appear in huge numbers, and not just in a central computer room or datacenter. We now had server sprawl across the entire network.

In the mid-1990s, a company called Citrix Systems made famous a technology that went through many names over the years. Whether you called it WinFrame, MetaFrame, or XenApp, we saw the start of a return to the centralized computing environment. Many businesses struggled with managing PCs that were scattered around the WAN/Internet. There were also server applications that preferred the end user to be local, but those users might be located around the city, the country, or even around the world. Citrix introduced server-based computing, whereby users used a software client on a PC or terminal to log in to a shared server to get their own desktop, just as they would on a local PC. The Citrix server or farm was located in a central datacenter beside the application servers. End-user performance for those applications was improved. This technology simplified administration in some ways while complicating it in others (user settings, peripheral devices, and rich content transmission continue to be issues to this day). Over the years, server processor power improved, memory density increased on the motherboard, and more users could log in to a single Citrix server. Meanwhile, using a symbiotic relationship with Citrix, Microsoft introduced us to Terminal Services, which became Remote Desktop Services in Windows Server 2008.

Server-based computing was all the rage in the late 1990s. Many of those end-of-year predictions told us that the era of the PC was dead, and we’d all be logging into Terminal Servers or something similar in the year 2000, assuming that the Y2K (year 2000 programming bug) didn’t end the world. Strangely, the world ignored these experts and continued to use the PC because of the local compute power that was more economical, more available, more flexible, and had fewer compatibility issues than datacenter compute power.

Back in the server world, we also started to see several kinds of reactions to server sprawl. Network appliance vendors created technologies to move servers back into a central datacenter, while retaining client software performance and meeting end-user expectations, by enabling better remote working and consolidation. Operating systems and applications also tried to enable centralization. Client/server computing was a reaction to the extreme centralization of the mainframe, but here the industry was fighting to get back to those heady days. Why? There were two big problems:

There was a lot of duplication with almost identical servers in every branch office, and this increased administrative effort and costs.

There aren’t that many good server administrators, and remote servers were often poorly managed.

Every application required at least one operating system (OS) installation. Every OS required one server. Every server was slow to purchase and install, consumed rack space and power, generated heat (which required more power to cool), and was inflexible (a server hardware failure could disable an application). Making things worse, those administrators with adequate monitoring saw that their servers were hugely underutilized, barely using their CPUs, RAM, disk speed, and network bandwidth. This was an expensive way to continue providing IT services, especially when IT is a cost center rather than a profit center in most businesses.

Computing of the Recent Past: Virtualization

The stage was set for the return of another old-school concept. Some mainframes and high-end servers had the ability to run multiple operating systems simultaneously by sharing processor power. Virtualization is a technology whereby software will simulate the hardware of individual computers on a single computer (the host). Each of these simulated computers is called a virtual machine (also known as a VM or guest). Each virtual machine has a simulated hardware specification with an allocation of processor, storage, memory, and network that are consumed from the host. The host runs either a few or many virtual machines, and each virtual machine consumes a share of the resources.

A virtual machine is created instead of deploying a physical server. The virtual machine has its own guest OS that is completely isolated from the host. The virtual machine has its own MAC address(es) on the network. The guest OS has its own IPv4 and/or IPv6 address(es). The virtual machine is isolated from the host, having its own security boundary. The only things making it different from the physical server alternative are that it is a simulated machine that cannot be touched, and that it shares the host’s resources with other virtual machines.

Host Resources Are Finite
Despite virtualization being around for over a decade, and being a mainstream technology that is considered a CV/résumé must-have, many people still don’t understand that a host has finite resources. One unfortunate misunderstanding is the belief that virtual machines will extract processor, memory, network bandwidth, storage capacity/bandwidth out of some parallel underutilized universe.
In reality, every virtual machine consumes capacity from its host. If a virtual machine is using 500 GB of storage, it is taking 500 GB of storage from the host. If a virtual machine is going to use 75 percent of a six-core processor, that machine is going to take that processor resource from the host. Each virtual machine is competing with every other virtual machine for host resources. It is important to understand this, to size hosts adequately for their virtual machines, and to implement management systems that will load-balance virtual machines across hosts.
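A quick way to make this real is to compare what has been assigned to virtual machines with what the host actually owns. The following is a minimal sketch using standard cmdlets on a Windows Server 2012 Hyper-V host; MemoryAssigned reflects running virtual machines, so treat the output as a point-in-time snapshot rather than a capacity plan.

# Physical memory in the host, in GB.
$hostMemoryGB = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB

# Memory currently assigned to virtual machines on this host, in GB.
$vmMemoryGB = (Get-VM | Measure-Object -Property MemoryAssigned -Sum).Sum / 1GB

"Host memory: {0:N1} GB; assigned to VMs: {1:N1} GB" -f $hostMemoryGB, $vmMemoryGB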

There are two types of virtualization software for machine virtualization, shown in Figure 1-1:

Type 1 Also known as a hypervisor, a Type 1 virtualization solution runs directly on the hardware.
Type 2 A Type 2 virtualization solution is installed on an operating system and relies on that operating system to function.

VMware’s ESX (and then ESXi, a component of vSphere) is a Type 1 virtualization product. Microsoft’s earlier server virtualization solution, Virtual Server, was a Type 2 product, and was installed on top of Windows Server 2003 and Windows Server 2003 R2. Type 2 virtualization did see some deployment but was limited in scale and performance and was dependent on its host operating system. Type 1 hypervisors have gone on to be widely deployed because of their superior scalability, performance, and stability. Microsoft released Hyper-V with Windows Server 2008. Hyper-V is a true Type 1 product, even though you do install Windows Server first to enable it.

Figure 1-1 Comparing Type 1 and Type 2 virtualization

The early goal of virtualization was to take all of those underutilized servers and run them as virtual machines on fewer hosts. This would reduce the costs of purchasing, rack space, power, licensing, and cooling. Back in 2007, an ideal goal was to have 10 virtual machines on every host. Few would have considered running database servers, or heavy-duty or critical workloads, on virtual machines. Virtualization was just for lightweight and/or low-importance applications.

The IT world began to get a better understanding of virtualization and started to take advantage of some of its traits. A virtual machine is usually just a collection of files. Simulated hard disks are files that contain a file system, operating system, application installations, and data. Machine configurations are just a few small files. Files are easy to back up. Files are easy to replicate. Files are easy to move. Virtual machines are usually just a few files, and that makes them relatively easy to move from host to host, either with no downtime or as an automated reaction to host failure. Virtualization had much more to offer than cost reduction. It could increase flexibility, and that meant the business had to pay attention to this potential asset:

Virtual machines can be rapidly deployed as a reaction to requests from the business.

Services can have previously impossible levels of availability despite preventative maintenance, failure, or resource contention.

Backup of machines can be made easier because virtual machines are just files (usually).

Business continuity, or disaster recovery, should be a business issue and not just an IT one; virtualization can make replication of services and data easier than traditional servers because a few files are easier to replicate than a physical installation.

Intel and AMD improved processor power and core densities. Memory manufacturers made bigger DIMMs. Server manufacturers recognized that virtualization was now the norm, and servers should be designed to be hosts instead of following the traditional model of one server equals one OS. Servers also could have more compute power and more memory. Networks started the jump from 1 GbE to 10 GbE. And all this means that hosts could run much more than just 10 lightweight virtual machines.

Businesses want all the benefits of virtualization, particularly flexibility, for all their services. They want to dispense with physical server installations and run as many virtual machines as possible on fewer hosts. This means that hosts are bigger, virtualization is more capable, the 10:1 ratio is considered ancient, and bigger and critical workloads are running as virtual machines when the host hardware and virtualization can live up to the requirements of the services.

Virtualization wasn’t just for the server. Technologies such as Remote Desktop Services had proven that a remote user could get a good experience while logging in to a desktop on a server. One of the challenges with that kind of server-based computing was that users were logging in to a shared server, where they ran applications that were provided by the IT department. A failure on a single server could impact dozens of users. Change control procedures could delay responses to requests for help. What some businesses wanted was the isolation and flexibility of the PC combined with the centralization of Remote Desktop Services. This was made possible with virtual desktop infrastructure (VDI). The remote connection client, installed on a terminal or PC, connected to a broker when the user started work. The broker would forward the user’s connection to a waiting virtual machine (on a host in the datacenter) where they would log in. This virtual machine wasn’t running a server guest OS; it was running a desktop OS such as Windows Vista or Windows 7, and that guest OS had all of the user’s required applications installed on it. Each user had their own virtual machine and their own independent working environment.

The end-of-year predictions from the analysts declared it the year of VDI, for about five years running. Each year was to be the end of the PC as we switched over to VDI. Some businesses did make a switch, but they tended to be smaller. In reality, the PC continues to dominate, with Remote Desktop Services (now often running as virtual machines) and VDI playing roles to solve specific problems for some users or offices.

Computing of the Present: Cloud Computing

We could argue quite successfully that the smartphone and the tablet computer changed how businesses view IT. Users, managers, and directors bought devices for themselves and learned that they could install apps on their new toys without involving the IT department, which always has something more important to do and is often perceived as slowing business responsiveness to threats and opportunities. OK, IT still has a place; someone has to build services, integrate them, manage networks, guarantee levels of service, secure the environment, and implement regulatory compliance.

What if the business could deploy services in some similar fashion to the app on the smartphone? When we say the business, we mean application developers, testers, and managers; no one expects the accountant who struggles with their username every Monday to deploy a complex IT service. With this self-service, the business could deploy services when they need them. This is where cloud computing becomes relevant.

Cloud computing is a term that started to become well-known in 2007. The cloud can confuse, and even scare, those who are unfamiliar with it. Most consider cloud computing to mean outsourcing, a term that sends shivers down the spine of any employee. This is just one way that the cloud can be used. The National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, published The NIST Definition of Cloud Computing (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) that has become generally accepted and is recommended reading.

There are several traits of a cloud:

Self-Service Users can deploy the service when they need it without an intervention by IT.
Broad Network Access There is a wide range of network connectivity for the service.
Resource Pooling There is a centralized and reusable collection of compute power and resources.
Rapid Elasticity There is ample compute power and resources available if more is required, enabling the user to consume resources as required with no long-term commitment.
Measured Service Resource utilization can be measured, and the information can be used for reporting or cross-charging.
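In Hyper-V terms, the measured service trait maps to resource metering, which is covered in Chapter 5. As a hedged illustration (the virtual machine name is hypothetical), metering is switched on per virtual machine and queried later:

# Start collecting CPU, memory, disk, and network usage for one virtual machine.
Enable-VMResourceMetering -VMName "Tenant-Web01"

# Later, report the averages and totals gathered since metering was enabled.
Measure-VM -VMName "Tenant-Web01"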

Nothing in the traits of a cloud says that cloud computing is outsourcing. In reality, outsourcing is just one deployment model of possible clouds, each of which must have all of the traits of a cloud:

Public A public cloud is one that is run by a service provider in its own facility. The resources are shared by the tenants (customers).
Private A private cloud comes in two forms. It could be a cloud that is run by a service provider but is dedicated to a single customer. Or a private cloud could be one that is run internally by an organization, with absolutely no outsourcing. The private cloud is the ultimate in server centralization.
Hybrid This is where there is a connection between a private cloud and a public cloud, and the user can choose the best location for the new service, which could even be to span both clouds.
Community In a community cloud, numerous organizations work together to combine their compute resources. This will be a rare deployment in private enterprise, but could be useful in collaborative research environments.

Microsoft’s Windows Azure and Office 365, Amazon Elastic Compute Cloud (EC2), Google Docs, Salesforce, and even Facebook are all variations of a public cloud. Microsoft also has a private cloud solution that is based on server virtualization (see Microsoft Private Cloud Computing, Sybex 2012). These are all very different service models that fall into one of three categories:

Software as a Service A customer can subscribe to a Software as a Service (SaaS) product instead of deploying a service in their datacenter. This gives them rapid access to a new application. Office 365 and Salesforce are examples of SaaS.
Platform as a Service A developer can deploy a database and/or application on a Platform as a Service (PaaS) instead of on a server or a virtual machine’s guest OS. This removes the need to manage a guest OS. Facebook is a PaaS for game developers, and Windows Azure offers PaaS.
Infrastructure as a Service Infrastructure as a Service (IaaS) provides machine virtualization through one of the deployment models and complying with the traits of a cloud. This offers a familiar working environment with maximized flexibility and mobility between clouds.

Windows Server 2012 Hyper-V can be used to create the compute resources of an IaaS cloud of any deployment type that complies with the traits of a cloud. To complete the solution, you will have to use System Center 2012 with Service Pack 1, which can also include VMware vSphere and Citrix XenServer as compute resources in the cloud.

Cloud computing has emerged as the preferred way to deploy services in an infrastructure, particularly for medium to large enterprises. This is because those organizations usually have different teams or divisions for managing infrastructure and applications, and the self-service nature of a cloud empowers the application developers or managers to deploy new services as required, while the IT staff manage, improve, and secure the infrastructure.

The cloud might not be for everyone. If the same team is responsible for infrastructure and applications, self-service makes no sense! What they need is automation. Small to medium enterprises may like some aspects of cloud computing such as self-service or resource metering, but the entire solution might be a bit much for the scale of their infrastructure.

Windows Server 2012: Beyond Virtualization

Microsoft was late to the machine virtualization competition when they released Hyper-V with Windows Server 2008. Subsequent versions of Hyper-V were released with Windows Server 2008 R2 and Service Pack 1 for Windows Server 2008 R2. After that, Microsoft spent a year talking to customers (hosting companies, corporations, industry experts, and so on) and planning the next version of Windows. Microsoft wasn’t satisfied with having a merely competitive, or even the best, virtualization product. Microsoft wanted to take Hyper-V beyond virtualization—and to steal their marketing tag line, they built Windows Server 2012 “from the cloud up.”

Microsoft arguably has more experience running huge, mission-critical clouds than any other organization. Hotmail (since the mid-1990s) and Office 365 are SaaS public clouds. Azure started out as a PaaS public cloud but has begun to include IaaS as well. Microsoft has been doing cloud computing longer, at bigger scale, and across more services than anyone else. They understood cloud computing a decade before the term was invented. And that gave Microsoft a unique advantage when redesigning Hyper-V to be the strategic foundation of the Microsoft cloud (public, private, and hybrid).

Several strategic areas were targeted with the release of Windows Server 2012 and the newest version of Hyper-V:

Automation A cloud requires automation. Microsoft built their scripting and administration language, PowerShell, into Windows Server 2012. The operating system has over 2,500 cmdlets (pronounced command-lets) that manage Windows Server functionality. There are over 160 PowerShell cmdlets for Hyper-V.
Using PowerShell, an administrator can quickly apply a configuration change to many virtual machines (as shown in the example after this list). An engineer can put together a script to deploy complex networking on a host. A consultant can write a script to build a cluster. A cloud can use PowerShell to automate complex tasks that enable self-service deployment or configuration.
Networking One of the traits of a cloud is broad network access. This can mean many things to many people. It appears that Microsoft started with a blank sheet with Windows Server 2012 and redeveloped networking for the cloud. Performance was increased, availability was boosted with built-in NIC teaming, the limit of VLAN scalability in the datacenter was eliminated by introducing network virtualization and software-defined networking, partner extensibility was added to the heart of Hyper-V networking, and the boundary of subnets for service mobility was removed.
Storage It became clear to Microsoft that customers and service providers were struggling with storage. It was difficult to manage (a problem for self-service), it was expensive (a major problem for service providers), and customers wanted to make the most of their existing investments.
Some of the advances in networking enabled Microsoft to introduce the file server as a new, supported, economical, scalable, and continuously available platform for storing virtual machines. Industry standards were added to support management of storage and to increase the performance of storage.
Worker Mobility It’s one thing to have great services, but they are pretty useless if users cannot access them the way they want to. Previous releases introduced some new features to Windows Server in this area, but Microsoft didn’t rest.
DirectAccess is Microsoft’s seamless VPN alternative, which has seen relatively little adoption. In Windows Server 2012, the deployment of DirectAccess was simplified (to a few mouse clicks in Server Manager), the requirements were reduced (you no longer need IPv6 in the datacenter or Forefront Unified Access Gateway), and performance was increased at the client end in Windows 8 Enterprise.
Microsoft’s VDI solution in Windows Server 2008 R2 was mind-boggling, with many moving pieces in the puzzle. Microsoft simplified their VDI architecture into a scenario-based wizard in Server Manager. The Remote Desktop Protocol (RDP), the protocol used to connect users to remote desktops such as VDI virtual machines, was improved so much that Microsoft branded the enhancements RemoteFX. Microsoft has tackled the challenges of peripherals being used on the client, streaming rich media, and quality of service over long-distance connections such as WANs and the Internet.
The Cloud Pretty much every improvement made in Windows Server 2012 Hyper-V plays a role in a public, private, or hybrid cloud. A number of technologies were put in place specifically for cloud deployments, such as Resource Metering. This new feature records the resource utilization of individual virtual machines, giving you one of the NIST traits of a cloud (see the example after this list).
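To make the Automation and Resource Metering points above a little more concrete, here is a minimal PowerShell sketch. The virtual machine names and memory values are illustrative placeholders, not recommendations; the sketch applies a Dynamic Memory configuration to a group of virtual machines and then enables and reads Resource Metering.

# Load the Hyper-V module (PowerShell 3.0 loads it automatically;
# shown here for clarity).
Import-Module Hyper-V

# Apply one configuration change to many virtual machines at once.
# The "Web*" name pattern is a placeholder for your own VMs.
# Dynamic Memory can only be changed while a VM is off, so the
# running VMs are shut down first and restarted afterward.
Get-VM -Name "Web*" |
    Stop-VM -Passthru |
    Set-VM -DynamicMemory `
           -MemoryStartupBytes 1GB `
           -MemoryMinimumBytes 512MB `
           -MemoryMaximumBytes 4GB `
           -Passthru |
    Start-VM

# Enable Resource Metering on the same virtual machines, and later
# read back the accumulated CPU, memory, disk, and network figures.
Get-VM -Name "Web*" | Enable-VMResourceMetering
Get-VM -Name "Web*" | Measure-VM

Because every step is a cmdlet, the same lines can be dropped into a larger script or a self-service workflow, which is exactly the point of building PowerShell into the operating system.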

We could argue that in the past Microsoft’s Hyper-V competed with VMware’s ESXi on a price versus required functionality basis. If you license your virtual machines correctly (and that means legally and in the most economical way), Hyper-V is free. Microsoft’s enterprise management, automation, and cloud package, System Center, was the differentiator, providing an all-in-one, deeply integrated, end-to-end deployment, management, and service-delivery package. The release of Windows Server 2012 Hyper-V is different. This is a release of Hyper-V that is more scalable than the competition, is more flexible than the competition, and does things that the competition cannot do (at the time of writing this book). Being able to compete on both price and functionality, and being designed to be a cloud compute resource, makes Hyper-V very interesting for the small and medium enterprise (SME), the large enterprise, and the service provider.

Windows Server 2012 Hyper-V

In this section, you will start to look at the technical aspects of Windows Server 2012 Hyper-V.

The Technical Requirements of Hyper-V

The technical requirements of Windows Server 2012 Hyper-V are pretty simple:

Windows Server 2012 Logo To get support from Microsoft, you should ensure that your hardware (including optional components) has been successfully Windows Server 2012 logo tested. You can check with the manufacturer and on the Microsoft Hardware Compatibility List (HCL) for Windows Server (www.windowsservercatalog.com).
If you’re just going to be testing, the logo isn’t a requirement, but it will be helpful. There is a very strong chance that if your machine can run Windows Server 2008 x64 or Windows Vista (this includes PCs and laptops), it will run Windows Server 2012. You should check with the hardware manufacturer for support.
64-Bit Processor Microsoft is releasing only 64-bit versions of Windows Server, and Hyper-V requires an x64 processor.
32-Bit and 64-Bit Guest Operating Systems You can run both x86 and x64 guest operating systems in Hyper-V virtual machines.
CPU-Assisted Virtualization The processor must support CPU-assisted virtualization, and this feature must be turned on in the settings of the host machine. Intel refers to this as VT-x, and AMD calls it AMD-V.
Data Execution Prevention In a buffer overrun attack, a hacker writes an instruction into data memory with the deliberate intention of getting the processor to execute malicious code. With Data Execution Prevention (DEP) enabled, memory containing data is tagged so that it can never be executed by the processor. This prevents the attack from succeeding. DEP must be available in the server’s BIOS and must be enabled in the host machine’s settings for Hyper-V to install or start up. This protects the inner workings of Hyper-V from malicious attacks by someone who has logged in to a virtual machine on the host. Intel refers to DEP as the XD bit (Execute Disable bit), and AMD calls it the NX bit (No Execute bit). See your hardware manufacturer’s documentation for more information. Every server from a major manufacturer should have this support. Usually issues occur only on consumer-grade PCs and laptops.
Second Level Address Translation There was some confusion when Microsoft announced that the desktop version of Windows, Windows 8 (Pro and Enterprise editions), would support Client Hyper-V. This is the same Hyper-V as on Windows Server 2012, but without server functionality such as clustering, Live Migration, NIC teaming, and so on.