Building VMware Software-Defined Data Centers

Valentin Hamburger
Description

Make the most of software-defined data centers with revolutionary VMware technologies

About This Book

  • Learn how you can automate your data center operations and deploy and manage applications and services across your public, private, and hybrid infrastructure in minutes
  • Drive great business results with cost-effective solutions without compromising on ease, security, and controls
  • Transform your business processes and operations in a way that delivers any application, anywhere, with complete peace of mind

Who This Book Is For

If you are an IT professional or VMware administrator who virtualizes data centers and IT infrastructures, this book is for you. Developers and DevOps engineers who deploy applications and services will also find it useful, as will data center architects and CXO-level decision makers.

What You Will Learn

  • Understand and optimize end-to-end processes in your data center
  • Translate IT processes and business needs into a technical design
  • Create vRO workflow automation functionalities and apply them to services
  • Deploy NSX in a virtual environment
  • Technically accomplish DevOps offerings
  • Set up and use vROps to master the SDDC resource demands
  • Troubleshoot all the components of SDDC

In Detail

VMware offers the industry-leading software-defined data center (SDDC) architecture that combines compute, storage, networking, and management offerings into a single unified platform. This book uses the most up-to-date, cutting-edge VMware products to help you deliver a complete unified hybrid cloud experience within your infrastructure.

It will help you build a unified hybrid cloud based on SDDC architecture and practices to deliver a fully virtualized infrastructure with cost-effective IT outcomes. In the process, you will use some of the most advanced VMware products, such as vSphere, vCloud, and NSX.

You will learn how to use vSphere virtualization in a software-defined approach, which will help you to achieve a fully virtualized infrastructure and to extend this infrastructure for compute, network, and storage-related data center services. You will also learn how to use EVO:RAIL. Next, you will see how to provision applications and IT services on private clouds or IaaS with seamless accessibility and mobility across the hybrid environment.

This book will ensure you develop an SDDC approach for your data center that fulfills your organization's needs and tremendously boosts your agility and flexibility. It will also teach you how to draft, design, and deploy toolsets and software to automate your data center and speed up IT delivery to meet the demands of your lines of business. In the end, you will build unified hybrid clouds that dramatically boost your IT outcomes.

Style and approach

With the ever-changing nature of businesses and enterprises, having the capability to navigate through the complexities is of utmost importance. This book takes an approach that combines industry expertise with revolutionary VMware products to deliver a complete SDDC experience through practical examples and techniques, with proven cost-effective benefits.


Table of Contents

Building VMware Software-Defined Data Centers
Credits
About the Author
About the Reviewer
www.PacktPub.com
eBooks, discount offers, and more
Why subscribe?
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the color images of this book
Errata
Piracy
Questions
1. The Software-Defined Data Center
The demand for change
Business challenges: The use case
The business view
The IT view
Tools to enable SDDC
The implementation journey
The process category
The process change example in Tom's organization
The people category
The people example in Tom's organization
The technology category
The technology example in Tom's organization
Why are these three topics so important?
Additional possibilities and opportunities
The self-healing data center
The self-scaling data center
Summary
2. Identify Automation and Standardization Opportunities
Automation principles
Day two automation
The 80:20 rule
Think big, start small
The efficiency bottleneck
Bringing it all together
Script or workflow
Identifying processes and how to automate them
IT delivery frameworks
What if no CMDB or ticket management is in place
Achieving standardization
Deployment standards
Organization automation examples
Simple VM deployment
The hybrid cloud deployment
The analysis of the hybrid cloud deployment
The better approach
Summary
3. VMware vSphere: The SDDC Foundation
Basics and recommendations for vSphere in the SDDC
Distributed Resource Scheduler
Resource pools
Storage DRS
Distributed Virtual Switch
Host Profiles
vSphere configuration considerations
Separate management cluster
Management cluster resource considerations
Separate management VDS
The payload cluster
The resource pool approach
The cluster approach
Storage Policy Based Management
SPBM definition
Integrated vSphere automation
Best practices and recommendations
Summary
4. SDDC Design Considerations
The business use case
The business challenge
The CIO challenge
Constraints, assumptions, and limitations
Constraints
Limits
Assumptions
Scalability and future growth
vRealize Automation
vRealize Code Stream
vRealize Orchestrator
vRealize Operations Manager
vRealize Business
vRealize Log Insight
NSX
Design and relations of SDDC components
Logical overview of the SDDC clusters
Logical overview of the solution components
The vRealize Automation design
Small
Enterprise
Infrastructure design examples
Network
Storage
Compute
Designing the tenants
Tenants, business groups, and infrastructure fabrics
What is a tenant?
What is a business group?
What is a fabric group?
What is the infrastructure fabric?
What must be included in the design
What if the vSphere environment is already running?
Summary
5. VMware vRealize Automation
vRA installation
First things first
Advanced installation configuration
vRA concepts
vRA's little helper
DEM
The IaaS server
vRealize Orchestrator
The Infrastructure tab
Endpoints
Compute Resources
Reservations
Managed Machines
The Administration tab
Approval Policies
Directories Management
Catalog Management
Property Dictionary
Reclamation
Branding
Notifications
Events
vRO configuration
vRA concepts
As a Service synonyms
IaaS
PaaS
XaaS
Blueprints
Single machine blueprints
Multimachine blueprints
Application automation
Sample configurations
Template preparation in vCenter
Creating a network pool
Creating a set of properties
Creating the IaaS blueprint
Publishing the blueprint as a service
Summary
6. vRealize Orchestrator
vRealize Orchestrator principles
Workflow elements and design
Attributes, inputs, and outputs
Inputs
Attributes
Outputs
Configurations
Workflow elements
Workflow creation 101
Creating the workflow
Integrating the workflow into vRA
Adding the properties to the blueprint
External services
Connecting vRO to vCenter
vRO context actions in vCenter
Finding and enabling context actions
Enabling a context-based workflow
Summary
7. Service Catalog Creation
Service catalogs
Defining a catalog
Multiple catalogs
Catalogs: As few as possible, as many as required
Provide basic catalogs as well as specific catalogs
Choose a descriptive and short name
Outcome-oriented versus technology-oriented
Know your audience
Service catalog creation in vRA
First step: Creating the catalog
Second step: Publishing catalog items
Third step: Entitling a service
Multimachine blueprint design example
Software components
Sample application design
Defining the components
Apache web server
PHP web component
MySQL web component
FST Industries web component
FST Industries DB component
Defining the blueprint
Summary
8. Network Virtualization using NSX
Network Virtualization 101
Current networking infrastructures
VLAN: Network virtualization known for almost 30 years
Traditional routing and security
Modern network approach
L3 Networking - the new architecture
Network virtualization for the rescue
NSX terminology
VXLAN
EDGE
Logical Switches
VTEP
NSX controller
NSX setup and preparation
ESXi prerequisites for VXLAN / NSX
Network prerequisites for NSX
Step 1: Installing NSX manager
Step 2: Setting up the components
Prepare the ESXi hosts
Deploy the NSX controller nodes
Defining the segment ID
Configuring the transport parameters
Set up the transport zone
Step 3: Virtual networking 101
Add a Logical Switch
Add a Distributed Logical Router
Add an EDGE services Gateway
Dynamic routing between virtual and physical
Connecting vRealize Automation
Network reservations
Setting up NSX network profiles
The external profile
The NAT profile
The routed profile
Using NSX network profiles in blueprint
Summary
9. DevOps Considerations
What is DevOps
Agility meets policies
How does DevOps work
What are containers
Containers are not VMs
Container host: Virtual or physical
DevOps and Shadow IT
Radical new IT approach
Cattle versus pets
Changing the organizational culture
PaaS as part of DevOps
The Cloud Foundry framework
Cloud Foundry and the SDDC
vRealize Code Stream: DevOps without containers
All about the pipeline
vRealize Code Stream integration
SDDC and DevOps: A mixed world
DevOps requirements
Enterprise requirements
Legacy and DevOps: Coexistence in one environment
Use DevOps principles to manage the SDDC
Summary
10. Capacity Management with vRealize Operations
Capacity monitoring in the SDDC
vRealize Operations Manager
vROps 6.3 deployment workflow
Capacity monitoring
Overprovisioning and resource allocation
Navigating vRealize Operations Manager
Capacity remaining
Capacity planning
Projects in vRealize Operations Manager
Reports in vRealize Operations Manager
Views in vRealize Operations Manager
Summary
11. Troubleshooting and Monitoring
Monitoring and analytics in the SDDC
The risk of false positives
Management versus payload monitoring
Management monitoring
Payload monitoring
KPIs versus thresholds
vRealize Operations Manager
Analytics using vRealize Operations Manager
Exploring vRealize Operations Manager anomalies
Badges and what they describe
The Health badge and how to read it
The Risk badge and how to read it
The Efficiency badge and how to read it
Service health information in vRealize Automation
Log management in the SDDC
Millions of log entries
Log management from the big data perspective
vRealize Log Insight
SDDC components to add to vRealize Log Insight
How to analyze logs using vRLI
Using the Interactive Analytics View
Creating and using dashboards
The pro-active analytics features
Summary
12. Continuous Improvement
Continual Service Improvement
Technical assurance
Reviewing blueprints
Reviewing automation and integration
Revisiting the business case
ITIL in the SDDC
Matching the requirements to the solution
Applying continuous service improvement to the SDDC
Summary

Building VMware Software-Defined Data Centers

Building VMware Software-Defined Data Centers

Copyright © 2016 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2016

Production reference: 1061216

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham 

B3 2PB, UK.

ISBN 978-1-78646-437-8

www.packtpub.com

Credits

Author

Valentin Hamburger

Copy Editors

Safis Editing

Dipti Mankame

Reviewer

Daniel Koeck

Project Coordinator

Judie Jose

Commissioning Editor

Kartikey Pandey

Proofreader

Safis Editing 

Acquisition Editor

Vijin Boricha

Indexer

Pratik Shirodkar

Content Development Editor

Rashmi Suvarna

Graphics

Kirk D'Penha

Technical Editor

Gaurav Suri

Production Coordinator

Shantanu N. Zagade

About the Author

Valentin Hamburger worked at VMware for more than seven years. In his former role, he was a lead consulting architect and took care of the delivery and architecture of cloud projects in central EMEA. In his current role, he is the EMEA solutions lead for VMware at Hitachi Data Systems (HDS). Furthermore, he works as an advisor with HDS engineering on the Hitachi Enterprise Cloud, which is based on VMware vRealize technology. He holds many industry certifications in various areas, such as VMware, Linux, and IBM Power compute environments. He serves as a partner and trusted advisor to HDS customers, primarily in EMEA. His main responsibilities are ensuring that HDS's future innovations align with essential customer needs and translating customer challenges into opportunities focused on virtualization topics. Valentin enjoys sharing his knowledge as a speaker at national and international conferences such as VMworld.

I want to personally thank Daniel Koeck for reviewing the technical content of this book and providing such valuable and productive input. Besides his technical expertise, I am happy to have him as a friend and supporter of this book. Furthermore, I want to thank my beautiful wife and daughter for their patience and understanding while I was writing this book. Without their support and love, this wouldn't have been possible at all. Finally, I want to thank Rashmi Suvarna, who had patience with me as an author and supported me wherever she could in order to get all this work done.

About the Reviewer

Daniel Koeck has been working in IT for 15 years. Over the last 6 years, he has led large-scale projects (more than 20,000 VMs), ranging from service provider clouds to DevOps-enabled large-scale software solutions. He holds a degree in applied computer science and IT security. Daniel is an IBM Redbooks Gold author and has co-authored many other books and whitepapers about x86 virtualization. He is regularly invited as a speaker to universities and technology conferences all over Europe and the USA, and enjoys sharing his experience there. You can find him on Twitter at @Cloudsandwakes.

www.PacktPub.com

eBooks, discount offers, and more

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

  • Fully searchable across every book published by Packt
  • Copy and paste, print, and bookmark content
  • On demand and accessible via a web browser

Preface

This book uses the most up-to-date, cutting-edge VMware products to help you deliver a complete unified hybrid cloud experience within your infrastructure.

It will help you build an SDDC architecture and practices to deliver a fully virtualized infrastructure with cost-effective IT outcomes. In the process, you will use some of the most advanced VMware products such as vSphere, vRealize Automation and Orchestrator, and NSX. You will see how to provision applications and IT services on private clouds or IaaS with seamless accessibility and mobility across the hybrid environment.

This book will ensure that you develop an SDDC approach for your data center that fulfills your organization's business needs and tremendously boosts your agility and flexibility. It will also teach you how to draft, design, and deploy toolsets and software to automate your data center and speed up IT delivery to meet the demands of your lines of business. In the end, you will build unified hybrid clouds that dramatically boost your IT outcomes.

What this book covers

Chapter 1, The Software-Defined Data Center, discusses principles and basics of the SDDC. Besides the technical aspects, it will also highlight the organizational aspects: the SDDC is a new way of managing and running a data center and therefore also an architectural change. It will also describe the implementation journey and what needs to be taken into account beyond the technological aspects.

Chapter 2, Identify Automation and Standardization Opportunities, highlights the main principles of automation and standardization. The differences between scripts and workflows are described. It will also bring examples of how to apply standardization and automation to the data center in order to make the SDDC as flexible and agile as possible.

Chapter 3, VMware vSphere: The SDDC Foundation, covers important vSphere functions, which will decrease the amount of customization when it comes to automation. Since virtualization is the base of an SDDC, this chapter will focus on examples and configurations for vSphere. This chapter will discuss advanced vSphere functions and their importance for an SDDC.

Chapter 4, SDDC Design Considerations, explains the main principles of an SDDC design, including detailed examples. It also highlights what assumptions, constraints, and limits are and how they influence a design. Furthermore, it will show a simple-to-follow approach to translating business challenges into a technical solution and therefore into an agile and efficient SDDC design.

Chapter 5, VMware vRealize Automation, introduces vRA (formerly known as vCloud Automation Center) and its capabilities. The implementation of the design considerations from the previous chapter will be discussed, and it will show other important configuration options, principles, and concepts. Also, it will focus on the creation of so-called blueprints and what is needed to prepare a VM template for deployment.

Chapter 6, vRealize Orchestrator, touches on what workflows are and how they can be developed in a controlled and clean manner. It will highlight how to integrate those into vRealize Automation to create powerful services for almost any task in the SDDC. In addition, it will discuss what postdeployment third-party integration can be achieved using vRO (for example, IPAM and CMDB integration).

Chapter 7, Service Catalog Creation, brings up the basic service catalog design. Also, it bridges the business case to the service catalog and describes why that is important and how that sync can be achieved. Based on an example, it will explain how to configure an outcome-focused service catalog in vRealize Automation.

Chapter 8, Network Virtualization using NSX, discusses software-defined networking principles. It highlights NSX basic functions and configurations and why it is a game changer within the SDDC. With NSX, broad data center automation can be fully achieved by gaining maximal flexibility and agility for service deployments. It will also cover the base configuration and integration with SDDC based on practical examples and detailed integration descriptions.

Chapter 9, DevOps Considerations, describes DevOps in general and what changes it brings to IT and the SDDC. It discusses most of the modern technologies to run DevOps including containers and container frameworks such as Pivotal Cloud Foundry. Furthermore, it describes a DevOps approach to run and manage the SDDC itself using VMware vRealize Code Stream Management Pack for IT DevOps. This will add additional agility and flexibility when it comes to managing and operating the SDDC.

Chapter 10, Capacity Management with vRealize Operations, shows how important proper capacity management is in a fully automated data center. It will highlight techniques and principles for successfully planning infrastructure expansion. It provides practical configuration examples for resource planning and predictive capacity maintenance.

Chapter 11, Troubleshooting and Monitoring, explains the monitoring and analytics methods for the SDDC. Since an automated data center might have different challenges in terms of monitoring, it further highlights the differences from static infrastructure and why it is important to have a smart monitoring and analytics approach for the SDDC. It will describe how to limit the impact of issues with smart and predictive troubleshooting and analytics methods, including the use of vRealize Log Insight.

Chapter 12, Continuous Improvement, stresses the importance of continuously working on the services and processes within the SDDC. Once the SDDC is deployed and functions properly, it is time to reflect and perhaps update the created services. The chapter shows how important it is to detect possible process flaws or glitches and update them. Furthermore, it summarizes the importance of ITIL in a modern data center and explains that the SDDC is basically the fully automated version of ITIL, bringing all its benefits to life without drawbacks such as bureaucratic overhead.

What you need for this book

  • vRealize Automation
  • vRealize Orchestrator
  • vRealize Operations Manager
  • vRealize Log Insight
  • vRealize Code Stream
  • vRealize Code Stream Management Pack for IT DevOps
  • VMware vSphere
  • VMware NSX

Who this book is for

If you are an IT professional or VMware administrator who virtualizes data centers and IT infrastructures, this book is for you. Developers and DevOps engineers who deploy applications and services will also find it useful, as will data center architects and CXO-level decision makers.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Provide a meaningful name such as Backup."

Any command-line input or output is written as follows:

msdtc -uninstall

A block of code is set as follows:

#!/bin/bash
# Turn off iptables for app server access
/sbin/service iptables stop

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Click OK to store the new property."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/BuildingVMwareSoftwaredefinedDataCenters_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Chapter 1. The Software-Defined Data Center

Originally, the term software-defined data center (SDDC) was introduced by VMware to describe the move to a cloud-like IT experience. The term software-defined is an important bit of information. It basically means that every key function in the data center is performed and controlled by software instead of hardware. This opens a whole new way of operating, maintaining, and innovating in a modern data center.

But what does a so-called SDDC look like, and why is a whole industry pushing so hard towards its adoption? This question might also be a reason why you are reading this book, which is meant to provide a deeper understanding of the SDDC and give practical examples and hints on how to build and run such a data center. Meanwhile, it will also teach the practice of mapping business challenges to IT solutions, which is becoming more and more important these days.

IT has come a long way from a pure back office, task-oriented role in the early days to a business-relevant asset that can help organizations compete in their markets. There has been a major shift from a pure infrastructure provider role to a business enablement function. Today, most organizations' business is only as good as their internal IT's agility and ability to innovate. There are many examples in various markets where a whole business branch was built on IT innovations, such as Netflix, Amazon Web Services (AWS), Uber, and Airbnb, just to name a few.

However, it is unfair to compare any startup with a traditional organization. A startup has one application to maintain and has to build up a customer base.

A traditional organization has a wide customer base and many applications to maintain. It needs to adapt its internal IT to become a digital enterprise, with all the flexibility and agility of a startup, while maintaining the trust and control over its legacy services.

This chapter will cover the following points:

  • Why there is a demand for SDDC in IT
  • What SDDC is
  • Understanding the business challenges and mapping them to SDDC deliverables
  • The relation of an SDDC to an internal private cloud
  • Identifying new data center opportunities and possibilities
  • Becoming a center of innovation to empower your organization's business

The demand for change

Today, organizations face various challenges to stay relevant in the market. The biggest shift was clearly introduced by smartphones and tablets. They were not just computers in a smaller device; they changed the way IT is delivered and consumed by end users. These devices proved that it can be simple to consume and install applications. Just search in an app store, choose what you like, and use it as long as you like it. If you do not need it any longer, simply remove it. All with very simple commands and easy-to-use gestures.

More and more people rely on IT services, using a smartphone as their terminal to almost everything. These devices created a demand for fast and easy application and service delivery. So, in a way, smartphones have not only transformed the whole mobile market, they have also transformed how modern applications and services are delivered from organizations to their customers.

Although it would be quite unfair to compare a large enterprise data center with an app store, or enterprise service delivery with app installs on a mobile device, there are startups and industries that rely solely on the smartphone as their target for services, such as Uber or WhatsApp.

On the other hand, smartphone apps also introduce a whole new way of delivering IT services, since a company never knows how many people will use the app simultaneously. In the backend, they still have to use web servers and databases to continuously provide content and data for these apps.

This also introduces a new value model for all other companies. People have started to judge a company by the quality of the smartphone apps it offers, and to migrate to companies that offer better smartphone integration than their previous provider. This is not bound to a single industry but affects a broad spectrum of industries today, such as the financial industry, car manufacturers, insurance groups, and even food retailers, just to name a few.

A classic data center structure might not be ideal for quick and seamless service delivery. These architectures are created by projects to serve a particular use case for a couple of years. Examples of these bigger application environments are web server farms, traditional SAP environments, or a data warehouse.

Traditionally, these were designed with an assumption about their growth and use. Special project teams have set them up across the data center pillars, as shown in the following figure. Typically, those project teams disband after the application environment has been completed.

All these pillars in the data center are required to work together, but every one of them also needs to mind its own business. Mostly, those different divisions also have their own processes, which then may integrate into a data center-wide process. There was a good reason to structure a data center in this way: the simple fact that nobody can be an expert in every discipline. Companies started to create groups to operate certain areas in a data center, each building its own expertise for its own subject.

This model evolved and became the most widely applied model for IT operations within organizations. Many, if not all, bigger organizations have adopted this approach, and people build their careers on these definitions. It served IT well for decades and ensured that each party was adding its best knowledge to any given project.

However, this setup has one flaw: it has not been designed for massive change and scale. The bigger these divisions get, the slower they can react to requests from other groups in the data center. This introduces a bidirectional issue: since all groups may grow at a similar rate, the overall service delivery time might also increase exponentially.

Unfortunately, this also introduces a cost factor when it comes to service deployments across these pillars. Each new service an organization might introduce or develop will require each area of IT to contribute. Traditionally, this is done by human handovers from one department to the other.

Each of these handovers will delay the overall project time or service delivery time, which is also often referred to as time to market. It reflects the needed time interval from the request of a new service to its actual delivery. It is important to mention that this is a level of complexity every modern organization has to deal with when it comes to application deployment today.

The difference between organizations might be in the size of the separate units, but the principle is always the same. Most organizations try to bring their overall service delivery time down to be quicker and more agile. This is often related to business reasons as well as IT cost reasons.

In some organizations, the time to deliver a brand new service from request to final rollout may take 90 working days. This means a requestor might wait 18 weeks, or more than four and a half months, from requesting a new business service to its actual delivery. Do not forget that this reflects the complete service delivery, across all groups, until it is ready for production. Also, after these 90 days, the requirements of the original request might have changed, which would lead to repeating the entire process.

Often, a quicker time to market is driven by the lines of business (LOB) owners to respond to a competitor in the market who might already deliver their services faster. This means that today's IT has changed from a pure internal service provider to a business enabler, helping its organization fight the competition with advanced and innovative services.

While this introduces a great chance for the IT department to enable and support its organization's business, it also introduces a threat at the same time. If the internal IT struggles to deliver what the business is asking for, it may lead to shadow IT taking hold within the organization.

The term shadow IT describes a situation where either the LOBs of an organization or its application developers have grown so disappointed with the internal IT delivery times that they actually use an external provider for their requirements. This behavior is not sanctioned by IT security and can lead to serious business or legal trouble.

This happens more often than one might expect, and it can be as simple as putting some internal files on a public cloud storage provider. These services deliver quick results. It is as simple as Register-Download-Use. They are very quick at enrolling new users and sometimes provide limited use for free. The developer or business owner might not even be aware that something non-compliant is going on while using these services.

So, besides the business demand for quicker service delivery and the security aspect, an organization's IT department now also faces the pressure of staying relevant. But an SDDC can provide much more value to IT than just staying relevant.

The automated data center will be an enabler for innovation and trust and introduce a new era of IT delivery. Not only can it provide faster service delivery to the business, it can also enable new services or offerings that help the whole organization be innovative for its customers and partners.

Business challenges: The use case

Today's business strategies often involve the digital delivery of services of any kind. This implies that the requirements a modern organization has towards its internal IT have changed drastically. Unfortunately, the business owners and the IT department tend to have communication issues in some organizations. Sometimes they even operate completely disconnected from each other, as if each of them were its own small company within the organization.

Nevertheless, a lot of data center automation projects are driven by enhanced business requirements. In some of these cases, the IT department has not been made aware of what these business requirements look like, or even what the actual business challenges are. Sometimes IT just gets as little information as: We are doing cloud now.

It's a dangerous simplification, since the use case is key when it comes to designing and identifying the right solution to the organization's challenges. It is important to get the requirements from the IT delivery side as well as the business requirements and expectations.

Here is a simple example of how a use case might be identified and mapped to a technical implementation.

The business view

John works as a business owner in an insurance company. He recognizes that their biggest competitor in the market has started to offer a mobile application to their clients. The app is simple: it allows clients to manage contracts online, shows them which products they are enrolled in, and provides rich information about contract timelines and possible consolidation options.

He asks his manager to start a project to deliver such an application to their customers as well. Since it is only a simple smartphone application, he expects that its development might take a couple of weeks, after which they can start a beta phase. To be competitive, he estimates that they should have something usable for their customers within a maximum of 5 months. Based on these facts, he got approval from his manager to request such a product from the internal IT.

The IT view

Tom is the data center manager of this insurance company. He is informed that the business wants a smartphone application to do all kinds of things for new and existing customers. He is responsible for creating a project and bringing all necessary people on board to support this project and finally deliver the service to the business. The programming of the app will be done by an external consulting company.

Tom discusses a couple of questions regarding this request with his team:

  • How many users do we need to serve?
  • How much time do we need to create this environment?
  • What is the expected level of availability?
  • How much compute power/disk space might be required?

After a round of brainstorming and intense discussion, the team is still quite unsure how to answer these questions. For every question, there are a couple of variables the team cannot predict.

Will only a few of their thousands of users adopt the app? What if they undersize the middleware environment?

What if the user adoption rises within a couple of days? What if it drops and the environment is overpowered, making the cost too high?

Tom and his team identified that they need a dynamic solution to be able to serve the business request. He creates a mapping to match possible technical capabilities to the use case. After completing this mapping, he uses it to discuss with his CIO if and how it can be implemented.

Business challenge: Easy-to-use app to win new customers and keep existing ones

  • Question: How many users do we need to serve?
    IT capability: Dynamic scaling of the environment based on actual performance demand.
  • Question: How much time do we need to create this environment?
    IT capability: To fulfill the expectations, the environment needs to be flexible. Start small, scale big.
  • Question: What is the expected level of availability?
    IT capability: Analytics and monitoring over all layers, including a possible self-healing approach.
  • Question: How much compute power/disk space might be required?
    IT capability: Create compute nodes on demand, based on actual performance requirements. Introduce a capacity-on-demand model for required resources.

Given this table, Tom realized that with their current data center structure it is quite difficult to deliver what the business is asking for. He has also received a couple of requirements from other departments that go in a similar direction.

Based on these mappings, he identified that they need to change their way of deploying services and applications. They will need to use a fair amount of automation. Also, they have to span these functionalities across each data center department as a holistic approach, as shown in the following diagram:

In this example, Tom actually identified a very strong use case for SDDC in his company. Based on the actual business requirements of a simple application, the whole IT delivery of this company needs to adapt. While this may sound like pure fiction, these are the challenges modern organizations need to face today.

Tip

It is very important to identify the required capabilities for the entire data center, not just for a single department. You will also have to serve the legacy applications and bring them onto the new model. Therefore, it is important to find a solution that serves the new business case as well as the legacy applications. In the first stage of any SDDC introduction in an organization, it is key to always keep an eye on the big picture.

Tools to enable SDDC

There is a basic and broadly accepted declaration of what an SDDC needs to offer. It can be considered the second evolutionary step after server virtualization. It offers an abstraction layer from the infrastructure components such as compute, storage, and network by using automation and tools such as a self-service catalog. In a way, it represents a virtualization of the whole data center, with the purpose of simplifying the request and deployment of complex services. Other capabilities of an SDDC are:

  • Automated infrastructure/service consumption
  • Policy-based service and application deployment
  • Changes to services can be made easily and instantly
  • All infrastructure layers are automated (storage, network, and compute)
  • No human intervention is needed for infrastructure/service deployment
  • A high level of standardization is used
  • Business logic for chargeback or showback functionality

All of the preceding points define an SDDC technically. But it is important to understand that an SDDC is meant to solve the business challenges of the organization running it. That means that, based on the actual business requirements, each SDDC will serve a different use case. Of course, there is a main setup you can adopt and roll out, but it is important to understand your organization's business challenges in order to prevent any planning or design shortcomings.

Also, to realize this functionality, an SDDC needs a couple of software tools. These are designed to work together to deliver a seamless environment. The different parts can be seen like gears in a watch, where each gear has an equally important role in making the clockwork function correctly.

It is important to remember this when building your SDDC, since missing one part can make another very complex or even impossible afterward.

This is a list of VMware tools building an SDDC:

  • vRealize Business for Cloud
  • vRealize Operations Manager
  • vRealize Log Insight
  • vRealize Automation
  • vRealize Orchestrator
  • vRealize Automation Converged Blueprint
  • vRealize Code Stream
  • VMware NSX
  • VMware vSphere

vRealize Business for Cloud is a chargeback/showback tool. It can be used to track the cost of services as well as the cost of a whole data center. Since the agility of an SDDC is much higher than that of a traditional data center, it is important to also track and show the cost of adding new services. This is not only important from a financial perspective; it also serves as a control mechanism to ensure users are not deploying uncontrolled services and leaving them running even when they are no longer required.

vRealize Operations Manager serves basically two functions. One is to help with troubleshooting and analytics for the whole SDDC platform. It has an analytics engine that applies machine learning to the behavior of its monitored components. The other important function is capacity management. It is capable of providing what-if analyses and warns about possible resource shortages well before they occur. These functions also use the machine learning algorithms and become more accurate over time. This is very important in a dynamic environment where on-demand provisioning is granted.
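
Its data is also exposed over a REST interface (the suite-api), which becomes useful once capacity information needs to feed other automation. Here is a minimal sketch, assuming vROps 6.x with basic authentication enabled; the hostname and credentials are placeholder values:

# List all virtual machine resources known to vROps (6.x suite-api)
curl -sk "https://vrops.example.com/suite-api/api/resources?resourceKind=VirtualMachine" \
  -u 'admin:secret' \
  -H "Accept: application/json"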

vRealize Log Insight is a unified log management tool. It offers rich functionality and can search and profile large numbers of log entries in seconds. It is recommended to use it as the universal log endpoint for all components in your SDDC. This includes all OSes as well as applications and also your underlying hardware. In the event of an error, it is much simpler to have a central log management system that is easily searchable and delivers results in seconds.
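
To illustrate what a universal log endpoint means in practice, the following sketch forwards an ESXi host's logs to Log Insight from the ESXi shell; the Log Insight hostname is a placeholder:

# Point this ESXi host's syslog at the Log Insight appliance
esxcli system syslog config set --loghost='udp://loginsight.example.com:514'
# Reload the syslog service so the new target takes effect
esxcli system syslog reload
# Allow outbound syslog traffic through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true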

vRealize Automation (vRA) is the base automation tool. It provides the cloud portal used to interact with your SDDC. This portal offers the business logic such as service catalogs, service requests, approvals, and application life cycles. However, it relies strongly on vRealize Orchestrator for its technical automation part. vRA can also tap into external clouds to extend the internal data center. Extending an SDDC in this way is mostly referred to as a hybrid cloud. There are a couple of supported cloud offerings vRA can manage.
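
Everything the portal offers is also reachable over REST, which is what machine-to-machine integrations build on. Here is a minimal sketch against the vRA 7.x API, assuming jq is installed; the hostname, tenant, and credentials are placeholder values:

# Request a bearer token from the vRA identity service
TOKEN=$(curl -sk -X POST "https://vra.example.com/identity/api/tokens" \
  -H "Content-Type: application/json" \
  -d '{"username":"tom@corp.local","password":"secret","tenant":"corp"}' | jq -r '.id')

# List the catalog items this user is entitled to request
curl -sk "https://vra.example.com/catalog-service/api/consumer/entitledCatalogItems" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/json"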

vRealize Orchestrator (vRO) provides the workflow engine and the technical automation part of the SDDC. It is literally the orchestrator of your new data center. vRO can easily be bound together with vRA to form a very powerful automation suite, where anything with an application programming interface (API) can be integrated. It is also required for integrating third-party solutions into your deployment workflows, such as a configuration management database (CMDB), IP address management (IPAM), or ticketing systems via IT service management (ITSM).
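
vRO workflows can themselves be triggered over REST, which is how external systems typically hook into the orchestration. A minimal sketch, assuming vRO 7.x with basic authentication; the workflow UUID, input parameter, and credentials are placeholder values:

# Start a workflow run; the UUID in the URL identifies the workflow
curl -sk -X POST \
  "https://vro.example.com:8281/vco/api/workflows/1f2a3b4c-0000-0000-0000-000000000000/executions" \
  -u 'vroadmin:secret' \
  -H "Content-Type: application/json" \
  -d '{"parameters": [{"name": "vmName", "type": "string",
        "value": {"string": {"value": "web01"}}}]}'
# A 202 response with a Location header points at the new workflow run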

vRealize Automation Converged Blueprint was formerly known as vRealize Automation Application Services and is an add-on functionality to vRA that takes care of application installations. It can be used with pre-existing scripts (such as Windows PowerShell or Bash on Linux), but also with variables received from vRA. This makes it very powerful when it comes to on-demand application installations. This tool can also make use of vRO to provide even better capabilities for complex application installations.

vRealize Code Stream is an addition to vRA and serves specific use cases in the DevOps area of the SDDC. It can be used with various development frameworks, such as Jenkins. It can also be used as a tool for developers to build and operate their own software test, QA, and deployment environments. Not only can developers build these separate stages, the migration from one stage to another can also be fully automated by scripts. This makes it a very powerful tool when it comes to staging and deploying modern and traditional applications within the SDDC.

VMware NSX is the network virtualization component. Given the complexity some applications/services might introduce, NSX provides a good and profound solution to help solve it (see the sketch after the following list). The challenges include:

  • Dynamic network creation
  • Microsegmentation
  • Advanced security
  • Network function virtualization
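
As an example of dynamic network creation, the following sketch creates a logical switch through the NSX-v manager API; the manager hostname, credentials, transport zone ID, and names are placeholder values:

# Create a logical switch in transport zone vdnscope-1 (NSX-v API)
curl -sk -X POST \
  "https://nsxmgr.example.com/api/2.0/vdn/scopes/vdnscope-1/virtualwires" \
  -u 'admin:secret' \
  -H "Content-Type: application/xml" \
  -d '<virtualWireCreateSpec>
        <name>ls-web-tier</name>
        <tenantId>fst-industries</tenantId>
        <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
      </virtualWireCreateSpec>'
# The response body is the ID of the new switch, for example virtualwire-10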

VMware vSphere is mostly the base infrastructure and is used as the hypervisor for server virtualization. You are probably familiar with vSphere and its functionalities. However, since the SDDC introduces a change to your data center architecture, it is recommended to revisit some of the vSphere functionalities and configurations. By using the full potential of vSphere, it is possible to save effort in the automation aspects as well as in the service/application deployment part of the SDDC.
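
To give a feel for scripted vSphere provisioning, here is a minimal sketch using govc, the open source vSphere CLI from the govmomi project (not part of the VMware toolset listed above); the vCenter address, credentials, and VM names are placeholder values:

# Connection details for vCenter (example values)
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='secret'
export GOVC_INSECURE=1

# Clone a new VM from an existing template and power it on
govc vm.clone -vm web-template -on=true web01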

This represents the toolbox required to build the platform for an automated data center. All of these tools bring tremendous value and possibilities, but they also introduce change. It is important that this change is addressed as part of the overall SDDC design and installation effort. Embrace the change.

The implementation journey

While a big part of this book focuses on building and configuring the SDDC, it is important to mention that there are also non-technical aspects to consider. Creating a new way of operating and running your data center will always involve people. It is important to briefly touch on this part of the SDDC as well. Basically, there are three major players when it comes to fundamental change in any data center, as shown in the following image:

These three topics are relevant for every successful SDDC deployment. As with the tools, these three disciplines need to work together in order to enable the change and make sure that all benefits can be fully leveraged.

These three categories are:

  • People
  • Process
  • Technology

The process category

Data center processes are as established and settled as IT itself. From the first operator tasks, such as changing tapes or starting procedures, up to highly sophisticated processes ensuring that service deployment and management work as expected, they have already come a long way. However, some of these processes might no longer be fit for purpose once automation is applied to a data center. To build an SDDC, it is very important to revisit data center processes and adapt them to work with the new automation tasks. The tools offer integration points into processes, but it is equally important to remove bottlenecks from the processes as well. Keep in mind that if you automate a bad process, the process will still be bad, but fully automated. So it is also necessary to revisit those processes so that they become slim and effective as well.

Remember Tom, the data center manager. He has successfully identified that they need an SDDC to fulfill the business requirements and has also mapped the use case to IT capabilities. While this mapping mainly covers what IT needs to deliver technically, it also implies that the current IT processes need to adapt to this new delivery model.

The process change example in Tom's organization

If the compute department works on a service involving OS deployment, they need to fill out an Excel sheet with IP addresses and server names and send it to the networking department. The network admins ensure that there is no double booking by reserving the IP address and approving the requested hostname. After successfully proving the uniqueness of this data, the name and IP are added to the organization's DNS server.

The manual part of this process is no longer feasible once the data center enters the automation era. Imagine that every time somebody orders a service involving a VM/OS deployment, the network department gets an e-mail containing the Excel sheet with the IP and hostname combination. The whole process would have to stop until this step is manually finished.

To overcome this, the process has to be changed to use an automated solution for IPAM. The new process has to track IPs and hostnames programmatically to ensure there is no duplication within the entire data center. Also, after the uniqueness of the data has been successfully checked, it has to be added to the Domain Name System (DNS).
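
To make this concrete, here is a minimal sketch of such an automated step, assuming a BIND-style DNS server that accepts dynamic updates via nsupdate; all names, addresses, and the key file are placeholder values:

#!/bin/bash
# Example values - in a real deployment these come from the service request
HOSTNAME="web01.corp.local"
IP="10.20.30.41"
DNS_SERVER="dns01.corp.local"

# Refuse duplicates: fail if the name or the address already resolves
if host "$HOSTNAME" "$DNS_SERVER" > /dev/null 2>&1; then
  echo "ERROR: $HOSTNAME is already registered" >&2; exit 1
fi
if host "$IP" "$DNS_SERVER" > /dev/null 2>&1; then
  echo "ERROR: $IP is already registered" >&2; exit 1
fi

# Register forward and reverse records (the PTR name reverses the example IP)
nsupdate -k /etc/ddns.key <<EOF
server $DNS_SERVER
update add $HOSTNAME. 3600 A $IP
update add 41.30.20.10.in-addr.arpa. 3600 PTR $HOSTNAME.
send
EOF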

While this is a simple example of one small process, there is normally a large number of processes involved that need to be reviewed for a fully automated data center. This is a very important task and should not be underestimated, since it can be the differentiator between the success and failure of an SDDC.

Think about all other processes in place, which are used to control the deploy/enable/install mechanics in your data center. Here is a small example list of questions to ask regarding established processes:

  • What is our current IPAM/DNS process?
  • Do we need to consider a CMDB integration?
  • What is our current ticketing process (ITSM)?
  • What is our process to get resources from the network, storage, and compute?
  • What OS/VM deployment process is currently in place?
  • What is our process to deploy an application (handovers, steps, or departments involved)?
  • What does our current approval process look like?
    • Do we need a technical approval to deliver a service?
    • Do we need a business approval to deliver a service?
  • What integration process do we have for a service/application deployment (DNS, Active Directory (AD), Dynamic Host Configuration Protocol (DHCP), routing, Information Technology Infrastructure Library (ITIL), and so on)?

Now, for the approval questions: these are normally an exception to the automation, since approvals are meant to be manual in the first place (either technical or business). If the answers to all the other example questions involve human interaction as well, consider changing these processes to be fully automated by the SDDC.

Since human intervention creates waiting times, it has to be avoided during service deployments in any automated data center. Think of the robotic assembly lines today's car manufacturers are using. The processes they have implemented, developed over years of experience, are all designed to stop the line only in case of an emergency.

The same holds true for the SDDC: try to enable automated deployment through your processes, and stop the automation only in case of an emergency.

Identifying processes is the simple part; changing them is the tricky part. However, keep in mind that this is an all-new model of IT delivery, so there is no golden way of doing it. Once you have committed to changing those processes, keep monitoring whether they truly fulfill their requirements.

This leads to another process principle in the SDDC: Continual Service Improvement (CSI). Revisit what you have changed from time to time and make sure that those processes are still working as expected; if they aren't, change them again.

The people category

Since every data center is run by people, it is important to consider that a change of technology will also impact those people. There are claims that an SDDC can be run with only half the staff, or that it saves a couple of employees, since everything is automated.

The truth is, an SDDC will transform IT roles in a data center. This means that some classic roles might vanish, while others will be added by this change.

It is unrealistic to say that you can run an automated data center with half the staff you had before. But it is realistic to say that your staff can concentrate on innovation and development instead of spending 100% of their time keeping the lights on. And this is the change an automated data center introduces. It opens up the possibility for current administrators to evolve into more architecture- and design-focused roles.

The people example in Tom's organization

Currently, there are two admins in the compute department working for Tom. They manage and maintain the virtual environment, which is largely VMware vSphere. They create VMs manually, deploy an OS by a network install routine (a requirement for physical installs, so they kept the process), and then hand the ready VMs over to the next department to finish installing the service they are meant for.

Recently they have experienced a lot of demand for VMs and each of them configures 10 to 12 VMs per day. Given this, they cannot concentrate on other aspects of their job, like improving OS deployments or the handover process.

At first glance, it seems like the SDDC might replace these two employees, since the tools will largely automate their work. But that is like saying a jackhammer will replace a construction worker.

Actually, their roles will shift to a more architectural focus. They need to come up with a template for OS installations and ways to further automate the deployment process. Also, they might need to add new services/parts to the SDDC in order to continuously fulfill the business needs.

So instead of creating all the VMs manually, they are now focused on designing a blueprint that can be replicated as easily and efficiently as possible.

While their tasks might have changed, their workforce is still important to operate and run the SDDC. However, given that they now focus on design and architectural tasks, they also have the time to introduce innovative functions and additions to the data center.

Keep in mind that an automated data center affects all departments in an IT organization. This means that the tasks of the network and storage teams, as well as the application and database teams, will also change. In fact, in an SDDC it is quite impossible to keep operating the departments disconnected from each other, since a deployment will affect all of them.

This also implies that all of these departments will have admins shifting to higher-level functions in order to make the automation possible. In the industry, this shift is often referred to as operational transformation. This basically means that it is not enough to have the tools in place; you also have to change the way the staff operates the data center. In most cases, organizations decide to form a so-called center of excellence (CoE) to administer and operate the automated data center.

This virtual group of admins in a data center is very similar to project groups in traditional data centers. The difference is that these people should be permanently assigned to the CoE for an SDDC. Typically you might have one champion from each department taking part in this virtual team.

Each person acts as an expert and ambassador for their department. With this principle, it can be ensured that decisions and overlapping processes are well defined and ready to function across the departments. Also, as an ambassador, each participant should advertise the new functionalities within their department and enable their colleagues to fully support the new data center approach.

It is important that each member of the CoE has good technical expertise as well as good communication skills.

The technology category

This is the third aspect of the triangle for successfully implementing an SDDC in your environment. Often, this is the part that gets the most attention, sometimes at the expense of the other two. However, it is important to note that all three topics need to be considered equally. Think of it like a three-legged chair: if one leg is missing, it can never stand.

The term technology does not necessarily refer only to the new tools required to deploy services. It also refers to already established technology that has to be integrated with the automation toolset (often referred to as third-party integration). This might be your AD, DHCP server, e-mail system, and so on.

There might also be technology that is not enabling or empowering data center automation, so instead of only thinking about adding tools, think about tools to remove or replace as well. This is a normal IT lifecycle task that has gone through many iterations already. Think of the fax machine or the telex; you might not use them anymore, as they have been replaced by e-mail and messaging.

The technology example in Tom's organization

The team uses some tools to make their daily work easier when it comes to new service deployments. One of these tools is a little graphical user interface to quickly add content to AD. The admins use it to insert the hostname and organizational unit (OU), as well as to create the computer account. This was meant to save admin time, since they don't have to open all the various menus in the AD configuration to accomplish these tasks.

With automated service delivery, this has to be done programmatically. Once a new OS is deployed, it has to be added to AD, including all requirements, by the deployment tool. Since AD offers an API, this can easily be automated and integrated into the deployment automation. Instead of painfully integrating the graphical tool, this is now done by interfacing directly with the organization's AD, ultimately replacing the old graphical tool.
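
As an illustration, a deployment workflow could create the computer account over LDAP. Here is a minimal sketch, assuming a Linux automation host with ldap-utils and a service account with the appropriate rights; all DNs, names, and credentials are placeholder values:

# Create a computer account for the freshly deployed OS in AD via LDAP
ldapadd -H ldaps://dc01.corp.local \
  -D 'CN=svc-automation,OU=Service Accounts,DC=corp,DC=local' -w 'secret' <<EOF
dn: CN=WEB01,OU=Servers,DC=corp,DC=local
objectClass: computer
cn: WEB01
sAMAccountName: WEB01\$
userAccountControl: 4096
EOF
# userAccountControl 4096 marks the object as a workstation/server trust account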

The automated deployment of a service across the entire data center requires a fair amount of communication. Not in the traditional way, but as machine-to-machine communication leveraging programmable interfaces. Using such APIs is another important aspect of the applied data center technologies. Most of today's data center tools, from backup all the way up to web servers, come with APIs. The better an API is documented, the easier the integration into the automation tool. In some cases, you might need the vendors to support you with the integration of their tools.

If you have identified a tool in the data center that does not offer any API or even a command-line interface (CLI) option, try to find a way around this software or even consider replacing it with a new tool.

APIs are the equivalent of handovers in the manual world. The better the communication works between tools, the faster and easier the deployment will be completed. To coordinate and control all this communication, you will need far more than scripts. This is a task for an orchestrator, which can run all necessary integration workflows from a central point. This orchestrator will act as the conductor of a big orchestra. It will form the backbone of your SDDC.

Why are these three topics so important?

The technology aspect closes the triangle and brings the people and process parts together. If the processes are not altered to fit the new deployment methods, automation will be painful and complex to implement. If the deployment stops at some point because the processes require manual intervention, the people will have to fill the gap.

This means that they now have new roles but also need to maintain some of their old tasks to keep the process running. With such an unbalanced implementation of an automated data center, the workload for people can actually increase, while the service delivery times may not dramatically decrease. This may lead to avoidance of the automated tasks, since manual intervention might be seen as faster by individual admins.

So it is very important to accept all three aspects as the main part of the SDDC implementation journey. They all need to be addressed equally and thoughtfully to unveil the benefits and improvements an automated data center has to offer.

However, keep in mind that this truly is a journey. An SDDC is not implemented in days, but in months. Given this, the implementation team in the data center has this time to adapt themselves and their processes to this new way of delivering IT services. Also, all necessary departments and their leads need to be involved in this procedure.

An SDDC implementation is always a team effort.

Additional possibilities and opportunities

All the previously mentioned topics serve the sole goal of installing and using the SDDC within your data center. However, once you have the SDDC running, the real fun begins, since you can start to introduce additional functionalities impossible for any traditional data center. Let's briefly touch on some of the possibilities from an IT view.

The self-healing data center