Mastering OpenStack - Omar Khedher - E-Book

Description

In this second edition, you will get to grips with the latest features of OpenStack. Starting with an overview of the OpenStack architecture, you'll see how to adopt the DevOps style of automation while deploying and operating in an OpenStack environment. We'll show you how to create your own OpenStack private cloud. Then you'll learn about the various hypervisors and container technologies supported by OpenStack. You'll get an understanding of the segregation of compute nodes based on reliability and availability needs. We'll cover the various storage types in OpenStack and advanced networking aspects such as SDN and NFV.
Next, you'll understand the OpenStack infrastructure from a cloud user point of view. Moving on, you'll develop troubleshooting skills, and get a comprehensive understanding of services such as high availability and failover in OpenStack. Finally, you will gain experience of running a centralized logging server and monitoring OpenStack services.
The book will show you how to carry out performance tuning based on OpenStack service logs. You will be able to master OpenStack benchmarking and performance tuning. By the end of the book, you'll be ready to take steps to deploy and manage an OpenStack cloud with the latest open source technologies.

This e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 496

Year of publication: 2017




Title Page

Mastering OpenStack

Second Edition

Discover your complete guide to designing, deploying, and managing OpenStack-based clouds in mid-to-large IT infrastructures with best practices, expert understanding, and more
Omar Khedher
Chandan Dutta Chowdhury

BIRMINGHAM - MUMBAI

Copyright

Mastering OpenStack

Second Edition

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: July 2015

Second edition: April 2017

Production reference: 1240417

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78646-398-2

www.packtpub.com

Credits

Authors Omar Khedher, Chandan Dutta Chowdhury

Copy Editor Safis Editing

Reviewer Mohamed Jarraya

Project Coordinator Kinjal Bari

Commissioning Editor Kartikey Pandey

Proofreader Safis Editing

Acquisition Editor Rahul Nair

Indexer Pratik Shirodkar

Content Development Editor Trusha Shriyan

Graphics Kirk D'Penha

Technical Editor Naveenkumar Jain

Production Coordinator Arvindkumar Gupta

About the Authors

Omar Khedher is a systems and network engineer who has worked for several years in cloud computing environments and has been involved in several private cloud projects based on OpenStack. He has also worked on projects targeting the AWS public cloud.

Leveraging his skills as a system administrator in virtualization, storage, and networking, Omar works as a cloud system engineer for Fyber, a leading advertising technology company based in Berlin. He is part of a highly skilled team working on several projects that include building and migrating infrastructure to the cloud using the latest open source tools and the DevOps philosophy.

He is also the author of the first edition of Mastering OpenStack and of OpenStack Sahara Essentials, both published by Packt Publishing. He has also authored a few academic publications based on new research into cloud performance improvement.

Chandan Dutta Chowdhury is a tech lead at Juniper Networks Pvt. Ltd, working on OpenStack Neutron plugins. He has over 11 years of experience in the deployment of Linux-based solutions. In the past, he has been involved in developing Linux-based clustering and deployment solutions. He has contributed to setting up and maintaining a private cloud solution in Juniper Networks.

He was a speaker at the OpenStack Tokyo summit, where he presented the idea of adding firewall logs and other Neutron enhancements. He also spoke at the Austin summit about enhancements to the Nova scheduler. He loves to explore technology, and he blogs at https://chandanduttachowdhury.wordpress.com.

About the Reviewer

Mohamed Jarraya received his PhD and Master's degrees in Computer Science from LAAS-CNRS, Paul Sabatier University of Toulouse, in 2000 and 1997, respectively. He also obtained an Engineering Diploma in Computer Science from ENIT, Tunisia. He is currently an assistant professor at the College of Computation and Informatics, Saudi Electronic University, Saudi Arabia. His research interests include cloud computing, performance evaluation, modeling of computing systems, and security.

www.Packtpub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1786463989.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Table of Contents

Credits

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Errata

Piracy

Questions

Designing OpenStack Cloud Architectural Consideration

OpenStack - The new data center paradigm

Introducing the OpenStack logical architecture

Keystone - identity management

Swift - object storage

Cinder - block storage

Manila - File share

Glance - Image registry

Nova-Compute service

nova-api

nova-compute

nova-network

nova-scheduler

nova-conductor

Neutron - Networking services

The Neutron architecture

Ceilometer, Aodh, and Gnocchi - Telemetry

Heat - Orchestration

Horizon - Dashboard

Message Queue

The database

Gathering the pieces and building a picture

Provisioning a VM under the hood

A sample architecture setup

OpenStack deployment

The conceptual model design

The logical model design

What about storage?

Networking needs

The logical networking design

Physical network layout

The tenant data network

Management and the API network

Virtual Network types

The external network

The tenant networks

The physical model design

Estimating the hardware capabilities

CPU calculations

Memory calculations

Network calculations

Storage calculations

Best practices

Summary

Deploying OpenStack - The DevOps Way

DevOps in a nutshell

DevOps and cloud - everything is code

DevOps and OpenStack

Breaking down OpenStack into pieces

Working with the infrastructure deployment code

Integrating OpenStack into infrastructure code

Continuous integration and delivery

Choosing the automation tool

Introducing Ansible

Modules

Variables

Inventory

Roles

Playbooks

Ansible for OpenStack

The development and production environments

The hardware and software requirements

Networking requirements

The development environment

Setting up the development machine

Preparing the infrastructure code environment

Preparing the development setup

Configuring your setup

Building the development setup

Tracking your changes

Summary

OpenStack Cluster – The Cloud Controller and Common Services

Understanding the art of clustering

Asymmetric clustering

Symmetric clustering

Divide and conquer

The cloud controller

The Keystone service

The identity provider

The resource provider

The authorization provider

The token provider

The catalog provider

The policy provider

Federated Keystone

Fernet tokens

The nova-conductor service

The nova-scheduler service

The API services

Image management

The network service

The Horizon dashboard

The telemetry services

Alarms

Events

Infrastructure services

Planning for the message queue

Consolidating the database

Cloud controller clustering

Starting deployment with OpenStack Ansible

The deployment node

Bringing up the controller nodes

The target hosts

Configuring the network

Running the OpenStack playbooks

Configuring OpenStack Ansible

Network configuration

Configuring Host Groups

The playbooks

Summary

OpenStack Compute - Choice of Hypervisor and Node Segregation

The compute service components

Deciding on the hypervisor

The Docker containers

OpenStack Magnum project

Segregating the compute cloud

Availability zones

Host Aggregates

Nova cells

Regions

Workload segregation

Changing the color of the hypervisor

Overcommitment considerations

The CPU allocation ratio

The RAM allocation ratio

Storing instances' alternatives

External shared file storage

Internal non-shared file storage

Understanding instance booting

Understanding the Nova scheduling process

Booting from image

Getting the instance metadata

Add a compute node

Planning for service recovery

Backup with backup-manager

Simple recovery steps

Data protection as a service

The OpenStack community

Summary

OpenStack Storage - Block, Object, and File Share

Understanding the storage types

Ephemeral storage

Persistent storage

Object storage is not NAS/SAN

A spotlight on Swift

The Swift architecture

Indexing the data

A rich API access

Swift gateways

Physical design considerations

The Swift ring

Storage policy and erasure coding

Swift hardware

Where to place what

The Swift network

Deploying Swift service

Using block storage service: Cinder

Using share storage service: Manila

Using the share service

Choosing the storage

Looking beyond the default - Ceph

Ceph in OpenStack

Deploying Ceph with Ansible

Storing images in Ceph

Summary

OpenStack Networking - Choice of Connectivity Types and Networking Services

The architecture of Neutron

Neutron plugins

Service plugin

Agents

Neutron API extensions

Implementing virtual networks

VLAN-based networks

Tunnel-based networks

Virtual switches

The ML2 plugin

Network types

Neutron subnets

Creating virtual networks and subnets

Understanding network port connectivity

Understanding Linux bridge-based connectivity

Understanding OpenVSwitch-based connectivity

Connecting virtual networks with routers

Configuring the routing service

Connecting networks using a virtual router

Connecting to the external world

Providing connectivity from the external world

Associating a floating IP to a virtual machine

Implementing network security in OpenStack

Security groups

Creating security group policies

Firewall as a service

Configuring the firewall service

Creating firewall policies and rules

Inter-site connectivity with VPN service

Summary

Advanced Networking - A Look at SDN and NFV

Understanding SDN-based networks

OVS architecture

Architecture of OVN

Components of OVN

Integrating OVN with OpenStack

Implementing virtual networks with OVN

Understanding network function virtualization

The Management and Orchestration (MANO) specifications

Topology and Orchestration Specification for Cloud Applications (TOSCA) templates

Looking at the Tacker project

Deploying LBaaS service with Octavia

Configuring Octavia

Creating a load balancer

Summary

Operating the OpenStack Infrastructure - The User Perspective

Operating the OpenStack tenancy

Managing projects and users

Managing user capabilities

Managing quotas

Compute service quotas

Block storage service quotas

Network service quotas

Orchestration service quotas

Orchestration in OpenStack

Demystifying the power of Heat

Stacking in OpenStack

Organizing the stacks

Modularizing the stacks

Embracing OpenStack orchestration - Terraform

Terraform in action

Terraform in OpenStack

Summary

OpenStack HA and Failover

HA under the scope

Do not mix them

HA levels in OpenStack

A strict service-level agreement

Measuring HA

The HA dictionary

Hands-on HA

Understanding HAProxy

Services should not fail

Load balancer should not fail

OpenStack HA under the hood

HA in the database

HA in the queue

Keep calm and implement HA

Implementing HA on MySQL

Implementing HA on RabbitMQ

Implementing HA on OpenStack cloud controllers

Implementing HA on network nodes

VRRP in Neutron

More HA in Neutron

HA in Ansible

Summary

Monitoring and Troubleshooting - Running a Healthy OpenStack Cluster

Telemetry in OpenStack

Rethinking Ceilometer

Ceilometer glossary

The Ceilometer architecture

Gnocchi - time series database as a service

The Gnocchi architecture

Aodh - embracing alarms

The Aodh architecture

Installing Telemetry in OpenStack

The Ceilometer installation

Configuring alarming

Arming OpenStack monitoring

Running Nagios

Placing Nagios

Installing the Nagios server

Configuring Nagios on OpenStack nodes

Watching OpenStack

Troubleshooting - monitoring perspective

Services up and running

Services should listen

Rescuing instances

All green but unreachable

Summary

Keeping Track of Logs - ELK and OpenStack

Tackling logging

Demystifying logs in OpenStack

Logs location

Adjusting logs in OpenStack

Two eyes are better than one eye

ELK under the hood

Placing the ELK server

Installing the ELK server

Installing ElasticSearch

Configuring ElasticSearch

Defining ElasticSearch roles

Extending ElasticSearch capabilities

Installing Kibana

Configuring Kibana

Installing LogStash

Configuring LogStash

LogStash in action

Preparing LogStash clients

Filtering OpenStack logs

Extending the OpenStack-ELK pipeline

Visualizing OpenStack logs

Troubleshooting from Kibana

Summary

OpenStack Benchmarking and Performance Tuning - Maintaining Cloud Performance

Pushing the limits of the database

Deciding the resources outfit

Caching for OpenStack

Memcached in OpenStack

Integrating memcached

Benchmarking OpenStack at scale

Testing the OpenStack API - Rally in a nutshell

Meeting OpenStack SLA

Installing Rally

Rally in action

Scenario example - Performing Keystone

Shaking the OpenStack network - Shaker in a nutshell

Shaker architecture

Installing Shaker

Shaker in action

Scenario example - OpenStack L2

Summary

Preface

Today, OpenStack has become a massive project, increasingly extended with new features and subprojects. As a large array of enterprises adopt and continuously contribute to the OpenStack ecosystem, it is becoming the ultimate next-generation private cloud solution. The range of services supported by OpenStack has grown naturally with the integration of new projects. This is a result of the innate stability of OpenStack's core components and its great modular architecture. OpenStack has proved to be a mature private cloud platform providing Infrastructure as a Service (IaaS) capabilities. With the emergence of new projects, the OpenStack ecosystem is trending toward providing cloud services associated with Platform as a Service (PaaS).

Why should you consider adopting OpenStack? There are many use cases and approaches that justify its adoption in any infrastructure, based on various requirements and development needs. Consider how a private cloud setup could rule the enterprise infrastructure, more specifically with OpenStack. The fundamental approach of such a modular cloud platform is to provide more flexibility in managing the underlying infrastructure. Turning a traditional data center into a private cloud setup leverages the power of automation and increases responsiveness in service delivery. You may notice, while operating an OpenStack setup, how easy it is to spin up new components. Its modular architecture unleashes the power of OpenStack as a pluggable cloud software solution. Another advantage is the REST API exposed by each service, which embraces automation and facilitates integration within an existing system setup. OpenStack can point you to the right path to overcome the issues of legacy IT and vendor lock-in. In the latest releases of OpenStack, more modules and plugins have been developed to support third-party software and services, including compute, storage, and network components.

In this new edition, we move to a new learning path that covers the novelties introduced in the latest OpenStack releases. We continue our journey by revisiting the OpenStack components and design patterns, and by covering what is new in the architecture of OpenStack's core services. That covers new compute segregation and supported capabilities, including containerization; the new shape of the network service, which includes Software Defined Networking (SDN); and an extended storage layout in OpenStack with a newly incubated project. In each part of this edition, we share experience in the form of best practices inspired by deployed OpenStack projects. We also take a different approach in this edition to automating the OpenStack deployment, using system management tools on containers for the lab setup to mimic a real production environment. This will give you deep insight into the novelties of the OpenStack ecosystem and how to adopt it to meet your business needs.

The final section of this book covers the complementary parts of a production-ready OpenStack setup, including administration, troubleshooting, monitoring, and benchmarking tool sets.

What this book covers

Chapter 1, Designing OpenStack Cloud Architectural Consideration, revisits the main architectural core services of OpenStack and highlights the updates to each architectural design. The chapter is the starting stage for a first logical design of OpenStack, ending with a first physical model design framed with basic calculations for the storage, compute, and network services. This will help in choosing the right hardware to start building a private cloud ready for production deployment.

Chapter 2, Deploying OpenStack - The DevOps Way, introduces the DevOps philosophy and how to exploit its benefits when deploying and managing an OpenStack environment. The chapter introduces Ansible as the chosen system management tool to automate and manage the deployment of an OpenStack environment.

A succinct overview of the concept of Infrastructure as Code (IaC) is given to enhance OpenStack infrastructure management and operation. The first deployment is based on containers, for better isolation of OpenStack services and to mimic a real production setup.

Chapter 3, OpenStack Cluster - The Cloud Controller and Common Services, presents deeper insights into the new updates of the different services running on a cloud controller node and how to design for highly available and fault-tolerant OpenStack services at an early stage. This covers the basic OpenStack core components, the database, and the message bus system. The chapter decomposes the Ansible roles and playbooks in more detail for the different OpenStack core components and common services.

Chapter 4, OpenStack Compute - Choice of Hypervisor and Node Segregation, covers the compute service in OpenStack and exposes the newly supported hypervisors. A fast-growing virtualization technology recently supported by OpenStack is introduced by covering Docker and the Magnum project. The chapter introduces newly adopted concepts for large OpenStack setups, including compute and host segregation, availability zones, regions, and the concept of cells in Nova. Compute scheduling takes a good part of the chapter, getting to grips with the details of the instance life cycle. The Ansible playbook for the compute service is detailed to automate the installation of a new compute node in an existing OpenStack environment. The chapter also explores a few alternatives for backing up an entire cluster in OpenStack.

Chapter 5, OpenStack Storage - Block, Object, and File Share, enlarges the scope of the different storage types and alternatives supported by OpenStack. The chapter gives succinct updates on object and block storage in the latest releases of OpenStack. Manila, a newly stable project supported by OpenStack, is covered in detail by going through its architectural layout within the OpenStack ecosystem. The chapter explores the different roles and Ansible playbooks for block and object storage, including an updated part on Ceph.

Chapter 6, OpenStack Networking - Choice of Connectivity Types and Networking Services, focuses on presenting the current state of the art of networking in OpenStack. This includes the new and updated Neutron plugins and the different tunneling implementations developed in the latest OpenStack releases. The chapter describes different network implementations using Neutron, and details the different network components and terminologies to simplify the management of virtual networks in OpenStack. A good part of the chapter is reserved for demystifying the complexity of setting up virtual networks and routers by discovering how traffic flows under the hood. By the end of the chapter, Firewall as a Service (FWaaS) and VPN as a Service (VPNaaS) are covered, armed with examples.

Chapter 7, Advanced Networking - A Look at SDN and NFV, illustrates advanced networking topics in OpenStack. The chapter is dedicated to presenting the concepts of Software Defined Networking (SDN) and Network Function Virtualization (NFV) and discussing their integration into OpenStack. The end of the chapter explores the new implementation of Load Balancer as a Service in OpenStack.

Chapter 8, Operating the OpenStack Infrastructure - The User Perspective, discusses the usage of a readily deployed OpenStack platform. It guides operators on how to manage users and projects and define how the underlying resources will be consumed. The chapter also gives special insight into helping users automate the launching of stacks on demand using Heat, the OpenStack orchestration service. It exposes the need for adopting the concept of Infrastructure as Code and how it fulfills modern infrastructure requirements. As Heat is introduced as the built-in tool for defining resources from templates in OpenStack, the chapter opens the curtains on a promising tool that supports multiple cloud providers: Terraform.

Chapter 9, OpenStack HA and Failover, walks through the different high availability design patterns in OpenStack for each component. This includes a complete cluster setup for active and passive OpenStack services. The chapter not only leverages the power of external tools to achieve high availability for the message bus, the database, and other services, but also explores the relevant native highly available setups in OpenStack, including the network service.

Chapter 10, Monitoring and Troubleshooting - Running a Healthy OpenStack Cluster, explores the novelties of the telemetry service in OpenStack. Architectural discussions elaborate on the composition of the telemetry service in the latest releases, including alarms, events, and metrics in the OpenStack ecosystem. It shows how to embrace monitoring of the platform using external, popular tools such as Nagios. The chapter helps readers get acquainted with diagnosing common issues in OpenStack using different troubleshooting tools and methodologies.

Chapter 11, Keeping Track of Logs - ELK and OpenStack, goes through the log files available in OpenStack and how to use them for deep investigation when troubleshooting issues. The chapter helps you understand how to efficiently parse log files per OpenStack service using modern log pipeline tools such as the ELK (ElasticSearch, LogStash, and Kibana) stack. An updated, mature version of the ELK stack is presented, and the chapter illustrates how to identify the root cause of possible issues using effective ELK queries.

Chapter 12, OpenStack Benchmarking and Performance Tuning - Maintaining Cloud Performance, navigates through an advanced topic of the OpenStack journey: performance boosting and benchmarking. By means of Rally, one of the greatest benchmarking tools developed for OpenStack, you will gain a deeper understanding of how the OpenStack platform behaves, which helps in adjusting the platform's capacity and architecture. Another novel topic evaluates the OpenStack data plane, including benchmarking the network capabilities using the Shaker tool.

What you need for this book

This book assumes a moderate level of familiarity with the Linux operating system and cloud computing concepts. While this edition has been enhanced with richer content based on the latest updates to OpenStack, familiarity with the OpenStack ecosystem is very important. A basic knowledge and understanding of networking jargon, system management tools, and architectural design patterns is required. Unlike the first edition, this book uses Ansible as the main system management tool for OpenStack infrastructure management. It uses the official OpenStack-Ansible project, which is available on GitHub at https://github.com/openstack/openstack-ansible. Thus, a good understanding of YAML syntax is a big plus.

Feel free to use any tool for the test environment, such as Oracle's VirtualBox, Vagrant, or VMware Workstation. The lab setup can run OpenStack-Ansible using the All-In-One (AIO) build found in the OpenStack-Ansible (OSA) GitHub repository. The book recommends installing the OpenStack environment on physical hardware to accomplish a production-ready environment; thus, a physical network infrastructure should be in place. On the other hand, running OpenStack in a virtual environment for testing purposes is possible if the virtual network is properly configured.
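The All-In-One lab build mentioned above can be sketched roughly as follows. Note that the script names and the branch name below reflect the OpenStack-Ansible repository layout around the time of writing and are assumptions that may differ between releases; verify them against the repository before running.

```shell
# Illustrative sketch of building an OpenStack-Ansible All-In-One (AIO) lab.
# Assumes a fresh Ubuntu or CentOS host with root access and internet
# connectivity; script and branch names may vary by release.

# Clone the OpenStack-Ansible source and check out a stable branch
git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/ocata

# Install Ansible and its dependencies on the deployment host
./scripts/bootstrap-ansible.sh

# Prepare the host as an AIO target (storage, networking, configuration files)
./scripts/bootstrap-aio.sh

# Run the OpenStack playbooks to build the AIO environment
./scripts/run-playbooks.sh
```

The AIO build is intended purely for evaluation and development; it co-locates all services on one host and does not reflect the multi-node, highly available layouts discussed later in the book.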

In this book, the following software list is required:

Operating System: CentOS 7 or Ubuntu 14.04

OpenStack – Mitaka or later release

VirtualBox 4.5 or newer

Vagrant 1.7 or newer

Ansible server 2.2 or newer

As you run the OpenStack installation in a development environment, the following minimum hardware resources are required:

A host machine with CPU hardware virtualization support

8 CPU cores

12 GB RAM

60 GB free disk space

Two network interface cards

Internet connectivity is required to download the necessary packages for OpenStack and other tools. Additionally, refer to the guide at http://docs.openstack.org for detailed instructions on installing the latest versions of OpenStack or on updating packages that no longer exist in older versions.

Who this book is for

This book is essentially geared towards novice cloud operators, architects, and DevOps engineers who are looking to deploy a private cloud setup based on OpenStack. The book is also for those following the latest OpenStack developments who are willing to expand their knowledge and enlarge their current OpenStack setup with the new features and projects recently added to the OpenStack ecosystem. The book does not provide detailed steps for installing or running OpenStack services, so that the reader can focus on understanding the advanced features and methodologies that treat the topic at hand. This edition gives more options for deploying and running an OpenStack environment, so the reader should be able to follow the examples included in each chapter of this book.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "The configuration files of OSA are stored at /etc/openstack_ansible/ on the deployment host."

A block of code is set as follows:

[computes]
compute1.example.com
compute2.example.com
compute3.example.com
compute[20:30].example.com

Any command-line input or output is written as follows:

# vagrant up --provider virtualbox

# vagrant ssh

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "To manage security groups, navigate to Compute | Access & Security | Security Group."

Warnings or important notes appear in a box like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important to us, as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

Log in or register to our website using your e-mail address and password.

Hover the mouse pointer on the SUPPORT tab at the top.

Click on Code Downloads & Errata.

Enter the name of the book in the Search box.

Select the book for which you're looking to download the code files.

Choose from the drop-down menu where you purchased this book from.

Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows

Zipeg / iZip / UnRarX for Mac

7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-OpenStack-SecondEdition. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Designing OpenStack Cloud Architectural Considerations

The adoption of cloud technology has changed the way enterprises run their IT services. By leveraging new approaches to how resources are used, several cloud solutions have come into play, in different categories: private, public, hybrid, and community. Whatever the category used, this trend has been felt by many organizations that need to introduce an orchestration engine to their infrastructure to embrace elasticity and scalability and, to a certain extent, achieve a unique user experience. Nowadays, a remarkable orchestration solution in the private cloud category has brought thousands of enterprises to the next era of data center generation: OpenStack. At the time of writing, OpenStack has been deployed in several medium to large enterprise infrastructures, running different types of production workload. The maturity of this cloud platform has been boosted by the joint effort of several large organizations and a vast developer community around the globe. With every new release, OpenStack brings more great features, which makes it a compelling solution for organizations seeking to invest in it, with returns in operational workloads and a flexible infrastructure.

In this edition, we will continue exploring the novelties of the latest OpenStack releases and discuss the great opportunities that OpenStack offers for an amazing cloud experience.

Deploying OpenStack is still a challenging step that needs a good understanding of its benefits to a given organization in terms of automation, orchestration, and flexibility. If expectations are set properly, this challenge will turn into a valuable opportunity that deserves the investment.

After collecting infrastructure requirements, starting an OpenStack journey will need a good design and consistent deployment plan with different architectural assets.

The Japanese military leader Miyamoto Musashi wrote the following very impressive thought on perception and sight in The Book of Five Rings, Start Publishing LLC:

"In strategy, it is important to see distant things as if they were close and to take a distanced view of close things."

Our OpenStack journey will start by going through the following points:

Getting acquainted with the logical architecture of the OpenStack ecosystem by revisiting its components

Learning how to design an OpenStack environment by choosing the right core services for the right environment

Enlarging the OpenStack ecosystem by joining new projects within the latest stable releases

Designing the first OpenStack architecture for a large-scale environment

Planning for growth by going through first-deployment best practices and capacity planning

OpenStack - The new data center paradigm

Cloud computing is about providing various types of infrastructure services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The challenge set by the public cloud is about agility, speed, and self-service. Most companies have expensive IT systems that they have developed and deployed over the years, but these systems are siloed and need human intervention. In many cases, IT systems struggle to respond with the agility and speed of public cloud services. The traditional data center model and siloed infrastructure might become unsustainable in today's agile service delivery environment. In fact, today's enterprise data center must focus on speed, flexibility, and automation in delivering services to reach the level of next-generation data center efficiency.

The big move to a software-driven infrastructure has allowed administrators and operators to deliver a fully automated infrastructure within minutes. The next-generation data center reduces the infrastructure to a single, big, agile, scalable, and automated unit. The end result is a programmable, scalable, and multi-tenant-aware infrastructure. This is where OpenStack comes into the picture: it promises the features of a next-generation data center operating system. The ubiquitous influence of OpenStack has been felt by many big global cloud enterprises, such as VMware, Cisco, Juniper, IBM, Red Hat, Rackspace, PayPal, and eBay, to name but a few. Today, many of them run very large, scalable private clouds based on OpenStack in their production environments. If you intend to be part of a winning, innovative cloud enterprise, you should jump to the next-generation data center and gain valuable experience by adopting OpenStack in your IT infrastructure.

To read more about the success stories of many companies, visit https://www.openstack.org/user-stories.

Introducing the OpenStack logical architecture

Before delving into the OpenStack architecture, we need to refresh our knowledge or fill gaps and learn more about the basic concepts and usage of each core component.

In order to get a better understanding of how it works, it will be beneficial to first briefly examine the pieces that make it work. In the following sections, we will look at the various OpenStack services that work together to provide the cloud experience to the end user. Despite the different services catering to different needs, they follow a common theme in their design, which can be summarized as follows:

Most OpenStack services are developed in Python, which aids rapid development.

All OpenStack services provide REST APIs. These APIs are the main external communication interfaces for services and are used by the other services or end users.

The OpenStack service itself may be implemented as different components. The components of a service communicate with each other over the message queue. The message queue provides various advantages such as queuing of requests, loose coupling, and load distribution among the worker daemons.
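To make the last point concrete, here is a minimal sketch of the message-queue pattern using Python's standard library. It is only an illustration of the loose coupling idea; a real deployment uses an AMQP broker such as RabbitMQ via oslo.messaging, and the message names here are invented:

```python
import queue
import threading

# A toy message bus standing in for RabbitMQ: the API component
# enqueues requests, a worker daemon consumes them, and neither side
# calls the other directly.
bus = queue.Queue()
results = []

def worker():
    # Worker daemon loop: consume requests until a sentinel arrives.
    while True:
        msg = bus.get()
        if msg is None:  # sentinel to stop the worker
            break
        results.append("handled:" + msg)

t = threading.Thread(target=worker)
t.start()

# The "API" side only publishes messages; queuing buffers requests
# and allows load distribution across many such workers.
bus.put("boot-instance")
bus.put("attach-volume")
bus.put(None)
t.join()
```

Adding more worker threads reading from the same queue is how the message bus distributes load among worker daemons.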

With this common theme in mind, let's now put the essential core components under the microscope and go a bit further by asking the question: what is the purpose of each component?

Keystone - identity management

From an architectural perspective, Keystone is the simplest service in the OpenStack composition. It is a core component that provides an identity service comprising the authentication and authorization of tenants in OpenStack. Communications between different OpenStack services are authorized by Keystone to ensure that the right user or service is able to utilize the requested OpenStack service. Keystone integrates with numerous authentication mechanisms, such as username/password and token-based systems. Additionally, it is possible to integrate it with an existing backend such as the Lightweight Directory Access Protocol (LDAP) or the Pluggable Authentication Module (PAM).

Keystone also provides a service catalog as a registry of all the OpenStack services.

With the evolution of Keystone, many features have been implemented within recent OpenStack releases leveraging a centralized and federated identity solution. This will allow users to use their credentials in an existing, centralized, sign-on backend and decouples the authentication mechanism from Keystone.

The identity federation solution became more stable with the OpenStack Juno release, which positions Keystone as a Service Provider (SP) that consumes user identity information from a trusted Identity Provider (IdP), via SAML assertions or OpenID Connect claims. An IdP can be backed by LDAP, Active Directory, or SQL.
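As a sketch of what password authentication against Keystone looks like, the following builds the request body for the Keystone v3 token API (POST /v3/auth/tokens). The user, project, and password values are illustrative placeholders, not real credentials:

```python
import json

# Request body for Keystone v3 password authentication.
# User, domain, project, and password values are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        },
        "scope": {
            "project": {
                "name": "demo-project",
                "domain": {"id": "default"},
            }
        },
    }
}

body = json.dumps(auth_request)
# On success, Keystone returns the token in the X-Subject-Token
# response header and the service catalog in the response body.
```

The "scope" section is what ties the issued token to a specific project, which is how per-tenant authorization decisions are made.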

Swift - object storage

Swift is one of the storage services available to OpenStack users. It provides an object-based storage service and is accessible through REST APIs. Compared to traditional storage solutions with file-share or block-based access, Object Storage takes the approach of dealing with stored data as objects that can be stored in and retrieved from the Object Store. At a very high level, Object Storage works like this: to store the data, the Object Store splits it into smaller chunks and stores them in separate containers. These containers are maintained in redundant copies spread across a cluster of storage nodes to provide high availability, auto-recovery, and horizontal scalability.

We will leave the details of the Swift architecture for later. Briefly, it has a number of benefits:

It has no central brain, meaning no Single Point Of Failure (SPOF)

It is curative, meaning auto-recovery in the case of failure

It is highly scalable to petabytes of storage by scaling horizontally

It achieves better performance by spreading the load over the storage nodes

Inexpensive commodity hardware can be used for redundant storage clusters
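The "no central brain" property comes from consistent hashing: any node can compute where an object lives. The following is a simplified sketch of how Swift's ring maps an object path to a partition (real rings also mix in a per-cluster hash prefix/suffix and map partitions to devices on different nodes; the partition power value here is an assumption):

```python
import hashlib

# Simplified view of Swift's ring: md5 the object path and keep the
# top "partition power" bits. Real deployments choose part_power at
# ring-build time; 10 (1024 partitions) is just an example.
PART_POWER = 10

def object_to_partition(account, container, obj):
    path = "/%s/%s/%s" % (account, container, obj)
    digest = hashlib.md5(path.encode("utf-8")).digest()
    raw = int.from_bytes(digest[:4], "big")
    return raw >> (32 - PART_POWER)

part = object_to_partition("AUTH_demo", "photos", "cat.jpg")
# The same path always maps to the same partition, so any proxy node
# can locate an object's replicas without consulting a central index.
```

Because the mapping is pure computation, there is nothing to fail over, which is what removes the SPOF.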

Cinder - block storage

You may wonder whether there is another way to provide storage to OpenStack users. Indeed, the management of the persistent block storage is available in OpenStack by using the Cinder service. Its main capability is to provide block-level storage to the virtual machine. Cinder provides raw volumes that can be used as hard disks in virtual machines.

Some of the features that Cinder offers are as follows:

Volume management

: This allows the creation or deletion of a volume

Snapshot management

: This allows the creation or deletion of a snapshot of volumes

Attaching or detaching volumes from instances

Cloning volumes

Creating volumes from snapshots

Copy of images to volumes and vice versa

It is very important to keep in mind that, as with other OpenStack services, Cinder features can be delivered by orchestrating various backend volume providers through configurable drivers for vendors' storage products, such as those from IBM, NetApp, Nexenta, and VMware.

On an architectural level, Cinder has proven to be an ideal replacement for the old nova-volume service that existed before the Folsom release. It is important to know that Cinder organizes and creates a catalog of block-based storage devices with several differing characteristics. However, we must obviously consider the limitations of commodity storage, such as redundancy and auto-scaling.

Since the OpenStack Grizzly release, Cinder has provided a feature to create backups of Cinder volumes. A common use case has seen Swift evolve as a storage backup solution. Within the next few releases, Cinder was enriched with more backup target stores, such as NFS, Ceph, GlusterFS, POSIX file systems, and the proprietary IBM solution, Tivoli Storage Manager. This extensible backup feature is defined by means of Cinder backup drivers, which have become richer with every new release. With the OpenStack Mitaka release, Cinder showed the breadth of its backup options by marrying two different cloud computing environments, adding a backup driver targeting Google Cloud Platform. This exciting opportunity allows OpenStack operators to leverage a hybrid cloud backup solution that empowers a disaster recovery strategy for persistent data. What about security? This latent issue has been addressed since the Kilo release: Cinder volumes can be encrypted before starting any backup operation.

Manila - File share

Apart from the block and object storage we discussed in the previous sections, since the Juno release OpenStack has also had a file-share-based storage service called Manila. It provides storage as a remote file system. In operation, it resembles the Network File System (NFS) or SAMBA storage services that we are used to on Linux, whereas Cinder, in contrast, resembles a Storage Area Network (SAN) service. In fact, NFS and SAMBA, or the Common Internet File System (CIFS), are supported as backend drivers to the Manila service. The Manila service provides the orchestration of shares on the share servers.

More details on storage services will be covered in Chapter 5, OpenStack Storage - Block, Object, and File Share.

Each storage solution in OpenStack has been designed for a specific set of purposes and implemented for different targets. Before taking any architectural design decisions, it is crucial to understand the difference between existing storage options in OpenStack today, as outlined in the following table:

Specification | Swift | Cinder | Manila
Access mode | Objects through REST API | As block devices | File-based access
Multi-access | OK | No, can only be used by one client | OK
Persistence | OK | OK | OK
Accessibility | Anywhere | Within a single VM | Within multiple VMs
Performance | OK | OK | OK

Glance - Image registry

The Glance service provides a registry of images and metadata that the OpenStack user can launch as a virtual machine. Various image formats are supported and can be used based on the choice of hypervisor. Glance supports images for KVM/Qemu, XEN, VMware, Docker, and so on.

As a new user of OpenStack, one might often wonder: what is the difference between Glance and Swift? Both handle storage, so why do I need to integrate both solutions?

Swift is a storage system, whereas Glance is an image registry. The difference between the two is that Glance is a service that keeps track of virtual machine images and metadata associated with the images. Metadata can be information such as a kernel, disk images, disk format, and so on. Glance makes this information available to OpenStack users over REST APIs. Glance can use a variety of backends for storing images. The default is to use directories, but in a massive production environment it can use other approaches such as NFS and even Swift.

Swift, on the other hand, is a storage system. It is designed for object-storage where you can keep data such as virtual disks, images, backup archiving, and so on.

The mission of Glance is to be an image registry. From an architectural point of view, the goal of Glance is to focus on advanced ways to store and query image information via the Image Service API. A typical use case for Glance is to allow a client (which can be a user or an external service) to register a new virtual disk image, while a storage system focuses on providing a highly scalable and redundant data store. At this level, as a technical operator, your challenge is to provide the right storage solution to meet cost and performance requirements. This will be discussed at the end of the book.

Nova - Compute service

As you may already know, Nova is the original core component of OpenStack. From an architectural level, it is considered one of the most complicated components of OpenStack. Nova provides the compute service in OpenStack and manages virtual machines in response to service requests made by OpenStack users.

What makes Nova complex is its interaction with a large number of other OpenStack services and internal components, which it must collaborate with to respond to user requests for running a VM.

Let's break down the Nova service itself and look at its architecture as a distributed application that needs orchestration between different components to carry out tasks.

nova-api

The nova-api component accepts and responds to end-user compute API calls. End users or other components communicate with the OpenStack nova-api interface to create instances via the OpenStack API or EC2 API.

The nova-api initiates most orchestrating activities such as the running of an instance or the enforcement of some particular policies.

nova-compute

The nova-compute component is primarily a worker daemon that creates and terminates VM instances via the hypervisor's APIs (XenAPI for XenServer, libvirt for KVM, and the VMware API for VMware).

nova-network

The nova-network component accepts networking tasks from the queue and then performs these tasks to manipulate the network (such as setting up bridging interfaces or changing IP table rules).

Neutron is a replacement for the nova-network service.

nova-scheduler

The nova-scheduler component takes a VM instance request from the queue and determines where it should run (specifically, which compute host it should run on). At an application architecture level, the term scheduling or scheduler refers to a systematic search for the best fit for a given request in the infrastructure, with the aim of improving its performance.

nova-conductor

The nova-conductor service provides database access to compute nodes. The idea behind this service is to prevent direct database access from the compute nodes, thus enhancing database security in case one of the compute nodes gets compromised.

By zooming out of the general components of OpenStack, we find that Nova interacts with several services such as Keystone for authentication, Glance for images, and Horizon for the web interface. For example, the Glance interaction is central; the API process can upload any query to Glance, while nova-compute will download images to launch instances.

Nova also provides console services that allow end users to access the console of the virtual instance through a proxy such as nova-console, nova-novncproxy, and nova-consoleauth.

Neutron - Networking services

Neutron provides a real Network as a Service (NaaS) capability between interface devices that are managed by OpenStack services such as Nova. There are various characteristics that should be considered for Neutron:

It allows users to create their own networks and then attach server interfaces to them

Its pluggable backend architecture lets users take advantage of commodity gear or vendor-supported equipment

It provides extensions to allow additional network services to be integrated

Neutron has many core network features that are constantly growing and maturing. Some of these features are useful for routers, virtual switches, and SDN networking controllers.

Neutron introduces the following core resources:

Ports: Ports in Neutron refer to the virtual switch connections. These connections are where instances and network services attach to networks. When attached to subnets, the defined MAC and IP addresses of the interfaces are plugged into them.

Networks: Neutron defines networks as isolated Layer 2 network segments. Operators will see networks as logical switches that are implemented by the Linux bridging tools, Open vSwitch, or some other virtual switch software. Unlike physical networks, these can be defined by either the operators or the users in OpenStack.

Subnets: Subnets in Neutron represent a block of IP addresses associated with a network. IP addresses from this block are allocated to the ports.
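The relationship between these three resources can be sketched as a toy model: a network holds ports, and each port receives a MAC address and an IP allocated from the network's subnet. This is a deliberately simplified illustration, not Neutron's actual data model; the names and addresses are invented:

```python
import ipaddress

# Toy model: one network (an isolated L2 segment), one subnet (a
# block of IPs on it), and ports that get a MAC plus an IP from
# the subnet's allocation pool.
network = {"name": "tenant-net", "ports": []}
subnet = {"cidr": ipaddress.ip_network("10.0.0.0/24"), "next_host": 10}

def create_port(network, subnet, mac):
    # Allocate the next free address from the subnet's block.
    ip = subnet["cidr"].network_address + subnet["next_host"]
    subnet["next_host"] += 1
    port = {"mac": mac, "ip": str(ip)}
    network["ports"].append(port)
    return port

p1 = create_port(network, subnet, "fa:16:3e:00:00:01")
p2 = create_port(network, subnet, "fa:16:3e:00:00:02")
```

When an instance boots, Nova asks Neutron for exactly this kind of port, and the port's MAC/IP pair is what gets plugged into the virtual switch.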

Neutron provides additional resources as extensions. The following are some of the commonly used extensions:

Routers: Routers provide gateways between various networks.

Private IPs: Neutron defines two types of networks, as follows:

Tenant networks: Tenant networks use private IP addresses. Private IP addresses are visible within the instance, and this allows a tenant's instances to communicate while maintaining isolation from other tenants' traffic. Private IP addresses are not visible to the Internet.

External networks: External networks are visible and routable from the Internet. They must use routable subnet blocks.

Floating IPs: A floating IP is an IP address allocated on an external network that Neutron maps to the private IP of an instance. Floating IP addresses are assigned to an instance so that it can connect to external networks and access the Internet. Neutron achieves the mapping of floating IPs to the private IPs of instances by using Network Address Translation (NAT).

Neutron also provides advanced services to deliver additional OpenStack network capabilities, as follows:

Load Balancing as a Service (LBaaS), to distribute traffic among multiple compute node instances.

Firewall as a Service (FWaaS), to secure Layer 3 and Layer 4 network perimeter access.

Virtual Private Network as a Service (VPNaaS), to build secure tunnels between instances or hosts.

You can refer to the latest updated Mitaka release documentation for more information on networking in OpenStack at http://docs.openstack.org/mitaka/networking-guide/.

The Neutron architecture

The three main components of the Neutron architecture are:

Neutron server: It accepts API requests and routes them to the appropriate Neutron plugin for action.

Neutron plugins: They perform the actual work of orchestrating the backend devices, such as plugging in or unplugging ports, creating networks and subnets, or IP addressing. Agents and plugins differ depending on the vendor technology of a particular cloud: virtual and physical Cisco switches, NEC, OpenFlow, Open vSwitch, Linux bridging, and so on.

Neutron agents: Neutron agents run on the compute and network nodes. The agents receive commands from the plugins on the Neutron server and bring the changes into effect on the individual compute or network nodes. Different types of Neutron agents implement different functionality; for example, the Open vSwitch agent implements L2 connectivity by plugging and unplugging ports onto Open vSwitch (OVS) bridges and runs on both compute and network nodes, whereas L3 agents run only on network nodes and provide routing and NAT services.

Neutron is a service that manages network connectivity between the OpenStack instances. It ensures that the network will not be turned into a bottleneck or limiting factor in a cloud deployment and gives users real self-service, even over their network configurations.

Another advantage of Neutron is its ability to provide a way to integrate vendor networking solutions and a flexible way to extend network services. It is designed to provide a plugin and extension mechanism that presents an option for network operators to enable different technologies via the Neutron API. More details about this will be covered in Chapter 6, OpenStack Networking - Choice of Connectivity Types and Networking Services, and Chapter 7, Advanced Networking - A Look at SDN and NFV.

Keep in mind that Neutron allows users to manage and create networks or connect servers and nodes to various networks.

The scalability advantage will be discussed in a later topic in the context of the Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies, which are attractive to many network operators and administrators who seek high-level network multi-tenancy.

Ceilometer, Aodh, and Gnocchi - Telemetry

Ceilometer provides a metering service in OpenStack. In a shared, multi-tenant environment such as OpenStack, metering resource utilization is of prime importance.

Ceilometer collects data associated with resources. Resources can be any entity in the OpenStack cloud such as VMs, disks, networks, routers, and so on. Resources are associated with meters. The utilization data is stored in the form of samples in units defined by the associated meter. Ceilometer has an inbuilt summarization capability.

Ceilometer allows data collection from various sources, such as the message bus, polling resources, centralized agents, and so on.
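The meter/sample model and the inbuilt summarization can be pictured with a small sketch. The sample values and resource names below are invented for illustration; the averaging mimics what Ceilometer's statistics API computes over stored samples:

```python
from statistics import mean

# Toy Telemetry data: each sample records a resource, a meter,
# a unit, and a measured volume. Values are made up.
samples = [
    {"resource": "vm-1", "meter": "cpu_util", "unit": "%", "volume": 20.0},
    {"resource": "vm-1", "meter": "cpu_util", "unit": "%", "volume": 40.0},
    {"resource": "vm-2", "meter": "cpu_util", "unit": "%", "volume": 90.0},
]

def average(samples, resource, meter):
    # Summarization over samples, similar in spirit to
    # Ceilometer's statistics endpoint.
    return mean(s["volume"] for s in samples
                if s["resource"] == resource and s["meter"] == meter)

avg = average(samples, "vm-1", "cpu_util")
```

Summaries like this average are what the Alarming service evaluates against thresholds to decide whether to trigger an alarm.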

As an additional design change in the Telemetry service since the OpenStack Liberty release, the Alarming service has been decoupled from the Ceilometer project into a newly incubated project code-named Aodh. The Telemetry Alarming service is dedicated to managing alarms and triggering them based on collected metering data and scheduled events.

More Telemetry service enhancements have been proposed to adopt a Time Series Database as a Service project code-named Gnocchi. This architectural change will tackle the challenge of metric and event storage at scale in the OpenStack Telemetry service and improve its performance.

Telemetry and system monitoring are covered in more detail in Chapter 10, Monitoring and Troubleshooting - Running a Healthy OpenStack Cluster.

Heat - Orchestration

Debuting in the Havana release is the OpenStack Orchestration project Heat. Initial development for Heat was limited to a few OpenStack resources including compute, image, block storage, and network services. Heat has boosted the emergence of resource management in OpenStack by orchestrating different cloud resources resulting in the creation of stacks to run applications with a few pushes of a button. From simple template engine text files referred to as HOT templates (Heat Orchestration Template), users are able to provision the desired resources and run applications in no time. Heat is becoming an attractive OpenStack project due to its maturity and extended support resources catalog within the latest OpenStack releases. Other incubated OpenStack projects such as Sahara (Big Data as a Service) have been implemented to use the Heat engine to orchestrate the creation of the underlying resources stack. It is becoming a mature component in OpenStack and can be integrated with some system configuration management tools such as Chef for full stack automation and configuration setup.

Heat uses template files in YAML or JSON format; indentation is important!
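As an illustration, a minimal HOT template that boots a single server could look like the following; the template version, parameter names, and resource name are one possible choice, not a canonical example from this book:

```yaml
heat_template_version: 2016-04-08

description: Minimal example stack that boots one server

parameters:
  image:
    type: string
    description: Name or ID of the Glance image to boot
  flavor:
    type: string
    description: Nova flavor for the server

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
```

Passing different parameter values at stack-creation time lets the same template provision many variations of the stack.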

The Orchestration project in OpenStack is covered in more detail in Chapter 8, Operating the OpenStack Infrastructure- The User Perspective.

Horizon - Dashboard

Horizon is the web dashboard that pulls all the different pieces together from the OpenStack ecosystem.

Horizon provides a web frontend for OpenStack services. Currently, it includes all the OpenStack services as well as some incubated projects. It was designed as a stateless and data-less web application. It does nothing more than initiate actions in the OpenStack services via API calls and display information that OpenStack returns to Horizon. It does not keep any data except the session information in its own data store. It is designed to be a reference implementation that can be customized and extended by operators for a particular cloud. It forms the basis of several public clouds, most notably the HP Public Cloud, and at its heart is its extensible modular approach to construction.

Horizon is based on a series of modules called panels that define the interaction of each service. Its modules can be enabled or disabled, depending on the service availability of the particular cloud. In addition to this functional flexibility, Horizon is easy to style with Cascading Style Sheets (CSS).

Message Queue

Message Queue provides a central hub to pass messages between different components of a service. This is where information is shared between different daemons by facilitating the communication between discrete processes in an asynchronous way.

One major advantage of the queuing system is that it can buffer requests and provide unicast and group-based communication services to subscribers.

The database

The database stores most of the build-time and run-time state of the cloud infrastructure, including the instance types that are available for use, instances in use, available networks, and projects. It provides persistent storage for preserving the state of the cloud infrastructure. It is the second essential piece for sharing information across all OpenStack components.

Gathering the pieces and building a picture

Let's try to see how OpenStack works by chaining all the service cores covered in the previous sections in a series of steps:

Authentication is the first action performed. This is where Keystone comes into the picture. Keystone authenticates the user based on credentials such as the username and password.

The service catalog is then provided by Keystone. This contains information about the OpenStack services and the API endpoints.

You can use the Openstack CLI to get the catalog:

$ openstack catalog list

The service catalog is a JSON structure that exposes the resources available on a token request.
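To show what clients do with that JSON structure, here is a trimmed, hand-written catalog and an endpoint lookup; the URLs and service entries are placeholders for what a real token response would contain:

```python
import json

# A trimmed, illustrative service catalog; real catalogs also carry
# region and endpoint IDs, and the URLs depend on the deployment.
catalog_json = """
[
  {"type": "compute", "name": "nova",
   "endpoints": [{"interface": "public",
                  "url": "http://controller:8774/v2.1"}]},
  {"type": "identity", "name": "keystone",
   "endpoints": [{"interface": "public",
                  "url": "http://controller:5000/v3"}]}
]
"""

def endpoint_for(catalog, service_type, interface="public"):
    # Clients resolve API endpoints by service type and interface
    # rather than hardcoding URLs.
    for svc in catalog:
        if svc["type"] == service_type:
            for ep in svc["endpoints"]:
                if ep["interface"] == interface:
                    return ep["url"]
    return None

catalog = json.loads(catalog_json)
nova_url = endpoint_for(catalog, "compute")
```

This lookup is why a client only ever needs the Keystone URL and credentials: every other API endpoint is discovered from the catalog.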

Typically, once authenticated, you can talk to an API node. There are different APIs in the OpenStack ecosystem (the OpenStack API and EC2 API):

The following figure shows a high-level view of how OpenStack works:

Another element in the architecture is the instance scheduler. Schedulers are implemented by OpenStack services that are architected around worker daemons. The worker daemons manage the launching of instances on individual nodes and keep track of the resources available on the physical nodes on which they run. The scheduler in an OpenStack service looks at the state of the resources on a physical node (provided by the worker daemons) and decides the best candidate node to launch a virtual instance on. An example of this architecture is nova-scheduler, which selects the compute node to run a virtual machine, or the Neutron L3 scheduler, which decides which L3 network node will host a virtual router.

The scheduling process in OpenStack Nova can use different algorithms, such as simple, chance, and zone. An advanced way to do this is by deploying filters and weights, ranking hosts according to their available resources.
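The filter-and-weigh approach can be sketched in a few lines. This is only a conceptual model in the spirit of nova-scheduler's RAM/disk filters and RAM weigher, with invented host data, not the actual scheduler code:

```python
# Toy filter-and-weigh scheduler: drop hosts that cannot satisfy the
# request, then rank the survivors by free RAM. Host figures are
# invented for illustration.
hosts = [
    {"name": "compute1", "free_ram_mb": 2048, "free_disk_gb": 40},
    {"name": "compute2", "free_ram_mb": 8192, "free_disk_gb": 10},
    {"name": "compute3", "free_ram_mb": 4096, "free_disk_gb": 80},
]

def schedule(hosts, ram_mb, disk_gb):
    # Filtering phase: keep only hosts with enough RAM and disk.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= ram_mb
                  and h["free_disk_gb"] >= disk_gb]
    if not candidates:
        return None
    # Weighing phase: prefer the host with the most free RAM.
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

best = schedule(hosts, ram_mb=1024, disk_gb=20)
```

Here compute2 is filtered out for lacking disk, and compute3 wins the weighing phase on free RAM; real deployments chain many such filters and weighers, each configurable by the operator.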

Provisioning a VM under the hood

It is important to understand how different services in OpenStack work together, leading to a running virtual machine. We have already seen how a request is processed in OpenStack via APIs.

Let's figure out how things work by referring to the following simple architecture diagram:

The process of launching a virtual machine involves the interaction of the main OpenStack services that form the building blocks of an instance, including compute, network, storage, and the base image. As shown in the previous diagram, OpenStack services interact with each other via a message bus to submit and retrieve RPC calls. The information for each step of the provisioning process is verified and passed on by different OpenStack services via the message bus. From an architectural perspective, subsystem calls are defined and treated in the OpenStack API endpoints involving Nova, Glance, Cinder, and Neutron.

On the other hand, the inter-communication of APIs within OpenStack requires an authentication mechanism to be trusted, which involves Keystone.

Starting with the identity service, the following steps summarize briefly the provisioning workflow based on API calls in OpenStack:

Calling the identity service for authentication

Generating a token to be used for subsequent calls

Contacting the image service to list and retrieve a base image

Processing the request to the compute service API

Processing compute service calls to determine security groups and keys

Calling the network service API to determine available networks

Choosing the hypervisor node by the compute scheduler service