Extending OpenStack

Omar Khedher
Description

Discover new opportunities to empower your private cloud by making the most of the OpenStack universe

Key Features

  • This practical guide teaches you how to extend the core functionalities of OpenStack
  • Discover OpenStack's flexibility by writing custom applications and network plugins
  • Deploy a containerized environment in OpenStack through a hands-on and example-driven approach

Book Description

OpenStack is a very popular cloud computing platform that has enabled several organizations over the last few years to successfully implement their Infrastructure as a Service (IaaS) platforms. This book will guide you through the new features of the latest OpenStack releases and show you how to bring them into production straight away in an agile way.

It starts by showing you how to expand your current OpenStack setup and how to approach your next-generation OpenStack data center deployment. You will discover how to extend your storage and network capacity and also take advantage of containerization technology such as Docker and Kubernetes in OpenStack. Additionally, you'll explore the power of Big Data as a Service implemented in OpenStack by integrating the Sahara project. This book will teach you how to build Hadoop clusters and launch jobs in a very simple way. Then you'll automate and deploy applications on top of OpenStack. You will discover how to write your own plugin for the Murano project. The final part of the book will go through best practices for security such as identity, access management, and authentication exposed by Keystone in OpenStack. By the end of this book, you will be ready to extend and customize your private cloud based on your requirements.

What you will learn

  • Explore new incubated projects in the OpenStack ecosystem and see how they work
  • Architect your OpenStack private cloud with extended features of the latest versions
  • Consolidate OpenStack authentication in your large infrastructure to avoid complexity
  • Find out how to expand your computing power in OpenStack on a large scale
  • Reduce your OpenStack storage management costs by taking advantage of external tools
  • Provide easy, on-demand, cloud-ready applications to developers using OpenStack in no time
  • Enter the big data world and find out how to launch elastic jobs easily in OpenStack
  • Boost your extended OpenStack private cloud performance through real-world scenarios

Who this book is for

This book is for system administrators, cloud architects, and developers who have experience working with OpenStack and are ready to step up and extend its functionalities. A good knowledge of the basic OpenStack components is required, along with familiarity with Linux and a good understanding of networking and virtualization terminology.





Extending OpenStack

Leverage extended OpenStack projects to implement containerization, deployment, and architecting robust cloud solutions

Omar Khedher

BIRMINGHAM - MUMBAI

Extending OpenStack

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Gebin George
Acquisition Editor: Rahul Nair
Content Development Editor: Abhishek Jadhav
Technical Editor: Swathy Mohan
Copy Editor: Safis Editing, Dipti Mankame
Project Coordinator: Judie Jose
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Graphics: Tom Scaria
Production Coordinator: Shraddha Falebhai

First published: February 2018

Production reference: 1260218

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78646-553-5

www.packtpub.com

mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Omar Khedher is a systems and network engineer. He has been involved in several cloud-related projects based on AWS and OpenStack. He spent a few years as a cloud systems engineer, working with talented teams to architect infrastructure in the public cloud at Fyber in Berlin.

Omar wrote a few academic publications for his PhD targeting cloud performance, was the author of Mastering OpenStack and OpenStack Sahara Essentials, and co-authored the second edition of Mastering OpenStack, all published by Packt.

 

I would like to immensely thank my parents and brothers for their encouragement. A special thanks goes to Dr. M. Jarraya. Thank you to my dears Belgacem, Andre, Silvio, and Caro for their support. Thank you, Tamara, for your long support and patience. Thank you to the PacktPub team for their immense dedication. Many thankful words to the OpenStack family.

About the reviewer

Radhakrishnan Ramakrishnan is a DevOps engineer with CloudEnablers Inc, a product company focused on multi-cloud orchestration and multi-cloud governance platforms, located in Chennai, India. He has more than 3 years of experience in Linux server administration, OpenStack cloud administration, and Hadoop cluster administration across various distributions, such as Apache Hadoop, Hortonworks Data Platform, and the Cloudera distribution of Hadoop. His interests include reading books, listening to music, and gardening.

I would like to thank my family, friends, employers and employees for their continued support.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Extending OpenStack

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Inflating the OpenStack Setup

Revisiting the OpenStack ecosystem

Grasping a first layout

Postulating the OpenStack setup

Treating OpenStack as code

Growing the OpenStack infrastructure

Deploying OpenStack

Ansible in a nutshell

Testing the OpenStack environment

Prerequisites for the test environment

Setting up the Ansible environment

Running the OSA installation

Production OpenStack environment

Summary

Massively Scaling Computing Power

Decomposing the compute power

Empowering the compute service

Varying the compute flavor

Meeting Docker

Joining Docker

Meeting Xen

Joining Xen

Segregating the compute resources

Reasoning for infrastructure segregation

Defining regions

Defining AZ

Defining host aggregate

Defining cells

Reasoning for workload segregation

Filtering the compute workload

Weighting the compute power

Stacking or spreading

Weighing in action

Summary

Enlarging the OpenStack Storage Capabilities

Varying the block storage backends

Managing block storage – Logical Volume Manager (LVM)

Managing block storage – Network File System (NFS)

Managing block storage – Ceph RADOS Block Device (RBD)

Scheduling and filtering

Hybrid storage scheduling

Navigating the storage backup alternatives

Ceph as backup

Swift as backup

Exploring Manila – shared file service

Configuring the shared file service

Configuring block storage for the Manila backend

Configuring CephFS for the Manila backend

Summary

Harnessing the Power of the OpenStack Network Service

Neutron plugins reference

Driving the sole plugin – ML2 under the hood

Extending ML2 – customizing your own plugin

Maximizing network availability

Neutron HA – DVR

Configuring DVR

Neutron HA – VRRP

The era of network programming

Orchestrating the network function virtualization (NFV)

Summary

Containerizing in OpenStack

Why containers?

The natural evolution of containers

Game changing – microservices

Building the ship

Containers in OpenStack

Docker Swarm in OpenStack

Example – NGINX web server

Kubernetes in OpenStack

Example – application server

Mesos in OpenStack

Example – a Python-based web server

Summary

Managing Big Data in OpenStack

Big data in OpenStack

Rolling OpenStack Sahara service

Deploying the Hadoop cluster

Executing jobs

Summary

Evolving Self-Cloud Ready Applications in OpenStack

The evolvement of Murano

The Murano ecosystem

Integrating Murano in OpenStack

Deploying a self-contained application

Summary

Extending the Applications Catalog Service

Murano application under the hood

Developing application publisher perspective

Deploying application consumer perspective

Summary

Consolidating the OpenStack Authentication

Recapping the Keystone blocks

The multitude faces of the token

Multiple identity actors

All in one authentication hub

Keystone as SP – SAML

Keystone as SP – OpenID Connect

Summary

Boosting the Extended Cloud Universe

Benchmarking as a Service (BaaS)

Automating OpenStack profiling with Rally

Installing Rally

Benchmarking with Rally

Extending benchmarking with plugins

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

OpenStack is a very popular cloud computing platform that has enabled several organizations to successfully implement their Infrastructure as a Service (IaaS) platforms over the last few years. This book will guide you through the new features of the latest OpenStack releases and show you how to bring them into production straight away in an agile way.

It starts by showing you how to expand your current OpenStack setup and how to approach your next-generation OpenStack data center deployment. You will discover how to extend your storage and network capacity, and also take advantage of containerization technology, such as Docker and Kubernetes, in OpenStack. In addition, you will explore the power of Big Data as a Service implemented in OpenStack by integrating the Sahara project. This book will teach you how to build Hadoop clusters and launch jobs in a very simple way. It then dedicates time to automating and deploying applications on top of OpenStack. You will discover how to create and publish your own application in simple steps using the novel application catalog service in OpenStack, code-named Murano.

The final part of the book sheds light on the identity service and goes through a consolidated authentication setup using Keystone. The book concludes by leveraging the right tool to conduct and extend benchmarking and performance tests against a running OpenStack environment using the Rally platform. By the end of this book, you will be ready to enter the next phase of OpenStack success by extending and customizing your private cloud based on your requirements.

Who this book is for

This book is for system administrators, cloud architects, and developers who have experience working with OpenStack and are ready to step up and extend its functionalities. A good knowledge of the basic OpenStack components is required, along with familiarity with Linux and a good understanding of networking and virtualization terminology.

What this book covers

Chapter 1, Inflating the OpenStack Setup, describes installing OpenStack from a basic setup model and introduces an expanded OpenStack layout.

Chapter 2, Massively Scaling Computing Power, explores the ways to scale the computing availability in a large infrastructure.

Chapter 3, Enlarging the OpenStack Storage Capabilities, itemizes the different storage options available in OpenStack and custom plugins.

Chapter 4, Harnessing the Power of the OpenStack Network Service, extends the usage of the OpenStack network service.

Chapter 5, Containerizing in OpenStack, integrates the Magnum project in OpenStack and itemizes its workflow.

Chapter 6, Managing Big Data in OpenStack, extends the private cloud setup by covering the big data world and elastic data processing in OpenStack using the Sahara project.

Chapter 7, Evolving Self-Cloud Ready Applications in OpenStack, teaches you how to automate deploying applications on top of OpenStack using the Murano project.

Chapter 8, Extending the Applications Catalog Service, explores the power of Murano plugins by creating customized ones.

Chapter 9, Consolidating the OpenStack Authentication, introduces the reader to the new implementation of Keystone in OpenStack and the federated identity concept.

Chapter 10, Boosting the Extended Cloud Universe, increases the availability and performance of the OpenStack infrastructure at scale.

To get the most out of this book

The book assumes a moderate level of experience with the Linux operating system and familiarity with the OpenStack ecosystem. A good knowledge and understanding of networking and virtualization technology is required. Experience with containerization will help you move faster through the chapters of the book. A few examples have been written in Python and YAML, which require a basic knowledge of both languages, although this is not strictly necessary.

The installation of the OpenStack environment can be performed in any environment with available resources. The lab environment in this book uses the following software and tools:

Operating system: CentOS 7 or Ubuntu 14.04

OpenStack: Mitaka and later releases

VirtualBox 5.0 or newer

Vagrant 2.0.1 or newer

Ansible server 2.4 or newer

Python 2.7

The OpenStack installation will require the following hardware specifications:

A host machine with CPU hardware virtualization support

8 CPU cores

16 GB RAM

60 GB free disk space

Feel free to use any tool for the test environment, such as Oracle's VirtualBox, Vagrant, or VMware Workstation. Many chapters implement a new OpenStack deployment in a freshly installed environment to target the objectives of that chapter. Feel free to redeploy OpenStack with different releases across each lab. Make sure that you target the right release with the supported projects. This page can be a good reference for comparing different OpenStack releases: https://releases.openstack.org/.

At the time of writing this book, several packages are being developed for new releases, and some older versions might reach end of life. This does not cover the operating system version or system management tools. It is recommended that you check for the latest version of each package, as packages at the links provided throughout this book might no longer be available.

Download the example code files

You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packtpub.com.

2. Select the SUPPORT tab.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Extending-OpenStack. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it from https://www.packtpub.com/sites/default/files/downloads/ExtendingOpenStack_ColorImages.pdf.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

Inflating the OpenStack Setup

"The past resembles the future more than one drop of water resembles another."
-Ibn Khaldoun

Nowadays, OpenStack has become a very mature cloud computing software solution, more so than ever before. It is a unique project because of its tremendous growth in adoption and development. Thanks to OpenStack, it has become possible to build your own cloud in a cheaper, more elegant, and more flexible way. The official OpenStack website, https://www.openstack.org/, describes the reasons for using such a great solution:

OpenStack software controls large pools of compute, storage, and networking resources throughout a data center, managed through a dashboard or through the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.

Looking at the roadmap of OpenStack's development over the past few years, several open source projects have been incubated under the OpenStack umbrella, covering big data, databases, security, and containerization technology, and the list is still growing. With each new OpenStack release, a new project becomes more mature and better integrated into the cloud platform. This creates more opportunities to expand the cloud's functionalities and grow your next-generation data center.

In this chapter, we will cover the following topics:

Briefly parsing the OpenStack components and the innovation areas

Implementing a first architectural design of an OpenStack private cloud

Checking the latest tools and processes to build a production-ready OpenStack environment

Discussing the need to adopt the Infrastructure as Code (IaC) concept for successful OpenStack management and implementation

Exploring new opportunities to enlarge the OpenStack setup by tackling the cloud setup in both a test and a production environment using Ansible

Revisiting the OpenStack ecosystem

OpenStack has been designed to be deployed on a loosely coupled architectural layout. By defining each component of its ecosystem to run independently, it becomes possible to distribute each service among dedicated machines to achieve redundancy. As defined, the base services that constitute the core components of OpenStack are the compute, network, and storage services. Building on this, the OpenStack community takes advantage of the base services and the design approach of the cloud software, and keeps developing and adding new open source projects to the OpenStack ecosystem. A variety of new X-As-A-Service projects appear with nearly every OpenStack release.

Getting up to speed with expanding the private cloud setup involves getting to grips with the core OpenStack services and terms. The following table shows the main projects in OpenStack in its early releases, with their corresponding code names:

Code name | Service | Description
Nova | Compute | Manages instance resources and operations
Glance | Image | Manages instance disk images and their snapshots
Swift | Object storage | Manages access to the object storage level through a REST API
Cinder | Block storage | Manages volumes for instances
Neutron | Network | Manages network resources for instances
Keystone | Identity | Manages authentication and authorization for users and services
Horizon | Dashboard | Exposes a graphical user interface to manage an OpenStack environment

Of course, the OpenStack ecosystem has kept growing to cover more projects and include more services. Since October 2013 (the date of the Havana release), the OpenStack community has shifted toward enlarging the set of services provided by OpenStack into an extensive list. The following table shows the extended services of OpenStack (Mitaka release) at the time of writing:

Code name | Service | Description
Ceilometer | Telemetry | Provides monitoring of resource usage
Heat | Orchestration | Manages a collection of resources as a single unit using template files
Trove | Database | Database as a Service (DBaaS) component
Sahara | Elastic Data Processing (EDP) | Quickly provisions a Hadoop cluster to run EDP jobs against it
Ironic | Bare metal | Provisions bare-metal machines
Zaqar | Messaging service | Enables notification and messaging services
Manila | Shared filesystems | Provides shared Filesystem as a Service (FSaaS), allowing one shared filesystem to be mounted across several instances
Designate | Domain name service | Offers DNS services
Barbican | Key management | Provides key management service capabilities for keys, certificates, and binary data
Murano | Application catalog | Exposes an application catalog allowing the publishing of cloud-ready applications
Magnum | Containers | Introduces Containers as a Service (CaaS) in OpenStack
Congress | Governance | Maintains compliance with enterprise policies

On the official OpenStack website, you can find a very informative Project Navigator page that shows the maturity and adoption statistics for each OpenStack project, as well as the number of years it has been in development. You can find it at https://www.openstack.org/software/project-navigator.

Ultimately, if you want to expand your OpenStack environment to provide more of an X-As-A-Service user experience, you may need to revisit the core ecosystem first. This will enable you to pinpoint how the new service will be exposed to the end user and predict any change that needs more attention regarding load and resource usage.

Grasping a first layout

Let's rekindle the flame and implement a basic architectural design. You probably have a running OpenStack environment where you have installed its different pieces across multiple, dedicated server roles. The architectural design of the OpenStack software itself gives you more flexibility to build your own private cloud. As mentioned in the first section, the loosely coupled design makes it easier to decide how to run services on the nodes in your data center. Whatever the size of your data center, your hardware choices, or your third-party vendor dependencies, OpenStack has been built so that it does not suffer from vendor lock-in. This means we do not have to stick to any specific design pattern or any vendor requirements.

The following figure shows a basic conceptual design for OpenStack deployment in a data center:

Postulating the OpenStack setup

OpenStack, as a distributed system, is designed to give you flexibility in designing your private cloud. As summed up in the previous section, many components can run across different fleets of nodes. When it comes to a large infrastructure, the OpenStack setup can scale to more than one location, forming multisite environments that are geographically dispersed. In order to manage large-scale infrastructure with OpenStack, it becomes crucial to find a promising approach that makes any deployment, change, or update of the underlying infrastructure more consistent and easier to operate.

A very new and promising approach that will transform the way IT infrastructures are managed is IaC. Covering the challenges and principles of such a model could fill an entire book. In the next section, we will cover how we will deploy our OpenStack environment at a large scale by adopting this approach.

Treating OpenStack as code

The Infrastructure as Code concept provides several best practices and patterns that will help us achieve remarkable results for the portfolio of systems within an organization. Without going deeply into details of this concept, the following points show us the advantages of using IaC for our OpenStack deployment:

It automates the deployment of all OpenStack components across dozens of nodes with less effort, time, and cost, and with more reliability

It audits the OpenStack environment with every change and update

It defines the desired state of the OpenStack infrastructure

It enables faster recovery from failures by making it easy to reproduce systems after unexpected changes during OpenStack deployment

It improves the robustness of OpenStack's infrastructure 

It keeps services available and consistent

In order to take advantage of the mentioned benefits of the IaC concept, the components of the OpenStack environment can be transformed into defined roles. Each role describes one or more specific elements of the OpenStack infrastructure and details how they should be configured.

Such roles can be written in a configuration definition file, which is a generic term for describing the role of a service or server. Nowadays, many tools, such as Chef, Puppet, and Ansible, have been developed for this purpose and provide a better system management experience. The continuous growth of the OpenStack ecosystem is the result of the support and dedication of several large and medium-sized enterprises around the globe. This interest in providing a unique cloud software solution is not limited to the OpenStack source code, but also extends to contributions that automate its deployment. This covers the development of production-ready artifacts to manage and operate an OpenStack environment through system management tools, including Chef cookbooks, Ansible playbooks, and Puppet manifests.
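To make the idea of a role as a configuration definition file more concrete, the following is a minimal sketch of an Ansible task file describing the desired state of a single OpenStack service. The role layout and the nova-api package and service names are illustrative assumptions, not taken from any specific deployment project:

# roles/nova-api/tasks/main.yml (hypothetical role layout)
# Desired state: the package is installed and the service is running.
- name: Install the Nova API package
  apt:
    name: nova-api        # assumed package name, for illustration only
    state: present

- name: Ensure the Nova API service is enabled and running
  service:
    name: nova-api
    state: started
    enabled: true

Whatever tool is used, the point is the same: the file declares what the end state should be, and the tool is responsible for converging the host to that state.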

Growing the OpenStack infrastructure

The ultimate goal of the Infrastructure as Code approach is to improve confidence in the systems running in production. In addition, this can be coupled with infrastructure growth. Expanding the OpenStack layout, for example, cannot be achieved without taking into account an agile approach that keeps its different components across the data center running without interruption. Moreover, adding new components or integrating a new service into the OpenStack ecosystem will result in a design change. New components should talk to existing ones with few new resource requirements. This challenge can be delegated to a Version Control System (VCS). Whatever changes are made, keeping the OpenStack setup self-descriptive in the VCS through definition files and scripts will define the desired state of the private cloud. This avoids reinventing the wheel; we only need to expand and correlate the code describing the existing OpenStack setup.

To ensure that the OpenStack infrastructure withstands changes as the code that describes it grows, a very agile way of handling system configuration changes must exist. This can be inspired by software development practices, which enables us to apply modern software development tools to deploy and extend an OpenStack infrastructure, for example. At this stage, the DevOps movement has appeared, bringing software developers and operators together to collaborate. Of course, exploiting this modern approach and its derived practices and ideas will bring beneficial results when growing or upgrading your OpenStack private cloud environment.

The next diagram presents a simplified view of a standard change management life cycle for the OpenStack infrastructure deployment code:

The different stages can be discussed as follows:

Plan and design: The very early stage of planning the general layout of the OpenStack infrastructure and the related components that you intend to install, integrate, and deploy.

Development stage: This involves running tests against the latest versions of the infrastructure file definitions. In general, local tools, such as Vagrant and other virtualized local test environments, are used to test the changed files and commit them to a VCS.

Build and unit test stage: Once a change is committed to the VCS, a phase of code validation will be managed by a Continuous Integration (CI) system. It will run several activities or jobs, checking the syntax, compiling the code, and running unit tests.

CI is an innovative practice that enables us to rapidly and effectively identify any defective code at an early stage. Jenkins and TeamCity are two of the most famous CI tools used by most software development enterprises. Such tools offer an automated test build of the software code, which provides fast feedback about its correctness at every committed change (a pipeline sketch is shown after this list).

Code packaging and release: The CI tool should give a green light to process the changes. At this stage, the build has completed successfully, and the configuration artifact will be packaged to be made available for later phases.

During a classic application job build, one or more files are generated that will be uploaded to the configuration repository. A configuration artifact can be versioned and portable, but it must be consistent.

Test staging: At this stage, several tests should be executed in environments similar to production. The most effective infrastructure code tests run across multiple stages. For example, you should start with a first test stage for one OpenStack service on its own. Then, you should extend the first test with a second one by integrating other OpenStack components.

Deploy to production: This applies to the final stage, where the modeled changes that have been tested will be applied with zero downtime. Some great release techniques can be used at this stage, such as Blue-Green deployment.

The Blue-Green deployment technique ensures near-zero downtime and reduces the risk of disturbing a running production environment when applying changes. During the change, two identical production environments are running: the live one is named Blue, and the idle one is named Green. A complete switch to the Green environment happens only once it has been deployed and fully tested with the necessary checks and requirements. In the case of an unexpected issue in the live environment, it is still possible to rapidly roll back the last change by switching to the first Blue environment (the previous infrastructure version).

Operate in production: This is the very last stage, which proves the degree of consistency of the latest changes in a running production environment. It should also be possible to roll changes out quickly and easily.
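To illustrate the build and unit test stage, the following is a minimal sketch of a CI job definition for a repository holding Ansible-based infrastructure code. The GitLab CI style layout, the stage names, and the site.yml playbook name are assumptions made for this example; any CI system, such as Jenkins, could run the same commands:

# .gitlab-ci.yml (illustrative pipeline for OpenStack infrastructure code)
stages:
  - syntax
  - lint

syntax-check:
  stage: syntax
  script:
    # Validate the playbook syntax without contacting any host
    - ansible-playbook --syntax-check site.yml

static-analysis:
  stage: lint
  script:
    # Catch common mistakes and style issues in the playbooks
    - ansible-lint site.yml

A failing job stops the change from being packaged and released, which is exactly the fast feedback loop described above.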

Deploying OpenStack

Integrating new services and updating or upgrading some or all of the OpenStack components are critical operational tasks. Such moments call for the use of software engineering practices. As mentioned in the previous section, applying such practices with Infrastructure as Code will help you deliver high-quality infrastructure code. The end result will enable you to deploy a fully automated, robust, and continuously deliverable OpenStack environment.

To tweak the installation of a complete and extensible OpenStack environment, we need to start by deploying a first test environment. As promised at the beginning of this chapter, we will use a system management tool that will help us not only deploy our first OpenStack layout rapidly, but also carry feedback from testing results, such as unit testing.

Chef, Puppet, SaltStack, and many other system management tools are great tools that can do the job. You will probably have used one or more of them. Ansible will be chosen for this section and the upcoming sections as the system management tool for end-to-end OpenStack deployment and management.

Ansible in a nutshell

Following the new trend among cloud infrastructure developers, every operational task should be capable of being automated. Many system management tools offer automation capabilities and have been extended to cover more advanced features, such as emulating parts of a given system for fast validation of definition files. Of course, every infrastructure tool must show that it makes easy-to-use, realistic, full tests and deployments possible. Compared to Chef or Puppet, for example, Ansible could be reckoned the simplest IT orchestration and automation tool. This is because Ansible does not require any agent or daemon to be running on the managed host or instance. It simply needs a Secure Shell connection; all it then does is copy the Ansible modules to the managed hosts and execute them, and that is it!
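For instance, all Ansible needs to know about the machines it manages is an inventory it can reach over SSH. A minimal sketch in the YAML inventory format, with purely hypothetical host names, could look like this:

# inventory.yml (hypothetical hosts, reachable over SSH)
all:
  children:
    controllers:
      hosts:
        controller01:
        controller02:
    computes:
      hosts:
        compute01:
        compute02:

Ansible connects to each listed host over SSH, copies the required modules, runs them, and cleans them up again; no agent is left behind.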

By virtue of its simple, agentless architecture, deploying and expanding a large OpenStack infrastructure becomes much less complicated. Ansible uses playbooks to modularize its configuration definition files, which are written in the YAML format.

As with Chef or Puppet, configuration files in Ansible are organized in a specific definition layered hierarchy, as follows:

Playbook: A playbook can be seen as the high-level code of the system deployment. The code instructs which host or group of hosts will be assigned to which role. It encapsulates a few specific parameters, for example, to make Ansible run as the root user.

Role: A role represents the intended logical function of a host or group of hosts. The role file exposes tasks and customized functions to configure and deploy a service on one host or a fleet of hosts.
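Bringing the two layers together, a minimal playbook sketch could map the hypothetical host groups from the earlier inventory to roles and ask Ansible to escalate to root; the role names are again illustrative assumptions:

# site.yml (illustrative playbook mapping host groups to roles)
- hosts: controllers
  become: true          # run the tasks as the root user
  roles:
    - nova-api

- hosts: computes
  become: true
  roles:
    - nova-compute

Running ansible-playbook -i inventory.yml site.yml would then converge every host in each group to the state described by its roles.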