This course is for those who are new to OpenStack and want to learn the cloud networking fundamentals and get started with OpenStack networking. A basic understanding of the Linux operating system, virtualization, networking, and storage principles will come in handy.
OpenStack is a collection of software projects that work together to provide a cloud fabric.
The Learning OpenStack Cloud Computing course is a comprehensive guide to building cloud environments proficiently. This course will help you gain a clearer understanding of OpenStack's components and how they interact with each other to build a cloud environment.
The first module, Learning OpenStack, starts with a brief look into the need for authentication and authorization, the different aspects of dashboards, cloud computing fabric controllers, along with Networking as a Service and Software-Defined Networking. Then, you will focus on installing, configuring, and troubleshooting the different OpenStack services: Keystone, Horizon, Nova, Neutron, Cinder, Swift, and Glance. After getting familiar with the fundamentals and application of OpenStack, we move deeper into the realm of OpenStack.
In the second module, OpenStack Cloud Computing Cookbook, you will learn how to build and operate OpenStack cloud computing, storage, networking, and automation. Dive into Neutron, the OpenStack Networking service, and get your hands dirty configuring ML2, networks, routers, and distributed virtual routers. Further, you'll work through practical examples of Block Storage, LBaaS, and FWaaS.
The final module, Troubleshooting OpenStack, will help you quickly diagnose, troubleshoot, and correct problems in your OpenStack cloud. We will diagnose and remediate issues in Keystone, Glance, Neutron networking, Nova, Cinder block storage, and Swift object storage, as well as issues caused by Heat orchestration.
This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: Learning OpenStack, OpenStack Cloud Computing Cookbook, and Troubleshooting OpenStack.
This course aims to create a smooth learning path that will teach you how to get started with setting up private and public clouds using a free and open source cloud computing platform, OpenStack. Through this comprehensive course, you'll learn OpenStack cloud computing from start to finish and more!
Learn how you can put the features of OpenStack to work in the real world in this comprehensive path
A course in three modules
BIRMINGHAM - MUMBAI
Copyright © 2016 Packt Publishing
All rights reserved. No part of this course may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this course to ensure the accuracy of the information presented. However, the information contained in this course is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this course.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this course by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Published on: August 2016
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78712-318-2
www.packtpub.com
Authors
Alok Shrivastwa
Sunil Sarat
Kevin Jackson
Cody Bunch
Egle Sigler
Tony Campbell
Reviewers
Dr. Ketan Maheshwari
Ben Silverman
Chris Beatty
Walter Bentley
Victoria Martinez de la Cruz
Stefan Lenz
Andy McCrae
Melissa Palmer
Sriram Rajan
Content Development Editor
Mayur Pawanikar
Production Coordinator
Nilesh Mohite
The cloud is the new IT paradigm and has moved beyond being probable to being inevitable. No one can ignore it. Organizations have embraced the cloud for various reasons, such as agility, scalability, capex reduction, and a faster time to market for their products and services. The cloud operating system (also called the cloud control layer, cloud software system, or simply the cloud orchestrator) is at the heart of building a cloud that delivers IaaS. While there are many choices available as far as the cloud orchestrator goes, OpenStack is a popular choice in the open source segment.
OpenStack is rapidly gaining momentum and is poised to become the leader in this segment. Therefore, it becomes imperative for organizations and IT managers and support teams to have these critical OpenStack skills. The challenge, however, stems from the fact that OpenStack is not a single product, but a collection of multiple open source projects. The challenge, therefore, is to understand these projects independently, along with their interactions with the other projects and how they are all orchestrated together. While there is documentation available from the OpenStack project, it is important to have the necessary knowledge to stitch all of these services and components together and build your own cloud.
This course is specifically designed to quickly help you get up to speed with OpenStack and give you the confidence and understanding to roll it out into your own data centers. From test installations of OpenStack running under VirtualBox to automated installation recipes that help you scale out production environments, this course covers a wide range of topics that help you install and configure a private cloud. The skills you will learn in this course will help you position yourself as an effective OpenStack troubleshooter.
This course is an attempt to provide all the information that is sufficient to kick-start your learning of OpenStack and build your own cloud. We hope you will enjoy reading this course and, more importantly, find it useful in your journey towards learning and mastering OpenStack.
Module 1, Learning OpenStack, covers the OpenStack skills that aspiring cloud administrators need if they want to succeed in the cloud-led IT infrastructure space. This module comprises installation prerequisites and basic troubleshooting instructions to help you build an error-free OpenStack cloud easily.
Module 2, OpenStack Cloud Computing Cookbook, shows you exactly how to install and configure the components that are required to make up a private cloud environment, and how to operate it.
Module 3, Troubleshooting OpenStack, walks through each OpenStack service and shows how you can quickly diagnose, troubleshoot, and correct problems in your OpenStack cloud. It also provides high-value information to help you solve issues in storage, networking, and compute.
Module 1:
The complete installation guidelines can be found at this URL:
http://docs.openstack.org/juno/install-guide/install/apt/content/
Module 2:
OpenStack runs on Linux. This module has been developed on Linux in a virtual environment such as VirtualBox or VMware Fusion or Workstation.
To run the accompanying virtual environment, you will need:
Hardware: At least 30 GB of disk space and a minimum of 16 GB of RAM
Software: Vagrant 1.6 or newer, VirtualBox 4.5 or newer, or VMware Fusion/Workstation
Note: The accompanying virtual environment Vagrant scripts have not been tested on Windows. Please find the GitHub link for the supporting scripts for this module:
https://github.com/OpenStackCookbook/OpenStackCookbook
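As a rough sketch only (assuming Vagrant and VirtualBox are already installed, and noting that the exact bring-up steps may differ between recipes), cloning the supporting repository and starting the lab environment looks like this:
# Clone the supporting scripts for this module
git clone https://github.com/OpenStackCookbook/OpenStackCookbook.git
cd OpenStackCookbook
# Bring up the virtual machines defined in the Vagrantfile (VirtualBox is Vagrant's default provider)
vagrant up
# Check the state of the lab virtual machines
vagrant status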
Module 3:
Software required throughout this module: Keystone, Glance, Neutron, Nova, Cinder, Swift, Heat, Ceilometer, Elasticsearch, Logstash, and Kibana, with Ubuntu as the OS.
This course is for those who are new to OpenStack and want to learn the cloud networking fundamentals and get started with OpenStack networking. A basic understanding of the Linux operating system, virtualization, networking, and storage principles will come in handy.
Feedback from our readers is always welcome. Let us know what you think about this course—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the course's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a course, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt course, we have a number of things to help you to get the most from your purchase.
You can download the code files by following these steps:
You can also download the code files by clicking on the Code Files button on the course's webpage at the Packt Publishing website. This page can be accessed by entering the course's name in the Search box. Please note that you need to be logged in to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
The code bundle for the course is also hosted on GitHub at https://github.com/PacktPublishing/repository-name. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our courses—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this course. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your course, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the course in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this course, you can contact us at <[email protected]>, and we will do our best to address the problem.
Learning OpenStack
Set up and maintain your own cloud-based Infrastructure as a Service (IaaS) using OpenStack
Enterprises traditionally ran their IT services by running appropriate applications on a set of infrastructures and platforms. These comprised physical hardware in terms of compute, storage, and network, along with software in terms of hypervisors, operating systems, and platforms. A set of experts from infrastructure, platform, and application teams would then put the pieces together and get a working solution tailored to the needs of the organization.
With the advent of virtualization and later on cloud, things have changed to a certain extent, primarily in the way things are built and delivered. Cloud, which has its foundations in virtualization, delivers a combination of relevant components as a service; be it Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). In this book, we will only discuss how to provide a system with IaaS using an OpenStack-based private cloud. The key aspect of providing a system with IaaS is cross-domain automation. The system that helps us achieve this is called a Cloud Service Orchestrator or Cloud Platform or Cloud Controller. For the purposes of this book, we will refer to OpenStack as the Cloud Service Orchestrator. The Cloud Service Orchestrator or, simply put, the orchestrator is primarily responsible for the following:
Thus, in a cloud environment, the most important component is the orchestrator. There are several orchestrators, both free and open source (FOSS) and commercial, that can help turn your virtualized IT infrastructure into a cloud.
Some of the choices in the FOSS segment for the orchestrators are as follows:
Some choices of commercial orchestrators are as follows:
In this book, we embark on a journey to understand the concepts, to install and configure the components of OpenStack, and finally, to build your own cloud using OpenStack. At the time of writing this book, OpenStack is by far the most popular and widely adopted FOSS orchestrator or cloud software platform on the market, and the most comprehensive IaaS offering among FOSS alternatives.
In this chapter, we will cover the following:
There are some key differences between commercial orchestrators, such as vRealize Automation and CIAC, and FOSS orchestrators, such as OpenStack. While both attempt to provide IaaS to users, it is important to understand the difference between the two types of orchestrator in order to appropriately design your cloud.
Let's begin with commercial orchestrators; these provide a base IaaS to their users. They normally sit on top of a virtualized environment and enable an automated provisioning of compute, storage, and network, even though the extent of automation varies. As a part of the toolset, they also typically have a workflow engine, which in most cases provides us with an extensibility option.
The commercial orchestrators are a better choice when the entire orchestration needs to be plugged in to the current IT processes. They work wonderfully well when extensibility and integration are major tasks of the cloud environment, which is typically seen in large enterprises given the scale of operations, the type of business critical applications, and the maturity of IT processes.
In such large enterprises, in order to take full advantage of the private cloud, the integration and automation of the orchestrator in the IT systems of the company becomes necessary. This kind of orchestration is normally used when minimum changes are anticipated to be made to the applications. A primary use case of this is IaaS, where virtual machines are provisioned on a self-service basis and a very small learning curve is involved.
FOSS orchestrators are less extensible, but more standardized in terms of offerings. They offer standardized services that a user is expected to use as building blocks to offer a larger solution. In order to take full advantage of the FOSS orchestrators, some amount of recoding of applications is required as they need to make use of the newly offered services. The use cases here are both IaaS and PaaS (for example, Database as a Service, Message Queue as a Service, and so on).
For this reason, the APIs that are used among the FOSS orchestrators need to have some common ground. This common ground that we are talking about here is Amazon Web Services (AWS) API compatibility, as Amazon has emerged as the gold standard as far as the service-oriented cloud architecture is concerned. At the time of writing the book, OpenStack Nova still had AWS EC2 API compatibility, but this may be pushed out to the StackForge project.
Clouds fall under different categories depending on the perspective. If we look at it from an ownership and control standpoint, they will fall under private, public, hybrid, and community cloud categories. If we take a service perspective, it could be IaaS, PaaS, or SaaS. Let's look at the basic building blocks of a private cloud and understand how commercial orchestrators fit in vis-à-vis OpenStack.
The following block diagram shows the different building blocks of a cloud that are normally seen in a private implementation with a commercial orchestrator:
A private cloud with a commercial orchestrator
As we can see, in this private cloud setup, additional blocks such as Self Service Portal, Metering & Billing, and Workflows & Connectors sit on top of an already existing virtualized environment to provision a virtual machine, a stack of virtual machines, or a virtual machine with some application installed and configured over it.
While most of the commercial orchestrators are extensible, some of them have prebuilt plugins or connectors to most commonly used enterprise toolsets.
OpenStack doesn't natively support integration with enterprise toolsets; instead, it provides more standardized services. OpenStack feels and behaves more like a public cloud inside an enterprise and provides more flexibility to the user. As you can see in the following diagram, apart from VM provisioning, services such as database, image storage, and so on are also provisioned:
A private cloud with OpenStack
Please note that some of these services, which are provided as a part of the standard offering by OpenStack, can also be orchestrated using commercial orchestrators. However, this will take some effort in terms of additional automation and integration.
So the big question is: under what circumstances should we choose OpenStack over the commercial orchestrators or vice versa? Let's look at the following table that compares the features that are significantly different.
Please note that the ease of installation and management are not covered in the following table:
Feature | OpenStack | Commercial orchestrator
Identity and access management* | Yes | Yes
Connectivity to enterprise toolsets | Not natively (possible with ManageIQ) | Yes
Flexibility to the user | Yes | Somewhat
Enterprise control | Not natively (possible with ManageIQ) | Yes
Standardized prebuilt services | Yes | No (except virtual machines)
EC2-compatible API | Yes | No
So, based on the previous table, OpenStack is a strong candidate for an enterprise dev-test cloud and for providing public cloud-like services to an enterprise while reusing existing hardware.
The currently supported stable release of OpenStack is codenamed Liberty. This book will deal with Juno, but the core concepts and procedures will be fairly similar to the other releases of OpenStack. The differences between Juno, Kilo, and Liberty and the subtle differences between the installation procedures of these will be dealt with in the Appendix section of the book.
OpenStack has a very modular architecture. OpenStack is a group of different components that deliver specific functions and come together to create a full-fledged orchestrator.
The following architecture diagram explains the architecture of the base components of the OpenStack environment. Each of these blocks and their subcomponents will be dealt with in detail in the subsequent chapters:
An OpenStack block diagram
The gray boxes show the core services that OpenStack absolutely needs in order to run. The other services are optional and are called Big Tent services; OpenStack can run without them, but we can use them as required. In this book, we look at the core components, and among the Big Tent services we also look at Horizon, Heat, and Ceilometer.
Each of the previously mentioned components has its own database. While each of these services can run independently, they form relationships and have dependencies among each other. As an example, Horizon and Keystone provide their services to the other components of OpenStack and should be the first ones to be deployed.
The following diagram expands on the preceding block diagram and depicts the different relationships amongst the different services:
Service relationships
The service relationship shows that the services are dependent on each other. It is to be noted that all the services work together in harmony to produce the end product as a Virtual Machine (VM). So the services can be turned on or off depending on what kind of virtual machine is needed as the output. While the details of the services are mentioned in the next section, if, as an example, the VM or the cloud doesn't require advanced networking, you may completely skip the installation and configuration of the Neutron service.
Not all the services of the OpenStack system were available from the first release. More services were added as the complexity of the orchestrator increased. The following table will help you understand the different services that can be installed, should you choose to install another release in your environment:
Release name | Components
Austin | Nova, Swift
Bexar | Nova, Glance, Swift
Cactus | Nova, Glance, Swift
Diablo | Nova, Glance, Swift
Essex | Nova, Glance, Swift, Horizon, Keystone
Folsom | Nova, Glance, Swift, Horizon, Keystone, Quantum, Cinder
Grizzly | Nova, Glance, Swift, Horizon, Keystone, Quantum, Cinder
Havana | Nova, Glance, Swift, Horizon, Keystone, Neutron, Cinder, Heat, Ceilometer
Icehouse | Nova, Glance, Swift, Horizon, Keystone, Neutron, Cinder, Heat, Ceilometer, Trove
Juno | Nova, Glance, Swift, Horizon, Keystone, Neutron, Cinder, Heat, Ceilometer, Trove, Sahara
Kilo | Nova, Glance, Swift, Horizon, Keystone, Neutron, Cinder, Heat, Ceilometer, Trove, Sahara, Ironic, Zaqar, Manila, Designate, Barbican
Liberty | Nova, Glance, Swift, Horizon, Keystone, Neutron, Cinder, Heat, Ceilometer, Trove, Sahara, Ironic, Zaqar, Manila, Designate, Barbican, Murano, Magnum, Kolla, Congress
The OpenStack services and releases
At the time of writing, the only fully supported releases were Juno, Kilo, and Liberty. Icehouse is only supported from the security updates standpoint in the OpenStack community. There are, however, some distributions of OpenStack that are still available on older releases such as Icehouse. (You can read more about different distributions in the last chapter of the book.)
It is important to know about the functions that each of these services performs. We will discuss the different services of OpenStack. In order to understand the functions more clearly, we will also draw parallels with the services from AWS. So if you ever want to compare your private cloud with the most used public cloud, you can.
Please refer to the preceding table in order to see the services that are available in a particular OpenStack release.
This service provides identity and access management for all the components of OpenStack. It has internal services such as identity, resource, assignment, token, catalog, and policy, which are exposed as an HTTP frontend.
So, whether we are logging in to Horizon or making an API call to any component, we have to interact with Keystone and authenticate ourselves in order to use that component. The policy service allows the setting up of granular control over the actions allowed by a user for a particular service. Keystone supports federation and authentication with an external system such as an LDAP server.
This service is equivalent to the IAM service of the AWS public cloud.
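As a minimal sketch of how every component authenticates, the following request asks Keystone (using the v2.0 API available in the Juno release) for a token; the hostname controller and the admin credentials are placeholders for your own environment:
# Request a token from Keystone using the v2.0 API (hypothetical credentials)
curl -s -X POST http://controller:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "secret"}}}'
# The response contains a token ID, which is then passed to the other services
# in the X-Auth-Token header of subsequent API calls.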
Horizon provides us with a dashboard for both self-service and day-to-day administrative activities. It is a highly extensible Django project where you can add your own custom dashboards if you choose to. (The creation of custom dashboards is beyond the scope of this book and is not covered here).
Horizon provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, and so on.
This can be equated to the AWS console, which is used to create and configure the services.
Nova is the compute component of OpenStack. It is one of the first services, available since OpenStack's inception, as it is at the core of the IaaS offering.
Nova supports various hypervisors for virtual machines such as XenServer, KVM, and VMware. It also supports Linux Containers (LXC) if we need to minimize the virtualization overhead. In this book, we will deal with LXC and KVM as our hypervisors of choice to get started.
It has various subcomponents such as compute, scheduler, xvpvncproxy, novncproxy, serialproxy, manage, API, and metadata. It serves an EC2 (AWS)-compatible API. This is useful in case you have a custom system built against the EC2 API, such as an ITIL tool integration or a self-healing application; such a system will run with minor modifications on OpenStack Nova.
Nova also provides proxy access to a console of guest virtual machines using the VNC proxy services available on hypervisors, which is very useful in a private cloud environment. This can be considered equivalent to the EC2 service of AWS.
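For illustration only (the flavor, image, and network names below are assumptions for a typical lab), booting an instance and then fetching its console URL through the VNC proxy looks like this:
# Boot a small instance from an image on a given tenant network
nova boot --flavor m1.small --image cirros-0.3.3 --nic net-id=<network-uuid> my-first-vm
# List instances and their state
nova list
# Get the noVNC console URL served through the VNC proxy
nova get-vnc-console my-first-vm novnc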
The Glance service allows the storage and retrieval of images and their corresponding metadata. In other words, it allows you to store the OS templates that you want to make available for your users to deploy. Glance can store your images in a flat file or in an object store (such as Swift).
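As a hedged example (the image name and file are placeholders for whatever template you want to offer), uploading a QCOW2 image with the Juno-era Glance client looks like this:
# Upload an Ubuntu cloud image so that users can deploy it
glance image-create --name "ubuntu-14.04" --disk-format qcow2 --container-format bare \
  --is-public True --file trusty-server-cloudimg-amd64-disk1.img
# Confirm the image is available
glance image-list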
Swift is the object storage service of OpenStack. This service is primarily used to store and retrieve Binary Large Objects (BLOBs). It has various subservices such as the ring, container server, updater, and auditors, which have a proxy server as their frontend.
The Swift service is used to actually store Glance images. As a comparison, EC2 AMIs are stored in your S3 bucket.
The Swift service is equivalent to the S3 storage service of AWS.
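A quick sketch of storing and retrieving an object with the Swift client (the container and file names are arbitrary examples):
# Upload a local file into a container (the container is created if it does not exist)
swift upload my-container notes.txt
# List the objects in the container
swift list my-container
# Download the object again
swift download my-container notes.txt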
Cinder provides block storage to the Nova VMs. Its subsystems include a volume manager, a SQL database, an authentication manager, and so on. The client uses an AMQP broker such as RabbitMQ to provide its services to Nova. It has drivers for various storage systems such as CloudByte, GlusterFS, EMC VMAX, NetApp, Dell Storage Center, and so on.
This service provides similar features to the EBS service of AWS.
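As an illustrative sketch (the IDs in angle brackets are placeholders), creating a 10 GB volume and attaching it to a Nova instance looks like this:
# Create a 10 GB block storage volume (Juno-era client syntax)
cinder create --display-name my-volume 10
# Attach the volume to an existing instance as /dev/vdb
nova volume-attach <instance-id> <volume-id> /dev/vdb
# Check that the volume is now in the in-use state
cinder list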
Previously known as Quantum, Neutron provides networking as a service. There are several functionalities that it provides such as Load Balancer as a Service and Firewall as a Service. This is an optional service and we can choose not to use this, as basic networking is built into Nova. Also, Nova networking is being phased out. Therefore, it is important to deal with Neutron, as 99 percent of OpenStack implementations have implemented Neutron in their network services.
The system, when configured, can be used to create multi-tiered isolated networks. An example of this could be a full three-tiered network stack for an application that needs it.
This is equivalent to multiple services in AWS such as ELB, Elastic IP, and VPC.
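A minimal sketch of building an isolated tenant network with the Neutron CLI (the names are examples, and ext-net is assumed to be a pre-existing external network):
# Create a tenant network and a subnet on it
neutron net-create app-net
neutron subnet-create app-net 10.10.1.0/24 --name app-subnet
# Create a router, plug the subnet into it, and set the external gateway
neutron router-create app-router
neutron router-interface-add app-router app-subnet
neutron router-gateway-set app-router ext-net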
Heat is the core orchestration service of OpenStack. What this means is that you can script the different components that are spun up, in a given order. This is especially helpful if we want to deploy multicomponent stacks. The system integrates with most of the services and makes API calls in order to create and configure the different components.
The template used in Heat is called a Heat Orchestration Template (HOT). It is a single file in which you can script multiple actions. As an example, we can write a template to create an instance, some floating IPs and security groups, and even create some users in Keystone.
The equivalent of Heat in AWS would be the cloud formation service.
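To make this concrete, here is a hedged sketch of a minimal HOT template (the image and flavor names are assumptions) and the Juno-era command to launch it as a stack:
# Write a minimal HOT template that creates one Nova instance
cat > single-server.yaml <<'EOF'
heat_template_version: 2014-10-16
description: Minimal stack with a single Nova server
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.3
      flavor: m1.tiny
EOF
# Launch the stack from the template and list the stacks
heat stack-create -f single-server.yaml my-first-stack
heat stack-list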
The Ceilometer service is used to collect metering data. There are several subsystems in Ceilometer, such as the polling agent, notification agent, collector, and API. It also allows alarms and samples to be saved, abstracted by a storage abstraction layer, to one of the supported databases such as MongoDB, HBase, or an SQL server.
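As an illustrative sketch (the resource ID is a placeholder), querying collected meters and CPU statistics with the Ceilometer client looks like this:
# List the meters collected for a particular instance
ceilometer meter-list -q resource=<instance-id>
# Show CPU utilization statistics aggregated over 60-second periods
ceilometer statistics -m cpu_util -p 60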
Trove is the Database as a Service component of OpenStack. This service uses Nova to create the compute resource to run DBaaS. It is installed as a bunch of integration scripts that run along with Nova. The service requires the creation of special images that are stored in Glance.
This is equivalent to the RDS service of AWS.
The Sahara service is the Big Data service of OpenStack; it is used to provision a Hadoop cluster by passing a few parameters. It has several components, such as the Auth component, Data Access Layer, Provisioning Engine, and Elastic Data Processing.
This is very close to having the AWS Elastic MapReduce service in your very own cloud.
The Designate service offers DNS services equivalent to Route 53 of the AWS. The service has various subsystems such as API, the Central/Core service, the Mini DNS service, and Pool Manager. It has multiple backend drivers that can be used, examples being PowerDNS, BIND, NSD, and DynECT. We can create our own backend drivers as well.
The Ironic service allows bare metal provisioning using technologies such as the PXE boot and the Intelligent Platform Management Interface (IPMI). This will allow bare metal servers to be provisioned provided we have the requisite drivers for them.
Please remember that the requisite networking elements have to be configured, for example, the DNS, DHCP configuration and so on, which are needed for the PXE boot to work.
Zaqar is the messaging and notification service of OpenStack. It is equivalent to the SNS service from AWS. It provides a multi-tenant, HTTP-based messaging API that can be scaled horizontally as and when the need arises.
Barbican is the key management service of OpenStack and is comparable to KMS from AWS. It provides secure storage, retrieval, provisioning, and management of various types of secret data, such as keys, certificates, and even binary data.
Manila provides a shared filesystem as a service. At the moment, it has a single subcomponent called the manila-manage. This doesn't have any equivalent in the AWS world yet. This can be used to mount a single filesystem on multiple Nova instances, for instance a web server with shared assets, which will help to keep the static assets in sync without having to run a block-level redundancy such as DRBD or continuous rsyncs.
Murano is an application catalog, enabling application developers and cloud administrators to publish various cloud-ready applications in a catalog format. This service will use Heat at the backend to deliver this and will only work on the UI and API layer.
Magnum introduces Linux container technologies such as Docker and Kubernetes (by Google) to improve migration options. This service is in some ways like Trove: it uses an image with Docker installed on it and orchestrates it with Heat. It is effectively the Container as a Service (CaaS) offering of OpenStack.
Kolla is another project that is focused on containers. While it made its first appearance in Kilo, it was properly introduced in the Liberty release. It is aimed at better operationalization by containerizing OpenStack itself. That means we can now run the OpenStack services in containers, thereby making governance easier.
At the time of writing, the Kolla project supported services such as Cinder, Swift, Ceph, and Ironic.
Congress is another project focused on governance. It provides Policy as a Service, which can be used for compliance in a dynamic infrastructure, thereby keeping the OpenStack components compliant with enterprise policy.
The following table shows the dependencies of the services. The Dependent on column shows all the services that are needed for successful installation and configuration of the service. There might be other interactions with other services, but they are not mentioned here:
Service name | Core service | Dependent on
Keystone | True | None
Horizon | False | Keystone
Glance | True | Swift, Keystone, Horizon
Swift | True | Keystone
Nova | True | Keystone, Horizon, Glance, Cinder (Optional), Neutron (Optional)
Heat | False | Keystone
Cinder | False | Keystone
Neutron | False | Keystone, Nova
Ceilometer | False | Keystone
Trove | False | Keystone, Nova, Glance
Sahara | False | Keystone, Nova, Glance, Swift
Magnum | False | Heat, Nova, Glance, Swift, Keystone
Murano | False | Heat
Service dependency
In the remainder of this book, we will be installing and configuring various OpenStack components. Therefore, let's look at the architecture that we will follow in the remainder of the book and what we need to have handy.
While we can set up all the components of the OpenStack on a single server, it will not be close to any real-life scenario, so taking this into consideration, we will do a minimal distributed installation. Since this book is intended to be a beginner's guide, we shall not bore ourselves with cloud architecture questions.
As we are aware by now, OpenStack is made up of individual components, so we need to be careful in selecting the appropriate services. As we have already seen in the dependency table, some services are more or less mandatory and others are optional, depending on the scenario. Too many services and you complicate the design; too few and you constrain it. It is therefore imperative that we strike a good balance. In our case, we will stick to the basic services:
In the optional section, we will choose Neutron. This should help us in getting a pretty robust cloud with the essential features rolled out in no time.
We will be installing these components on virtual machines for our learning purposes; we will use four different virtual machines to run our cloud: the controller node, the network node, the compute node, and the storage node.
The following diagram shows the kind of services that will be hosted in each of the different nodes in the rest of the book. We will identify the servers with the previously mentioned names:
The OpenStack service layout
The controller node will house the manager services for all the different OpenStack components such as message queue, Keystone, image service, Nova management, and Neutron management.
The network node server will house Neutron components such as the DHCP Agent, the L3 Agent, and Open vSwitch. This node will provide networking to all the guest VMs that spin up in the OpenStack environment.
The compute node will have the hypervisor installed on itself. For the purpose of this setup, we will use LXC or KVM to keep things simple. It also houses network agents.
The storage node will provide block and object storage to the rest of the OpenStack services. This will be the node that needs to be connected to the iSCSI storage in order to create different blocks.
We will use Ubuntu Linux 14.04 as the operating system of choice to install and configure the different components. All the previously mentioned nodes should be running Ubuntu.
Since we are going to use Neutron, the following network architecture needs to be followed:
The following diagram shows the different connections in our network. The compute node is connected to all the networks except the external network. It is to be noted that the storage and the tunnel network can be completely internal networks. The management network is primarily the one that needs to be accessible from the LAN of the company, as this will be the network that the users will need to reach in order to access the self-service portal:
Network connectivity
For the purpose of learning, let's set up the network ranges that we will use in our installation. The following is the table of the network range:
Network Name | IP Range
Management Network | 172.22.6.0/24
Tunnel Network | 10.0.0.0/24
Storage Network | 192.168.10.0/24
External Network | 192.168.2.0/24
Network ranges
Since we are using this in a lab network, the external network range is assumed and will need to be changed depending on your routing rules.
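As a sketch only (the interface name eth0 and the .10 host address are assumptions for the controller node, and the commands must be run as root), the management interface on Ubuntu 14.04 could be configured like this:
# Append a management-interface stanza to /etc/network/interfaces on the controller
cat >> /etc/network/interfaces <<'EOF'
auto eth0
iface eth0 inet static
    address 172.22.6.10
    netmask 255.255.255.0
EOF
# Bring the interface up with the new settings
ifup eth0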
In this chapter, we were introduced to orchestrators, both commercial and FOSS. At a very high level, we looked at the differences between these two types of orchestrators and the appropriate use cases for OpenStack. We also looked at the basic building blocks of a private cloud and their correlation in the OpenStack world. We looked at the OpenStack architecture and services. And finally, we covered the lab setup that would be required to learn the deployment of your private cloud using OpenStack.
We start our journey in the next chapter by learning to install and configure the common components that form the basis of most of the OpenStack services. The key topic covered, however, will be the installation and configuration of Keystone, which is the core authentication and authorization service of OpenStack.
Most of the OpenStack components have a basic in-built authentication mechanism, which is adequate for them to function on their own. However, when they have to come together, Keystone forms the bridge, a common platform for authentication and authorization.
Keystone was launched in the Essex release and has been deemed a core component of the OpenStack deployment ever since. In this chapter, we will understand in some detail the following:
Please be advised that this will be installed and configured on the controller node.
The entire installation and configuration of the common components and the core Keystone service takes between 60 and 90 minutes.
Let's understand identity-related concepts that are used in Keystone.
A user represents a person or a service with a set of credentials, such as a username and password, or a username and an API key. A user needs to be a member of at least one project, but can be a part of multiple projects.
A group of users in OpenStack is called a project or a tenant. Both of these terms are used interchangeably and mean the same thing. Please be advised that tenant is the newer terminology; the term project has seeped in from the initial days, when Keystone was not available. The policies and quotas are all applied at the project or tenant level.
As shown in the figure, users can be a part of one or more projects.
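As a hedged sketch with made-up names, creating a tenant and a user, and then placing the user in that tenant with the Juno-era Keystone CLI, looks like this:
# Create a tenant (project) for a development team
keystone tenant-create --name dev-team --description "Development team tenant"
# Create a user with a password and e-mail address (example values)
keystone user-create --name alice --pass s3cret --email alice@example.com
# Make the user a member of the tenant
keystone user-role-add --user alice --tenant dev-team --role _member_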
The role determines
