Containerization with Ansible 2 - Aric Renzo - E-Book

Aric Renzo

Description

Automate the container lifecycle from image build through cloud deployment using the automation language you already know.

About This Book

  • Use Ansible Container as an integral part of your workflow to increase flexibility and portability.
  • Manage the container life cycle using existing Ansible roles and automate the entire container build, deployment, and management process.
  • A step-by-step guide that will get you up and running from building a simple container image to deploying a complex, multi-container app in the cloud.

Who This Book Is For

This book is aimed at DevOps engineers, administrators, and developers who already have some familiarity with writing and running Ansible playbooks, and who want to learn how to use Ansible to implement containerization.

What You Will Learn

  • Increase your productivity by using Ansible roles to define and build images
  • Learn how to work with Ansible Container to manage, test, and deploy your containerized applications
  • Increase the flexibility and portability of your applications by learning to use Ansible
  • Discover how you can apply your existing Ansible roles to the image build process
  • Get up and running, from building a simple container image to deploying a complex, multi-container app in the cloud
  • Take an in-depth look at the architecture of Ansible Container, and learn how to build reusable container images reliably and efficiently

In Detail

Today many organizations are adopting containerization and DevOps methodologies to improve the flexibility and reliability of deploying new applications. Building custom application containers often means leveraging brittle and oftentimes complex Dockerfiles that can lead to cumbersome, multi-layered containers. Ansible Container brings a new workflow for managing the development of containers from development all the way through to production. The goal of this book is to get you up and running with Ansible Container so that you can create container images from Ansible roles, run containers locally, and deploy them to the cloud.

We'll progress from a simple, single container application, to a complex application consisting of multiple, connected containers. You'll learn how to run the application locally, how to deploy it to an OpenShift cluster running locally, and how to deploy it to a Kubernetes cluster running in the cloud. Along the way, you'll see how to use roles to define each image or micro-service, and how to share your completed project with the Ansible community. Next, you will be able to take full advantage of Ansible Container, and use it to automate the container lifecycle in your own projects.

By the end of this book, you will have gained mastery of the Ansible Container platform by building complex multi-container projects ready for deployment into production.

Style and approach

This book will walk you through Ansible Containerization from building a simple container image to deploying a complex, multi-container app in the cloud. You will get an in-depth understanding of how to effectively manage containers using Ansible 2.


Page count: 308

Publication year: 2017




Containerization with Ansible 2

Implement container management, deployment, and orchestration within the Ansible ecosystem


Aric Renzo


BIRMINGHAM - MUMBAI

Containerization with Ansible 2

 

Copyright © 2017 Packt Publishing

 

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

First published: November 2017

Production reference: 1051217

 

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

 

ISBN 978-1-78829-191-0

 

www.packtpub.com

Credits

Author

Aric Renzo

Copy Editor

Safis Editing

Reviewer

Michael Bright

Project Coordinator

Judie Jose

Commissioning Editor

Vijin Boricha

Proofreader

Safis Editing

Acquisition Editor

Heramb Bhavsar

Indexer

Tejal Daruwale Soni

Content Development Editor

Devika Battike

Graphics

Tania Dutta

 

Technical Editor

Prachi Sawant

Production Coordinator

Melwyn Dsa

About the Author

Aric Renzo is a DevOps engineer based in Charlotte, North Carolina, and is a fan of all things geeky and open source. He has experience working on many open source and free software project deployments for clients based on OpenStack, Ansible, Docker, Chef, SaltStack, and Kubernetes. Aric is a member of the Ansible community and teaches courses on basic and advanced Ansible concepts. His past projects include work on data center deployments, network infrastructure automation, MongoDB NoSQL database architecture, and designing highly available OpenStack environments. Aric is a fan of anything to do with DevOps, automation, and making his workflow more efficient.

Aric is a lifelong geek and a graduate of Penn State University in the information sciences and technology program. He is married to Ashley Renzo, an incredibly beautiful and talented science teacher in Gaston County, North Carolina.

Dedicated to the love of my life, Ashley Renzo; without her unending love and encouragement, this book would never have been written. Also to my dearest friends and family who have prayed for me, advised me, and shared their amazing wisdom with me throughout this project. I am so blessed to have these amazing people in my life.

About the Reviewer

Michael Bright, RHCE/RHCSA, is a solution architect working in the HPE EMEA Customer Innovation Center. He has strong experience across cloud and container technologies (Docker, Kubernetes, AWS, GCP, Azure), as well as NFV/SDN. Based in Grenoble, France, he runs a Python user group and is a co-organizer of the Docker and FOSS Meetup groups. He has a keen interest in serverless, container, orchestration, and unikernel technologies, on which he has presented and run training tutorials at several conferences. He has presented many a time on subjects as diverse as NFV, Docker, container orchestration, serverless, unikernels, Jupyter Notebooks, MongoDB, and Tmux. Michael has a wealth of experience across pure research, R&D, and presales consulting roles. The books that he has worked on are CoreOS in Action (Manning) and Kubernetes in Action (Manning).

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

 

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1788291913.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Table of Contents

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer Support

Downloading the color images for this book

Errata

Piracy

Questions

Building Containers with Docker

DevOps and the shifting IT landscape

Manual deployments of monolithic applications

An introduction to automation

Virtualization of applications and infrastructure

Containerization of applications and infrastructure

Orchestration of containerized applications

Building your first Docker container

Instantiating the lab environment

Installing the lab environment

Starting your first Docker container

Building your first container

Dockerfiles

Container life cycle management

References

Summary

Working with Ansible Container

An introduction to Ansible Container and the microservice architecture

A quick introduction to Docker Compose

Ansible Container workflow

Ansible Container quick-start

Ansible Container init

Ansible Container build

Ansible Container run

Ansible Container destroy

Summary

Your First Ansible Container Project

What are Ansible roles and container-enabled roles?

Roles in Ansible Galaxy

Ansible Container NGINX role

Starting a new project

Installing the NGINX role

Running the NGINX role

Modifying the NGINX role

Running the modified role

Pushing the project to Docker Hub

Summary

What's in a Role?

Custom roles with Ansible Container

YAML syntax

Ansible modules

A brief overview of MariaDB

Initializing an Ansible Container role

What's in a container-enabled role?

Initializing the MariaDB project and role

container.yml

Writing a container-enabled role

roles/mariadb_role/meta/container.yml

tasks/main.yml

Task breakdown (main.yml)

tasks/initialize_database.yml

Task breakdown (initialize_database.yml)

templates/my.cnf.j2

Building the container-enabled role

Customizing the container-enabled role

variable_files/dev.yml

variable_files/test.yml

variable_files/prod.yml

container.yml

References

Summary

Containers at Scale with Kubernetes

A brief overview of Kubernetes

Getting started with the Google Cloud platform

Deploying an application in Kubernetes using kubectl

Describing Kubernetes resources

Exposing Kubernetes services

Scaling Kubernetes pods

Creating deployments using Kubernetes manifests

Creating services using Kubernetes manifests

References

Summary

Managing Containers with OpenShift

What is OpenShift?

Installing Minishift locally

Installing the Minishift binaries

Deploying containers using the web interface

OpenShift web user interface tips

An introduction to the OpenShift CLI

OpenShift and Ansible Container

References

Summary

Deploying Your First Project

Overview of ansible-container deploy

ansible-container deploy

Deploying containers to Kubernetes

Deploying containers to OpenShift

References

Summary

Building and Deploying a Multi-Container Project

Defining complex applications using Docker networking

Exploring the Ansible Container django-gulp-nginx project

Building the django-gulp-nginx project

Development versus production configurations

Deploying the project to OpenShift

References

Summary

Going Further with Ansible Container

Tips for writing roles and container apps

Use full YAML syntax

Use Ansible modules

Build powerful deployment playbooks with Ansible Core

Troubleshooting application containers

Create a build pipeline using Jenkins or TravisCI

Share roles and apps on GitHub and Ansible Galaxy

Containerize everything!

References

Summary

Preface

Over the last few years, the world of IT has seen radical shifts in the ways in which software applications are developed and deployed. The rise of automation, cloud computing, and virtualization has fundamentally shifted how system administrators, software developers, and organizations as a whole view and manage infrastructure. Just a few years ago, it would seem unthinkable to many in the IT industry to allow mission-critical applications to be run outside the walls of the corporate data center. However, now there are more organizations than ever migrating infrastructure to cloud services such as AWS, Azure, and Google Compute in an effort to save time and cut back on overhead costs related to running physical infrastructure. By abstracting away the hardware, companies can focus on what really matters—the software applications that serve their users.

The next great tidal wave within the IT field formally started in 2013 with the initial release of the Docker container engine. Docker allowed users to easily package software into small, reusable execution environments known as containers, leveraging features in the Linux kernel for use with LXC (Linux Containers). Using Docker, developers can create microservice applications that can be built quickly, are guaranteed to run in any environment, and leverage reusable service artifacts (container images) that can be version controlled. As more and more users adopted containerized workflows, gaps in execution began to appear. While Docker was great at building and running containers, it struggled to be a true end-to-end solution across the entire container life cycle.

The Ansible Container project was developed to bring the power of the Ansible configuration management and automation platform to the world of containers. Ansible Container bridges the container life cycle management gap by allowing container build and deploy pipelines to speak the Ansible language. Using Ansible Container, you can leverage the powerful Ansible configuration management language to not only build containers, but deploy full-scale applications on remote servers and cloud platforms.
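This workflow maps onto a small set of ansible-container subcommands, each covered in depth in later chapters. As a quick orientation sketch (exact flags and output vary by version):

```shell
# Create a new project skeleton, including container.yml and a roles/ directory
ansible-container init

# Build container images by applying Ansible roles inside a build container
ansible-container build

# Launch the services defined in container.yml on the local Docker engine
ansible-container run

# Generate deployment artifacts and push the project to Kubernetes or OpenShift
ansible-container deploy

# Stop and remove the locally running containers
ansible-container destroy
```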

This book will serve as a guide to working with the Ansible Container project. It is my goal that by the end of this book, you will have a firm understanding of how Ansible Container works, and how to leverage its many capabilities to build robust containerized software stacks from development all the way to production.

What this book covers

Chapter 1, Building Containers with Docker, introduces the reader to what Docker is, how it works, and the basics of using Dockerfiles and Docker Compose. This chapter lays down the foundational concepts needed to start learning how to use Ansible Container.

Chapter 2, Working with Ansible Container, explores the Ansible Container workflow. This chapter gives the reader familiarity with the core Ansible Container concepts such as build, run, and destroy.

Chapter 3, Your First Ansible Container Project, gives the user experience in building a simple Ansible Container project by leveraging a community role available on Ansible Galaxy. By the end of this chapter, the reader will be familiar with building projects and pushing container artifacts to container image repositories such as Docker Hub.

Chapter 4, What's in a Role?, gives the user an overview of how to write custom container-enabled roles for use with Ansible Container. The overarching goal of this chapter is to write a role that builds a fully functional MariaDB container image from scratch. By the end of this chapter, the user should have basic familiarity with writing Ansible playbooks using proper style and syntax.

Chapter 5, Containers at Scale with Kubernetes, gives the reader an overview of the Kubernetes platform and core functionality. In this chapter, the reader will have the opportunity to create a multi-node Kubernetes cluster in the Google Cloud and run containers inside it.

Chapter 6, Managing Containers with OpenShift, introduces the reader to Red Hat's OpenShift platform. This chapter gives the reader the steps required to deploy a local OpenShift cluster using Minishift and run containerized workloads on it. This chapter also looks at the key differences between Kubernetes and OpenShift, even though the architectures are fundamentally similar.

Chapter 7, Deploying Your First Project, takes an in-depth look at the final command in the Ansible Container workflow—deploy. Using deploy, the reader will gain first-hand experience of deploying previously built projects to Kubernetes and OpenShift, using Ansible Container as an end-to-end workflow tool.

Chapter 8, Building and Deploying a Multi-Container Project, looks at how Ansible Container can be used to build a project that leverages more than one application container. Critical to a full understanding of this topic is an introduction to container networking and configuring containers to access network resources. This chapter will give the reader an opportunity to build and deploy a multi-container project using Django, Gulp, NGINX, and PostgreSQL containers.

Chapter 9, Going Further with Ansible Container, gives the reader an idea of the next steps to take after mastering the entire Ansible Container workflow. Topics explored in this section include integrating Ansible Container with CI/CD tools, and sharing projects on Ansible Galaxy.

What you need for this book

This book assumes a beginner-to-intermediate level of experience of working with the Linux operating system, deploying applications, and managing servers. This book walks you through the steps required to bring up a fully functional lab environment on your local laptop, using VirtualBox and Vagrant to quickly get up and running. Prior to starting, it would be helpful to have VirtualBox, Vagrant, and the Git command-line client installed and running on your personal computer. To run this environment with full specifications, the following system requirements must be met or exceeded:

CPU: 2 cores (Intel Core i5 or equivalent)

Memory: 8 GB RAM

Disk space: 80 GB

You will also need the following software:

VirtualBox 5.1 or higher

Vagrant 1.9.1 or higher

A text editor that edits YAML files (GitHub Atom or Vim preferred)

 

Internet connectivity is required to install the necessary packages.

Who this book is for

This book is designed to assist those currently working as system administrators, DevOps engineers, or technical architects (or in similar roles) to quickly get up and running with the Ansible Container workflow. It is helpful as well if the reader already has a basic understanding of Docker, Ansible, or other related automation platforms prior to reading the book, although this is not required. It is my hope that a reader can get a firm understanding of these basics while reading the book. The end goal is to help readers gain a solid foundation on how Ansible Container can accelerate building, running, testing, and deploying application containers from development to production environments.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning. Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We can include other contexts through the use of the include directive."

A block of code is set as follows:

---
- name: Create User Account
  user:
    name: MyUser
    state: present

- name: Install Vim text editor
  apt:
    name: vim
    state: present

Any command-line input or output is written as follows:

ubuntu@node01:/tmp$ ansible-galaxy init MyRole --container-enabled
- MyRole was created successfully

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Clicking the Next button moves you to the next screen."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer Support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the color images for this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/ContainerizationwithAnsible2_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata is verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

 

Building Containers with Docker

In recent years, the landscape of the IT industry has dramatically shifted. The rise of highly interactive mobile applications, cloud computing, and streaming media has pushed the limits of the existing IT infrastructure. Users who were once happy with web browsing and email are now taking advantage of the highly interactive services that are available and are continually demanding higher bandwidth, reliability, and more features. In the wake of this shift, IT departments and application developers are continually attempting to find ways to keep up with the increased demand to remain relevant to consumers who depend on their services.

As an application developer, infrastructure support specialist, or DevOps engineer, you have no doubt seen the radical shift in how infrastructure is supported and maintained. Gone are the days when a developer could write an application in isolation, deploy it across an enterprise, and hand over the keys to operations folks who may only have had a basic understanding of how the application functioned. Today, the development and operations paradigms are intrinsically interwoven in what most enterprises are calling DevOps. In the DevOps mindset, operations and support staff work directly with application developers in order to write applications, as well as infrastructure as code. Leveraging this new mindset allows services to go live that may scale multiple tiers and spread between hundreds of servers, data centers, and cloud providers. Once an organization adopts a DevOps mindset, this creates a cultural shift between the various departments. A new team mentality usually emerges, in which developers and operations staff feel a new sense of camaraderie. Developers are happy to contribute to code that makes application deployments easier, and operations staff are happy with the increased ease of use, scaling, and repeatability that comes with new DevOps-enabled applications.

Even within the world of DevOps, containerization has been actively growing and expanding across organizations as a newer and better way to deploy and maintain applications. Like anything else in the world of information technology, we need controlled processes around how containers are built, deployed, and scaled across an organization. Ansible Container provides an abstracted and simple-to-implement methodology for building and running containers at scale. Before we start to learn about Ansible and containerization platforms, we must first examine how applications and services were deployed historically.

Before we get started, let's look at the topics we will address in this chapter:

A historical overview of DevOps and the IT infrastructure:

Manual deployments

An introduction to automation

The virtualization of applications

The containerization of applications

The orchestration of containerized applications

Building your first Docker container:

Setting up a lab environment

Starting your first Docker container

Building your first Docker container

Container life cycle management

DevOps and the shifting IT landscape

Let's take a quick look at the evolution of many IT departments, and the response to this radical shift across the industry. Before we delve into learning about containers, it is important to understand the history of deploying applications and services in order to realize which problems containerization addresses, as well as how infrastructure has changed and evolved over the decades. 

Manual deployments of monolithic applications

The manual deployment of large monolithic applications is where most application deployments started out, and it was the state of most infrastructure in the late 1990s and the early-to-mid 2000s. This approach normally goes something like this:

An organization decides they want to create a new service or application.

The organization commissions a team of developers to write the new service.

New servers and networking equipment are racked and stacked to support the new service.

The new service is deployed by the operations and engineering teams, who may have little to no understanding of what the new service actually does.

Usually, this approach to deploying an application is characterized by little to no use of automation tools, basic shell or batch scripts, and a large, complex overhead to maintain the application or deploy upgrades. Culturally, this approach creates information silos in teams, and individuals become responsible for small portions of a complicated overall picture. If a team member is transferred between departments or leaves the organization, havoc can arise when the people who are then responsible for the service are forced to reverse engineer the original thought processes of those who developed the application. Documentation may be vague, if it exists at all.

An introduction to automation

The next step in the evolution towards a more flexible, DevOps-oriented architecture is the inclusion of an automation platform that allows operation and support engineers to simplify many aspects of deployment and maintenance tasks within an organization. Automation tools are numerous and varied, depending on the extent to which you wish to automate your applications. Some automation tools work only at an OS-level to ensure that the operating system and applications are running as expected. Other automation tools can use interfaces such as IPMI to remotely power-on bare-metal servers in order to deploy everything from the operating system upward.

Automation tools are based around the configuration management concepts of current state and desired state. The goal of an automation platform is to evaluate the current state of a server against a programmatic template that defines the server's desired state, and to apply only the actions required to bring it into the desired state. For example, an automation platform checking that NGINX is in a running state may look at an Ubuntu 16.04 server and see that NGINX is not currently installed.

To bring this server into the desired state, it may run the command apt-get install nginx on the backend. When the same automation tool evaluates a CentOS server, it may determine that NGINX is installed but not running; to bring that server into compliance, it would run systemctl start nginx. Notice that it did not attempt to reinstall NGINX. To expand our example, if the automation tool examined a server that had NGINX both installed and running, it would take no action on that server, as it is already in the desired state. The key to a good automation platform is that the tool executes only the steps required to bring a server into the desired state. This concept is known as idempotency, and it is a hallmark of most automation platforms.
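In Ansible's terms, this desired state can be captured in a couple of tasks. The playbook below is a minimal sketch assuming Debian/Ubuntu targets (the webservers group name is illustrative); running it repeatedly is safe, because each module acts only when the host is out of compliance:

```yaml
---
- name: Ensure NGINX is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install NGINX (no-op if the package is already present)
      apt:
        name: nginx
        state: present

    - name: Start NGINX (no-op if the service is already running)
      service:
        name: nginx
        state: started
```

On a host where NGINX is already installed and running, both tasks report "ok" and nothing changes, which is idempotency in practice.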

We will now look at a handful of open source automation tools and examine how they work and what makes them unique. Having a firm understanding of automation tools and how they work will help you to understand how Ansible Container works, and why it is an invaluable tool for container orchestration:

Chef

:

Chef is a configuration management tool written by Adam Jacobs in 2008 to address specific use cases he was tasked with at the time. Chef code is written in a Ruby-based domain-specific language known as

recipes

. A collection of recipes grouped together for a specific purpose is known as a

cookbook

. Cookbooks are stored on a server, from which clients can periodically download updated recipes using the client software running as a daemon. The

Chef Client

is responsible for evaluating the current state against the desired states described in the cookbooks.

Puppet

:

Puppet was written in 2005 by Luke Kaines and, similar to Chef, works on a client-server model. Puppet manifests are written in a Ruby DSL and stored on a dedicated server known as the

Puppet Master

. C

lients run a daemon known as the

Puppet Agent

, which is responsible for downloading Puppet manifests and executing them locally across the clients.

Salt

:

Salt is a configuration management tool written by Thomas Hatch in 2011. Similar to Puppet and Chef, Salt works primarily on a

client-server

model in which

states

stored on the

Salt Master

are executed on the minions to bring about the desired state. Salt is notable in that it is one of the fastest and most efficient configuration management platforms, as it employs a message bus architecture (ZeroMQ) between the master and nodes. Levering this message bus, it is quickly able to evaluate these messages and take the corresponding action.

Ansible: Ansible is perhaps the most distinctive of the automation platforms we have looked at thus far. It was written in 2012 by Michael DeHaan to provide a minimal, yet powerful, configuration management tool. Ansible playbooks are simple YAML files that detail the actions and parameters to be executed on target hosts in a very readable format. By default, Ansible is agentless and leverages a push model, in which playbooks are executed from a centralized location (your laptop, or a dedicated host on the network) and applied to target hosts over SSH. The only requirements are that the hosts you run playbooks against are accessible over SSH and have the correct version of Python installed (2.7 at the time of writing). If these requirements are satisfied, Ansible is an incredibly powerful tool that requires very little knowledge or resources to get started. More recently, the Ansible Container project was launched with the purpose of bringing configuration management paradigms to building and deploying container-based platforms. Ansible is an incredibly flexible and reliable platform for configuration management, with a large and healthy open source ecosystem.

So far, we have seen how introducing automation into our infrastructure can bring us one step closer to realizing the goals of DevOps. With a solid automation platform in place, and the correct workflows to introduce change, we can leverage these tools to truly have control over our infrastructure. While the benefits of automation are great indeed, there are drawbacks. Incorrectly implemented automation introduces a point of failure into our infrastructure. Before selecting an automation platform, one must consider what will happen in the event that the master server goes down (applicable to tools such as Salt, Chef, and Puppet), or what will happen if a state, recipe, playbook, or manifest fails to execute on one of your bare metal infrastructure servers. Using configuration management and automation tools is essentially a requirement in today's landscape, and ways to deploy applications that simplify, and sometimes negate, these potential issues are emerging.

Virtualization of applications and infrastructure

With the rise of cloud computing in recent years, the virtualization of applications and infrastructure has for many organizations replaced traditional in-house deployments of applications and services. Currently, it is proving to be more cost-effective for individuals and companies to rent hardware resources from companies such as Amazon, Microsoft, and Google and spin up virtual instances of servers with exactly the hardware profiles required to run their services.

Many configuration management and automation tools today are adding direct API access to these cloud providers to extend the flexibility of your infrastructure. Using Ansible, for example, you can describe exactly the server configuration you require in a playbook, as well as your cloud provider credentials. Executing this playbook will not only spin up your required instances but will also configure them to run your application. What happens if a virtual instance fails? Blow it away and create a new one. With the ushering in of cloud computing, so too comes a new way to look at infrastructure. No longer is a single server or group of servers considered to be special and maintained in a specific way. The cloud is introducing DevOps practitioners to the very real concept that infrastructure can be disposable.
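As a hedged sketch of this provision-then-configure pattern, a playbook might resemble the following. The module and parameters shown follow the amazon.aws collection, but treat every name and value here as illustrative; consult your provider's module documentation for exact parameters:

```yaml
# Illustrative sketch only: hostnames, the AMI ID, and region are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch an instance with the required hardware profile
      amazon.aws.ec2_instance:
        name: web-01
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0   # placeholder AMI ID
        region: us-east-1
      register: created

    - name: Add the new instance to an in-memory inventory group
      add_host:
        name: "{{ created.instances[0].public_ip_address }}"
        groups: webservers

# A second play then configures the freshly created instance
- hosts: webservers
  become: true
  tasks:
    - name: Install the application's web server
      apt:
        name: nginx
        state: present
```

If the instance fails later, the same playbook can be rerun to replace it, which is exactly the disposable-infrastructure mindset described above.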

Virtualization, however, is not limited to just cloud providers. Many organizations are currently implementing virtualization in-house using platforms such as ESXi, Xen, and KVM. These platforms allow large servers with a lot of storage, RAM, and CPU resources to host multiple virtual machines that use a portion of the host operating system's resources.

Considering the benefits that virtualization and automation bring to the table, there are still many drawbacks to adopting such an architecture. For one, virtualization in all its forms can be quite expensive. The more virtual servers you create in a cloud provider, the higher your monthly overhead fee will be, to say nothing of the added cost of virtual machines with large hardware profiles. Furthermore, deployments such as these can be quite resource-intensive. Even with low specifications, spinning up a large number of virtual machines can consume large amounts of storage, RAM, and CPU on the hypervisor hardware.

Finally, consideration must also be paid to the maintenance and patching of the virtual machine operating systems, as well as the hypervisor operating system. Even though automation platforms and modern hypervisors allow virtual machines to be quickly spun up and destroyed, patching and updates still must be considered for instances that might be kept for weeks or months. Remember, even though the operating system has been virtualized, it is still prone to security vulnerabilities, patching, and maintenance.

Containerization of applications and infrastructure

Containerization made an entrance on the DevOps scene when Docker was launched in March 2013. Even though the concepts of containerization predate Docker, for many working in the field, it was their first introduction to the concept of running an application inside a container. Before we go further, we must first establish what a container is and what it is not.

A container is an isolated process in a Linux system that has control groups and kernel namespaces associated with it. Within a container, there is a very thin operating system layer, which has just enough resources to launch and run other processes. This base operating system layer can be based on any Linux distribution, even a different distribution from the one that is running on the host. When a container is run, the container engine allocates access to the host operating system kernel to run the container in isolation from other processes on the host. From the perspective of the application inside the container, it appears to be the only process on that host, even though that same host could be running multiple versions of that container simultaneously.
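The kernel namespaces mentioned above are not an abstraction invented by container engines; they are visible for any Linux process. The following shell sketch inspects the namespaces of the current shell, no container engine required (the same entries exist under the PID of any containerized process):

```shell
# Every Linux process, containerized or not, belongs to a set of kernel
# namespaces, exposed as symlinks under /proc/<pid>/ns.
ls /proc/self/ns
# Typical entries include: cgroup ipc mnt net pid user uts

# A container engine gives a process its own copies of these namespaces,
# so its view of PIDs, mounts, and network interfaces is isolated from
# the rest of the host.
readlink /proc/self/ns/pid    # e.g. pid:[4026531836]
```

Two processes share a namespace exactly when these symlinks point at the same inode, which is how the kernel decides what each container can see.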

The following illustration shows the relationship between the host OS, the container engine, and the containers running on the host:

Figure 1: An Ubuntu 16.04 host running multiple containers with different base operating systems

Many newcomers to containerization mistake containers for lightweight virtual machines and attempt to fix or modify a running container as they would a misbehaving VM or bare metal server. Containers are meant to be truly disposable. If a container is not running correctly, it is lightweight enough that you can terminate it and rebuild a new one from scratch in a matter of seconds. If virtual machines and bare metal servers are to be treated as pets (cared for, watered, and fed), containers are to be treated as cattle (here one minute, deleted and replaced the next). I think you get the idea.

This implementation differs significantly from traditional virtualization in that a container can be built quickly from a container source file and start running on a host OS, similar to any other process or daemon on the Linux host. Since containers are isolated and extremely thin, one does not have to be concerned about running unnecessary processes inside the container, such as SSH, security tools, or monitoring tools. The container exists for a specific purpose: to run a single application. Container runtime environments, such as Docker, provide the resources necessary for the container to run successfully, along with an interface to the host's software and hardware resources, such as storage and networking.

By their very nature, containers are designed to be portable. A container using a CentOS base image running the Apache web server can be loaded on a CentOS host, an Ubuntu host, or even a Windows host; as long as they all provide the same container runtime environment, the container runs in exactly the same way. The benefits of this type of modularity are immense. For example, a developer can build a container image for MyAwesomeApplication 1.0 on his or her laptop, using only a few megabytes of storage and memory, and be confident that the container will run exactly the same in production as it does on the laptop. When it's time to upgrade MyAwesomeApplication to version 2.0, the upgrade path is to simply replace the running container image with the newer version, significantly simplifying the upgrade process.

Combining the portability of running containers in a runtime environment such as Docker with automation tools such as Ansible can provide software developers and operations teams with a powerful combination. New software can be deployed faster, run more reliably, and have a lower maintenance overhead. It is this idea that we will explore further in this book.

Orchestrating of containerized applications

Working towards a more flexible, DevOps-oriented infrastructure does not stop with running applications and tools in containers. By their very nature, containers are portable and flexible. As with anything else in the IT industry, the portability and flexibility that containers bring can be built upon to make something even more useful. Kubernetes and Docker Swarm are two container scheduling platforms that make maintaining and deploying containers even easier.

Building your first Docker container

Now that we have covered some introductory information that will serve to bring the reader up to speed on DevOps, configuration management, and containerization, it's time to get our hands dirty and actually build our first Docker container from scratch. This portion of the chapter will walk you through building containers manually and with scripted Dockerfiles. This will provide a foundational knowledge of how the Ansible Container platform works on the backend to automate the building and deployment of container images.
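As a preview of the scripted approach mentioned above, a minimal Dockerfile might look like the following. The base image tag and package choice are illustrative, not a prescription:

```dockerfile
# Illustrative minimal Dockerfile: each instruction produces an image layer.
FROM ubuntu:16.04

# Install NGINX in a single RUN layer; clearing the apt cache in the same
# instruction keeps that layer small.
RUN apt-get update && \
    apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/*

# Run NGINX in the foreground so it remains the container's main process.
CMD ["nginx", "-g", "daemon off;"]
```

An image built from a file like this (for example, with docker build -t mynginx .) can then be started any number of times with docker run, which leads directly to the distinction discussed next.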

When working with container images, it is important to understand the difference between container images and running instances