Deployment with Docker

Srdjan Grubor

Description

A practical guide to rapidly and efficiently mastering Docker containers, along with tips and tricks learned in the field.

About This Book

  • Use Docker containers, horizontal node scaling, modern orchestration tools (Docker Swarm, Kubernetes, and Mesos) and Continuous Integration/Continuous Delivery to manage your infrastructure.
  • Increase service density by turning often-idle machines into hosts for numerous Docker services.
  • Learn what it takes to build a true container infrastructure that is scalable, reliable, and resilient in the face of increased complexities from using container infrastructures.
  • Find out how to identify, debug, and mitigate most real-world, undocumented issues when deploying your own Docker infrastructure.
  • Learn tips and tricks of the trade from existing Docker infrastructures running in production environments.

Who This Book Is For

This book is aimed at system administrators, developers, DevOps engineers, and software engineers who want to get concrete, hands-on experience deploying multi-tier web applications and containerized microservices using Docker. This book is also for anyone who has worked on deploying services in some fashion and wants to take their small-scale setups to the next level (or simply to learn more about the process).

What You Will Learn

  • Set up a working development environment and create a simple web service to demonstrate the basics
  • Learn how to make your service more usable by adding a database and an app server to process logic
  • Add resilience to your services by learning how to horizontally scale with a few containers on a single node
  • Master layering isolation and messaging to simplify and harden the connectivity between containers
  • Learn about numerous issues encountered at scale and their workarounds, from the kernel up to code versioning
  • Automate the most important parts of your infrastructure with continuous integration

In Detail

Deploying Docker into production is considered to be one of the major pain points in developing large-scale infrastructures, and the documentation available online leaves a lot to be desired. With this book, you will learn everything you wanted to know to effectively scale your deployments globally and build a resilient, scalable, and containerized cloud platform for your own use.

The book starts by introducing you to the containerization ecosystem with some concrete and easy-to-digest examples; after that, you will delve into examples of launching multiple instances of the same container. From there, you will cover orchestration, multi-node setups, volumes, and almost every relevant component of this new approach to deploying services. Using intertwined approaches, the book will cover, in detail, battle-tested tooling as well as issues likely to be encountered in real-world scenarios. You will also learn about the other supporting components required for a true PaaS deployment and discover common options to tie the whole infrastructure together.

At the end of the book, you will build a small but functional PaaS (to appreciate the power of the containerized service approach) and continue on to explore real-world approaches to implementing even larger global-scale services.

Style and approach

This in-depth learning guide shows you how to deploy your applications in production using Docker (from the basic steps to advanced concepts) and how to overcome challenges in Docker-based infrastructures. The book also covers practical use cases through real-world examples and provides tips and tricks on the various topics.




Deployment with Docker

Apply continuous integration models, deploy applications quicker, and scale at large by putting Docker to work

Srdjan Grubor

BIRMINGHAM - MUMBAI

Deployment with Docker

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: November 2017

Production reference: 1201117

Published by Packt Publishing Ltd.

Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78646-900-7

www.packtpub.com

Credits

Author: Srdjan Grubor

Copy Editor: Stuti Srivastava

Reviewer: Francisco Souza

Project Coordinator: Virginia Dias

Commissioning Editor: Vijin Boricha

Proofreader: Safis Editing

Acquisition Editor: Rahul Nair

Indexer: Aishwarya Gangawane

Content Development Editor: Sharon Raj

Graphics: Kirk D'Penha

Technical Editor: Prashant Chaudhari

Production Coordinator: Aparna Bhagat

About the Author

Srdjan Grubor is a software engineer who has worked on projects large and small for many years now, with deployment sizes ranging from small to global. Currently, he is working on solving the world's connectivity problems for Endless OS as a cloud engineer and was one of the first people to become a Docker Certified Associate. He enjoys breaking things just to see how they work, tinkering, and solving challenging problems. Srdjan believes that there is always room for philanthropy in technology.

Acknowledgments

I'd like to thank every person and company that has spent time working on open source software that has enabled me and countless others to improve their lives and learn things through its use—don't ever stop contributing!

As for personal appreciation for help on this book, I'd also like to thank:

My family for being the most awesome family one can ask for

My girlfriend for being the best partner ever and also keeping me sane through the stress of writing this book in my limited spare time

Dora (the kitty) for making me take breaks by sitting on the laptop keyboard

Galileo (the sugar glider) for being the cutest rebel pet in the world

Endless for introducing me to open source software and encouraging me to contribute back

So many others that would fill pages and pages of this book

Thank you all from the bottom of my heart!

About the Reviewer

Francisco Souza is a Docker Captain and a senior software engineer working with video and container technologies at the New York Times. Prior to that, he worked with the open source PaaS Tsuru, created back in 2012 and later adapted to leverage Docker for container deployment and management. Other than video and containers, Francisco also likes to explore topics related to concurrency, parallelism, and distributed systems.

 

He has also contributed as a reviewer to Extending Docker (Russ McKendrick, Packt) and Docker Networking Cookbook (Jon Langemak, Packt).

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details. At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

 

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt

Copy and paste, print, and bookmark content

On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1786469006.

If you'd like to join our team of regular reviewers, you can email us at [email protected]. We reward our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

I would like to mainly dedicate this book to you, the reader, as you were my primary motivation for writing this book and always kept me typing. Without the thought that someone would use this material to learn new things, the book itself would not have been written at all. 

Table of Contents

Preface

What this book covers

What you need for this book

Who this book is for

Conventions

Reader feedback

Customer support

Downloading the example code

Downloading the color images of this book

Errata

Piracy

Questions

Containers - Not Just Another Buzzword

The what and why of containers

Docker's place

Introduction to Docker containers

The competition

rkt

System-level virtualization

Desktop application-level virtualizations

When should containerization be considered?

The ideal Docker deployment

The container mindset

The developer workflow

Summary

Rolling Up the Sleeves

Installing Docker

Debugging containers

Seeing what the container sees

Our first Dockerfile

Breaking the cache

A more practical container

Extending another container with FROM

Ensuring the latest patches are included

Applying our custom NGINX configuration

Building and running

Service from scratch

Labels

Setting environment variables with ENV

Exposing ports

Container security layering with limited users

VOLUMEs and data that lives outside of the container

Setting the working directory

Adding files from the internet

Changing the current user

Putting it all together

Summary

Service Decomposition

A quick review

Docker commands

Dockerfile commands

Writing a real service

An overview

What we are going to build

The implementation

Web server

Authentication

The database

The application server

The main application logic

Running it all together

Launching

Testing

Limitations and issues with our implementation

Fixing the critical issues

Using a local volume

Generating the credentials at runtime

Introducing Docker networking

Summary

Scaling the Containers

Service discovery

A recap of Docker networking

Service discovery in depth

Client-side discovery pattern

Server-side discovery pattern

Hybrid systems

Picking the (un)available options

Container orchestration

State reconciliation

Docker Swarm

Kubernetes

Apache Mesos/Marathon

Cloud-based offerings

Implementing orchestration

Setting up a Docker Swarm cluster

Initializing a Docker Swarm cluster

Deploying services

Cleaning up

Using Swarm to orchestrate our words service

The application server

index.js

The web server

Database

Deploying it all

The Docker stack

Clean up

Summary

Keeping the Data Persistent

Docker image internals

How images are layered

Persisting the writable CoW layer(s)

Running your own image registry

Underlying storage driver

aufs

btrfs / zfs

overlay and overlay2

devicemapper

Cleanup of Docker storage

Manual cleanup

Automatic cleanup

Persistent storage

Node-local storage

Bind mounts

Read-only bind mounts

Named volumes

Relocatable volumes

Relocatable volume sync loss

UID/GID and security considerations with volumes

Summary

Advanced Deployment Topics

Advanced debugging

Attaching to a container's process space

Debugging the Docker daemon

Advanced networking

Static host configuration

DNS configuration

Overlay networks

Docker built-in network mappings

Docker communication ports

High availability pipelines

Container messaging

Implementing our own messaging queue

package.json

index.js

Dockerfile

Advanced security

Mounting the Docker socket into the container

Host security scans

Read-only containers

Base system (package) updates

Privileged mode versus --cap-add and --cap-drop

Summary

The Limits of Scaling and the Workarounds

Limiting service resources

RAM limits

CPU limits

Pitfall avoidance

ulimits

Max file descriptors

Socket buffers

Ephemeral ports

Netfilter tweaks

Multi-service containers

Zero-downtime deployments

Rolling service restarts

Blue-green deployments

Blue-turquoise-green deployments

Summary

Building Our Own Platform

Configuration management

Ansible

Installation

Basics

Usage

Amazon Web Services setup

Creating an account

Getting API keys

Using the API keys

HashiCorp Packer

Installation

Usage

Choosing the right AMI base image

Building the AMI

Deployments to AWS

The road to automated infrastructure deployment

Running the deployment and tear-down playbooks

Continuous integration/Continuous delivery

Resource considerations

First-deploy circular dependency

Further generic CI/CD uses

Summary

Exploring the Largest-Scale Deployments

Maintaining quorums

Node automation

Reactive auto-scaling

Predictive auto-scaling

Monitoring

Evaluating next-gen technologies

Technological needs

Popularity

A team's technical competency

Summary

Preface

Microservices and containers are here to stay, and in today's world Docker is emerging as the de facto standard for scalability. Deploying Docker into production is considered to be one of the major pain points of developing large-scale infrastructure and the documentation that you can find online leaves a lot to be desired. With this book, you will get exposure to the various tools, techniques, and workarounds available for the development and deployment of a Docker infrastructure in your own cloud, based on the author's real-world experiences of doing the same. You will learn everything you wanted to know to effectively scale your deployments globally and build a resilient and scalable containerized cloud platform for yourself.

What this book covers

Chapter 1, Containers – Not Just Another Buzzword, examines what the current approaches are to deploying services and why containers, and Docker specifically, are eclipsing other forms of infrastructure deployment. 

Chapter 2, Rolling Up the Sleeves, covers all the necessary steps to set up and run a small local service based on Docker. We will cover how to install Docker, run it, and get a quick overview of the Docker CLI. With that knowledge, we will write a basic Docker container and see how to run it locally.

Chapter 3, Service Decomposition, covers how to take the knowledge from the previous chapter and use it to create and build additional containers (a database and an app server), mirroring a simple decomposed microservice deployment.

Chapter 4, Scaling the Containers, talks about scaling horizontally with multiple instances of the same container. We will cover service discovery, how to deploy it so that scaling a module is transparent to the rest of the infrastructure, and its various pros and cons depending on the implementation, with a quick look into horizontal node scaling.

Chapter 5, Keeping the Data Persistent, covers data persistence for your containers. We will cover node-local storage, transient storage, and persistent volumes and their intricacies. We will also spend a bit more time on Docker image layering and some pitfalls.

Chapter 6, Advanced Deployment Topics, adds isolation and messaging to the cluster to increase the security and stability of the services. Other security considerations in Docker deployments and their trade-offs will also be covered here.

Chapter 7, The Limits of Scaling and the Workarounds, covers all the issues that you might come across as you scale beyond your basic RESTful service needs. We will dig deep into the issues that you will find with default deployments and how to work around them with minimal hassle, along with handling code version changes and higher-level management systems.

Chapter 8, Building Our Own Platform, helps us build our own mini Platform-as-a-Service (PaaS). We will cover everything from configuration management to deployment in a cloud-based environment that you can use to bootstrap your own cloud.

Chapter 9, Exploring the Largest-Scale Deployments, builds on what we have covered and extends it into theoretical and real-world examples of the largest-scale deployments of Docker. It also covers developments on the horizon that the reader should keep an eye out for.

What you need for this book

Before you start with the book, make sure you have the following:

Intel or AMD-based x86_64 machine

At least 2 GB of RAM

At least 10 GB of hard drive space

Linux (Ubuntu, Debian, CentOS, RHEL, SUSE, or Fedora), Windows 10, Windows Server 2016, or macOS

Internet connection

Who this book is for

This book is aimed at system administrators, developers, DevOps engineers, and software engineers who want to get concrete hands-on experience deploying multitier web applications and containerized microservices using Docker. It is meant for anyone who has worked on deploying services in some fashion and wants to take their small-scale setups to the next order of magnitude or wants to learn more about it.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "If you go to http://127.0.0.1:8080 in your browser again, you will see that our app works just like before!"

A block of code is set as follows:

# Make sure we are fully up to date
RUN apt-get update -q && \
    apt-get dist-upgrade -y && \
    apt-get clean && \
    apt-get autoclean

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

# Make sure we are fully up to date
RUN apt-get update -q && \
    apt-get dist-upgrade -y && \
    apt-get clean && \
    apt-get autoclean

Any command-line input or output is written as follows:

$ docker swarm leave --force
Node left the swarm.

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "In order to download new modules, we will go to Files | Settings | Project Name | Project Interpreter."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply email [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you. You can download the code files by following these steps:

1. Log in or register to our website using your email address and password.

2. Hover the mouse pointer over the SUPPORT tab at the top.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box.

5. Select the book for which you're looking to download the code files.

6. Choose from the drop-down menu where you purchased this book from.

7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows

Zipeg / iZip / UnRarX for Mac

7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Deployment-with-Docker/. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/DeploymentwithDocker_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to the list of existing errata under the Errata section of that title. To view previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Containers - Not Just Another Buzzword

In technology, the jumps in progress are sometimes small but, as is the case with containerization, the jump has been massive, turning long-held practices and teachings completely upside down. With this book, we will take you from running a tiny service to building elastically scalable systems using containerization with Docker, the cornerstone of this revolution. We will ramp up steadily through the basic building blocks with a focus on the inner workings of Docker, and, as we continue, we will spend the majority of our time in the world of complex deployments and their considerations.

Let’s take a look at what we will cover in this chapter:

What are containers and why do we need them?

Docker’s place in the container world

Thinking with a container mindset

The what and why of containers

We can’t start talking about Docker without actually covering the ideas that make it such a powerful tool. A container, at the most basic level, is an isolated user-space environment for a given discrete set of functionality. In other words, it is a way to modularize a system (or a part of one) into pieces that are much easier to manage and maintain while often also being very resilient to failures.

In practice, this net gain is never free and requires some investment in the adoption and implementation of new tooling (such as Docker), but the change pays heavy dividends to the adopters in a drastic reduction of development, maintenance, and scaling costs over its lifetime.

At this point, you might ask this: how exactly are containers able to provide such huge benefits? To understand this, we first need to take a look at deployments before such tooling was available.

In the earlier days of deployments, the process for deploying a service would go something like this:

A developer would write some code.

Operations would deploy that code.

If there were any problems in deployment, the operations team would tell the developer to fix something and we would go back to step 1.

A simplification of this process would look something like this:

dev machine => code => ops => bare-metal hosts

The developer would have to wait for the whole process to bounce back before they could try to write a fix anytime there was a problem. What is even worse, operations groups would often have to use various arcane forms of magic to ensure that the code developers gave them could actually run on the deployment machines, as differences in library versions, OS patches, and language compilers/interpreters were all high-risk failure points that could consume huge amounts of time in this long cycle of break-patch-deploy attempts.

The next step in the evolution of deployments improved this workflow by virtualizing bare-metal hosts, since manually maintaining a heterogeneous mix of machines and environments is a complete nightmare even at single-digit machine counts. Early tools such as chroot came out in the late '70s but were later replaced (though not fully) by hypervisors such as Xen, KVM, Hyper-V, and a few others, which not only reduced the management complexity of larger systems, but also provided both Ops and developers with a deployment environment that was more consistent as well as more computationally dense:

dev machine => code => ops => n hosts * VM deployments per host

This helped out in the reduction of failures at the end of the pipeline, but the path from the developer to the deployment was still a risk as the VM environments could very easily get out of sync with the developers.

From here, if we really try to figure out how to make this system better, we can already see how Docker and other container technologies are the organic next step. By making the developers' sandbox environment as close as we can get to the one in production, a developer with an adequately functional container system can literally bypass the ops step, be sure that the code will work on the deployment environment, and prevent any lengthy rewrite cycles due to the overhead of multiple group interactions:

dev machine => container => n hosts * container deployments per host

With Ops being needed primarily in the early stages of system setup, developers can now be empowered to take their code directly from the idea all the way to the user with the confidence that a majority of issues that they will find will be ones that they will be able to fix.

If you consider this the new model of deploying services, it is very reasonable to understand why we have DevOps roles nowadays, why there is such a buzz around Platform as a Service (PaaS) setups, and how so many tech giants can apply a change to a service used by millions at a time within 15 minutes with something as simple as git push origin by a developer without any other interactions with the system.

But the benefits don't stop there! If you have many little containers everywhere, then whenever demand for a service increases or decreases, you can add or remove a portion of your host machines, and if the container orchestration is done properly, there will be zero downtime and zero user-noticeable change as you scale. This comes in extremely handy for providers of services that need to handle variable loads at different times (think of Netflix and their peak viewership hours). In most cases, this can also be automated on almost all cloud platforms (that is, AWS Auto Scaling Groups, Google Cluster Autoscaler, and Azure Autoscale) so that, if some trigger occurs or resource consumption changes, the service will automatically scale the number of hosts up or down to handle the load. By automating all these processes, your PaaS can pretty much be a fire-and-forget flexible layer, on top of which developers can worry about things that really matter and not waste time with things such as trying to figure out whether some system library is installed on the deployment hosts.
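As a small taste of what this looks like in practice, here is a minimal sketch using Docker's built-in orchestration, which we will cover in depth later in the book (the service name web is a hypothetical example):

$ docker service scale web=10   # demand spiked: run 10 replicas of the service
web scaled to 10
$ docker service scale web=3    # demand dropped: scale back down
web scaled to 3

The orchestrator handles placing the new replicas across the available hosts and routing traffic to them, which is exactly the property that makes this kind of hands-off scaling possible.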

Now don't get me wrong; making one of these amazing PaaS services is not an easy task by any stretch of the imagination, and the road is covered in countless hidden traps. But if you want to be able to sleep soundly through the night without phone calls from angry customers, bosses, or coworkers, you must strive to be as close as you can to these ideal setups, regardless of whether you are a developer or not.

Docker's place

So far, we have talked a lot about containers but haven't mentioned Docker yet. While Docker has been emerging as the de facto standard in containerization, it is currently one of many competing technologies in this space, and what is relevant today may not be tomorrow. For this reason, we will cover a little bit of the container ecosystem so that, if you see shifts occurring in this space, you won't hesitate to try another solution; picking the right tool for the job almost always beats trying to, as the saying goes, fit a square peg in a round hole.

While most people know Docker as the Command-line Interface (CLI) tool, the Docker platform extends above and beyond that to include tooling to create and manage clusters, handle persistent storage, and build and share Docker containers, among other things, but for now, we will focus on the most important part of that ecosystem: the Docker container.

Introduction to Docker containers

Docker containers, in essence, are a grouping of a number of filesystem layers that are stacked on top of each other in a sequence to create the final layout that is then run in an isolated environment by the host machine's kernel. Each layer describes which files have been added, modified, and/or deleted relative to its previous parent layer. For example, you have a base layer with a file /foo/bar, and the next layer adds a file /foo/baz. When the container starts, it will combine the layers in order and the resulting container will have both /foo/bar and /foo/baz. This process is repeated for any new layer to end up with a fully composed filesystem to run the specified service or services.
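To make this layering concrete, here is a minimal sketch using the /foo/bar and /foo/baz files from the example above (the image name layers-demo is a hypothetical label); each instruction below produces one new layer on top of the previous one:

# Dockerfile: each RUN line creates a new filesystem layer
FROM ubuntu:16.04                  # base layers: the OS filesystem
RUN mkdir /foo && touch /foo/bar   # new layer: adds /foo/bar
RUN touch /foo/baz                 # new layer: adds /foo/baz

Building this with docker build -t layers-demo . and then running docker history layers-demo lists one entry per layer, showing exactly the stacking described above.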

Think of the arrangement of the filesystem layers in an image as the intricate layering of sounds in a symphony: you have the percussion instruments in the back to provide the base for the sound, wind instruments a bit closer to drive the movements, and in the front, the string instruments with the lead melody. Together, it creates a pleasing end result. In the case of Docker, you generally have the base layers set up the main OS layers and configuration, the service infrastructure layers go on top of that (interpreter installation, the compilation of helpers, and so on), and the final image that you run is finally topped with the actual service code. For now, this is all you will need to know, but we will cover this topic in much more detail in the next chapter.

In essence, Docker in its current form is a platform that allows easy and fast development of isolated (or not, depending on how the service is configured) Linux and Windows services within containers that are scalable, easily interchangeable, and easily distributable.

The competition

Before we get too deep into Docker itself, let us also cover some of the current competitors in broad strokes and see how they differ from Docker. The curious thing about almost all of them is that they are generally a form of abstraction around Linux control groups (cgroups) and namespaces, which limit the use of the Linux host's physical resources and isolate groups of processes from each other, respectively. While almost every tool mentioned here provides some sort of containerization of resources, they can differ greatly in the depth of isolation, implementation security, and/or the container distribution mechanism.
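You can see the namespace half of this in action yourself; here is a hedged sketch, assuming a Linux host with the unshare tool from util-linux installed, of the (highly simplified) kind of isolation that container runtimes build upon:

$ sudo unshare --pid --fork --mount-proc bash   # shell in new PID + mount namespaces
# ps aux                                        # inside, only this bash (as PID 1) and ps are visible

The rest of the host's processes still exist, but the shell inside the new PID namespace simply cannot see or signal them, which is the essence of process isolation in containers.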

rkt

rkt (often written as Rocket) is the closest competing application containerization platform, from CoreOS, started as a more secure application container runtime. Over time, Docker has closed a number of its security failings, but unlike rkt, which runs with limited privileges as a user service, Docker's main service runs as root. This means that if someone manages to break out of a Docker container, they will automatically have full root access to the host, which is obviously a really bad thing from an operations perspective, while with rkt, the attacker would also need to escalate their privileges from the limited user. While this comparison doesn't paint Docker in a great light from a security standpoint, if its development trajectory is extrapolated, it is possible and likely that this issue will be heavily mitigated and/or fixed in the future.
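You can verify the root-daemon claim on most Linux installs with a quick check; a small sketch, assuming the daemon binary is named dockerd as in recent releases (the PID shown is illustrative):

$ ps -o user,pid,comm -C dockerd   # which user owns the Docker daemon process?
USER       PID COMMAND
root      1234 dockerd             # illustrative PID; the point is the root owner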

Another interesting difference is that, unlike Docker, which is designed to run a single process within the container, rkt can run multiple processes within a container. This makes deploying multiple services within a single container much easier. Now, having said that, you actually can run multiple processes within a Docker container (we will cover this at a later point in the book), but it is a great pain to set up properly. In practice, though, I have found that the pressure to keep services and containers based on a single process really pushes developers to create containers as true microservices instead of treating them as mini VMs, so don't necessarily consider this a problem.

While there are many other smaller reasons to choose Docker over rkt and vice versa, one massive thing cannot be ignored: the rate of adoption. While rkt is a bit younger, Docker has been adopted by almost all the big tech giants, and there doesn't seem to be any sign of the trend stopping. With this in mind, if you need to work on microservices today, the choice is probably very clear, but as with any tech field, the ecosystem may look much different in a year or even just a couple of months.

System-level virtualization

On the opposite side, we have platforms for working with full system images instead of single applications, such as LXD, OpenVZ, KVM, and a few others. Unlike Docker and rkt, they are designed to provide full support for all of the virtualized system's services, but at the cost of much higher resource usage, purely by definition. While having separate system containers on a host is needed for things like better security, isolation, and possibly compatibility, in my experience, almost all use of these containers can be moved to an application-level virtualization system with a bit of work, providing a better resource-use profile and higher modularity at a slight increase in the cost of creating the initial infrastructure. A sensible rule to follow here is that if you are writing applications and services, you should probably use application-level virtualization, but if you are providing VMs to end users or want much stronger isolation between services, you should use system-level virtualization.

Desktop application-level virtualizations

Flatpak, AppImage, Snaps, and other similar technologies also provide isolation and packaging for single-application containers, but unlike Docker, they all target the deployment of desktop applications and do not have as precise control over the container life cycle (that is, starting, stopping, forced termination, and so on), nor do they generally provide layered images. Instead, most of these tools have nice wrapper Graphical User Interfaces (GUIs) and provide a significantly better workflow for installing, running, and updating desktop applications. While most overlap heavily with Docker due to the same underlying reliance on the aforementioned cgroups and namespaces, these application-level virtualization platforms do not traditionally handle server applications (applications that run without UI components) and vice versa. Since this field is still young and the space they all cover is relatively small, you can probably expect consolidations and crossovers, whether that means Docker entering the desktop application delivery space or one or more of these competing technologies adding support for server applications.

When should containerization be considered?

We've covered a lot of ground so far, but there is an important aspect we have not touched on yet that is extremely important to evaluate: regardless of how much buzz there is around this concept, containers do not make sense as the end deployment target in a large array of circumstances. So, we will cover some general cases where this type of platform should really be considered (or not). While containerization should be the end goal in most cases from an operations perspective and offers huge dividends with minimal effort when injected into the development process, turning deployment machines into a containerized platform is a pretty tricky process, and if you will not gain tangible benefits from it, you might as well dedicate that time to something that will bring real and tangible value to your services.

Let's start by covering scaling thresholds. If your services as a whole can completely fit and run well on a relatively small or medium virtual machine or bare-metal host and you don't anticipate sudden scaling needs, virtualization on the deployment machines will lead you down a path of pain that really isn't warranted in most cases. The high front-loaded cost of setting up even a benign but robust virtualized setup is usually better spent on developing service features at that level.

If you see demand increase on a service backed by a VM or bare-metal host, you can always scale up to a larger host (vertical scaling) and refocus your team, but for anything less than that, you probably shouldn't go that route. There have been many cases of a business spending months getting container technology implemented because it is so popular, only to lose customers due to a lack of development resources and end up shutting its doors.

Now that your system is maxing out the limits of vertical scalability, is it a good time to add things such as Docker clusters to the mix? The real answer is "maybe". If your services are homogeneous and consistent across hosts, such as sharded or clustered databases or simple APIs, in most cases this still isn't the right time either, as you can scale such a system easily with host images and some sort of load balancer. If you're opting for a bit more fanciness, you can use a cloud-based Database-as-a-Service (DBaaS) offering such as Amazon RDS, Microsoft DocumentDB, or Google BigQuery and auto-scale the service hosts up or down through the same provider (or even a different one) based on the required level of performance.

If there is ample foreshadowing of service variety beyond this, a need for a much shorter pipeline from developer to deployment, rising complexity, or exponential growth, you should treat each of these as a trigger to re-evaluate your pros and cons, but there is no threshold that will be a clean cut-off. A good rule of thumb here, though, is that if your team has a slow period, it won't hurt to explore the containerization options or to gear up your skills in this space, but be very careful not to underestimate the time it takes to properly set up such a platform, regardless of how easy the Getting Started instructions look for many of these tools.

With all this said, what are the clear signs that you need to get containers into your workflow as soon as you can? There can be many subtle hints, but the following list covers the ones that should immediately bring the containers topic up for discussion if the answer is yes, as the benefits greatly outweigh the time invested in your service platform:

Do you have more than 10 unique, discrete, and interconnected services in your deployment?

Do you have three or more programming languages you need to support on the hosts?

Are your ops resources constantly deploying and upgrading services?

Do any of your services require "four 9s" (99.99%) or better availability?

Do you have a recurring pattern of services breaking in deployments because developers are not considerate of the environment that the services will run in?

Do you have a talented Dev or Ops team that's sitting idle?

Does your project have a burning hole in the wallet?

Okay, maybe the last one is a bit of a joke, but it is in the list to illustrate, in a somewhat sarcastic tone, that at the time of writing, getting a PaaS platform operational, stable, and secure is neither easy nor cheap, regardless of whether your currency is time or money. Many will try to trick you into the idea that you should always use containers and Dockerize everything, but keep a skeptical mindset and make sure that you evaluate your options with care.

The ideal Docker deployment

Now that we have the real talk out of the way, let us say that we are truly ready to tackle containers and Docker for an imaginary service. We covered bits and pieces of this earlier in the chapter, but here we will concretely define what our ideal requirements would look like if we had ample time to work toward them:

Developers should be able to deploy a new service without any need for ops resources

The system can auto-discover new instances of services running

The system is flexibly scalable both up and down

On desired code commits, the new code will automatically get deployed without Dev or Ops intervention

You can seamlessly handle degraded nodes and services without interruption

You are capable of using the full extent of the resources available on hosts (RAM, CPUs, and so on)

Nodes should almost never need to be accessed individually by developers

If these are the requirements, you will be happy to know that almost all of them are feasible to a large extent, and that we will cover almost all of them in detail in this book. For many of them, we will need to get into Docker much deeper, beyond most of the material you will find elsewhere, but there is no point in teaching you deployments that only print out "Hello World" and cannot be taken to the field.

As we explore each topic in the following chapters, we will be sure to cover any pitfalls, as there are many such complex system interactions. Some will be obvious to you, but many probably will not (for example, the PID1 issue), as the tooling in this space is relatively young and many tools critical to the Docker ecosystem are not even at version 1.0, or have reached version 1.0 only recently.
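As a brief preview of that PID1 issue: the first process in a container runs as PID 1 and therefore inherits init-system duties (forwarding signals, reaping zombie processes) that most applications never implement. A minimal sketch of one common mitigation, assuming Docker 1.13 or later:

$ docker run --init ubuntu:16.04 sleep 60
# --init injects a tiny init process as PID 1 that forwards signals and reaps
# children, so 'sleep' does not have to act as an init system itself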

Thus, you should consider this technology space to still be in its early stages of development, so be realistic, don't expect miracles, and expect a healthy dose of little "gotchas". Also keep in mind that some of the biggest tech giants have been using Docker for a long time now (Red Hat, Microsoft, Google, IBM, and so on), so don't get scared either.

To get started and really begin our journey, we need to first reconsider the way we think about services.

The container mindset

Today, as we touched on earlier in the chapter, the vast majority of deployed services are a big mess of ad hoc, manually connected and configured pieces that tend to break apart as soon as a single piece is changed or moved. It is easy to imagine this as a tower of cards, where the piece that needs changing is often in the middle, at the risk of bringing the whole structure down. Small-to-medium projects and talented Dev and Ops teams can mostly manage this level of complexity, but it is really not a scalable methodology.

The developer workflow