Extending Docker

Russ McKendrick
Description

Master the art of making Docker more extensible, composable, and modular by leveraging plugins and other supporting tools

About This Book

  • Get the first book on the market that shows you how to extend the capabilities of Docker using plugins and third-party tools
  • Master the skills of creating various plugins and integrating great tools in order to enhance the functionalities of Docker
  • A practical learning guide that ensures your investment in Docker becomes more valuable

Who This Book Is For

This book is for developers and sys admins who are well versed in Docker and have knowledge of basic programming languages. If you can't wait to extend Docker and customize it to meet your requirements, this is the book for you!

What You Will Learn

  • Find out about Docker plugins and the problems they solve
  • Gain insights into creating your own plugin
  • Use Docker tools to extend the basic functionality of the core Docker engine
  • Get to grips with the installation and configuration of third-party tools available to use with Docker plugins
  • Install, configure, and use a scheduling service to manage the containers in your environment
  • Enhance your day-to-day Docker usage through security, troubleshooting, and best practices

In Detail

With Docker, it is possible to get far more apps running on the same old servers, and it also makes it very easy to package and ship programs. The ability to extend Docker using plugins and load third-party plugins is incredible, and organizations can benefit from it massively.

In this book, you will read about the first- and third-party tools that are available to extend the functionality of your existing Docker installation and how to approach your next Docker infrastructure deployment. We will show you how to work with Docker plugins, how to install them, and cover their life cycle. We also cover network and volume plugins, and you will find out how to build your own plugin.

You'll discover how to integrate Docker with Puppet, Ansible, Jenkins, Flocker, Rancher, Packer, and more through third-party tools and plugins. Then, you'll see how to use schedulers such as Kubernetes and Amazon ECS. Finally, we'll delve into security, troubleshooting, and best practices when extending Docker.

By the end of this book, you will know how to extend Docker and customize it based on your business requirements with the help of various tools and plugins.

Style and approach

An easy-to-follow guide with plenty of hands-on practical examples, which can be executed both on your local machine and on externally hosted services.


Table of Contents

Extending Docker
Credits
About the Author
About the Reviewer
www.PacktPub.com
eBooks, discount offers, and more
Why subscribe?
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Introduction to Extending Docker
The rise of Docker
Dedicated machines
Virtual machines
Dedicated versus virtual machines
Containers
Everyone should be using Docker?
Life cycle of a container
Installing Docker
What are the limits?
Summary
2. Introducing First-party Tools
Docker Toolbox
Why install Docker locally?
Installing Docker Toolbox
Docker Machine
Developing locally
Heading into the cloud
The DigitalOcean driver
The Amazon Web Services driver
Other considerations
Docker Swarm
Creating a local cluster
Creating a Remote Cluster
Discovery backends
Docker Compose
Why Compose?
Compose files
Launching more
Summary
3. Volume Plugins
Zero volumes
The default volume driver
Third-party volume drivers
Installing Convoy
Launching containers with a Convoy volume
Creating a snapshot using Convoy
Backing up our Convoy snapshot
Restoring our Convoy backups
Summing up Convoy
Block volumes using REX-Ray
Installing REX-Ray
Moving the REX-Ray volume
Summing up REX-Ray
Flocker and Volume Hub
Forming your Flock
Deploying into the Flock
Summing up Flocker
Summary
4. Network Plugins
Docker networking
Multi-host networking with overlays
Launching Discovery
Readying the Swarm
Adding the overlay network
Using the overlay network
Back to Consul
Composing multi-host networks
Summing up multi-host networking
Weaving a network
Configuring a Cluster again
Installing and configuring Weave
Docker Compose and Weave
Weave Scope
Calling off the Swarm
Weavemesh Driver
Summarizing Weave
Summary
5. Building Your Own Plugin
Third-party plugins
Convoy
REX-Ray
Flocker
Weave
The commonalities among the plugins
Understanding a plugin
Discovery
Startup order
Activation
API calls
Writing your plugin service
Summary
6. Extending Your Infrastructure
Why use these tools?
Puppetize all the things
Docker and Puppet
A more advanced Puppet example
A final note about Puppet
Orchestration with Ansible
Preparation
The playbook
Section one
Section Two
Section three
Section four
Ansible and Puppet
Vagrant (again)
Provisioning using Vagrant
The Vagrant Docker provider
Packaging images
An application
The Docker way
Building with Packer
Packer versus Docker Build
Image summary
Serving up Docker with Jenkins
Preparing the environment
Creating an application
Creating a pipeline
Summing up Jenkins
Summary
7. Looking at Schedulers
Getting started with Kubernetes
Installing Kubernetes
Launching our first Kubernetes application
An advanced example
Creating the volumes
Launching MySQL
Launching WordPress
Supporting tools
Kubernetes Dashboard
Grafana
ELK
Remaining cluster tools
Destroying the cluster
Recap
Amazon EC2 Container Service (ECS)
Launching ECS in the console
Recap
Rancher
Installing Rancher
Securing your Rancher installation
Cattle cluster
Deploying the Cluster application
What's going on in the background?
The catalog
WordPress
Storage
Clustered database
Looking at WordPress again
DNS
Docker & Rancher Compose
Docker Compose
Rancher Compose
Back to where we started
Removing the hosts
Summing up Rancher
Summary
8. Security, Challenges, and Conclusions
Securing your containers
Docker Hub
Dockerfile
Official images
Pushed images
Docker Cloud
Private registries
The challenges
Development
Staging
Production
Summary
Index

Extending Docker

Copyright © 2016 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: June 2016

Production reference: 1100616

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78646-314-2

www.packtpub.com

Credits

Author

Russ McKendrick

Reviewer

Francisco Souza

Commissioning Editor

Pratik Shah

Acquisition Editor

Rahul Nair

Content Development Editor

Mayur Pawanikar

Technical Editor

Danish Shaikh

Copy Editor

Vibha Shukla

Project Coordinator

Nidhi Joshi

Proofreader

Safis Editing

Indexer

Mariammal Chettiyar

Production Coordinator

Arvindkumar Gupta

Cover Work

Arvindkumar Gupta

About the Author

Russ McKendrick is an experienced solution architect who has been working in IT and related industries for the better part of 23 years. During his career, he has had varied responsibilities in a number of industries, ranging from looking after an entire IT infrastructure to providing first-line, second-line, and senior support in client-facing and internal teams for corporate organizations.

Russ works almost exclusively with Linux, using open source systems and tools across dedicated hardware and virtual machines through to public and private clouds at Node4 Limited, where he heads up the Open Source solutions team.

About the Reviewer

Francisco Souza is a software engineer working in the video area at The New York Times. He is also one of the creators of Tsuru, an open source cloud platform, which is built on top of Docker and other open source solutions, including CloudStack and the Go programming language.

www.PacktPub.com

eBooks, discount offers, and more

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Preface

In the past few years, Docker has emerged as one of the most exciting new pieces of technology. Numerous companies, both enterprise and start-ups, have embraced the tool.

Several first-party and third-party tools have been developed to extend the core Docker functionality. This book will guide you through the process of installing, configuring, and using these tools, as well as help you understand which is the best tool for the job.

What this book covers

Chapter 1, Introduction to Extending Docker, discusses Docker and some of the problems that it solves. We will also discuss some of the ways in which the core Docker engine can be extended to gain additional functionality.

Chapter 2, Introducing First-party Tools, covers the tools provided by Docker to work alongside the core Docker Engine. These are Docker Toolbox, Docker Compose, Docker Machine, and Docker Swarm.

Chapter 3, Volume Plugins, introduces Docker plugins. We will start by looking at the default volume plugin that ships with Docker and look at three third-party plugins.

Chapter 4, Network Plugins, explains how to extend our container's networking across multiple Docker hosts, both locally and in public clouds.

Chapter 5, Building Your Own Plugin, introduces how to best approach writing your own Docker storage or network plugin.

Chapter 6, Extending Your Infrastructure, covers how to use several established DevOps tools to deploy and manage both your Docker hosts and containers.

Chapter 7, Looking at Schedulers, discusses how you can deploy Kubernetes, Amazon ECS, and Rancher, following the previous chapters.

Chapter 8, Security, Challenges, and Conclusions, helps to explain the security implications of where you deploy your Docker images from, as well as looking at the various tools that we have covered in the previous chapters and the situations they are best deployed in.

What you need for this book

You will need either an OS X or Windows laptop or desktop PC that is capable of running VirtualBox (https://www.virtualbox.org/), and access to both Amazon Web Services and DigitalOcean accounts with permissions to launch resources.

Who this book is for

This book is aimed at both developers and system administrators who feel constrained by their basic Docker installation and want to take their configuration to the next step by extending the functionality of the core Docker engine to meet the business' and their own ever-changing needs.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Once installed, you should be able to check whether everything worked as expected by running the Docker hello-world container."

A block of code is set as follows:

### Dockerfile
FROM php:5.6-apache
MAINTAINER Russ McKendrick <[email protected]>
ADD index.php /var/www/html/index.php

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

version: '2'
services:
  wordpress:
    container_name: "my-wordpress-app"
    image: wordpress
    ports:
      - "80:80"
    environment:
      - "WORDPRESS_DB_HOST=mysql.weave.local:3306"
      - "WORDPRESS_DB_PASSWORD=password"
      - "constraint:node==chapter04-01"

Any command-line input or output is written as follows:

curl -sSL https://get.docker.com/ | sh

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "To move to the next step of the installation, click on Continue."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/ExtendingDocker. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.

Chapter 1. Introduction to Extending Docker

In this chapter, we will discuss the following topics:

Why Docker has been so widely accepted by the entire industry
What does a typical container's life cycle look like?
What plugins and third-party tools will be covered in the upcoming chapters?
What will you need for the remainder of the chapters?

The rise of Docker

Not very often does a technology come along that is adopted so widely across an entire industry. Since its first public release in March 2013, Docker has not only gained the support of end users, like you and me, but also of industry leaders such as Amazon, Microsoft, and Google.

Docker is currently using the following sentence on their website to describe why you would want to use it:

"Docker provides an integrated technology suite that enables development and IT operations teams to build, ship, and run distributed applications anywhere."

There is a meme, based on the disaster girl photo, which sums up why such a seemingly simple explanation is actually quite important:

So, as simple as Docker's description sounds, having a tool that can ensure that an application works consistently across the following three main stages of an application's life cycle has actually been something of a utopia for most developers and IT operations teams for a number of years:

Development
Staging and Preproduction
Production

To illustrate why this used to be a problem before Docker arrived on the scene, let's look at how services were traditionally configured and deployed. People typically used a mixture of dedicated machines and virtual machines, so let's look at these in more detail.

While it is possible to use configuration management tools, such as Puppet, or orchestration tools, such as Ansible, to maintain consistency between server environments, it is difficult to enforce that consistency across both the servers and a developer's workstation.

Dedicated machines

Traditionally, these are single pieces of hardware that have been configured to run your application. While the applications have direct access to the hardware, you are constrained by the binaries and libraries you can install on a dedicated machine, as they have to be shared across the entire machine.

To illustrate one potential problem that Docker has fixed, let's say you had a single dedicated server running your PHP applications. When you initially deployed the dedicated machine, all three of the applications that make up your e-commerce website worked with PHP 5.6, so there was no problem with compatibility.

Your development team has been slowly working through the three PHP applications that you have deployed on your host to make them work with PHP 7, as this will give them a good boost in performance. However, there is a single bug that they have not been able to resolve in App2, which means that it will not run under PHP 7 without crashing when a user adds an item to their shopping cart.

If you have a single host running your three applications, you will not be able to upgrade from PHP 5.6 to PHP 7 until your development team has resolved the bug with App2, unless you do one of the following:

Deploy a new host running PHP 7 and migrate App1 and App3 to it; this could be both time consuming and expensive
Deploy a new host running PHP 5.6 and migrate App2 to it; again, this could be both time consuming and expensive
Wait until the bug has been fixed; the performance improvements that the upgrade from PHP 5.6 to PHP 7 brings to the application could increase sales, and there is no ETA for the fix

If you go for either of the first two options, you also need to ensure that the new dedicated machine either matches the developers' PHP 7 environment or is configured in exactly the same way as your existing environment; after all, you don't want to introduce further problems by having a poorly configured machine.

Virtual machines

One solution to the scenario detailed earlier would be to slice up your dedicated machine's resources and make them available to the application by installing a hypervisor such as the following:

KVM: http://www.linux-kvm.org/
XenSource: http://www.xenproject.org/
VMware vSphere: http://www.vmware.com/uk/products/vsphere-hypervisor/

Once installed, you can then install your binaries and libraries on each of the different virtual hosts and also install your applications on each one.

Going back to the scenario given in the dedicated machine section, you will be able to upgrade to PHP 7 on the virtual machines where App1 and App3 are installed, while leaving App2 untouched and functional while the development team works on the fix.

Great, so what is the catch? From the developers' view, there is none, as they have their applications running with the PHP versions that work best for them; however, from an IT operations point of view:

More CPU, RAM, and disk space: Each of the virtual machines will require additional resources, as the overhead of running three guest operating systems, as well as the three applications, has to be taken into account
More management: IT operations now need to patch, monitor, and maintain four machines, the dedicated host machine along with the three virtual machines, whereas before they only had a single dedicated host

As before, you also need to ensure that the configuration of the three virtual machines that are hosting your applications matches the configuration that the developers have been using during the development process; again, you do not want to introduce additional problems due to configuration and process drift between departments.

Dedicated versus virtual machines

The following diagram shows how a typical dedicated host and a typical virtual machine host would be configured:

As you can see, the biggest differences between the two are quite clear. You are making a trade-off between resource utilization and being able to run your applications using different binaries/libraries.

Containers

Now that we have covered the way in which our applications have traditionally been deployed, let's look at what Docker adds to the mix.

Back to our scenario of the three applications running on a single host machine: installing Docker on the host and then deploying each of the applications as a container on this host gives you the benefits of the virtual machine while vastly reducing the footprint; that is, it removes the need for the hypervisor and guest operating system completely and replaces them with a slimline interface directly into the host machine's kernel.

The advantages this gives both the IT operations and development teams are as follows:

Low overhead: As mentioned already, the resource and management overhead for the IT operations team is lower
Development provides the containers: Rather than relying on the IT operations team to configure each of the three application environments to match the development environment, developers can simply pass over their containers to be put into production

As you can see from the following diagram, the layers between the application and host operating system have been reduced:

All of this means that the disaster girl meme at the beginning of this chapter should now be redundant, as the development team is shipping the application to operations in a container with all the configuration, binaries, and libraries intact, which means that if it works in development, it will work in production.

This may seem too good to be true, and to be honest, there is a "but". For most web applications or applications that are pre-compiled static binaries, you shouldn't have a problem.

However, as Docker shares resources with the underlying host machine, such as the kernel version, if your application needs to be compiled or has a reliance on certain libraries that are only compatible with the shared resources, then you will have to deploy your containers on a like-for-like operating system, and in some cases, hardware.
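As a quick illustration of this shared kernel, and assuming you already have Docker installed and the php:5.6-apache image pulled (both are covered later in this chapter), you can compare the kernel release reported by the host with the one reported from inside a container:

uname -r
docker run --rm php:5.6-apache uname -r

Both commands should print the same kernel release, because the container uses the host machine's kernel rather than booting one of its own.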

Docker has tried to address this issue with the acquisition of a company called Unikernel Systems in January 2016. At the time of writing this book, not a lot is known about how Docker is planning to integrate this technology into their core product, if at all. You can find out more about this technology at https://blog.docker.com/2016/01/unikernel/.

Everyone should be using Docker?

So, is it really that simple? Should everyone stop using virtual machines and use containers instead?

In July 2014, Wes Felter, Alexandre Ferreira, Ram Rajamony, and Juan Rubio published an IBM research report titled An Updated Performance Comparison of Virtual Machines and Linux Containers and concluded:

"Both VMs and containers are mature technology that have benefited from a decade of incremental hardware and software optimizations. In general, Docker equals or exceeds KVM performance in every case we tested. Our results show that both KVM and Docker introduce negligible overhead for CPU and memory performance (except in extreme cases). For I/O intensive workloads, both forms of virtualization should be used carefully."

It then goes on to say the following:

"Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker's NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis."

The full 12-page report, which provides an interesting comparison between the traditional technologies we have discussed and containers, can be downloaded from the following URL:

http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf

Less than a year after the IBM research report was published, Docker introduced plugins for its ecosystem. One of the best descriptions I came across was from a Docker software engineer, Jessica Frazelle, who described the release as having batteries included, but replaceable, meaning that the core functionality can be easily replaced with third-party tools that can then be used to address the conclusions of the IBM research report.

At the time of writing this book, Docker currently supports volume and network driver plugins. Additional plugin types to expose more of the Docker core to third parties will be added in the future.

Life cycle of a container

Before we look at the various plugins and ways to extend Docker, we should look at what a typical life cycle of a container looks like.

Using the example from the previous section, let's launch the official PHP 5.6 container and then replace it with the official PHP 7.0 one.

Installing Docker

Before we can launch our containers, we need to get Docker up and running; luckily, this is a simple process.

In the following chapter, we will be getting into bootstrapping our Docker environments using Docker Machine; however, for now, let's perform a quick installation of Docker on a cloud server.

The following instructions will work on Ubuntu 14.04 LTS or CentOS 7 instances hosted on any of the public clouds, such as the following:

Digital Ocean: https://www.digitalocean.com/
Amazon Web Services: https://aws.amazon.com/
Microsoft Azure: https://azure.microsoft.com/
VMware vCloud Air: http://vcloud.vmware.com/

You can also try this on a virtual machine running locally using one of the following:

Vagrant: https://www.vagrantup.com/
VirtualBox: https://www.virtualbox.org/
VMware Fusion: http://www.vmware.com/uk/products/fusion/
VMware Workstation: http://www.vmware.com/uk/products/workstation/

I am going to be using a CentOS 7 server hosted in Digital Ocean as it is convenient to quickly launch a machine and then terminate it.

Once you have your server up and running, you can install Docker from the official Yum or APT repositories by running the following command:

curl -sSL https://get.docker.com/ | sh

If, like me, you are running a CentOS 7 server, you will need to ensure that the service is running. To do this, type the following command:

systemctl start docker
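
If you would also like the Docker service to start automatically whenever the server boots, which is optional but assumes a systemd-based distribution such as CentOS 7, you can enable it with the following command:

systemctl enable docker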

Once installed, you should be able to check whether everything worked as expected by running the Docker hello-world container by entering the following command:

docker run hello-world

Once you have Docker installed and confirmed that it runs as expected, you can download the latest builds of the official PHP 5.6 and PHP 7.0 images by running the following command:

docker pull php:5.6-apache && docker pull php:7.0-apache

For more information on the official PHP images, refer to the Docker Hub page at https://hub.docker.com/_/php/.
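
Once both pulls have completed, you can confirm that the two images are available locally by listing them; a quick check would be:

docker images php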

Now that we have the images downloaded, it's time to deploy our application. As we are keeping it really simple, all we are going to deploy is a phpinfo page; this will confirm the version of PHP we are running, along with details of the rest of the container's environment:

mkdir app1 && cd app1
echo "<?php phpinfo(); ?>" > index.php

Now that the index.php file is in place, let's start the PHP 5.6 container by running the following command:

docker run --name app1 -d -p 80:80 -it -v "$PWD":/var/www/html php:5.6-apache

This will have launched an app1 container. If you enter the IP address of your server instance, or a domain that resolves to it, in your browser, you should see a page showing that you are running PHP 5.6:
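If you would rather check from the command line on the host itself instead of a browser, and assuming curl is installed, you can fetch the page and search for the version string:

curl -s http://localhost/ | grep "PHP Version"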

Now that you have PHP 5.6 up and running, let's upgrade it to PHP 7. Traditionally, this would mean installing a new set of packages using either third-party YUM or APT repositories; speaking from experience, this process can be a little hit and miss, depending on the compatibility of the packages with the previous versions of PHP that you have installed.

Luckily, in our case, we are using Docker, so all we have to do is terminate our PHP 5.6 container and replace it with one running PHP 7. At any time during this process, you can check the containers that are running using the following command:

docker ps

This will print a list of the running containers to the screen (as seen in the screenshot at the end of this section). To stop and remove the PHP 5.6 container, run the following command:

docker rm -f app1

Once the container has terminated, run the following command to launch a PHP 7 container:

docker run --name app1 -d -p 80:80 -it -v "$PWD":/var/www/html php:7.0-apache

If you return to the phpinfo page in your browser, you will see that it is now running PHP 7:

To terminate the PHP 7 container, run the docker rm command again:

docker rm -f app1

A full copy of the preceding terminal session can be found in the following screenshot:

This example probably shows the biggest advantage of Docker: being able to quickly and consistently launch containers on top of code bases stored on your local storage. There are, however, some limits.

What are the limits?

So, in the previous example, we launched two containers, each running different versions of PHP on top of our (extremely simple) codebase. While it demonstrated how simple it is to launch containers, it also exposed some potential problems and single points of failure.

To start with, our codebase is stored on the host machine's filesystem, which means that we can only run the container on our single host machine. What if it goes down for any reason?

There are a few ways we could get around this with a vanilla Docker installation. The first is to use the official PHP container as a base to build our own custom image so that we can ship our code along with PHP. To do this, add a Dockerfile to the app1 directory that contains the following content:

### Dockerfile
FROM php:5.6-apache
MAINTAINER Russ McKendrick <[email protected]>
ADD index.php /var/www/html/index.php

We can then build our custom image using the following command:

docker build -t app1:php-5.6 .

When you run the build command, you will see the following output:

Once you have your image built, you could push it as a private image to the Docker Hub or to your own self-hosted private registry.
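
For example, pushing the image to the Docker Hub would look something like the following sketch, where yourhubuser is a placeholder for your own Docker Hub account name:

docker login
docker tag app1:php-5.6 yourhubuser/app1:php-5.6
docker push yourhubuser/app1:php-5.6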

Another option is to export the custom image as a .tar file and then copy it to each of the instances that need to run your custom PHP container. To do this, you will run the docker save command:

docker save app1:php-5.6 > ~/app1-php-56.tar

This will make a copy of our custom image; as you can see from the following terminal output, the image should be a tar file of around 482M:

Now that we have a copy of the image as a tar file, we can copy it to our other host machines. Once you have copied the tar file, you will need to run the docker load command to import it onto the second host:

docker load < ~/app1-php-56.tar
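
To confirm that the image has been imported on the second host, you can list the local images with the following command:

docker images app1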

Then we can launch a container that has our code baked in by running the following command:

docker run --name app1 -d -p 80:80 -it app1:php-5.6

The following terminal output gives you an idea of what you should see when importing and running our custom container:

So far so good? Well, yes and no.

It's great that we can add our codebase to a custom image out of the box and then ship the image in any of the following ways:

The official Docker Hub
Our own private registry
Exporting the image as a tar file and copying it across to our other hosts

However, what about containers that are processing data that is changing all the time, such as a database? What are our options for a database?

Consider that we are running the official MySQL container from https://hub.docker.com/_/mysql/. We could mount the folder where our databases are stored (that is, /var/lib/mysql/) from the host machine, but that could cause permissions issues with the files once they are mounted within the container.

To get around this, we could create a data volume that contains a copy of our /var/lib/mysql/ directory; this means that we are keeping our databases separate from our container, so we can stop, start, and even replace the MySQL container without destroying our data.
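A minimal sketch of this approach, assuming a Docker release recent enough to support named volumes and using a throwaway root password purely for illustration, could look like the following:

docker volume create --name mysql-data
docker run --name mysql -d -e MYSQL_ROOT_PASSWORD=password -v mysql-data:/var/lib/mysql mysql

The mysql-data volume lives outside of the container's writable layer, so the container can be stopped, removed, and replaced while the databases it holds remain intact.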

This approach, however, binds us to running our MySQL container on a single host, which is a big single point of failure.

If we have the resources available, we could make sure that the host where we are running our MySQL container has multiple redundancies, such as a number of hard drives in a RAID configuration that allows us to weather more than one drive failure. We can have multiple power supply units (PSUs) fed by different power feeds, so if we have any problems with the power from one of our feeds, the host machine stays online.

We can do the same with the networking on the host machine: NICs plugged into different switches, fed by different power feeds and network providers.

While this does leave us with a lot of redundancy, we are still left with a single host machine, which is now getting quite expensive, as all of this redundancy, with multiple drives, networking, and power feeds, adds cost on top of what we are already paying for our host machine.

So, what's the solution?