Master Docker and leverage its power in your day-to-day workflow
Docker has been a game-changer when it comes to how modern applications are created and deployed. It has now grown into a key driver of innovation beyond system administration, with an impact on the world of web development. But how can you make sure you're keeping up with the innovations it's driving, or be sure you're using it to its full potential? Mastering Docker shows you how; this book not only demonstrates how to use Docker more effectively, but also helps you rethink and reimagine what's possible with it.
You will cover concepts such as building, managing, and storing images, along with best practices to make you confident, before delving into Docker security. You'll find everything related to extending and integrating Docker in new and innovative ways. Docker Compose, Docker Swarm, and Kubernetes will help you take control of your containers in an efficient manner.
By the end of the book, you will have a broad, yet detailed, sense of what's possible with Docker, and how seamlessly it fits in with a range of other platforms and tools.
If you are an IT professional and recognize Docker's importance for innovation in everything from system administration to web development, but aren't sure how to use it to its full potential, Mastering Docker is for you.
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Vijin Boricha
Acquisition Editor: Shrilekha Inani
Content Development Editor: Sharon Raj
Technical Editor: Mohit Hassija
Copy Editor: Safis Editing
Project Coordinator: Drashti Panchal
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Graphics: Tom Scaria
Production Coordinator: Shantanu Zagade
First published: December 2015
Second edition: July 2017
Third edition: October 2018
Production reference: 1231018
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78961-660-6
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Russ McKendrick is an experienced system administrator who has been working in IT and related industries for over 25 years. During his career, he has had varied responsibilities, from looking after an entire IT infrastructure to providing first-line, second-line, and senior support in both client-facing and internal teams for large organizations.
Russ supports open source systems and tools on public and private clouds at Node4 Limited, where he is the Practice Manager (SRE and DevOps).
Scott Gallagher has been fascinated with technology since he played Oregon Trail in elementary school. His love for it continued through middle school as he worked on more Apple IIe computers. In high school, he learned how to build computers and program in BASIC. His college years were all about server technologies such as Novell, Microsoft, and Red Hat. After college, he continued to work on Novell, all the while maintaining an interest in all technologies. He then moved on to manage Microsoft environments and, eventually, what he was most passionate about: Linux environments. Now, his focus is on Docker and cloud environments.
Paul Adamson has worked as an Ops engineer, a developer, a DevOps engineer, and all variations and mixes of all of these. When not reviewing this book, Paul keeps busy helping companies embrace the AWS infrastructure. His language of choice is PHP for all the good reasons and even some of the bad, but mainly because of habit. While reviewing this book, Paul has been working for Healthy Performance Ltd, helping to apply cutting-edge technology to a cutting-edge approach to well-being.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Mastering Docker Third Edition
Packt Upsell
Why subscribe?
Packt.com
Contributors
About the authors
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Code in Action
Conventions used
Get in touch
Reviews
Docker Overview
Technical requirements
Understanding Docker
Developers
The problem
The Docker solution
Operators
The problem
The Docker solution
Enterprise
The problem
The Docker solution
The differences between dedicated hosts, virtual machines, and Docker
Docker installation
Installing Docker on Linux (Ubuntu 18.04)
Installing Docker on macOS
Installing Docker on Windows 10 Professional
Older operating systems
The Docker command-line client
Docker and the container ecosystem
Open source projects
Docker CE and Docker EE
Docker, Inc.
Summary
Questions
Further reading
Building Container Images
Technical requirements
Introducing the Dockerfile
Reviewing the Dockerfile in depth
FROM
LABEL
RUN
COPY and ADD
EXPOSE
ENTRYPOINT and CMD
Other Dockerfile instructions
USER
WORKDIR
ONBUILD
ENV
Dockerfiles – best practices
Building container images
Using a Dockerfile to build a container image
Using an existing container
Building a container image from scratch
Using environmental variables
Using multi-stage builds
Summary
Questions
Further reading
Storing and Distributing Images
Technical requirements
Docker Hub
Dashboard
Explore
Organizations
Create
Profile and settings
Other menu options
Creating an automated build
Setting up your code
Setting up Docker Hub
Pushing your own image
Docker Store
Docker Registry
An overview of Docker Registry
Deploying your own registry
Docker Trusted Registry
Third-party registries
Microbadger
Summary
Questions
Further reading
Managing Containers
Technical requirements
Docker container commands
The basics
Interacting with your containers
attach
exec
Logs and process information
logs
top
stats
Resource limits
Container states and miscellaneous commands
Pause and unpause
Stop, start, restart, and kill
Removing containers
Miscellaneous commands
Docker networking and volumes
Docker networking
Docker volumes
Summary
Questions
Further reading
Docker Compose
Technical requirements
Introducing Docker Compose
Our first Docker Compose application
Docker Compose YAML file
Moby counter application
Example voting application
Docker Compose commands
Up and PS
Config
Pull, build, and create
Start, stop, restart, pause, and unpause
Top, logs, and events
Scale
Kill, rm, and down
Docker App
Summary
Questions
Further reading
Windows Containers
Technical requirements
An introduction to Windows containers
Setting up your Docker host for Windows containers
Windows 10 Professional
macOS and Linux
Running Windows containers
A Windows container Dockerfile
Windows containers and Docker Compose
Summary
Questions
Further reading
Docker Machine
Technical requirements
An introduction to Docker Machine
Deploying local Docker hosts with Docker Machine
Launching Docker hosts in the cloud
Using other base operating systems
Summary
Questions
Further reading
Docker Swarm
Technical requirements
Introducing Docker Swarm
Roles within a Docker Swarm cluster
Swarm manager
Swarm worker
Creating and managing a Swarm
Creating a cluster
Adding a Swarm manager to the cluster
Joining Swarm workers to the cluster
Listing nodes
Managing a cluster
Finding information on the cluster
Promoting a worker node
Demoting a manager node
Draining a node
Docker Swarm services and stacks
Services
Stacks
Deleting a Swarm cluster
Load balancing, overlays, and scheduling
Ingress load balancing
Network overlays
Scheduling
Summary
Questions
Further reading
Docker and Kubernetes
Technical requirements
An introduction to Kubernetes
A brief history of containers at Google
An overview of Kubernetes
Kubernetes and Docker
Enabling Kubernetes
Using Kubernetes
Kubernetes and other Docker tools
Summary
Questions
Further reading
Running Docker in Public Clouds
Technical requirements
Docker Cloud
Docker on-cloud
Docker Community Edition for AWS
Docker Community Edition for Azure
Docker for Cloud summary
Amazon ECS and AWS Fargate
Microsoft Azure App Services
Kubernetes in Microsoft Azure, Google Cloud, and Amazon Web Services
Azure Kubernetes Service
Google Kubernetes Engine 
Amazon Elastic Container Service for Kubernetes
Kubernetes summary
Summary
Questions
Further reading
Portainer - A GUI for Docker
Technical requirements
The road to Portainer
Getting Portainer up and running
Using Portainer
The Dashboard
Application templates
Containers
Stats
Logs
Console
Images
Networks and volumes
Networks
Volumes
Events
Engine
Portainer and Docker Swarm
Creating the Swarm
The Portainer service
Swarm differences
Endpoints
Dashboard and Swarm
Stacks
Services
Adding endpoints
Summary
Questions
Further reading
Docker Security
Technical requirements
Container considerations
The advantages
Your Docker host
Image trust
Docker commands
run command
diff command
Best practices
Docker best practices
The Center for Internet Security benchmark
Host configuration
Docker daemon configuration
Docker daemon configuration files
Container images/runtime and build files
Container runtime
Docker security operations
The Docker Bench Security application
Running the tool on Docker for macOS and Docker for Windows
Running on Ubuntu Linux
Understanding the output
Host configuration
Docker daemon configuration
Docker daemon configuration files
Container images and build files
Container runtime
Docker security operations
Docker Swarm configuration
Summing up Docker Bench
Third-party security services
Quay
Clair
Anchore
Summary
Questions
Further reading
Docker Workflows
Technical requirements
Docker for development
Monitoring
Extending to external platforms
Heroku
What does production look like?
Docker hosts
Mixing of processes
Multiple isolated Docker hosts
Routing to your containers
Clustering
Compatibility
Reference architectures
Cluster communication
Image registries
Summary
Questions
Further reading
Next Steps with Docker
The Moby Project
Contributing to Docker
Contributing to the code
Offering Docker support
Other contributions
The Cloud Native Computing Foundation
Graduated projects
Incubating projects
The CNCF landscape
Summary
Assessments
Chapter 1, Docker Overview
Chapter 2, Building Container Images
Chapter 3, Storing and Distributing Images
Chapter 4, Managing Containers
Chapter 5, Docker Compose
Chapter 6, Windows Containers
Chapter 7, Docker Machine
Chapter 8, Docker Swarm
Chapter 9, Docker and Kubernetes
Chapter 10, Running Docker in Public Clouds
Chapter 11, Portainer - A GUI for Docker
Chapter 12, Docker Security
Chapter 13, Docker Workflows
Other Books You May Enjoy
Leave a review - let other readers know what you think
Docker has been a game-changer when it comes to how modern applications are deployed and architected. It has now grown into a key driver of innovation beyond system administration, and it has an impact on the world of web development and more. But how can you make sure you're keeping up with the innovations it's driving? How can you be sure you're using it to its full potential?
This book shows you how; it not only demonstrates how to use Docker more effectively, it also helps you rethink and re-imagine what's possible with Docker.
You will also cover basic topics, such as building, managing, and storing images, along with best practices to make you confident before delving into Docker security. You'll find everything related to extending and integrating Docker in new and innovative ways. Docker Compose, Docker Swarm, and Kubernetes will help you take control of your containers in an efficient way.
By the end of the book, you will have a broad and detailed sense of exactly what's possible with Docker and how seamlessly it fits into your local workflow, as well as to highly available public cloud platforms and other tools.
If you are an IT professional and recognize Docker's importance in innovation in everything from system administration to web development, but aren't sure how to use it to its full potential, this book is for you.
Chapter 1, Docker Overview, discusses where Docker came from, and what it means to developers, operators, and enterprises.
Chapter 2, Building Container Images, looks at the various ways in which you can build your own container images.
Chapter 3, Storing and Distributing Images, looks at how we can share and distribute images, now that we know how to build them.
Chapter 4, Managing Containers, takes a deep dive into learning how to manage containers.
Chapter 5, Docker Compose, looks at Docker Compose—a tool that allows us to share applications comprising multiple containers.
Chapter 6, Windows Containers, explains that, traditionally, containers have been a Linux-based tool. Working with Docker, Microsoft has now introduced Windows containers. In this chapter, we will look at the differences between the two types of containers.
Chapter 7, Docker Machine, looks at Docker Machine, a tool that allows you to launch and manage Docker hosts on various platforms.
Chapter 8, Docker Swarm, notes that we have been targeting single Docker hosts until this point. Docker Swarm is a clustering technology from Docker that allows you to run your containers across multiple hosts.
Chapter 9, Docker and Kubernetes, takes a look at Kubernetes. Like Docker Swarm, you can use Kubernetes to create and manage clusters that run your container-based applications.
Chapter 10, Running Docker in Public Clouds, looks at using the tools provided by Docker to launch a Docker Swarm cluster in Amazon Web Services, and also Microsoft Azure. We will then look at the container solutions offered by Amazon Web Services, Microsoft Azure, and Google Cloud.
Chapter 11, Portainer - A GUI for Docker, explains that most of our interaction with Docker has been on the command line. Here, we will take a look at Portainer, a tool that allows you to manage Docker resources from a web interface.
Chapter 12, Docker Security, takes a look at Docker security. We will cover everything from the Docker host, to how you launch your images, to where you get them from, and also the contents of your images.
Chapter 13, Docker Workflows, starts to put all the pieces together so that you can start using Docker in your production environments and feel comfortable doing so.
Chapter 14, Next Steps with Docker, looks not only at how you can contribute to Docker but also at the larger ecosystem that has sprung up to support container-based applications and deployments.
To get the most out of this book you will need a machine capable of running Docker. This machine should have at least 8 GB RAM and 30 GB HDD free with an Intel i3 or above, running one of the following OSes:
macOS High Sierra or above
Windows 10 Professional
Ubuntu 18.04
Also, you will need access to one or all of the following public cloud providers: DigitalOcean, Amazon Web Services, Microsoft Azure, and Google Cloud.
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packt.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Mastering-Docker-Third-Edition. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781789616606_ColorImages.pdf.
Visit the following link to check out videos of the code being run: http://bit.ly/2PUB9ww
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "The first file is nginx.conf, which contains a basic nginx configuration file."
A block of code is set as follows:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}
Any command-line input or output is written as follows:
$ docker image inspect <IMAGE_ID>
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Upon clicking on Create, you will be taken to a screen similar to the next screenshot."
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
Welcome to Mastering Docker, Third Edition! This first chapter will cover the Docker basics that you should already have a pretty good handle on. But if you don't already have the required knowledge at this point, this chapter will help you with the basics, so that subsequent chapters don't feel as heavy. By the end of the book, you should be a Docker master, and will be able to implement Docker in your environments, building and supporting applications on top of them.
In this chapter, we're going to review the following high-level topics:
Understanding Docker
The differences between dedicated hosts, virtual machines, and Docker
Docker installers/installation
The Docker command
The Docker and container ecosystem
In this chapter, we are going to discuss how to install Docker locally. To do this, you will need a host running one of the three following operating systems:
macOS High Sierra and above
Windows 10 Professional
Ubuntu 18.04
Check out the following video to see the Code in Action:
http://bit.ly/2NXf3rd
Before we look at installing Docker, let's begin by getting an understanding of the problems that the Docker technology aims to solve.
The company behind Docker has always described the program as fixing the "it works on my machine" problem. This problem is best summed up by an image, based on the Disaster Girl meme, with the tagline Worked fine in dev, ops problem now, which started popping up in presentations, forums, and Slack channels a few years ago. While it is funny, it is unfortunately an all-too-real problem, and one I have personally been on the receiving end of. Let's take a look at an example of what is meant by this.
Even in a world where DevOps best practices are followed, it is still all too easy for a developer's working environment to not match the final production environment.
For example, a developer using the macOS version of, say, PHP will probably not be running the same version as the Linux server that hosts the production code. Even if the versions match, you then have to deal with differences in the configuration and overall environment on which the version of PHP is running, such as differences in the way file permissions are handled between different operating system versions, to name just one potential problem.
All of this comes to a head when it is time for a developer to deploy their code to the host and it doesn't work. So, should the production environment be configured to match the developer's machine, or should developers only do their work in environments that match those used in production?
In an ideal world, everything should be consistent, from the developer's laptop all the way through to your production servers; however, this utopia has traditionally been difficult to achieve. Everyone has their way of working and their own personal preferences—enforcing consistency across multiple platforms is difficult enough when there is a single engineer working on the systems, let alone a team of engineers working with a team of potentially hundreds of developers.
Using Docker for Mac or Docker for Windows, a developer can easily wrap their code in a container that they have either defined themselves, or created as a Dockerfile while working alongside a sys-admin or operations team. We will be covering this in Chapter 2, Building Container Images, as well as Docker Compose files, which we will go into more detail about in Chapter 5, Docker Compose.
They can continue to use their chosen IDE and maintain their workflows when working with the code. As we will see in the upcoming sections of this chapter, installing and using Docker is not difficult; in fact, considering how much of a chore it was to maintain consistent environments in the past, even with automation, Docker feels a little too easy—almost like cheating.
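To give a feel for how little is involved, here is a minimal sketch of a Dockerfile for the PHP scenario described earlier; the php:7.2-apache base image and the src/ directory are assumptions made for this example, not taken from a real project:

FROM php:7.2-apache
COPY src/ /var/www/html/

With just these two instructions, every member of the team, and the production hosts, run the same version of PHP with the same configuration, regardless of the operating system underneath.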
I have been working in operations for more years than I would like to admit, and the following problem has cropped up regularly.
Let's say you are looking after five servers: three load-balanced web servers, and two database servers in a master-slave configuration, dedicated to running Application 1. You are using a tool, such as Puppet or Chef, to automatically manage the software stack and configuration across your five servers.
Everything is going great, until you are told, We need to deploy Application 2 on the same servers that are running Application 1. On the face of it, this is no problem: you can tweak your Puppet or Chef configuration to add new users and vhosts, pull the new code down, and so on. However, you notice that Application 2 requires a higher version of the software that you are running for Application 1.
To make matters worse, you already know that Application 1 flat out refuses to work with the new software stack, and that Application 2 is not backwards compatible.
Traditionally, this leaves you with a few choices, all of which just add to the problem in one way or another:
Ask for more servers? While this traditionally is probably the safest technical solution, it does not automatically mean that there will be the budget for additional resources.
Re-architect the solution? Taking one of the web and database servers out of the load balancer or replication, and redeploying them with the software stack for Application 2, may seem like the next easiest option from a technical point of view. However, you are introducing single points of failure for Application 2, and also reducing the redundancy for Application 1: there was probably a reason why you were running three web and two database servers in the first place.
Attempt to install the new software stack side-by-side on your servers? Well, this certainly is possible and may seem like a good short-term plan to get the project out of the door, but it could leave you with a house of cards that could come tumbling down when the first critical security patch is needed for either software stack.
This is where Docker starts to come into its own. If you have Application 1 running across your three web servers in containers, you may actually be running more than three containers; in fact, you could already be running six, doubling up on the containers, allowing you to run rolling deployments of your application without reducing the availability of Application 1.
Deploying Application 2 in this environment is as easy as simply launching more containers across your three hosts and then routing to the newly deployed application using your load balancer. As you are just deploying containers, you do not need to worry about the logistics of deploying, configuring, and managing two versions of the same software stack on the same server.
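As a rough sketch of what this looks like in practice (the container names, ports, and image tags here are purely illustrative), running the two conflicting stacks side by side on a single host comes down to two commands:

$ docker container run -d --name app1 -p 8081:80 php:7.1-apache
$ docker container run -d --name app2 -p 8082:80 php:7.2-apache

Each container carries its own software stack, so neither application can interfere with the other's dependencies, and your load balancer simply routes traffic to the appropriate port.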
We will work through an example of this exact scenario in Chapter 5, Docker Compose.
Enterprises suffer from the same problems described previously, as they have both developers and operators; however, they have both of these entities on a much larger scale, and there is also a lot more risk involved.
Because of the aforementioned risk, along with the fact that any downtime could cost sales or impact reputation, enterprises need to test every deployment before it is released. This means that new features and fixes are stuck in a holding pattern while the following takes place:
Test environments are spun up and configured
Applications are deployed across the newly launched environments
Test plans are executed and the application and configuration are tweaked until the tests pass
Requests for change are written, submitted, and discussed to get the updated application deployed to production
This process can take anywhere from a few days to a few weeks, or even months, depending on the complexity of the application and the risk the change introduces. While the process is required to ensure continuity and availability for the enterprise at a technological level, it does potentially introduce risk at the business level. What if you have a new feature stuck in this holding pattern and a competitor releases a similar, or worse still, the same feature ahead of you?
This scenario could be just as damaging to sales and reputation as the downtime that the process was put in place to protect you against in the first place.
Let me start by saying that Docker does not remove the need for a process, such as the one just described, to exist or be followed. However, as we have already touched upon, it does make things a lot easier as you are already working consistently. It means that your developers have been working with the same container configuration that is running in production. This means that it is not much of a step for the methodology to be applied to your testing.
For example, when a developer checks in code that they know works in their local development environment (as that is where they have been doing all of their work), your testing tool can launch the same containers to run your automated tests against. Once the containers have been used, they can be removed to free up resources for the next lot of tests. This means that, all of a sudden, your testing process and procedures are a lot more flexible, and you can continue to reuse the same environment, rather than redeploying or reimaging servers for the next set of testing.
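A minimal sketch of this pattern, assuming a hypothetical my-app image that contains a run-tests.sh script, might look like the following:

$ docker container run --rm my-app ./run-tests.sh

The --rm flag removes the container as soon as the tests finish, so every test run starts from a clean, identical environment.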
This streamlining of the process can be taken as far as having your new application containers push all the way through to production.
The quicker this process can be completed, the quicker you can confidently launch new features or fixes and keep ahead of the curve.
So, we know what problems Docker was developed to solve. We now need to discuss what exactly Docker is and what it does.
Docker is a container management system that helps us manage Linux Containers (LXC) in an easy and universal fashion. This lets you create images in virtual environments on your laptop and run commands against them. The actions you perform on the containers, running in these environments locally on your machine, will be the same commands or operations that you run against them when they are running in your production environment.
This helps us in that you don't have to do things differently when you go from a development environment, such as the one on your local machine, to a production environment on your server. Now, let's take a look at the differences between Docker containers and typical virtual machine environments.
The following diagram demonstrates the difference between a dedicated, bare-metal server and a server running virtual machines:
As you can see, for a dedicated machine we have three applications, all sharing the same orange software stack. Running virtual machines allows us to run three applications on two completely different software stacks. The following diagram shows the same orange and green applications running in containers using Docker:
This diagram gives us a lot of insight into the single biggest benefit of Docker: there is no need for a complete operating system every time we need to bring up a new container, which cuts down on the overall size of containers. Since almost all versions of Linux use the same standard kernel model, Docker relies on using the host operating system's Linux kernel, regardless of the Linux distribution the container image was built upon, such as Red Hat, CentOS, or Ubuntu.
For this reason, you can have almost any Linux operating system as your host operating system and be able to layer other Linux-based operating systems on top of the host. That is, your applications are led to believe that a full operating system is installed, but in reality we only install the binaries, such as a package manager and, for example, Apache/PHP, plus the libraries required to provide just enough of an operating system for your applications to run.
For example, in the earlier diagram, we could have Red Hat running for the orange application, and Debian running for the green application, but there would never be a need to actually install Red Hat or Debian on the host. Thus, another benefit of Docker is the size of images when they are created. They are built without the largest piece: the kernel or the operating system. This makes them incredibly small, compact, and easy to ship.
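You can see this size benefit for yourself by pulling one of the smaller base images and checking its size; Alpine Linux, for example, weighs in at just a few megabytes:

$ docker image pull alpine
$ docker image ls alpine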
Installers are one of the first pieces you need to get up and running with Docker on both your local machine and your server environments. Let's first take a look at which environments you can install Docker in:
Linux (various Linux flavors)
macOS
Windows 10 Professional
In addition, you can run them on public clouds, such as Amazon Web Services, Microsoft Azure, and DigitalOcean, to name a few. With each of the installers listed previously, Docker actually operates somewhat differently on each operating system. For example, Docker runs natively on Linux, so if you are using Linux, then how Docker runs on your system is pretty straightforward. However, if you are using macOS or Windows 10, then it operates a little differently, since it relies on using Linux.
Let's look at quickly installing Docker on a Linux desktop running Ubuntu 18.04, and then on macOS and Windows 10.
As already mentioned, this is the most straightforward installation out of the three systems we will be looking at. To install Docker, simply run the following command from a Terminal session:
$ curl -sSL https://get.docker.com/ | sh
$ sudo systemctl start docker
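If you want to be certain that Docker starts automatically whenever the machine boots, you can also enable the service; on most systemd-based installations this is already the default, so treat this as an optional, belt-and-braces step:

$ sudo systemctl enable docker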
You will also be asked to add your current user to the Docker group. To do this, run the following command, making sure you replace the username with your own:
$ sudo usermod -aG docker username
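If you would rather not type your username, most shells expose it in the $USER variable, so the following achieves the same thing. In either case, you will need to log out and back in for the group change to take effect:

$ sudo usermod -aG docker $USER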
These commands will download, install, and configure the latest version of Docker from Docker themselves. At the time of writing, the version installed by the official install script is 18.06.
Running the following command should confirm that Docker is installed and running:
$ docker version
You should see something similar to the following output:
There are two supporting tools that we are going to use in future chapters, which are installed as part of the Docker for macOS or Windows 10 installers.
To ensure that we are ready to use these tools in later chapters, we should install them now. The first tool is Docker Machine. To install this, we first need to get the latest version number. You can find this by visiting the releases section of the project's GitHub page at https://github.com/docker/machine/releases/. At the time of writing, the version was 0.15.0—update the version number in the commands in the following code block with whatever the latest version is when you install it:
$ MACHINEVERSION=0.15.0
$ curl -L https://github.com/docker/machine/releases/download/v$MACHINEVERSION/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine
$ chmod +x /tmp/docker-machine
$ sudo mv /tmp/docker-machine /usr/local/bin/docker-machine
To download and install the next and final tool, Docker Compose, run the following commands, again checking that you are running the latest version by visiting the releases page at https://github.com/docker/compose/releases/:
$ COMPOSEVERSION=1.22.0
$ curl -L https://github.com/docker/compose/releases/download/$COMPOSEVERSION/docker-compose-`uname -s`-`uname -m` >/tmp/docker-compose
$ chmod +x /tmp/docker-compose
$ sudo mv /tmp/docker-compose /usr/local/bin/docker-compose
Once they're installed, you should be able to run the following two commands to confirm that the versions of the software are correct:
$ docker-machine version
$ docker-compose version
Unlike the command-line Linux installation, Docker for Mac has a graphical installer.
You can download the installer from the Docker store, at https://store.docker.com/editions/community/docker-ce-desktop-mac. Just click on the Get Docker link. Once it's downloaded, you should have a DMG file. Double-clicking on it will mount the image, and opening the image mounted on your desktop should present you with something like this:
Once you have dragged the Docker icon to your Applications folder, double-click on it and you will be asked whether you want to open the application you have downloaded. Clicking Yes will open the Docker installer, showing the following:
Click on Next and follow the onscreen instructions. Once it is installed and started, you should see a Docker icon in the top-left icon bar on your screen. Clicking on the icon and selecting About Docker should show you something similar to the following:
You can also open a Terminal window. Run the following command, just as we did in the Linux installation:
$ docker version
You should see something similar to the following Terminal output:
You can also run the following commands to check the versions of Docker Compose and Docker Machine that were installed alongside Docker Engine:
$ docker-compose version
$ docker-machine version
Like Docker for Mac, Docker for Windows uses a graphical installer.
You can download the Docker for Windows installer from the Docker store at https://store.docker.com/editions/community/docker-ce-desktop-windows/. Just click on the Get Docker button to download the installer. Once it's downloaded, run the MSI package and you will be greeted with the following:
Click on Yes, and then follow the onscreen prompts, which will go through not only installing Docker, but also enabling Hyper-V, if you do not already have it enabled.
Once it's installed, you should see a Docker icon in the icon tray in the bottom right of your screen. Clicking on it and selecting About Docker from the menu will show the following:
Open a PowerShell window and type the following command:
$ docker version
This should also show you similar output to the Mac and Linux versions:
Again, you can also run the following commands to check the versions of Docker Compose and Docker Machine that were installed alongside Docker Engine:
$ docker-compose version
$ docker-machine version
Again, you should see a similar output to the macOS and Linux versions. As you may have started to gather, once the packages are installed, their usage is going to be pretty similar. This will be covered in greater detail later in this chapter.
If you are not running a sufficiently new operating system on Mac or Windows, then you will need to use Docker Toolbox. Consider the output printed from running the following command:
$ docker version
On all three of the installations we have performed so far, it shows two different versions, a client and server. Predictably, the Linux version shows that the architecture for the client and server are both Linux; however, you may notice that the Mac version shows the client is running on Darwin, which is Apple's Unix-like kernel, and the Windows version shows Windows. Yet both of the servers show the architecture as being Linux, so what gives?
That is because both the Mac and Windows versions of Docker download and run a virtual machine in the background, and this virtual machine runs a small, lightweight operating system based on Alpine Linux. The virtual machine runs using Docker's own libraries, which connect to the built-in hypervisor for your chosen environment.
For macOS, this is the built-in Hypervisor.framework, and for Windows, Hyper-V.
To ensure that no one misses out on the Docker experience, a version of Docker that does not use these built-in hypervisors is available for older versions of macOS and unsupported Windows versions. These versions utilize VirtualBox as the hypervisor to run the Linux server for your local client to connect to.
For more information on Docker Toolbox, see the project's website at https://www.docker.com/products/docker-toolbox/, where you can also download the macOS and Windows installers.
Now that we have Docker installed, let's look at some Docker commands that you should be familiar with already. We will start with some common commands and then take a peek at the commands that are used for the Docker images. We will then take a dive into the commands that are used for the containers.
The first command we will be taking a look at is one of the most useful commands, not only in Docker, but in any command-line utility you use—the help command. It is run simply like this:
$ docker help
This command will give you a full list of all of the Docker commands at your disposal, along with a brief description of what each command does. For further help with a particular command, you can run the following:
$ docker <COMMAND> --help
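For example, to list the options available for working with images, you can run the following:

$ docker image --help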
Next, let's run the hello-world container. To do this, simply run the following command:
$ docker container run hello-world
It doesn't matter which host you are running Docker on; the same thing will happen on Linux, macOS, and Windows. Docker will download the hello-world container image and then execute it, and once it's executed, the container will be stopped.
Your Terminal session should look like the following:
Let's try something a little more adventurous: let's download and run an nginx container with the following two commands:
$ docker image pull nginx
$ docker container run -d --name nginx-test -p 8080:80 nginx
The first of the two commands downloads the nginx container image, and the second command launches a container in the background, called nginx-test, using the nginx image we pulled. It also maps port 8080 on our host machine to port 80 on the container, making it accessible to our local browser at http://localhost:8080/.
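If you would rather verify this from the command line than in a browser, a quick check with curl (assuming curl is installed on your host) should return the HTML of the default nginx welcome page:

$ curl http://localhost:8080/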
As you can see from the following screenshots, the command and results are exactly the same on all three OS types. Here we have Linux:
This is the result on macOS:
And this is how it looks on Windows:
In the following three chapters, we will look at using the Docker command-line client in more detail. For now, let's stop and remove our nginx-test container by running the following:
$ docker container stop nginx-test
$ docker container rm nginx-test
As you can see, the experience of running a simple nginx container on all three of the hosts on which we have installed Docker is exactly the same. As I am sure you can imagine, trying to achieve this without something like Docker across all three platforms is a challenge, and the experience would also be very different on each platform. Traditionally, this has been one of the reasons for the differences in local development environments.