Description

Most applications, even the funky cloud-native microservices ones, need high-performance, production-grade infrastructure to run on. Having impeccable knowledge of Docker will help you thrive in the modern cloud-first world. With this book, you will gain the skills you need in order to work with Docker and its containers.
The book begins with an introduction to containers and explains their functionality and application in the real world. You will then get an overview of VMware, Kubernetes, and Docker and learn to install Docker on Windows, Mac, and Linux. Once you have understood the Ops and Dev perspective of Docker, you will be able to see the big picture and understand what Docker exactly does. The book then turns its attention to the more technical aspects, guiding you through practical exercises covering Docker engine, Docker images, and Docker containers. You will learn techniques for containerizing an app, deploying apps with Docker Compose, and managing cloud-native applications with Swarm. You will also build Docker networks and Docker overlay networks and handle applications that write persistent data. Finally, you will deploy apps with Docker stacks and secure your Docker environment.
By the end of this book, you will be well-versed in Docker and containers and have developed the skills to create, deploy, and run applications on the cloud.




Docker Deep Dive

Zero to Docker in a single book!

June 2023 Edition

Weapons-grade container learning!

Nigel Poulton @nigelpoulton

© 2016 - 2023 Nigel Poulton

About this edition

This edition was published in June 2023.

In writing this edition, I've gone over every word in every chapter to ensure everything is up-to-date with the latest versions of Docker and the latest trends in the industry. I've also removed repetition and made every chapter more concise.

Major changes include:

- Added sections on multi-platform builds with buildx.

- Updated the Compose chapter to be in-line with Compose Spec.

- New example apps.

- Updated all images to higher quality.

- Added Multipass as a simple way to get a Docker lab.

Enjoy the book and get ready to master containers!

(c) 2023 Nigel Poulton

Huge thanks to my wife and kids for putting up with a geek in the house who genuinely thinks he’s a bunch of software running inside of a container on top of midrange biological hardware. It can’t be easy living with me!

Massive thanks as well to everyone who watches my Pluralsight videos. I love connecting with you and really appreciate all the feedback I’ve gotten over the years. This was one of the major reasons I decided to write this book! I hope it’ll be an amazing tool to help you drive your careers even further forward.

Author: Nigel Poulton

Nigel is a technology geek who spends his life creating books, training videos, and online hands-on training. He is the author of best-selling books on Docker and Kubernetes, and the most popular online training videos on the same topics. Nigel is a Docker Captain and is always playing with new technology - his latest interest is WebAssembly on the server (Wasm). Previously, Nigel held various senior infrastructure roles within large enterprises.

He is fascinated with technology and often daydreams about it. In his free time, he enjoys reading and watching science fiction. He wishes he lived in the future and could explore space-time, the universe, and other mind-bending phenomena. He is passionate about learning, cars, and football (soccer). He lives in England with his fabulous wife and three children.

Table of Contents

Part 1: The big picture stuff

1: Containers from 30,000 feet

The bad old days

Hello VMware!

VMwarts

Hello Containers!

Linux containers

Hello Docker!

Docker and Windows

Windows containers vs Linux containers

What about Mac containers?

What about Kubernetes?

Chapter Summary

2: Docker

Docker - The TLDR

Docker, Inc.

The Docker technology

The Open Container Initiative (OCI)

Chapter summary

3: Installing Docker

Docker Desktop

Installing Docker with Multipass

Installing Docker on Linux

Play with Docker

Chapter Summary

4: The big picture

The Ops Perspective

The Dev Perspective

Chapter Summary

Part 2: The technical stuff

5: The Docker Engine

Docker Engine - The TLDR

Docker Engine - The Deep Dive

Chapter summary

6: Images

Docker images - The TLDR

Docker images - The deep dive

Images - The commands

Chapter summary

7: Containers

Docker containers - The TLDR

Docker containers - The deep dive

Containers - The commands

Chapter summary

8: Containerizing an app

Containerizing an app - The TLDR

Containerizing an app - The deep dive

Containerizing an app - The commands

Chapter summary

9: Multi-container apps with Compose

Deploying apps with Compose - The TLDR

Deploying apps with Compose - The Deep Dive

Deploying apps with Compose - The commands

Chapter Summary

10: Docker Swarm

Docker Swarm - The TLDR

Docker Swarm - The Deep Dive

Docker Swarm - The Commands

Chapter summary

11: Docker Networking

Docker Networking - The TLDR

Docker Networking - The Deep Dive

Docker Networking - The Commands

Chapter Summary

12: Docker overlay networking

Docker overlay networking - The TLDR

Docker overlay networking - The deep dive

Docker overlay networking - The commands

Chapter Summary

13: Volumes and persistent data

Volumes and persistent data - The TLDR

Volumes and persistent data - The Deep Dive

Volumes and persistent data - The Commands

Chapter Summary

14: Deploying apps with Docker Stacks

Deploying apps with Docker Stacks - The TLDR

Deploying apps with Docker Stacks - The Deep Dive

Deploying apps with Docker Stacks - The Commands

Chapter Summary

15: Security in Docker

Security in Docker - The TLDR

Security in Docker - The deep dive

Chapter Summary

16: What next

Feedback and reviews

Guide

Begin Reading

0: About the book

This is a book about Docker, no prior knowledge required. In fact, the motto of the book is Zero to Docker in a single book.

So, if you’re involved in the development and operations of cloud-native microservices apps and need to learn Docker, or if you want to be involved in that stuff, this book is dedicated to you.

Why should I read this book or care about Docker?

Docker is here and there’s no point hiding. If you want the best jobs working on the best technologies, you need to know Docker and containers. Docker and containers are central to Kubernetes, and knowing how they work will help you learn Kubernetes. They’re also positioned well for emerging cloud technologies such as WebAssembly on the server.

What if I’m not a developer

If you think Docker is just for developers, prepare to have your world turned upside-down.

Most applications, even the funky cloud-native microservices ones, need high-performance production-grade infrastructure to run on. If you think traditional developers are going to take care of that, think again. To cut a long story short, if you want to thrive in the modern cloud-first world, you need to know Docker. But don’t stress, this book will give you all the skills you need.

Should I buy the book if I’ve already watched your video training courses?

The choice is yours, but I normally recommend people watch my videos and read my books. And no, it’s not to make me rich. Learning via different mediums is a proven way to learn fast. So, I recommend you read my books, watch my videos, and get as much hands-on experience as possible.

Also, if you like my video courses you’ll probably like the book. If you don’t like my video courses you probably won’t like the book.

If you haven’t watched my video courses, you should! They’re fast-paced, lots of fun, and get rave reviews.

How the book is organized

I’ve divided the book into two sections:

- The big picture stuff
- The technical stuff

The big picture stuff covers things like:

- What is Docker
- Why do we have containers
- What does jargon like “cloud-native” and “microservices” mean…

It’s the kind of stuff that you need to know if you want a rounded knowledge of Docker and containers.

The technical stuff is where you’ll find everything you need to start working with Docker. It gets into the detail of images, containers, and the increasingly important topic of orchestration. It even covers the stuff that enterprises love: TLS, image signing, high availability, backups, and more.

Each chapter covers theory and includes plenty of commands and examples.

Most of the chapters in the technical stuff section are divided into three parts:

- The TLDR
- The Deep Dive
- The Commands

The TLDR gives you two or three paragraphs that you can use to explain the topic at the coffee machine. They’re also a great place to remind you what something is about.

The Deep Dive explains how things work and gives examples.

The Commands lists all the relevant commands in an easy-to-read list with brief reminders of what each one does.

I think you’ll love that format.

Editions of the book

Docker and the cloud-native ecosystem are developing fast. As a result, I’m committed to updating the book approximately every year.

If that sounds excessive, welcome to the new normal.

We no longer live in a world where a 4-year-old book on a technology like Docker is valuable. That makes my life as an author really hard, but I’m not going to argue with the truth.

Having problems getting the latest updates on your Kindle?

It’s come to my attention that Kindle doesn’t always download the latest version of the book. To fix this:

Go to http://amzn.to/2l53jdg

1. Under Quick Solutions (on the left), select Digital Purchases.
2. Search for your purchase of the Docker Deep Dive Kindle edition and select Content and Devices.
3. Your purchase should show up in the list with a button that says “Update Available”. Click that button.
4. Delete your old version on your Kindle and download the new one.

If this doesn’t work, contact Kindle support and they’ll resolve the issue for you. https://kdp.amazon.com/en_US/self-publishing/contact-us/.

Leave a review

Last but not least… be a legend and write a quick review on Amazon and Goodreads. You can even do this if you bought the book from a different reseller.

That’s everything. Let’s get rocking with Docker!

Part 1: The big picture stuff

1: Containers from 30,000 feet

Containers have taken over the world!

In this chapter we’ll get into things like why we have containers, what they do for us, and where we can use them.

The bad old days

Applications are at the heart of businesses. If applications break, businesses break. Sometimes they even go bust. These statements get truer every day!

Most applications run on servers. In the past we could only run one application per server. The open-systems world of Windows and Linux just didn’t have the technologies to safely and securely run multiple applications on the same server.

As a result, the story went something like this… Every time the business needed a new application, the IT department would buy a new server. Most of the time nobody knew the performance requirements of the new application, forcing the IT department to make guesses when choosing the model and size of the server to buy.

As a result, IT did the only thing it could do — it bought big fast servers that cost a lot of money. After all, the last thing anyone wanted, including the business, was under-powered servers unable to execute transactions and potentially losing customers and revenue. So, IT bought big. This resulted in over-powered servers operating as low as 5-10% of their potential capacity. A tragic waste of company capital and environmental resources!

Hello VMware!

Amid all of this, VMware, Inc. gave the world a gift — the virtual machine (VM). And almost overnight, the world changed into a much better place. We finally had a technology that allowed us to run multiple business applications safely on a single server. Cue wild celebrations!

This was a game changer. IT departments no longer needed to procure a brand-new oversized server every time the business needed a new application. More often than not, they could run new apps on existing servers that were sitting around with spare capacity.

All of a sudden, we could squeeze massive amounts of value out of existing corporate assets, resulting in a lot more bang for the company’s buck ($).

VMwarts

But… and there’s always a but! As great as VMs are, they’re far from perfect!

The fact that every VM requires its own dedicated operating system (OS) is a major flaw. Every OS consumes CPU, RAM and other resources that could otherwise be used to power more applications. Every OS needs patching and monitoring. And in some cases, every OS requires a license. All of this results in wasted time and resources.

The VM model has other challenges too. VMs are slow to boot, and portability isn’t great — migrating and moving VM workloads between hypervisors and cloud platforms is harder than it needs to be.

Hello Containers!

For a long time, the big web-scale players, like Google, have been using container technologies to address the shortcomings of the VM model.

In the container model, the container is roughly analogous to the VM. A major difference is that containers do not require their own full-blown OS. In fact, all containers on a single host share the host’s OS. This frees up huge amounts of system resources such as CPU, RAM, and storage. It also reduces potential licensing costs and reduces the overhead of OS patching and other maintenance. Net result: savings on the time, resource, and capital fronts.

Containers are also fast to start and ultra-portable. Moving container workloads from your laptop, to the cloud, and then to VMs or bare metal in your data center is a breeze.

Linux containers

Modern containers started in the Linux world and are the product of an immense amount of work from a wide variety of people over a long period of time. Just as one example, Google LLC has contributed many container-related technologies to the Linux kernel. Without these, and other contributions, we wouldn’t have modern containers today.

Some of the major technologies that enabled the massive growth of containers in recent years include kernel namespaces, control groups, capabilities, and of course Docker. To re-emphasize what was said earlier — the modern container ecosystem is deeply indebted to the many individuals and organizations that laid the strong foundations that we currently build on. Thank you!
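To make kernel namespaces a little more concrete, here is a quick way to see them on any Linux machine, no Docker required. Every process already belongs to a set of namespaces, and a container is essentially a process dropped into its own freshly created set:

```shell
# List the namespaces the current shell process belongs to.
# Each symlink below (pid, net, mnt, uts, ipc, ...) is one kernel
# namespace; container runtimes create a fresh set per container.
ls -l /proc/self/ns
```

On a typical Linux host you’ll see entries such as pid, net, mnt, uts, and ipc, which are exactly the isolation primitives containers are built from.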

Despite all of this, containers remained complex and outside of the reach of most organizations. It wasn’t until Docker came along that containers were effectively democratized and accessible to the masses.

Note: There are many operating system virtualization technologies similar to containers that pre-date Docker and modern containers. Some even date back to System/360 on the Mainframe. BSD Jails and Solaris Zones are some other well-known examples of Unix-type container technologies. However, in this book we are restricting our conversation to modern containers made popular by Docker.

Hello Docker!

We’ll talk about Docker in a bit more detail in the next chapter. But for now, it’s enough to say that Docker was the magic that made Linux containers usable for mere mortals. Put another way, Docker, Inc. made containers simple!

Docker and Windows

Microsoft has worked extremely hard to bring Docker and container technologies to the Windows platform.

At the time of writing, Windows desktop and server platforms support both of the following:

- Windows containers
- Linux containers

Windows containers run Windows apps and require a host system with a Windows kernel. Windows 10 and Windows 11, as well as all modern versions of Windows Server, have native support for Windows containers.

Any Windows host running WSL 2 (the Windows Subsystem for Linux) can also run Linux containers. This makes Windows 10 and 11 great platforms for developing and testing Windows and Linux containers.

However, despite all of the work Microsoft has done developing Windows containers, the vast majority of containers are Linux containers. This is because Linux containers are smaller and faster, and the majority of tooling exists for Linux.

All of the examples in this edition of the book are Linux containers.

Windows containers vs Linux containers

It’s vital to understand that a container shares the kernel of the host it’s running on. This means containerized Windows apps need a host with a Windows kernel, whereas containerized Linux apps need a host with a Linux kernel. Only… it’s not always that simple.

As previously mentioned, it’s possible to run Linux containers on Windows machines with the WSL 2 backend installed.
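If you already have a Docker environment handy, a simple sketch like the following demonstrates the shared kernel. It uses the standard Docker CLI and the public alpine image; the kernel release reported inside the container matches the host it runs on:

```shell
# Compare the kernel seen by the host and by a container.
# Both commands should print the same kernel release, because
# the container shares the host's Linux kernel.
uname -r                          # kernel release on the host
docker run --rm alpine uname -r  # kernel release inside a container
```

Note that on Docker Desktop both values reflect the kernel of the hidden Linux VM rather than your Mac or Windows machine, which is the same point in different clothing: the "host" whose kernel the container shares is that VM.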

What about Mac containers?

There is currently no such thing as Mac containers.

However, you can run Linux containers on your Mac using Docker Desktop. This works by seamlessly running your containers inside of a lightweight Linux VM on your Mac. It’s extremely popular with developers, who can easily develop and test Linux containers on their Mac.

What about Kubernetes?

Kubernetes is an open-source project out of Google that has quickly emerged as the de facto orchestrator of containerized apps. That’s just a fancy way of saying Kubernetes is the most popular tool for deploying and managing containerized apps.

Note: A containerized app is an application running as a container.

Kubernetes used to use Docker as its default container runtime – the low-level technology that pulls images and starts and stops containers. However, modern Kubernetes clusters have a pluggable container runtime interface (CRI) that makes it easy to swap out different container runtimes. At the time of writing, most new Kubernetes clusters use containerd. We’ll cover more on containerd later in the book, but for now it’s enough to know that containerd is the small specialized part of Docker that does the low-level tasks of starting and stopping containers.
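If you have access to a Kubernetes cluster, you can check which runtime each node is using with a standard kubectl command; the wide node listing includes a runtime column:

```shell
# Show the container runtime used by each node in the cluster.
# The CONTAINER-RUNTIME column typically shows something like
# containerd://1.6.x on modern clusters.
kubectl get nodes -o wide
```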

Check out these resources if you need to learn Kubernetes. Quick Start Kubernetes is ~100 pages and will get you up-to-speed with Kubernetes in a day! The Kubernetes Book is a lot more comprehensive and will get you very close to being a Kubernetes expert.

Figure 1.1

Chapter Summary

We used to live in a world where every time the business needed a new application we had to buy a brand-new server. VMware came along and allowed us to drive more value out of new and existing IT assets. As good as VMware and the VM model are, they’re not perfect. Following the success of VMware and hypervisors came a newer, more efficient, and more portable virtualization technology called containers. But containers were initially hard to implement and were only found in the data centers of web giants that had Linux kernel engineers on staff. Docker came along and made containers easy and accessible to the masses.

Speaking of Docker… let’s go find who, why, and what Docker is!

2: Docker

No book or conversation about containers is complete without talking about Docker. But when we say “Docker”, we can be referring to either of the following:

- Docker, Inc. the company
- Docker the technology

Docker - The TLDR

Docker is software that runs on Linux and Windows. It creates, manages, and can even orchestrate containers. The software is currently built from various tools from the Moby open-source project. Docker, Inc. is the company that created the technology and continues to create technologies and solutions that make it easier to get the code on your laptop running in the cloud.

That’s the quick version. Let’s dive a bit deeper.

Docker, Inc.

Docker, Inc. is a technology company based out of San Francisco, founded by French-born American developer and entrepreneur Solomon Hykes. Solomon is no longer at the company.

Figure 2.1 Docker, Inc. logo.

The company started out as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, the dotCloud platform was built on Linux containers. To help create and manage these containers, they built an in-house tool that they eventually nick-named “Docker”. And that’s how the Docker technology was born!

It’s also interesting to know that the word “Docker” comes from a British expression meaning dock worker — somebody who loads and unloads cargo from ships.

In 2013 they got rid of the struggling PaaS side of the business, rebranded the company as “Docker, Inc.”, and focussed on bringing Docker and containers to the world. They’ve been immensely successful in this endeavour.

Throughout this book we’ll use the term “Docker, Inc.” when referring to Docker the company. All other uses of the term “Docker” will refer to the technology.

The Docker technology

When most people talk about Docker, they’re referring to the technology that runs containers. However, there are at least three things to be aware of when referring to Docker as a technology:

- The runtime
- The daemon (a.k.a. engine)
- The orchestrator

Figure 2.2 shows the three layers and will be a useful reference as we explain each component. We’ll get deeper into each later in the book.

Figure 2.2

The runtime operates at the lowest level and is responsible for starting and stopping containers (this includes building all of the OS constructs such as namespaces and cgroups). Docker implements a tiered runtime architecture with high-level and low-level runtimes that work together.

The low-level runtime is called runc and is the reference implementation of the Open Container Initiative (OCI) runtime-spec. Its job is to interface with the underlying OS and start and stop containers. Every container on a Docker node was created and started by an instance of runc.

The higher-level runtime is called containerd. This manages the entire container lifecycle, including pulling images and managing runc instances. containerd is pronounced “container-dee” and is a graduated CNCF project used by Docker and Kubernetes.

A typical Docker installation has a single long-running containerd process instructing runc to start and stop containers. runc is never a long-running process and exits as soon as a container is started.

The Docker daemon (dockerd) sits above containerd and performs higher-level tasks such as exposing the Docker API, managing images, managing volumes, managing networks, and more…

A major job of the Docker daemon is to provide an easy-to-use standard interface that abstracts the lower levels.
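On a Linux Docker host you can observe this layering yourself. The following sketch (assuming a standard install with the daemon running, and using an arbitrary container name) starts a container and then looks for the long-running daemon processes:

```shell
# Start a detached container so there is something running.
docker run -d --name demo alpine sleep 300

# dockerd and containerd are long-running daemons. runc has already
# exited; what remains per container is a containerd shim process.
ps -e | grep -E 'dockerd|containerd'

# Clean up the demo container.
docker rm -f demo
```

Seeing containerd in the process list while runc is absent is a nice confirmation of the earlier point: runc exits as soon as the container starts, leaving the shim and daemons behind.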

Docker also has native support for managing clusters of nodes running Docker. These clusters are called swarms and the native technology is called Docker Swarm. Docker Swarm is easy-to-use and many companies are using it in real-world production. It’s a lot simpler to install and manage than Kubernetes but lacks a lot of the advanced features and ecosystem of Kubernetes.

The Open Container Initiative (OCI)

Earlier in the chapter we mentioned the Open Container Initiative (OCI).

The OCI is a governance council responsible for standardizing the low-level fundamental components of container infrastructure. In particular it focusses on image format and container runtime (don’t worry if you’re not comfortable with these terms yet, we’ll cover them in the book).

It’s also true that no discussion of the OCI is complete without mentioning a bit of history. And as with all accounts of history, the version you get depends on who’s doing the talking. So, this is container history according to Nigel :-D

From day one, use of Docker grew like crazy. More and more people used it in more and more ways for more and more things. So, it was inevitable that some parties would get frustrated. This is normal and healthy.

The TLDR of this history according to Nigel is that a company called CoreOS (acquired by Red Hat which was then acquired by IBM) didn’t like the way Docker did certain things. So, they created an open standard called appc that defined things like image format and container runtime. They also created an implementation of the spec called rkt (pronounced “rocket”).

This put the container ecosystem in an awkward position with two competing standards.

Getting back to the story, this threatened to fracture the ecosystem and present users and customers with a dilemma. While competition is usually a good thing, competing standards usually are not. They cause confusion and slow down user adoption. Not good for anybody.

With this in mind, everybody did their best to act like adults and came together to form the OCI — a lightweight agile council to govern container standards.

At the time of writing, the OCI has published three specifications (standards):

- The image-spec
- The runtime-spec
- The distribution-spec

An analogy that’s often used when referring to these standards is rail tracks. They’re like agreeing on standard sizes and properties of rail tracks, leaving everyone else free to build better trains, better carriages, better signalling systems, better stations… all safe in the knowledge that they’ll work on the standardized tracks. Nobody wants two competing standards for rail track sizes!

It’s fair to say that the OCI specifications have had a major impact on the architecture and design of the core Docker product. All modern versions of Docker and Docker Hub implement the OCI specifications.

The OCI is organized under the auspices of the Linux Foundation.

Chapter summary

In this chapter, we learned about Docker, Inc. the company, and the Docker technology.

Docker, Inc. is a technology company out of San Francisco with an ambition to change the way we do software. They were arguably the first-movers and instigators of the modern container revolution.

The Docker technology focuses on running and managing application containers. It runs on Linux and Windows, can be installed almost anywhere, and is currently the most popular container runtime used by Kubernetes.

The Open Container Initiative (OCI) was instrumental in standardizing low-level container technologies such as runtimes, image format, and registries.

3: Installing Docker

There are lots of ways and places to install Docker. There’s Windows, Mac, and Linux. You can install in the cloud, on premises, and on your laptop. And there are manual installs, scripted installs, wizard-based installs…

But don’t let that scare you. They’re all really easy, and a simple search for “how to install docker on <insert your choice here>” will reveal up-to-date instructions that are easy to follow. As a result, we won’t waste too much space here. We’ll cover the following.

- Docker Desktop: Windows and MacOS
- Multipass
- Server installs on Linux
- Play with Docker

Docker Desktop

Docker Desktop is a desktop app from Docker, Inc. that makes it super-easy to work with containers. It includes the Docker engine, a slick UI, and an extension system with a marketplace. These extensions add some very useful features to Docker Desktop such as scanning images for vulnerabilities and making it easy to manage images and disk space.

Docker Desktop is free for educational purposes, but you’ll have to pay if you start using it for work and your company has over 250 employees or does more than $10M in annual revenue.

It runs on 64-bit versions of Windows 10, Windows 11, MacOS, and Linux.

Once installed, you have a fully working Docker environment that’s great for development, testing, and learning. It includes Docker Compose and you can even enable a single-node Kubernetes cluster if you need to learn Kubernetes.

Docker Desktop on Windows can run native Windows containers as well as Linux containers. Docker Desktop on Mac and Linux can only run Linux containers.

We’ll walk through the process of installing on Windows and MacOS.

Windows pre-reqs

Docker Desktop on Windows requires all of the following:

- 64-bit version of Windows 10/11
- Hardware virtualization support must be enabled in your system’s BIOS
- WSL 2

Be very careful changing anything in your system’s BIOS.

Installing Docker Desktop on Windows 10 and 11

Search the internet or ask your AI assistant how to “install Docker Desktop on Windows”. This will take you to the relevant download page where you can download the installer and follow the instructions. You may need to install and enable the WSL 2 backend (Windows Subsystem for Linux).

Once the installation is complete you may have to manually start Docker Desktop from the Windows Start menu. It may take a minute to start but you can watch the start progress via the animated whale icon on the Windows task bar at the bottom of the screen.

Once it’s up and running you can open a terminal and type some simple docker commands.

$ docker version
Client:
 Cloud integration: v1.0.31
 Version:           20.10.23
 API version:       1.41
 Go version:        go1.18.10
 Git commit:        7155243
 Built:             Thu Jan 19 01:20:44 2023
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.23
  <Snip>
  OS/Arch:          linux/amd64
  Experimental:     true

Notice the output is showing OS/Arch: linux/amd64 for the Server component. This is because a default installation assumes you’ll be working with Linux containers.

You can easily switch to Windows containers by right-clicking the Docker whale icon in the Windows notifications tray and selecting Switch to Windows containers....

Be aware that any existing Linux containers will keep running in the background but you won’t be able to see or manage them until you switch back to Linux containers mode.

Run another docker version command and look for the windows/amd64 line in the Server section of the output.

C:\> docker version
Client:
 <Snip>
Server:
 Engine:
  <Snip>
  OS/Arch:          windows/amd64
  Experimental:     true

You can now run and manage Windows containers (containers running Windows applications).

Congratulations. You now have a working installation of Docker on your Windows machine.

Installing Docker Desktop on Mac

Docker Desktop for Mac is like Docker Desktop on Windows — a packaged product with a slick UI that gets you a single-engine installation of Docker that’s ideal for local development needs. You can also enable a single-node Kubernetes cluster.

Before proceeding with the installation, it’s worth noting that Docker Desktop on Mac installs all of the Docker engine components in a lightweight Linux VM that seamlessly exposes the API to your local Mac environment. This means you can open a terminal on your Mac and use the regular Docker commands without ever knowing it’s all running in a hidden VM. This is why Docker Desktop on Mac only works with Linux containers – it’s all running inside a Linux VM. This is fine as Linux is where most of the container action is.

Figure 3.1 shows the high-level architecture for Docker Desktop on Mac.

Figure 3.1

The simplest way to install Docker Desktop on your Mac is to search the web or ask your AI how to “install Docker Desktop on MacOS”. Follow the links to the download and then complete the simple installer.

Once the installation is complete you may have to manually start Docker Desktop from the MacOS Launchpad. It may take a minute to start but you can watch the animated Docker whale icon in the status bar at the top of your screen. Once it’s started you can click the whale icon to manage Docker Desktop.

Open a terminal window and run some regular Docker commands. Try the following.

$ docker version
Client:
 Cloud integration: v1.0.31
 Version:           23.0.5
 API version:       1.42
 <Snip>
 OS/Arch:           darwin/arm64
 Context:           desktop-linux

Server: Docker Desktop 4.19.0 (106363)
 Engine:
  Version:          dev
  API version:      1.43 (minimum version 1.12)
  <Snip>
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.20
  GitCommit:        2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc:
  Version:          1.1.5
  GitCommit:        v1.1.5-0-gf19387a
 <Snip>

Notice that the OS/Arch: for the Server component shows as linux/amd64 or linux/arm64. This is because the daemon runs inside the Linux VM mentioned earlier. The Client component is a native Mac application and runs directly on the macOS Darwin kernel, which is why it shows as either darwin/amd64 or darwin/arm64.
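If you want to check these platform fields without scrolling through the full output, the docker CLI’s `--format` flag accepts a Go template. A small sketch (guarded in case the docker CLI isn’t on your PATH):

```shell
# Print just the client and server platforms using a Go template.
# Falls back to a message if the docker CLI isn't installed.
if command -v docker >/dev/null 2>&1; then
  docker version --format '{{.Client.Os}}/{{.Client.Arch}} -> {{.Server.Os}}/{{.Server.Arch}}'
else
  echo "docker CLI not found"
fi
```

On Docker Desktop for Mac you’d expect something like `darwin/arm64 -> linux/arm64`, matching the discussion above.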

You can now use Docker on your Mac.

Installing Docker with Multipass

Multipass is a free tool for creating cloud-style Linux VMs on your Linux, Mac, or Windows machine. It’s my go-to choice for Docker testing on my laptop as it’s incredibly easy to spin up and tear down Docker VMs.

Just go to https://multipass.run/install and install the right edition for your hardware and OS.

Once installed you’ll only need the following three commands:

$ multipass launch
$ multipass ls
$ multipass shell

Let’s see how to launch and connect to a new VM that will have Docker pre-installed.

Run the following command to create a new VM called node1 based on the docker image. The docker image has Docker pre-installed and ready to go.

$ multipass launch docker --name node1

It’ll take a minute or two to download the image and launch the VM.

List VMs to make sure it launched properly.

$ multipass ls
Name      State      IPv4             Image
node1     Running    192.168.64.37    Ubuntu 22.04 LTS
                     172.17.0.1
                     172.18.0.1

You’ll use the 192 IP address when working with the examples later in the book.
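If you’d rather grab that address programmatically, one approach (assuming the node1 VM from above exists) is to filter the output of multipass info, which lists the VM’s IPv4 addresses:

```shell
# Print the VM's first IPv4 address (the 192.x one in the example).
# Guarded so it does nothing on machines without multipass installed.
if command -v multipass >/dev/null 2>&1; then
  multipass info node1 | awk '/IPv4/ {print $2}'
fi
```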

Connect to the VM with the following command.

$ multipass shell node1

You’re now logged on to the VM and can run regular Docker commands.
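You don’t have to open an interactive shell for one-off commands — multipass exec runs a single command inside the VM. A quick sketch, assuming the node1 VM from above is running:

```shell
# Run a Docker command inside the VM without opening a shell.
# Guarded so it only runs if multipass and the node1 VM both exist.
if command -v multipass >/dev/null 2>&1 && multipass info node1 >/dev/null 2>&1; then
  multipass exec node1 -- docker ps -a
fi
```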

Just type exit to log out of the VM. Use multipass delete node1 and then multipass purge to delete it.

Installing Docker on Linux

There are lots of ways to install Docker on Linux and most of them are easy. The recommended way is to search the web or ask your AI how to do it. The instructions in this section may be out of date and just for guidance purposes.

In this section we’ll look at one of the ways to install Docker on Ubuntu Linux 22.04 LTS. The procedure assumes you’ve already installed Linux and are logged on.

Remove existing Docker packages.
$ sudo apt-get remove docker docker-engine docker.io containerd runc
Update the apt package index.
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg
...
Add the Docker GPG key.
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg
Set-up the repository.
$ echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker from the official repo.
$ sudo apt-get update
$ sudo apt-get install \
  docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Docker is now installed and you can test by running some commands.

$ sudo docker --version
Docker version 24.0.0, build 98fdcd7

$ sudo docker info
Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 24.0.0
 Storage Driver: overlay2
...
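A common optional post-install step on Linux is adding your user to the docker group so you can drop the sudo prefix. This is standard Docker guidance, but check the official docs for your distro before running it; a guarded sketch:

```shell
# Optional: add your user to the docker group so docker runs without
# sudo (you must log out and back in for the change to take effect).
# Guarded so it only attempts this where the docker CLI is installed.
if command -v docker >/dev/null 2>&1; then
  sudo usermod -aG docker "$(id -un)"
fi
id -nG "$(id -un)"   # 'docker' should appear here after you log back in
```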

Play with Docker

Play with Docker (PWD) is a fully functional internet-based Docker playground with sessions that last four hours. You can add multiple nodes and even cluster them in a swarm.

Sometimes performance can be slow, but for a free service it’s excellent!

Visit https://labs.play-with-docker.com/

Chapter Summary

You can run Docker almost anywhere and most of the installation methods are simple.

Docker Desktop gives you a fully functional Docker environment on your Linux, Mac, or Windows machine. It’s easy to install, includes the Docker engine, has a slick UI, and has a marketplace with lots of extensions to extend its capabilities. It’s a great choice for a local Docker development environment and even lets you spin up a single-node Kubernetes cluster.

Packages exist to install the Docker engine on most Linux distros.

Play with Docker is a free 4-hour Docker playground on the internet.

4: The big picture

The aim of this chapter is to paint a quick big-picture view of what Docker is all about before we dive deeper in later chapters.

We’ll break this chapter into two sections:

The Ops perspective
The Dev perspective

In the Ops Perspective section, we’ll download an image, start a new container, log in to the new container, run a command inside of it, and then destroy it.

In the Dev Perspective section, we’ll focus more on the app. We’ll clone some app code from GitHub, inspect a Dockerfile, containerize the app, and run it as a container.

These two sections will give you a good idea of what Docker is all about and how the major components fit together. It’s recommended that you read both sections to get the dev and the ops perspectives. DevOps anyone?

Don’t worry if some of the stuff we do here is totally new to you. We’re not trying to make you an expert in this chapter. This is about giving you a feel for things.