Docker Deep Dive - Nigel Poulton - E-Book


Nigel Poulton

Description

Docker Deep Dive (2025 Edition) - The best Docker book just got even better!


The #1 best-selling and most comprehensive Docker book - fully updated for 2025!
For nearly a decade, Docker Deep Dive has been the go-to guide for developers and IT professionals looking to master Docker. With more reviews than any other Docker book and a track record as the most popular Docker book on the market, this Amazon bestseller is your ultimate resource for mastering Docker.
What's New in the 2025 Edition?
This latest edition is fully updated with cutting-edge advancements in the Docker ecosystem, including:
- Wasm containers (WebAssembly) - The future of fast, portable, high-performance workloads
- Multi-container LLM chatbot apps - Deploy and manage AI-powered chatbots with Docker Compose & Ollama
- Professional development environment, app templates, debugging, and vulnerability scanning with Docker Desktop, Docker Init, Docker Debug, and Docker Scout
The demand for Docker expertise is skyrocketing as organizations build cloud-native and AI-powered applications. Whether you're deploying apps locally or in the cloud, this book equips you with everything you need to succeed in the real world.
What You'll Learn:
- Set up a professional Docker development environment - for free
- Create and manage containerized applications from scratch
- Build and share container images using Docker Hub
- Deploy and orchestrate multi-container apps with Docker Compose & Swarm
- Create Wasm apps, build them as Docker containers, and run them with Docker
- Deploy and manage LLM chatbots with Docker Compose
- Create production-ready images that run on x86 and ARM
- Easy and powerful debugging with Docker Debug
- Vulnerability scanning with Docker Scout
- Leverage the latest Docker technologies, including Docker Desktop, Docker Debug, Docker Init, Docker Scout, and more
Whether you're a beginner looking for a step-by-step guide or an experienced developer who wants to stay ahead of the curve, Docker Deep Dive (2025 Edition) is the ultimate resource for mastering Docker.
Get your copy today and take your container skills to the next level!

You can read this e-book in Legimi apps or any app that supports the following formats:

EPUB
MOBI

Page count: 313

Year of publication: 2023




Docker Deep Dive 

Jan 2025

Zero to Docker in a single book!

Nigel Poulton @nigelpoulton

About this edition

This edition was published in January 2025 and is up to date with the latest industry trends and the latest enhancements to Docker.

Major changes include:

- Added AI content — new AI chatbot app that uses Ollama and LLMs

- Updates to BuildKit, buildx, and Docker Build Cloud

- Updates to include the GA version of the docker init command

- Updates to Docker Debug content

- Updates to Wasm content

Enjoy the book and get ready to master containers!

(c) 2025 Nigel Poulton

Thank you to everyone who reads my books and watches my training videos. I'm incredibly passionate about all of my content, and it means a lot to me when you connect and give me feedback. I also love it when we bump into each other at events, airports, and all the other random places we meet. Please come and say hello—don't be shy!

A special shout out to Niki Kovacs for his extensive feedback on the 2023 edition. I'm grateful to every reader who gives feedback, but Niki paid great attention to the book and was very generous in his feedback and patience with my slow responses. You can find Niki at:

https://www.microlinux.fr

https://blog.microlinux.fr

https://twitter.com/microlinux_eu

About the author

Nigel Poulton (@nigelpoulton)

Nigel is a technology geek who is passionate about learning new technologies and making them easier for others to learn. He's the author of best-selling books on Docker and Kubernetes, as well as AI Explained: Facts, Fiction, and Future, an exciting read on the impacts of AI on society and the future of humanity.

Nigel is a Docker Captain and has held senior technology roles at large and small enterprises.

In his free time, he listens to audiobooks and coaches youth football (soccer). He wishes he lived in the future and could understand the mysteries of life and the universe. He’s passionate about learning, cars, and football. He lives in England with his fabulous wife and three children.

•LinkedIn: Nigel Poulton

•Web: nigelpoulton.com

•BlueSky: @nigelpoulton

•X: @nigelpoulton

•Email: [email protected]

Table of Contents

 

0: About the book

Part 1: The big picture stuff

1: Containers from 30,000 feet

The bad old days

Hello VMware!

VMwarts

Hello Containers!

Linux containers

Hello Docker!

Docker and Windows

What about Wasm

Docker and AI

What about Kubernetes

2: Docker and container-related standards and projects

Docker

Container-related standards and projects

3: Getting Docker

Docker Desktop

Installing Docker with Multipass

Installing Docker on Linux

4: The big picture

The Ops Perspective

The Dev Perspective

Part 2: The technical stuff

5: The Docker Engine

Docker Engine – The TLDR

The Docker Engine

The influence of the Open Container Initiative (OCI)

runc

containerd

Starting a new container (example)

What’s the shim all about?

How it’s implemented on Linux

6: Working with Images

Docker images – The TLDR

Intro to images

Pulling images

Image registries

Image naming and tagging

Images and layers

Pulling images by digest

Multi-architecture images

Vulnerability scanning with Docker Scout

Deleting Images

Images – The commands

7: Working with containers

Containers – The TLDR

Containers vs VMs

Images and Containers

Check Docker is running

Starting a container

How containers start apps

Connecting to a running container

Inspecting container processes

The docker inspect command

Writing data to a container

Stopping, restarting, and deleting a container

Killing a container’s main process

Debugging slim images and containers with Docker Debug

Self-healing containers with restart policies

Containers – The commands

8: Containerizing an app

Containerizing an app – The TLDR

Containerize a single-container app

Moving to production with multi-stage builds

Buildx, BuildKit, drivers, and Build Cloud

Multi-architecture builds

A few good practices

Containerizing an app – The commands

9: Multi-container apps with Compose

Compose – The TLDR

Compose background

Installing Compose

The AI chatbot app

Compose files

Use the app

Inspect the app

Inspect the Ollama configuration

Multi-container apps with Compose – The commands

10: Docker and Wasm

Pre-reqs

Intro to Wasm and Wasm containers

Write a Wasm app

Containerize a Wasm app

Run a Wasm container

Clean up

Chapter summary

11: Docker Swarm

Docker Swarm – The TLDR

Swarm primer

Build a secure swarm

Docker Swarm – The Commands

12: Deploying apps to Swarm

Deploying apps with Docker Stacks – The TLDR

Build a Swarm lab

The sample app

Deploy the app

Inspect the app

Manage the app

Verify the rollout

Deploying apps with Docker Stacks – The Commands

13: Docker Networking

Docker Networking – The TLDR

Docker networking theory

Single-host bridge networks

External access via port mappings

Docker Networking – The Commands

14: Docker overlay networking

Docker overlay networking – The TLDR

Docker overlay networking history

Building and testing Docker overlay networks

Overlay networks explained

Docker overlay networking – The commands

15: Volumes and persistent data

Volumes and persistent data – The TLDR

Containers without volumes

Containers with volumes

Volumes and persistent data – The Commands

16: Docker security

Docker security – The TLDR

Kernel Namespaces

Control Groups

Capabilities

Mandatory Access Control systems

seccomp

Docker security technologies

Swarm security

Docker Scout and vulnerability scanning

Signing and verifying images with Docker Content Trust

Docker Secrets

What next

Terminology

More from the author

Guide

Begin Reading

0: About the book

This 2025 edition gets you up to speed with Docker and containers fast. No prior experience required.

Why should I read this book or care about Docker?

Docker has already changed how we build, share, and run applications, and it’s poised to play a major role with emerging technologies such as Wasm and AI.

So, if you want the best jobs working with the best technologies, you need a strong Docker skillset. It will even give you a head start learning and working with Kubernetes.

How I’ve organized the book

I’ve divided the book into two main sections:

The big picture stuff
The technical stuff

The big picture stuff gets you up to speed with the basics, such as what Docker is, why we use containers, and fundamental jargon such as cloud-native, microservices, and orchestration.

The technical stuff section covers everything you need to know about images, containers, multi-container microservices apps, and the increasingly important topic of orchestration. It even covers WebAssembly, AI, vulnerability scanning, debugging containers, high availability, and more.

Chapter breakdown

Chapter 1: Summarizes the history and future of Docker and containers
Chapter 2: Explains the most important container-related standards and projects
Chapter 3: Shows you a few ways to get Docker
Chapter 4: Walks you through deploying your first container
Chapter 5: Deep dive into the Docker Engine architecture
Chapter 6: Deep dive into images and image management
Chapter 7: Deep dive into containers and container management
Chapter 8: Deep dive into containerizing applications
Chapter 9: Walks you through deploying and managing a multi-container AI chatbot app with Docker Compose
Chapter 10: Walks you through building, containerizing, and running a Wasm app with Docker
Chapter 11: Builds a highly-available and secure Swarm cluster
Chapter 12: Walks you through deploying, scaling, and self-healing a multi-container app running on Swarm
Chapter 13: Deep dive into Docker networking
Chapter 14: Walks you through building and working with overlay networks
Chapter 15: Introduces you to persistent and non-persistent data in Docker
Chapter 16: Covers all the major Linux and Docker security technologies

Editions and updates

Docker, AI, and the cloud-native ecosystem are evolving fast, and 2-3-year-old books are dangerously out of date. As a result, I’m committed to updating the book every year.

If that sounds excessive, welcome to the new normal.

The book is available in hardback, paperback, and e-book on all good book publishing platforms.

Kindle updates

Unfortunately, Kindle readers cannot get updates — even if you delete the book and buy it again, you’ll get the older version you originally purchased. I have no control over this and was devastated when this change happened.

Feedback

If you like the book and it helps your career, share the love by recommending it to a friend and leaving a review on Amazon or Goodreads.

If you spot a typo or want to make a recommendation, email me at [email protected] and I’ll do my best to respond.

That’s everything. Let’s get rocking with Docker!

Part 1: The big picture stuff

1: Containers from 30,000 feet

Containers have taken over the world!

In this chapter, you’ll learn why we have containers, what they do for us, and where we can use them.

The bad old days

Applications are the powerhouse of every modern business. When applications break, businesses break.

Most applications run on servers, and in the past, we were limited to running one application per server. As a result, the story went something like this:

Every time a business needed a new application, it had to buy a new server. Unfortunately, we weren’t very good at modeling the performance requirements of new applications, and we had to guess. This resulted in businesses buying bigger, faster, and more expensive servers than necessary. After all, nobody wanted underpowered servers incapable of handling the app, resulting in unhappy customers and lost revenue. As a result, we ended up with racks and racks of overpowered servers operating as low as 5-10% of their potential capacity. This was a tragic waste of company capital and environmental resources.

Hello VMware!

Amid all this, VMware, Inc. gave the world a gift — the virtual machine (VM) — a technology that allowed us to run multiple business applications on a single server safely.

It was a game-changer. Businesses could run new apps on the spare capacity of existing servers, spawning a golden age of maximizing the value of existing assets.

VMwarts

But, and there’s always a but! As great as VMs are, they’re far from perfect.

For example, every VM needs its own dedicated operating system (OS). Unfortunately, this has several drawbacks, including:

Every OS consumes CPU, RAM, and other resources we'd rather use on applications
Every VM and OS needs patching
Every VM and OS needs monitoring

VMs are also slow to boot and not very portable.

Hello Containers!

While most of us were reaping the benefits of VMs, web scalers like Google had already moved on from VMs and were using containers.

A feature of the container model is that every container shares the OS of the host it’s running on. This means a single host can run more containers than VMs. For example, a host that can run 10 VMs might be able to run 50 containers, making containers far more efficient than VMs.

Containers are also faster and more portable than VMs.

Linux containers

Modern containers started in the Linux world and are the product of incredible work from many people over many years. For example, Google contributed many container-related technologies to the Linux kernel. It’s thanks to many contributions like these that we have containers today.

Some of the major technologies underpinning modern containers include kernel namespaces, control groups (cgroups), and capabilities.

However, despite all this great work, containers were incredibly complicated, and it wasn’t until Docker came along that they became accessible to the masses.

Note: I know that many container-like technologies pre-date Docker and modern containers. However, none of them changed the world the way Docker has.

Hello Docker!

Docker was the magic that made Linux containers easy and brought them to the masses. We’ll talk a lot more about Docker in the next chapter.

Docker and Windows

Microsoft worked hard to bring Docker and container technologies to the Windows platform.

At the time of writing, Windows desktop and server platforms support both of the following:

Windows containers
Linux containers

Windows containers run Windows apps and require a host system with a Windows kernel. Windows 10, Windows 11, and all modern versions of Windows Server natively support Windows containers.

Windows systems can also run Linux containers via the WSL 2 (Windows Subsystem for Linux) subsystem.

This means Windows 10 and Windows 11 are great platforms for developing and testing Windows and Linux containers.

However, despite all the work developing Windows containers, almost all containers are Linux containers. This is because Linux containers are smaller and faster, and more tooling exists for Linux.

All of the examples in this edition of the book are Linux containers.

Windows containers vs Linux containers

It’s vital to understand that containers share the kernel of the host they’re running on. This means containerized Windows apps need a host with a Windows kernel, whereas containerized Linux apps need a host with a Linux kernel. However, as mentioned, you can run Linux containers on Windows systems that have the WSL 2 backend installed.

Terminology: A containerized app is an application running as a container. We’ll cover this in a lot of detail later.

What about Mac containers?

There is no such thing as Mac containers. However, Macs are great platforms for working with containers, and I do all of my daily work with containers on a Mac.

The most popular way of working with containers on a Mac is Docker Desktop. It works by running Docker inside a lightweight Linux VM on your Mac. Other tools, such as Podman and Rancher Desktop, are also great for working with containers on a Mac.

What about Wasm

Wasm (WebAssembly) is a modern binary instruction set for building applications that are smaller, faster, more secure, and more portable than containers. You write your app in your favorite language and compile it to a Wasm binary that will run anywhere you have a Wasm runtime.

However, Wasm apps have many limitations, and we’re still developing many of the standards. As a result, containers remain the dominant model for cloud-native applications.

The container ecosystem is also much richer and more mature than the Wasm ecosystem.

As you’ll see in the Wasm chapter, Docker and the container ecosystem are adapting to work with Wasm apps, and you should expect a future where VMs, containers, and Wasm apps run side-by-side in most clouds and applications.

This book is up-to-date with the latest Wasm and container developments.

Docker and AI

Containers are already the industry standard for packaging and running applications, and Docker is the most popular developer tool for working with containers. As such, developers and businesses are turning to Docker as the preferred platform for developing and deploying AI and ML applications such as LLMs and other GenAI apps.

However, a vital part of any AI or ML application is access to specialized hardware such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Neural Processing Units (NPUs). At the time of writing, Docker supports NVIDIA GPUs. In the future, we should expect Docker to support other GPUs, TPUs, NPUs, and even WebGPU.

Later in the book, you’ll use Docker to deploy LLM applications that can run on GPUs or regular CPUs.
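As a taste of what GPU access looks like in practice, here's a minimal sketch of a Compose service that requests an NVIDIA GPU. The service and image choices (an Ollama container) are illustrative, but the deploy.resources.reservations.devices syntax is the standard Compose way to reserve GPUs; on hosts without a GPU, you'd simply omit the deploy block and the app would fall back to the CPU.

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # requires the NVIDIA container toolkit on the host
              count: 1
              capabilities: [gpu]
```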

What about Kubernetes

Kubernetes is the industry standard platform for deploying and managing containerized apps.

Older versions of Kubernetes used Docker to start and stop containers. However, newer versions use containerd, which is a stripped-down version of Docker optimized for use by Kubernetes and other platforms.

The important thing to know is that all Docker containers work on Kubernetes.

Check out these books if you need to learn Kubernetes:

Quick Start Kubernetes: This is ~100 pages and will get you up-to-speed with Kubernetes in a single day!
The Kubernetes Book: This is the ultimate book for mastering Kubernetes.

I update both books annually to ensure they’re up-to-date with the latest and greatest developments in the cloud native ecosystem.

Chapter Summary

We used to live in a world where every business application needed a dedicated, overpowered server. VMware came along and allowed us to run multiple applications on new and existing servers. However, following the success of VMware and hypervisors, a newer, more efficient, and portable virtualization technology called containers came along. However, containers were complex and hard to implement until Docker came along and made them easy. Wasm and AI are powering new innovations, and the Docker ecosystem is evolving to work with both. The book has entire chapters dedicated to working with AI apps and Wasm apps on Docker.

2: Docker and container-related standards and projects

This chapter introduces you to Docker and some of the most important standards and projects shaping the container ecosystem. The goal is to lay some foundations that we’ll build on in later chapters.

This chapter has two main parts:

Docker
Container-related standards and projects

Docker

Docker is at the heart of the container ecosystem. However, the term Docker can mean two things:

The Docker platform
Docker, Inc.

The Docker platform is a neatly packaged collection of technologies for creating, managing, and orchestrating containers. Docker, Inc. is the company that created the Docker platform and continues to be the driving force behind developing new features.

Let’s dive a bit deeper.

Docker, Inc.

Docker, Inc. is a technology company based out of Palo Alto and founded by French-born American developer and entrepreneur Solomon Hykes. Solomon is no longer at the company.

The company started as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, dotCloud delivered its services on top of containers and had an in-house tool to help them deploy and manage those containers. They called this in-house tool Docker.

The word Docker is a British expression, short for dock worker, referring to someone who loads and unloads cargo from ships.

In 2013, dotCloud dropped the struggling PaaS side of the business, rebranded as Docker, Inc., and focused on bringing Docker and containers to the world.

The Docker technology

The Docker platform makes it easy to build, share, and run containers.

At a high level, there are two major parts to the Docker platform:

The CLI (client)
The engine (server)

The CLI is the familiar docker command-line tool for deploying and managing containers. It converts simple commands into API requests and sends them to the engine.

The engine comprises all the server-side components that run and manage containers.

Figure 2.1 shows the high-level architecture. The client and engine can be on the same host or connected over the network.

Figure 2.1 Docker client and engine.

In later chapters, you’ll see that the client and engine are complex and comprise a lot of small specialized parts. Figure 2.2 gives you an idea of some of the complexity behind the engine. However, the client hides all this complexity so you don’t have to care. For example, you type friendly docker commands into the CLI, the CLI converts them to API requests and sends them to the daemon, and the daemon takes care of everything else.

Figure 2.2 Docker CLI and daemon hiding complexity.
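To make the CLI-to-API translation concrete, here's an illustrative Python sketch. The endpoint paths follow the public Docker Engine HTTP API (for example, POST /containers/create), but the plan_api_calls function itself is hypothetical: the real CLI is a Go program that also handles API version negotiation, TLS, streaming output, and much more.

```python
# Illustrative sketch: how friendly CLI commands roughly map to Engine API
# requests. The endpoint paths mirror the Docker Engine HTTP API, but this
# mapping function is a teaching aid, not Docker's actual implementation.

def plan_api_calls(command, image, name=None):
    """Return the sequence of (method, path) API requests for a CLI command."""
    if command == "pull":
        # `docker pull nginx` -> ask the engine to fetch the image
        return [("POST", f"/images/create?fromImage={image}")]
    if command == "run":
        # `docker run --name web nginx` -> create the container, then start it
        query = f"?name={name}" if name else ""
        return [
            ("POST", f"/containers/create{query}"),
            ("POST", f"/containers/{name or '<id>'}/start"),
        ]
    raise ValueError(f"unknown command: {command!r}")

for method, path in plan_api_calls("run", "nginx", name="web"):
    print(method, path)
```

The point of the sketch is the division of labor: the client only plans and sends requests, while the daemon does the actual work of pulling images and creating containers.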

Let’s switch focus and briefly look at some standards and governance bodies.

Container-related standards and projects

Several important standards and governance bodies influence container development and the container ecosystem. Some of these include:

The OCI
The CNCF
The Moby Project

The Open Container Initiative (OCI)

The Open Container Initiative (OCI) is a governance council responsible for low-level container-related standards.

It operates under the umbrella of the Linux Foundation and was founded in the early days of the container ecosystem when some of the people at a company called CoreOS didn’t like the way Docker was dominating the ecosystem. In response, CoreOS created an open standard called appc that defined specifications for things such as image format and container runtime. They also created a reference implementation called rkt (pronounced “rocket”).

The appc standard did things differently from Docker and put the ecosystem in an awkward position with two competing standards.

While competition is usually a good thing, competing standards are generally bad, as they generate confusion that slows down user adoption. Fortunately, the main players in the ecosystem came together and formed the OCI as a vendor-neutral lightweight council to govern container standards. This allowed us to archive the appc project and place all low-level container-related specifications under the OCI’s governance.

At the time of writing, the OCI maintains three standards called specs:

The image-spec
The runtime-spec
The distribution-spec
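To make the image-spec a little more concrete, here's a minimal sketch of an OCI image manifest. The mediaType values are from the OCI image specification, while the digests and sizes are placeholders for illustration only.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 32654
    }
  ]
}
```

Any OCI-compliant registry and runtime can consume a manifest like this, which is exactly why standardizing it mattered.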

We often use a rail tracks analogy when explaining the OCI standards:

When the size and properties of rail tracks were standardized, it gave entrepreneurs in the rail industry confidence the trains, carriages, signaling systems, platforms, and other products they built would work with the standardized tracks — nobody wanted competing standards for track sizes.

The OCI specifications did the same thing for the container ecosystem and it’s flourished ever since. Docker has also changed a lot since the formation of the OCI, and all modern versions of Docker implement all three OCI specs. For example:

The Docker builder (BuildKit) creates OCI-compliant images
Docker uses an OCI-compliant runtime to create OCI-compliant containers
Docker Hub implements the OCI distribution spec and is an OCI-compliant registry

Docker, Inc. and many other companies have people serving on the OCI’s technical oversight board (TOB).

The Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is another Linux Foundation project that is influential in the container ecosystem. It was founded in 2015 with the goal of “…advancing container technologies… and making cloud native computing ubiquitous”.

Instead of creating and maintaining container-related specifications, the CNCF hosts important projects such as Kubernetes, containerd, Notary, Prometheus, Cilium, and lots more.

When we say the CNCF hosts these projects, we mean it provides a space, structure, and support for projects to grow and mature. For example, all CNCF projects pass through the following three phases or stages:

Sandbox
Incubating
Graduated

Each phase increases a project’s maturity level by requiring higher standards of governance, documentation, auditing, contribution tracking, marketing, community engagement, and more. For example, new projects accepted as sandbox projects may have great ideas and great technology but need help and resources to create strong governance, etc. The CNCF helps with all of that.

Graduated projects are considered ready for production and are guaranteed to have strong governance and implement good practices.

If you look back to Figure 2.2, you’ll see that Docker uses at least two CNCF technologies — containerd and Notary.

The Moby Project

Docker created the Moby project as a community-led place for developers to build specialized tools for building container platforms.

Platform builders can pick the specific Moby tools they need to build their container platform. They can even compose their platforms using a mix of Moby tools, in-house tools, and tools from other projects.

Docker, Inc. originally created the Moby project, but it now has members including Microsoft, Mirantis, and Nvidia.

The Docker platform is built using tools from various projects, including the Moby project, the CNCF, and the OCI.

Chapter summary

This chapter introduced you to Docker and some of the major influences in the container ecosystem.

Docker, Inc., is a technology company based in Palo Alto that is changing how we do software. They were the first movers and instigators of the modern container revolution.

The Docker platform focuses on running and managing application containers. It runs on Linux and Windows, can be installed almost anywhere, and offers a variety of free and paid-for products.

The Open Container Initiative (OCI) governs low-level container standards and maintains specifications for runtimes, image format, and registries.

The CNCF provides support for important cloud-native projects and helps them mature into production-grade tools.

The Moby project hosts low-level tools developers can use to build container platforms.

3: Getting Docker

There are lots of ways to get Docker and work with containers. This chapter will show you the following ways:

Docker Desktop
Multipass
Server installs on Linux

I strongly recommend you install and use Docker Desktop. It’s the best way to work with Docker, and you’ll be able to use it to follow most of the examples in the book. I use it every day.

If you can’t use Docker Desktop, we’ll show you how to install Docker in a Multipass VM, as well as how to perform a simple installation on Linux. However, these installations don’t have all the features of Docker Desktop.

Docker Desktop

Docker Desktop is a desktop app from Docker, Inc. and is the best way to work with containers. You get the Docker Engine, a slick UI, all the latest plugins and features, and an extension system with a marketplace. You even get Docker Compose and a Kubernetes cluster if you want to learn Kubernetes.

It’s free for personal use and education, but you’ll have to pay a license fee if you use it for work and your company has over 250 employees or does more than $10M in annual revenue.

Docker Desktop on Windows 10 and Windows 11 Professional and Enterprise editions supports Windows containers and Linux containers. Docker Desktop on Mac, Linux, and Home editions of Windows only supports Linux containers. All of the examples in the book, and almost all of the containers in the real world, are Linux containers.

Let’s install Docker Desktop on Windows and MacOS.

Windows prereqs

Docker Desktop on Windows requires all of the following:

64-bit version of Windows 10/11
Hardware virtualization support enabled in your system's BIOS
WSL 2

Be very careful changing anything in your system’s BIOS.

Installing Docker Desktop on Windows 10 and 11

Search the internet for “install Docker Desktop on Windows”. This will take you to the relevant download page, where you can download the installer and follow the instructions. When prompted, you should install and enable the WSL 2 backend (Windows Subsystem for Linux).

Once the installation is complete, you need to manually start Docker Desktop from the Windows Start menu. It may take a minute to start, but you can watch the start progress via the animated whale icon on the Windows taskbar at the bottom of the screen.

Once it’s running, you can open a terminal and type some simple docker commands.

$ docker version
<Snip>
Server: Docker Desktop 4.37.0 (177092)
 Engine:
  Version: 27.4.0
  API version: 1.47 (minimum version 1.24)
  Go version: go1.22.9
  OS/Arch: linux/amd64
<Snip>

Congratulations. You now have a working installation of Docker on your Windows machine.

Notice how the Server output shows OS/Arch: linux/amd64. This is because a default installation assumes you’ll be working with Linux containers.

Some versions of Windows let you switch to Windows containers by right-clicking the Docker whale icon in the Windows notifications tray and selecting Switch to Windows containers…. Doing this keeps existing Linux containers running in the background, but you won’t be able to see or manage them until you switch back to Linux containers mode.

Make sure you’re running in Linux containers mode so you can follow along with the examples later in the book.

Installing Docker Desktop on Mac

Docker Desktop for Mac is like Docker Desktop for Windows — a packaged product with a slick UI that gets you the full Docker experience on your laptop.

Before proceeding with the installation, you need to know that Docker Desktop on Mac installs the daemon and server-side components inside a lightweight Linux VM that seamlessly exposes the API to your local Mac environment. This means you can open a terminal on your Mac and run docker commands without ever knowing it’s all running in a hidden VM. This is also why Mac versions of Docker Desktop only work with Linux containers — everything’s running inside a Linux VM.

Figure 3.1 shows the high-level architecture for Docker Desktop on Mac.

Figure 3.1

The simplest way to install Docker Desktop on your Mac is to search the web for “install Docker Desktop on MacOS”, follow the links to the download, and then complete the simple installer.

When the installer finishes, you’ll have to start Docker Desktop from the MacOS Launchpad. It may take a minute to start, but you can watch the animated Docker whale icon in the status bar at the top of your screen. Once it’s started, you can click the whale icon to manage Docker Desktop.

Open a terminal window and run some regular Docker commands. Try the following.

$ docker version
Client:
 Version: 27.4.0-rc.3
 API version: 1.47
 OS/Arch: darwin/arm64
 <Snip>

Server: Docker Desktop 4.37.0 (177092)
 Engine:
  Version: 27.4.0-rc.3
  API version: 1.47 (minimum version 1.24)
  OS/Arch: linux/arm64
 containerd:
  Version: 1.7.21
 runc:
  Version: 1.1.13
 docker-init:
  Version: 0.19.0
 <Snip>

Notice that the OS/Arch: for the Server component shows as linux/amd64 or linux/arm64. This is because the daemon runs inside the Linux VM mentioned earlier. The Client component is a native Mac application and runs directly on the Mac OS Darwin kernel. This is why it shows as darwin/amd64 or darwin/arm64.

You can now use Docker on your Mac.

Installing Docker with Multipass

Only consider this section if you can’t use Docker Desktop.

Multipass installations don’t ship with out-of-the-box support for features such as docker scout, docker debug, and docker init.

Multipass is a free tool for creating cloud-style Linux VMs on your Linux, Mac, or Windows machine and is incredibly easy to install and use. It’s an easy way to create multi-node production-like Docker clusters.

Go to https://multipass.run/install and install the right edition for your hardware and OS.

Once installed, you only need three commands:

$ multipass launch
$ multipass ls
$ multipass shell

Run the following command to create a new VM called node1 based on the docker image. The docker image has Docker pre-installed and ready to go.

$ multipass launch docker --name node1

It’ll take a minute or two to download the image and launch the VM.

List VMs to make sure yours launched properly.

$ multipass ls
Name     State     IPv4             Image
node1    Running   192.168.64.37    Ubuntu 24.04 LTS
                   172.17.0.1
                   172.18.0.1

You’ll use the 192.168.x.x IP address when working with the examples later in the book.
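If you want to grab that address programmatically rather than reading it off the table, a little awk does the job. The sample text below is a hypothetical capture of the multipass ls output so the filter runs even without Multipass installed; in practice you would pipe the real command instead.

```shell
# Hypothetical capture of `multipass ls` output (first line of the VM entry);
# with Multipass installed you would pipe `multipass ls` instead.
sample='Name     State     IPv4             Image
node1    Running   192.168.64.37    Ubuntu 24.04 LTS'

# Print the first IPv4 address listed for node1
echo "$sample" | awk '$1 == "node1" { print $3 }'
# prints: 192.168.64.37
```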

Connect to the VM with the following command.

$ multipass shell node1

Once connected, you can run the following commands to check your Docker version and list installed CLI plugins.

$ docker --version
Docker version 26.1.0, build 9714adc

$ docker info
Client: Docker Engine - Community
 Version: 27.3.1
 Context: default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version: v0.17.1
    Path: /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version: v2.29.7
    Path: /usr/libexec/docker/cli-plugins/docker-compose
<Snip>

You can type exit to log out of the VM, and multipass shell node1 to log back in. You can also type multipass delete node1 and then multipass purge to delete it.

Installing Docker on Linux

Only consider this section if you can’t use Docker Desktop. Installing Docker directly on Linux doesn’t give you access to docker scout, docker debug, or docker init.

These instructions show you how to install Docker on Ubuntu Linux 24.04 and are just for guidance purposes. Lots of other installation methods exist, and you should search the web for the latest instructions.

$ sudo snap install docker
<Snip>
docker 27.2.0 from Canonical✓ installed

Run some commands to test the installation. You’ll have to prefix them with sudo.

$ sudo docker --version
Docker version 27.2.0, build 3ab4256

$ sudo docker info
<Snip>
Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.2.0
<Snip>

If you don’t like adding sudo before Docker commands, you can run the following commands to create a docker group and add your user account to it.

$ sudo groupadd docker
$ sudo usermod -aG docker $(whoami)

You’ll need to restart Docker for the changes to take effect. This is how you restart Docker on many Ubuntu Linux distributions. Yours may be different.

$ sudo service docker restart
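Note that group membership is only evaluated at login, so you’ll also need to log out and back in (or run newgrp docker) before the change applies to your current shell. The check below is a sketch for confirming it worked; the echoed messages are illustrative, not Docker output.

```shell
# List the current shell's groups (one per line) and look for 'docker'
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active - sudo no longer needed"
else
  echo "docker group not active yet - log out and back in, or run: newgrp docker"
fi
```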

Chapter Summary

You can run Docker almost anywhere, and installing it is easier than ever.

Docker Desktop gives you a fully functional Docker environment on your Linux, Mac, or Windows machine and is the best way to get a Docker development environment on your local machine. It’s easy to install, includes the Docker Engine, has a slick UI, and has a marketplace with lots of extensions to extend its capabilities. It works with docker scout, docker debug, and docker init, and it even lets you spin up a Kubernetes cluster.

Multipass is a great way to spin up a local VM running Docker, and there are lots of ways to install Docker on Linux servers. These give you access to most of the free Docker features but lack some of the features of Docker Desktop.

4: The big picture

This chapter will give you some hands-on experience and a high-level view of images and containers. The goal is to prepare you for more detail in the upcoming chapters.

We’ll break this chapter into two parts:

The Ops perspective
The Dev perspective

The ops perspective focuses on starting, stopping, and deleting containers, as well as executing commands inside them.

The dev perspective focuses more on the application side of things and runs through taking application source code, building it into a container image, and running it as a container.

I recommend you read both sections and follow the examples, as this will give you the dev and ops perspectives. DevOps anyone?

The Ops Perspective

In this section, you’ll complete all of the following:

Check Docker is working
Download an image
Start a container from the image
Execute a command inside the container
Delete the container

A typical Docker installation installs the client and the engine on the same machine and configures them to talk to each other.

Run a docker version command to ensure both are installed and running.

$ docker version
Client:                                      <<---- Start of client response
 Version: 27.4.0-rc.3                   -----┐
 API version: 1.47                           |
 Go version: go1.22.9                        |  Client info block
 OS/Arch: darwin/arm64                       |
 Context: desktop-linux                 -----┘

Server: Docker Desktop 4.37.0 (177092)       <<---- Start of server response
 Engine:                                -----┐
  Version: 27.4.0-rc.3                       |
  API version: 1.47 (minimum version 1.24)   |
  Go version: go1.22.9                       |
  OS/Arch: linux/arm64                       |
 containerd:                                 |  Server block
  Version: 1.7.21                            |
 runc:                                       |
  Version: 1.1.13                            |
 docker-init:                                |
  Version: 0.19.0                       -----┘

If your response from the client and server looks like the output in the book, everything is working as expected.
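If you’d rather pull out just the OS/Arch fields than eyeball the full listing, you can filter the output with awk. The sample below is a hypothetical, trimmed-down capture of docker version so the filter runs even without a daemon; in practice you’d pipe the real command instead.

```shell
# Hypothetical trimmed capture of `docker version` output; with Docker
# running you would pipe the real `docker version` instead.
sample='Client:
 Version: 27.4.0-rc.3
 OS/Arch: darwin/arm64
Server:
 Engine:
  Version: 27.4.0-rc.3
  OS/Arch: linux/arm64'

# Print just the OS/Arch values: client first, then server
echo "$sample" | awk -F': *' '/OS\/Arch/ { print $2 }'
# prints: darwin/arm64
#         linux/arm64
```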

If you’re on Linux and get a permission denied while trying to connect to the Docker daemon... error, try again with sudo in front of the command — sudo docker version. If it works with sudo, you’ll need to prefix all future docker commands with sudo.

Download an image

Images are objects that contain everything an app needs to run. This includes an OS filesystem, the application, and all dependencies. If you work in operations, they’re similar to VM templates. If you’re a developer, they’re similar to classes.

Run a docker images command.

$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

If you are working from a clean installation, you’ll have no images, and your output will be the same as the book. If you’re working with Multipass, you might see an image called portainer/portainer-ce.

Copying new images onto your Docker host is called pulling. Pull the nginx:latest image.

$ docker pull nginx:latest
latest: Pulling from library/nginx
ad5932596f78: Download complete
e4bc5c1a6721: Download complete
1bd52ec2c0cb: Download complete
411a98463f95: Download complete
df25b2e5edb3: Download complete
e93f7200eab8: Download complete
Digest: sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

Run another docker images command to confirm the pull worked.

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   fb197595ebe7   10 days ago   280MB

We’ll discuss where the image is stored and what’s inside it in later chapters. For now, all you need to know is that images contain enough of an operating system (OS) and all the code and dependencies required to run a desired application. The NGINX image you pulled includes a stripped-down version of Linux and the NGINX web server app.

Start a container from the image

If you’ve been following along, you’ll have a copy of the nginx:latest image and you can use the docker run command to start a container from it.

Run the following docker run command to start a new container called test from the nginx:latest image.

$ docker run --name test -d -p 8080:80 nginx:latest
e08c3535...30557225

The long number confirms the container was created.

Let’s quickly examine that docker run command.

docker run tells Docker to start a new container. The --name flag told Docker to call this container test and the -d flag told it to start the container in the background (detached mode) so it doesn’t take over your terminal. The -p flag told Docker to map port 80 in the container to port 8080 on your Docker host. Finally, the command told Docker to base the container on the nginx:latest image.
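The HOST:CONTAINER ordering of the -p flag is easy to get backwards. This tiny helper (a hypothetical sketch, not a Docker command) just splits the flag’s value to spell out the direction:

```shell
# Split a -p value of the form HOST:CONTAINER and describe the mapping.
# explain_port_flag is an illustrative helper, not part of the Docker CLI.
explain_port_flag() {
  host_port="${1%%:*}"        # everything before the first colon
  container_port="${1##*:}"   # everything after the last colon
  echo "traffic to host port $host_port is forwarded to container port $container_port"
}

explain_port_flag 8080:80
# prints: traffic to host port 8080 is forwarded to container port 80
```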

Run a docker ps command to see the running container.

$ docker ps
CONTAINER ID   IMAGE          COMMAND        CREATED      STATUS      PORTS                  NAMES
e08c35352ff3   nginx:latest   "/docker..."   3 mins ago   Up 2 mins   0.0.0.0:8080->80/tcp   test
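The CONTAINER ID column is simply the first 12 characters of the 64-character ID that docker run printed. A quick sketch, using a made-up full ID for illustration:

```shell
# Hypothetical 64-character container ID (only the prefix matches the book's output)
full_id="e08c35352ff30123456789abcdef0123456789abcdef0123456789abcdef0123"

# docker ps displays just the first 12 characters
printf '%s\n' "$full_id" | cut -c1-12
# prints: e08c35352ff3
```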

You should recognize the CONTAINER ID from the long number printed after the docker run command. You should also recognize the IMAGE,