Docker Deep Dive - Nigel Poulton - E-Book


  • Publisher: WS
  • Category: Technical literature
  • Language: English
  • Publication year: 2023
Description

The demand for Docker skills and professionals who can develop and manage cloud-native microservices apps is skyrocketing. This book will get you ahead of the curve, providing everything you need, from containerizing apps to running them in the cloud.


You'll learn:


- How to build and run apps as containers
- How to deploy and manage multi-container apps with Compose and Swarm
- How to build secure, efficient production-grade containers for multiple architectures
- How to work with containers and WebAssembly (Wasm)
- All the latest Docker technologies, including Docker Desktop, Docker Debug, Docker Init, Docker Scout, and more


If you're looking for a comprehensive book to help you master Docker, you've found it! You'll learn all the theory and practical skills you need to succeed with containers in the real world. Whether you're a seasoned developer or just getting started, Docker Deep Dive is the number-one resource for taking your Docker skills to the next level.

You can read this e-book in the Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 313




Docker Deep Dive 

May 2024

Zero to Docker in a single book!

Nigel Poulton @nigelpoulton

About this edition

This edition was published in May 2024.

In writing this edition, I've gone over every word in every chapter to ensure everything is up-to-date with the latest releases of Docker and the latest trends in the industry. I've also removed repetition and made every chapter more concise.

Major changes include:

- Lots more content on BuildKit, buildx, and the new Docker Build Cloud

- Brand new sections on Docker Scout

- Brand new content using the docker init command

- Brand new content on Docker Debug and using the docker debug command

- New chapter on using WebAssembly (Wasm) with Docker

- Updated all images to higher quality

- Added a terminology section

Enjoy the book and get ready to master containers!

(c) 2024 Nigel Poulton

Thank you to everyone who reads my books and watches my training videos. I'm incredibly passionate about all of my content, and it means a lot to me when you connect and give me feedback. I also love it when we bump into each other at events, airports, and all the other random places we meet. Please come and say hello—don't be shy!

A special shout out to Niki Kovacs for his extensive feedback on the 2023 edition. I'm grateful to every reader who gives feedback, but Niki paid great attention to the book and was very generous in his feedback and patience with my slow responses. You can find Niki at:

https://www.microlinux.fr

https://blog.microlinux.fr

https://twitter.com/microlinux_eu

About the author

Nigel Poulton (@nigelpoulton)

Nigel is a technology geek who spends his life diving into cool technologies and creating books and videos that make them easier to learn. He's the author of best-selling books on Docker and Kubernetes, as well as some of the most popular online training videos on the same topics.

Nigel is a Docker Captain, and his latest hyperfixation is WebAssembly on the server (Wasm). Previously, Nigel has held senior technology roles at large and small enterprises.

In his free time, he listens to audio books and watches science fiction. He wishes he lived in the future and could explore space-time, the universe, and other mind-bending phenomena. He's passionate about learning, cars, and football (soccer). He lives in England with his fabulous wife and three children.

• X: @nigelpoulton

• LinkedIn: Nigel Poulton

• Mastodon: @[email protected]

• Web: nigelpoulton.com

• Email: [email protected]

Table of Contents

 

0: About the book

Part 1: The big picture stuff

1: Containers from 30,000 feet

The bad old days

Hello VMware!

VMwarts

Hello Containers!

Linux containers

Hello Docker!

Docker and Windows

What about WebAssembly

What about Kubernetes

2: Docker and container-related standards and projects

Docker

Container-related standards and projects

3: Getting Docker

Docker Desktop

Installing Docker with Multipass

Installing Docker on Linux

4: The big picture

The Ops Perspective

The Dev Perspective

Part 2: The technical stuff

5: The Docker Engine

Docker Engine – The TLDR

The Docker Engine

The influence of the Open Container Initiative (OCI)

runc

containerd

Starting a new container (example)

What’s the shim all about?

How it’s implemented on Linux

6: Working with Images

Docker images – The TLDR

Intro to images

Pulling images

Image registries

Image naming and tagging

Images and layers

Pulling images by digest

Multi-architecture images

Vulnerability scanning with Docker Scout

Deleting Images

Images – The commands

7: Working with containers

Containers – The TLDR

Containers vs VMs

Images and Containers

Check Docker is running

Starting a container

How containers start apps

Connecting to a container

Inspecting container processes

The docker inspect command

Writing data to a container

Stopping, restarting, and deleting a container

Killing a container’s main process

Debugging slim images and containers with Docker Debug

Self-healing containers with restart policies

Containers – The commands

8: Containerizing an app

Containerizing an app – The TLDR

Containerize a single-container app

Moving to production with multi-stage builds

Buildx, BuildKit, drivers, and Build Cloud

Multi-architecture builds

A few good practices

Containerizing an app – The commands

9: Multi-container apps with Compose

Docker Compose – The TLDR

Compose background

Installing Compose

The sample app

Compose files

Deploying apps with Compose – The commands

10: Docker Swarm

Docker Swarm – The TLDR

Swarm primer

Build a secure swarm cluster

Docker Swarm – The Commands

11: Deploying apps with Docker Stacks

Deploying apps with Docker Stacks – The TLDR

Build a Swarm lab

The sample app

Deploy the app

Managing the app

Deploying apps with Docker Stacks – The Commands

12: Docker and WebAssembly

Pre-reqs

Intro to Wasm and Wasm containers

Write a Wasm app

Containerize a Wasm app

Run a Wasm container

Clean up

Chapter summary

13: Docker Networking

Docker Networking – The TLDR

Docker networking theory

Single-host bridge networks

External access via port mappings

Docker Networking – The Commands

14: Docker overlay networking

Docker overlay networking – The TLDR

Docker overlay networking history

Building and testing Docker overlay networks

Overlay networks explained

Docker overlay networking – The commands

15: Volumes and persistent data

Volumes and persistent data – The TLDR

Containers without volumes

Containers with volumes

Volumes and persistent data – The Commands

16: Docker security

Docker security – The TLDR

Kernel Namespaces

Control Groups

Capabilities

Mandatory Access Control systems

seccomp

Docker security technologies

Swarm security

Docker Scout and vulnerability scanning

Signing and verifying images with Docker Content Trust

Docker Secrets

What next

Terminology

Landmarks

Begin Reading

0: About the book

This is a book about Docker and containers; no prior knowledge required! In fact, the book’s motto is Zero to Docker in a single book.

So, if you want to work with cloud and cloud-native technologies, this book is dedicated to you.

Why should I read this book or care about Docker?

Docker is here, and it’s changed the world. If you want the best jobs working with the best technologies, you need to know Docker and containers. They’re even central to Kubernetes, and a strong Docker skill set will help you learn Kubernetes. Docker and containers are also well-positioned for emerging cloud technologies such as WebAssembly and AI workloads.

What if I’m not a developer

Most applications, even modern cloud-native microservices, need high-performance production-grade infrastructure. If you think traditional developers will take care of this, think again. To cut a long story short, if you want to thrive in the modern cloud-first world, you must know Docker. But don’t stress, this book will give you all the skills you need.

How I’ve organized the book

I’ve divided the book into two main sections:

- The big picture stuff
- The technical stuff

The big picture stuff gets you up to speed with things like what Docker is, why we have containers, and the fundamental jargon such as cloud-native, microservices, and orchestration.

The technical stuff section covers everything you need to know about images, containers, multi-container microservices apps, and the increasingly important topic of orchestration. It even covers WebAssembly, vulnerability scanning with Docker Scout, debugging containers, high availability, and more.

Chapter breakdown

- Chapter 1: Summarises the history and potential future of Docker and containers
- Chapter 2: Explains the most important container-related standards and projects
- Chapter 3: Shows you a few ways to get Docker
- Chapter 4: Walks you through a very simple hands-on container workflow
- Chapter 5: Explains the architecture of the Docker Engine
- Chapter 6: Dives deep into images and image management
- Chapter 7: Dives deep into containers and container management
- Chapter 8: Walks you through the process of containerizing an app
- Chapter 9: Shows you how to build, deploy, and manage multi-container apps with Compose
- Chapter 10: Walks you through building a secure swarm
- Chapter 11: Deploys and manages a multi-container app on a secure swarm
- Chapter 12: Walks you through building and containerizing a WebAssembly app
- Chapter 13: Dives into Docker networking
- Chapter 14: Builds and tests Docker overlay networks
- Chapter 15: Introduces you to persistent and non-persistent data in Docker
- Chapter 16: Covers all the major Linux and Docker security technologies

Editions and updates

Docker and the cloud-native ecosystem are evolving fast, and a 2-3-year-old book on Docker isn’t valuable. As a result, I’m committed to updating the book every year.

If that sounds excessive, welcome to the new normal.

The book is available in hardback, paperback, and e-book on all good book publishing platforms.

When you purchase the Kindle edition, you’re entitled to all future updates. However, Kindle doesn’t always download the latest edition.

A potential solution is to go to http://amzn.to/2l53jdg and choose Quick Solutions. Then select Digital Purchases, search for your Docker Deep Dive Kindle edition purchase, and select Content and Devices. Your purchase should appear in the list with a button that says Update Available. Click that button. Delete your old version on your Kindle and download the new one.

If this doesn’t work, your only option is to contact Kindle Support.

Feedback

If you like the book and it helps your career, share the love by recommending it to a friend and leaving a review on Amazon or Goodreads.

If you spot a typo or want to make a recommendation, email me at [email protected]

That’s everything. Let’s get rocking with Docker!

Part 1: The big picture stuff

1: Containers from 30,000 feet

Containers have taken over the world!

In this chapter, you’ll learn why we have containers, what they do for us, and where we can use them.

The bad old days

Applications are the powerhouse of every modern business. When applications break, businesses break.

Most applications run on servers, and in the past, we were limited to running one application per server. As a result, the story went something like this:

Every time a business needed a new application, it had to buy a new server. Unfortunately, we weren’t very good at modeling the performance requirements of new applications, so IT departments had to guess. This often resulted in businesses buying very expensive servers with a lot more performance capability than the apps needed. After all, nobody wanted underpowered servers incapable of handling the app, resulting in unhappy customers and lost revenue. As a result, companies ended up with racks and racks of overpowered servers operating at as little as 5-10% of their potential capacity. This was a tragic waste of company capital and environmental resources!

Hello VMware!

Amid all this, VMware, Inc. gave the world a gift — the virtual machine (VM).

As soon as VMware came along, the world became much better. We finally had a technology that allowed us to safely run multiple business applications on a single server.

It was a game-changer. Businesses could run new apps on the spare capacity of existing servers, spawning a golden age of maximizing the value of existing assets.

VMwarts

But, and there’s always a but! As great as VMs are, they’re far from perfect.

A feature of the VM model is every VM needing its own dedicated operating system (OS). Unfortunately, this has several drawbacks, including:

- Every OS consumes CPU, RAM, and other resources we’d rather use on applications
- Every VM and OS needs patching
- Every VM and OS needs monitoring

VMs are also slow to boot and not very portable.

Hello Containers!

While most of us were reaping the benefits of VMs, web scalers like Google had already moved on from VMs and were using containers.

A feature of the container model is that every container shares the OS of the host it’s running on. This means a single host can run more containers than VMs. For example, a host that can run 10 VMs might be able to run 50 containers, making containers far more efficient than VMs.

Containers are also faster and more portable than VMs.

Linux containers

Modern containers started in the Linux world and are the product of incredible work from many people over many years. For example, Google contributed many container-related technologies to the Linux kernel. It’s thanks to many contributions like these that we have containers today.

Some of the major technologies behind modern containers include kernel namespaces, control groups (cgroups), and capabilities.

However, despite all this great work, containers were incredibly complicated, and it wasn’t until Docker came along that they became accessible to the masses.

Note: I know that many container-like technologies pre-date Docker and modern containers. However, none of them changed the world the way Docker has.

Hello Docker!

Docker was the magic that made Linux containers easy and brought them to the masses. We’ll talk a lot more about Docker in the next chapter.

Docker and Windows

Microsoft worked hard to bring Docker and container technologies to the Windows platform.

At the time of writing, Windows desktop and server platforms support both of the following:

- Windows containers
- Linux containers

Windows containers run Windows apps and require a host system with a Windows kernel. Windows 10, Windows 11, and all modern versions of Windows Server natively support Windows containers.

Windows systems can also run Linux containers via the WSL 2 (Windows Subsystem for Linux) subsystem.

This means Windows 10 and Windows 11 are great platforms for developing and testing Windows and Linux containers.

However, despite all the work developing Windows containers, almost all containers are Linux containers. This is because Linux containers are smaller and faster, and more tooling exists for Linux.

All of the examples in this edition of the book are Linux containers.

Windows containers vs Linux containers

It’s vital to understand that containers share the kernel of the host they’re running on. This means containerized Windows apps need a host with a Windows kernel, whereas containerized Linux apps need a host with a Linux kernel. However, as mentioned, you can run Linux containers on Windows systems that have the WSL 2 backend installed.
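You can see this kernel sharing for yourself. On a Linux host (or inside Docker Desktop's Linux VM), the kernel version reported inside a container matches the host's kernel, because the container never boots a kernel of its own. A quick sketch, assuming Docker is installed:

```shell
# Kernel version as seen by the Linux host
uname -r

# Kernel version as seen from inside an Alpine Linux container.
# Both commands report the same kernel release, because the
# container shares the host's kernel rather than booting its own.
docker run --rm alpine uname -r
```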

What about Mac containers?

There is no such thing as Mac containers. However, Macs are great platforms for working with containers, and I do all of my daily work with containers on a Mac.

The most popular way of working with containers on a Mac is Docker Desktop. It works by running Docker inside a lightweight Linux VM on your Mac. Other tools, such as Podman and Rancher Desktop, are also great for working with containers on a Mac.

What about WebAssembly

WebAssembly (Wasm) is a modern binary instruction format for building applications that are smaller, faster, more secure, and more portable than containers. You write your app in your favorite language and compile it as a Wasm binary, and it’ll run anywhere you have a Wasm runtime.

However, Wasm apps have many limitations, and we’re still developing many of the standards. As a result, containers remain the dominant model for cloud-native applications.

The container ecosystem is also much richer and more mature than the Wasm ecosystem.

As you’ll see in the Wasm chapter, Docker and the container ecosystem are adapting to work with Wasm apps, and you should expect a future where VMs, containers, and Wasm apps run side-by-side in most clouds and applications.

This book is up-to-date with the latest Wasm and container developments.

What about Kubernetes

Kubernetes is the industry standard platform for deploying and managing containerized apps.

Terminology: A containerized app is an application running as a container. We’ll cover this in a lot of detail later.

Older versions of Kubernetes used Docker to start and stop containers. However, newer versions use containerd, which is a stripped-down version of Docker optimized for use by Kubernetes and other platforms.

The important thing to know is that all Docker containers work on Kubernetes.

Check out these resources if you need to learn Kubernetes:

- Quick Start Kubernetes: This is ~100 pages and will get you up-to-speed with Kubernetes in one day!
- The Kubernetes Book: This is the ultimate book for mastering Kubernetes.

I update both books annually to ensure they’re up-to-date with the latest and greatest developments in the cloud native ecosystem, including WebAssembly.

Chapter Summary

We used to live in a world where every time the business needed a new application, we had to buy a brand-new server. VMware came along and allowed us to drive more value out of new and existing servers. However, following the success of VMware and hypervisors came a newer, more efficient, and portable virtualization technology called containers. However, containers were complex and hard to implement until Docker came along and made them easy. WebAssembly is powering a third wave of cloud computing, but Docker and the container ecosystem are evolving to work with WebAssembly, and the book has an entire chapter dedicated to Docker and WebAssembly.

2: Docker and container-related standards and projects

This chapter introduces you to Docker and some of the most important standards and projects shaping the container ecosystem. The goal is to lay some foundations that we’ll build on in later chapters.

This chapter has two main parts:

- Docker
- Container-related standards and projects

Docker

Docker is at the heart of the container ecosystem. However, the term Docker can mean two things:

- The Docker platform
- Docker, Inc.

The Docker platform is a neatly packaged collection of technologies for creating, managing, and orchestrating containers. Docker, Inc. is the company that created the Docker platform and continues to be the driving force behind developing new features.

Let’s dive a bit deeper.

Docker, Inc.

Docker, Inc. is a technology company based out of Palo Alto and founded by French-born American developer and entrepreneur Solomon Hykes. Solomon is no longer at the company.

The company started as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, dotCloud delivered their services on top of containers and had an in-house tool to help them deploy and manage those containers. They called this in-house tool Docker.

The word Docker is a British expression meaning dock worker, a person who loads and unloads cargo from ships.

In 2013, dotCloud dropped the struggling PaaS side of the business, rebranded as Docker, Inc., and focussed on bringing Docker and containers to the world.

The Docker technology

The Docker platform is designed to make it as easy as possible to build, ship, and run containers.

At a high level, there are two major parts to the Docker platform:

- The CLI (client)
- The engine (server)

The CLI is the familiar docker command-line tool for deploying and managing containers. It converts simple commands into API requests and sends them to the engine.

The engine comprises all the server-side components that run and manage containers.

Figure 2.1 shows the high-level architecture. The client and engine can be on the same host or connected over the network.

Figure 2.1 Docker client and engine.

In later chapters, you’ll see that the client and engine are complex and comprise a lot of small specialized parts. Figure 2.2 gives you an idea of some of the complexity behind the engine. However, the client hides all this complexity so you don’t have to care. For example, you type friendly docker commands into the CLI, the CLI converts them to API requests and sends them to the daemon, and the daemon takes care of everything else.
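You can see this API layer for yourself. The engine listens on a local Unix socket (/var/run/docker.sock on Linux and Docker Desktop), and a plain HTTP client can talk to it directly. A quick sketch, assuming curl is installed and the engine is on the default socket path:

```shell
# Ask the engine for its version over the raw HTTP API,
# bypassing the docker CLI entirely.
curl --silent --unix-socket /var/run/docker.sock http://localhost/version

# The docker CLI issues an equivalent API request behind the scenes.
docker version
```

This is all the CLI ever does: translate friendly commands into requests like the one above and print the responses.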

Figure 2.2 Docker CLI and daemon hiding complexity.

Let’s switch focus and briefly look at some standards and governance bodies.

Container-related standards and projects

There are several important standards and governance bodies influencing the development of containers and the container ecosystem. Some of these include:

- The OCI
- The CNCF
- The Moby Project

The Open Container Initiative (OCI)

The Open Container Initiative (OCI) is a governance council responsible for low-level container-related standards.

It operates under the umbrella of the Linux Foundation and was founded in the early days of the container ecosystem when some of the people at a company called CoreOS didn’t like the way Docker was dominating the ecosystem. In response, CoreOS created an open standard called appc that defined specifications for things such as image format and container runtime. They also created a reference implementation called rkt (pronounced “rocket”).

The appc standard did things differently from Docker and put the ecosystem in an awkward position with two competing standards.

While competition is usually a good thing, competing standards are generally bad, as they generate confusion that slows down user adoption. Fortunately, the main players in the ecosystem came together and formed the OCI as a vendor-neutral lightweight council to govern container standards. This allowed us to archive the appc project and place all low-level container-related specifications under the OCI’s governance.

At the time of writing, the OCI maintains three standards called specs:

- The image-spec
- The runtime-spec
- The distribution-spec

We often use a rail tracks analogy when explaining the OCI standards:

When the size and properties of rail tracks were standardized, it gave entrepreneurs in the rail industry confidence the trains, carriages, signaling systems, platforms, and other rail infrastructure they built would work with the standardized tracks — nobody wanted competing standards for track sizes.

The OCI specifications did the same thing for the container ecosystem and it’s flourished ever since. Docker has also changed a lot since the formation of the OCI, and all modern versions of Docker implement all three OCI specs. For example:

- The Docker builder (BuildKit) creates OCI-compliant images
- Docker uses an OCI-compliant runtime to create OCI-compliant containers
- Docker Hub implements the OCI distribution-spec and is an OCI-compliant registry
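You can observe this compliance from the CLI. For example, inspecting an image's manifest shows its media type, which will be an OCI type such as application/vnd.oci.image.index.v1+json or the older Docker equivalent, depending on how the image was built and pushed. A sketch, assuming network access to Docker Hub:

```shell
# Print the manifest list for the official alpine image, including
# its media type (an OCI image index or a Docker manifest list)
# and the per-platform manifests it references.
docker buildx imagetools inspect alpine:latest
```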

Docker, Inc. and many other companies have people on the technical oversight board (TOB) of the OCI.

The Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is another Linux Foundation project that is influential in the container ecosystem. It was founded in 2015 with the goal of “…advancing container technologies… and making cloud native computing ubiquitous”.

Instead of creating and maintaining container-related specifications, the CNCF hosts important projects such as Kubernetes, containerd, Notary, Prometheus, Cilium, and lots more.

When we say the CNCF hosts these projects, we mean it provides a space, structure, and support for projects to grow and mature. For example, all CNCF projects pass through the following three phases or stages:

- Sandbox
- Incubating
- Graduated

Each phase increases a project’s maturity level by requiring higher standards of governance, documentation, auditing, contribution tracking, marketing, community engagement, and more. For example, new projects accepted as sandbox projects may have great ideas and great technology but need help and resources to create strong governance, etc. The CNCF helps with all of that.

Graduated projects are considered ready for production and are guaranteed to have strong governance and implement good practices.

If you look back to Figure 2.2, you’ll see that Docker uses at least two CNCF technologies — containerd and Notary.

The Moby Project

Docker created the Moby project as a community-led place for developers to build specialized tools for building container platforms.

Platform builders can pick the specific Moby tools they need to build their container platform. They can even compose their platforms from a mix of Moby tools, in-house tools, and tools from other projects.

Docker, Inc. originally created the Moby project, but it now has members including Microsoft, Mirantis, and Nvidia.

The Docker platform is built using tools from various projects, including the Moby project, the CNCF, and the OCI.

Chapter summary

This chapter introduced you to Docker and some of the major influences in the container ecosystem.

Docker, Inc., is a technology company based in Palo Alto that is changing how we do software. They were the first movers and instigators of the modern container revolution.

The Docker platform focuses on running and managing application containers. It runs on Linux and Windows, can be installed almost anywhere, and offers a variety of free and paid-for products.

The Open Container Initiative (OCI) governs low-level container standards and maintains specifications for runtimes, image format, and registries.

The CNCF hosts important cloud-native projects and helps them mature into production-grade tools.

The Moby project hosts low-level tools developers can use to build container platforms.

3: Getting Docker

There are lots of ways to get Docker and work with containers. This chapter will show you the following ways:

- Docker Desktop
- Multipass
- Server installs on Linux

Docker Desktop is the best way to work with Docker and gets you the complete Docker experience on your laptop with all the latest tools, plugins, and extensions. I use it every day, and I recommend it to everyone.

We’ll also show you how to install Docker on your laptop with Multipass and a super-simple Linux installation. However, you should only consider these if you can’t use Docker Desktop, as they offer fewer features.

Docker Desktop

Docker Desktop is a desktop app from Docker, Inc. and is the best way to work with containers. You get the Docker Engine, a slick UI, all the latest plugins and features, and an extension system with a marketplace. You even get Docker Compose and a single-node Kubernetes cluster if you want to learn Kubernetes.

It’s free for personal use and education, but you’ll have to pay a license fee if you use it for work and your company has over 250 employees or does more than $10M in annual revenue.

Docker Desktop on the Professional and Enterprise editions of Windows 10 and Windows 11 supports both Windows containers and Linux containers. Docker Desktop on Mac, Linux, and the Home editions of Windows only supports Linux containers. All of the examples in the book, and almost all containers in the real world, are Linux containers.

Let’s install Docker Desktop on Windows and MacOS.

Windows prereqs

Docker Desktop on Windows requires all of the following:

- A 64-bit version of Windows 10/11
- Hardware virtualization support enabled in your system’s BIOS
- WSL 2

Be very careful changing anything in your system’s BIOS.

Installing Docker Desktop on Windows 10 and 11

Search the internet for “install Docker Desktop on Windows”. This will take you to the relevant download page, where you can download the installer and follow the instructions. When prompted, you should install and enable the WSL 2 backend (Windows Subsystem for Linux).

Once the installation is complete, you need to manually start Docker Desktop from the Windows Start menu. It may take a minute to start, but you can watch the start progress via the animated whale icon on the Windows taskbar at the bottom of the screen.

Once it’s running, you can open a terminal and type some simple docker commands.

$ docker version
<Snip>
Server: Docker Desktop 4.30.0 (149282)
 Engine:
  Version:          26.1.1
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.9
  Built:            Tue Apr 30 11:48:28 2024
  OS/Arch:          linux/amd64
<Snip>

Notice how the Server output shows OS/Arch: linux/amd64. This is because a default installation assumes you’ll be working with Linux containers.

Some versions of Windows let you switch to Windows containers by right-clicking the Docker whale icon in the Windows notifications tray and selecting Switch to Windows containers…. Doing this keeps existing Linux containers running in the background, but you won’t be able to see or manage them until you switch back to Linux containers mode.

Congratulations. You now have a working installation of Docker on your Windows machine.

Make sure you’re running in Linux containers mode so you can follow along with the examples later in the book.

Installing Docker Desktop on Mac

Docker Desktop for Mac is like Docker Desktop for Windows — a packaged product with a slick UI that gets you the full Docker experience on your laptop. You can also enable the built-in single-node Kubernetes cluster.

Before proceeding with the installation, you need to know that Docker Desktop on Mac installs the daemon and server-side components inside a lightweight Linux VM that seamlessly exposes the API to your local Mac environment. This means you can open a terminal on your Mac and run docker commands without ever knowing it’s all running in a hidden VM. This is also why Mac versions of Docker Desktop only work with Linux containers — everything’s running inside a Linux VM.

Figure 3.1 shows the high-level architecture for Docker Desktop on Mac.

Figure 3.1

The simplest way to install Docker Desktop on your Mac is to search the web for “install Docker Desktop on MacOS”, follow the links to the download, and then complete the simple installer.

When the installer finishes, you’ll have to start Docker Desktop from the MacOS Launchpad manually. It may take a minute to start, but you can watch the animated Docker whale icon in the status bar at the top of your screen. Once it’s started, you can click the whale icon to manage Docker Desktop.

Open a terminal window and run some regular Docker commands. Try the following.

$ docker version
Client:
 Cloud integration: v1.0.35+desktop.13
 Version:           26.1.1
 API version:       1.45
 OS/Arch:           darwin/arm64
<Snip>
Server: Docker Desktop 4.30.0 (149282)
 Engine:
  Version:          26.1.1
  API version:      1.45 (minimum version 1.24)
  OS/Arch:          linux/arm64
 containerd:
  Version:          1.6.31
 runc:
  Version:          1.1.12
 docker-init:
  Version:          0.19.0
<Snip>

Notice that the OS/Arch: for the Server component shows as linux/amd64 or linux/arm64. This is because the daemon runs inside the Linux VM mentioned earlier. The Client component is a native Mac application that runs directly on macOS (the Darwin kernel), which is why it shows as darwin/amd64 or darwin/arm64.
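You can see this client/server platform split at a glance with the --format flag. The following is a sketch that assumes the daemon is running; on a Mac you should see the client report darwin/* and the server linux/*.

```shell
# Compare the client and server platforms in a single line of output,
# e.g. "darwin/arm64 -> linux/arm64" on an Apple silicon Mac.
docker version --format '{{.Client.Os}}/{{.Client.Arch}} -> {{.Server.Os}}/{{.Server.Arch}}' \
  2>/dev/null || echo "Docker daemon not reachable"
```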

You can now use Docker on your Mac.

Installing Docker with Multipass

Only consider this section if you can’t use Docker Desktop, as Multipass installations don’t ship with out-of-the-box support for docker scout, docker debug, or docker init.

Multipass is a free tool for creating cloud-style Linux VMs on your Linux, Mac, or Windows machine and is incredibly easy to install and use. It’s an easy way to create multi-node production-like Docker clusters.

Go to https://multipass.run/install and install the right edition for your hardware and OS.

Once installed, you only need three commands:

$ multipass launch
$ multipass ls
$ multipass shell

Let’s see how to launch and connect to a new VM with Docker pre-installed.

Run the following command to create a new VM called node1 based on the docker image. The docker image has Docker pre-installed and ready to go.

$ multipass launch docker --name node1

It’ll take a minute or two to download the image and launch the VM.

List VMs to make sure yours launched properly.

$ multipass ls
Name     State     IPv4             Image
node1    Running   192.168.64.37    Ubuntu 22.04 LTS
                   172.17.0.1
                   172.18.0.1

You’ll use the first IP address (the one beginning with 192) when working with the examples later in the book.

Connect to the VM with the following command.

$ multipass shell node1

Once connected, you can run the following commands to check your Docker version and list installed CLI plugins.

$ docker --version
Docker version 26.1.0, build 9714adc

$ docker info
Client: Docker Engine - Community
 Version:    26.1.0
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.26.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
<Snip>

The installation in the example only has the buildx and compose CLI plugins. You’ll have to manually install the relevant plugins if you want to follow the Docker Scout, Docker Init, and Docker Debug examples later in the book.

You can type exit to log out of the VM, and multipass shell node1 to log back in. You can also type multipass delete node1 and then multipass purge to delete it.
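You don’t have to open a shell for one-off commands, either — multipass exec runs a single command inside the VM and returns. The following is a sketch that assumes the node1 VM from earlier is still running.

```shell
# Run a single docker command inside the node1 VM without opening
# an interactive shell; everything after "--" runs inside the VM.
multipass exec node1 -- docker ps
```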

Installing Docker on Linux

Only consider this section if you can’t use Docker Desktop. This installation doesn’t install the docker scout, docker debug, or docker init CLI plugins that we’ll use in some of the later chapters.

These instructions show you how to install Docker on Ubuntu Linux 22.04 and are just for guidance purposes. Lots of other installation methods exist, and you should search the web for the latest instructions.

$ sudo snap install docker
docker 24.0.5 from Canonical ✓ installed

Run some commands to test the installation. You’ll have to prefix them with sudo.

$ sudo docker --version
Docker version 24.0.5, build ced0996

$ sudo docker info
<Snip>
Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 24.0.5
<Snip>
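If you’d rather not type sudo every time, the usual approach is to add your user to the docker group. The commands below are a sketch; be aware that membership of this group effectively grants root-level access to the host, and depending on your installation method the group may already exist.

```shell
# Create the docker group if it doesn't already exist (--force makes
# groupadd succeed either way), then add the current user to it.
sudo groupadd --force docker
sudo usermod -aG docker "$(id -un)"
# Log out and back in for the new group membership to take effect,
# then verify your user appears in the group:
getent group docker
```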

Chapter Summary

You can run Docker almost anywhere, and installing it is easier than ever.

Docker Desktop gives you a fully functional Docker environment on your Linux, Mac, or Windows machine and is the best way to get a Docker development environment on your local machine. It’s easy to install, includes the Docker Engine, has a slick UI, and has a marketplace with lots of extensions to extend its capabilities. It works with docker scout, docker debug, and docker init, and it even lets you spin up a single-node Kubernetes cluster.

Multipass is a great way to spin up a local VM running Docker, and there are lots of ways to install Docker on Linux servers. These give you access to most of the free Docker features but lack some of the features of Docker Desktop.

4: The big picture

This chapter will give you some hands-on experience and a high-level view of images and containers. The goal is to prepare you for more detail in the upcoming chapters.

We’ll break this chapter into two parts:

- The Ops perspective
- The Dev perspective

The ops perspective focuses on starting, stopping, deleting containers, and executing commands inside them.

The dev perspective focuses more on the application side of things and runs through taking application source code, building it into a container image, and running it as a container.

I recommend you read both sections and follow the examples, as this will give you the dev and ops perspectives. DevOps anyone?

The Ops Perspective

In this section, you’ll complete all of the following:

- Check Docker is working
- Download an image
- Start a container from the image
- Execute a command inside the container
- Delete the container

A typical Docker installation installs the client and the engine on the same machine and configures them to talk to each other.

Run a docker version command to ensure both are installed and running.

$ docker version
Client: <<---- Start of client response
 Cloud integration: v1.0.35+desktop.13  -----┐
 Version:           26.1.1                   |
 API version:       1.45                     |  Client info block
 Go version:        go1.21.9                 |
 OS/Arch:           darwin/arm64             |
 Context:           desktop-linux       -----┘

Server: Docker Desktop 4.30.0 (149282) <<---- Start of server response
 Engine:                                -----┐
  Version:          26.1.1                   |
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.9                 |
  OS/Arch:          linux/arm64              |
 containerd:                                 |  Server block
  Version:          1.6.31                   |
 runc:                                       |
  Version:          1.1.12                   |
 docker-init:                                |
  Version:          0.19.0              -----┘

If your response from the client and server looks like the output in the book, everything is working as expected.

If you’re on Linux and get a “permission denied while trying to connect to the Docker daemon...” error, try again with sudo in front of the command — sudo docker version. If it works with sudo, you’ll need to prefix all future docker commands with sudo.

Download an image

Images are objects that contain everything an app needs to run. This includes an OS filesystem, the application, and all dependencies. If you work in operations, they’re similar to VM templates. If you’re a developer, they’re similar to classes.

Run a docker images command.

$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

If you’re working from a clean installation, you’ll have no images, and your output will match the book. If you’re working with Multipass, you might see an image called portainer/portainer-ce.

Copying new images onto your Docker host is called pulling. Pull the ubuntu:latest image.

$ docker pull ubuntu:latest
latest: Pulling from library/ubuntu
b91d8878f844: Download complete
Digest: sha256:e9569c25505f33ff72e88b2990887c9dcf230f23259da296eb814fc2b41af999
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest

Run another docker images to confirm your pull command worked.

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   e9569c25505f   10 days ago   106MB

We’ll discuss the details of where the image is stored and what’s inside it in later chapters. For now, all you need to know is that images contain enough of an operating system (OS) and all the code and dependencies required to run the application. The Ubuntu image you pulled includes a stripped-down version of the Ubuntu Linux filesystem and a few of the standard Linux utilities that ship with Ubuntu.

If you pull an application container, such as nginx:latest, you’ll get an image with a minimal OS and the code to run the NGINX app.
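For example, you could pull and run it like this. This is a sketch that assumes the daemon is running and that port 8080 is free on your host; the container name web is just an illustration.

```shell
# Run NGINX detached (-d), mapping host port 8080 to port 80 inside
# the container, then fetch the welcome page and clean up.
docker run -d --name web -p 8080:80 nginx:latest
curl -s http://localhost:8080 | head -n 4
docker rm -f web
```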

Start a container from the image

If you’ve been following along, you’ll have a copy of the ubuntu:latest image on your Docker host, and you can use the docker run command to start a container from it.

Run the following docker run command to start a new container called test from the ubuntu:latest image.

$ docker run --name test -it ubuntu:latest bash
root@bbd2e5ad1817:/#

Notice how your shell prompt has changed. This is because the container is already running and your shell is attached to it.

Let’s quickly examine that