Description

The Ultimate Docker Container Book, 3rd edition, enables you to leverage Docker containers for streamlined software development. You’ll uncover Docker fundamentals and how containers improve software supply chain efficiency and enhance security.

You’ll start by learning practical skills such as setting up Docker environments, handling stateful components, running and testing code within containers, and managing Docker images. You’ll also explore how to adapt legacy applications for containerization and understand distributed application architecture. Next, you’ll delve into Docker’s networking model, software-defined networks for secure applications, and Docker Compose for managing multi-service applications, along with tools for log analysis and metrics. You’ll then deepen your understanding of popular orchestrators such as Kubernetes and Docker SwarmKit, exploring their key concepts and deployment strategies for resilient applications. In the final sections, you’ll gain insights into deploying containerized applications on major cloud platforms, including Azure, AWS, and Google Cloud, and discover techniques for production monitoring and troubleshooting.

By the end of this book, you’ll be well-equipped to manage and scale containerized applications effectively.




The Ultimate Docker Container Book

Build, test, ship, and run containers with Docker and Kubernetes

Dr. Gabriel N. Schenker

BIRMINGHAM—MUMBAI

The Ultimate Docker Container Book

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Preet Ahuja

Publishing Product Manager: Suwarna Rajput

Senior Editor: Sayali Pingale

Technical Editor: Rajat Sharma

Copy Editor: Safis Editing

Project Coordinator: Aryaa Joshi

Proofreader: Safis Editing

Indexer: Manju Arasan

Production Designer: Shyam Sundar Korumilli

DevRel Marketing Coordinator: Rohan Dobhal

First published: April 2018

Second edition: March 2020

Third edition: August 2023

Production reference: 1100823

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK

ISBN 978-1-80461-398-6

www.packtpub.com

To my wonderful wife, Veronicah, for being my loving partner throughout our joint life journey.

– Gabriel Schenker

Contributors

About the author

Dr. Gabriel N. Schenker has more than 30 years of experience as an independent consultant, architect, leader, trainer, mentor, and developer. Currently, Gabriel works as a senior software engineering manager and VP at iptiQ by Swiss Re. Prior to that, Gabriel worked as a lead solution architect at Techgroup Switzerland. Despite loving his home country of Switzerland, Gabriel also lived and worked for almost 10 years in the US, where, among other companies, he worked for Docker and Confluent. Gabriel has a Ph.D. in physics, and he is a former Docker Captain, a Certified Docker Associate, a Certified Kafka Developer, a Certified Kafka Operator, and an ASP Insider. When not working, Gabriel enjoys spending time with his wonderful wife Veronicah, and his children.

I want to thank my wife, Veronicah, for her endless love and unconditional support.

About the reviewers

Mátyás Kovács is an enthusiastic IT consultant with more than 9 years of hands-on experience in architecting, automating, optimizing, and supporting mission-critical systems. He has worked with modern development practices, CI/CD, configuration management, virtualization technologies, and cloud services. He is currently working as a lead systems engineer and people team lead. He is an eager mentor and huge supporter of lifelong learning.

Jeppe Cramon, a seasoned software professional and owner of Cloud Create, is known for his pioneering work in building distributed systems, strategic monoliths, and microservices. His deep understanding of these fields is evident in his comprehensive blog posts, where he provides detailed insights into the nature of distributed systems and how they should be implemented.

Not just a practitioner, Cramon is also an educator, regularly sharing his insights on distributed systems in his blog and at conferences, driving the conversation on these topics and their implications for autonomous services.

Table of Contents

Preface

Part 1: Introduction

1

What Are Containers and Why Should I Use Them?

What are containers?

Why are containers important?

What is the benefit of using containers for me or for my company?

The Moby project

Docker products

Docker Desktop

Docker Hub

Docker Enterprise Edition

Container architecture

Summary

Further reading

Questions

Answers

2

Setting Up a Working Environment

Technical requirements

The Linux command shell

PowerShell for Windows

Installing and using a package manager

Installing Homebrew on macOS

Installing Chocolatey on Windows

Installing Git and cloning the code repository

Choosing and installing a code editor

Installing VS Code on macOS

Installing VS Code on Windows

Installing VS Code on Linux

Installing VS Code extensions

Installing Docker Desktop on macOS or Windows

Testing Docker Engine

Testing Docker Desktop

Installing Docker Toolbox

Enabling Kubernetes on Docker Desktop

Installing minikube

Installing minikube on Linux, macOS, and Windows

Testing minikube and kubectl

Working with a multi-node minikube cluster

Installing Kind

Testing Kind

Summary

Further reading

Questions

Answers

Part 2: Containerization Fundamentals

3

Mastering Containers

Technical requirements

Running the first container

Starting, stopping, and removing containers

Running a random trivia question container

Listing containers

Stopping and starting containers

Removing containers

Inspecting containers

Exec into a running container

Attaching to a running container

Retrieving container logs

Logging drivers

Using a container-specific logging driver

Advanced topic – changing the default logging driver

The anatomy of containers

Architecture

Namespaces

Control groups

Union filesystem

Container plumbing

Summary

Further reading

Questions

Answers

4

Creating and Managing Container Images

What are images?

The layered filesystem

The writable container layer

Copy-on-write

Graph drivers

Creating Docker images

Interactive image creation

Using Dockerfiles

Saving and loading images

Lift and shift – containerizing a legacy app

Analyzing external dependencies

Source code and build instructions

Configuration

Secrets

Authoring the Dockerfile

Why bother?

Sharing or shipping images

Tagging an image

Demystifying image namespaces

Explaining official images

Pushing images to a registry

Summary

Questions

Answers

5

Data Volumes and Configuration

Technical requirements

Creating and mounting data volumes

Modifying the container layer

Creating volumes

Mounting a volume

Removing volumes

Accessing Docker volumes

Sharing data between containers

Using host volumes

Defining volumes in images

Configuring containers

Defining environment variables for containers

Using configuration files

Defining environment variables in container images

Environment variables at build time

Summary

Further reading

Questions

Answers

6

Debugging Code Running in Containers

Technical requirements

Evolving and testing code running in a container

Mounting evolving code into the running container

Auto-restarting code upon changes

Auto-restarting for Node.js

Auto-restarting for Java and Spring Boot

Auto-restarting for Python

Auto-restarting for .NET

Line-by-line code debugging inside a container

Debugging a Node.js application

Debugging a .NET application

Instrumenting your code to produce meaningful logging information

Instrumenting a Python application

Instrumenting a .NET C# application

Using Jaeger to monitor and troubleshoot

Summary

Questions

Answers

7

Testing Applications Running in Containers

Technical requirements

Benefits of testing applications in containers

Why do we test?

Manual versus automated testing

Why do we test in containers?

Different types of testing

Unit tests

Integration tests

Acceptance tests

Commonly used tools and technologies

Implementing a sample component

Implementing and running unit and integration tests

Implementing and running black box tests

Best practices for setting up a testing environment

Tips for debugging and troubleshooting issues

Challenges and considerations when testing applications running in containers

Case studies

Summary

Questions

Answers

8

Increasing Productivity with Docker Tips and Tricks

Technical requirements

Keeping your Docker environment clean

Using a .dockerignore file

Executing simple admin tasks in a container

Running a Perl script

Running a Python script

Limiting the resource usage of a container

Avoiding running a container as root

Running Docker from within Docker

Optimizing your build process

Scanning for vulnerabilities and secrets

Using Snyk to scan a Docker image

Using docker scan to scan a Docker image for vulnerabilities

Running your development environment in a container

Summary

Questions

Answers

Part 3: Orchestration Fundamentals

9

Learning about Distributed Application Architecture

What is a distributed application architecture?

Defining the terminology

Patterns and best practices

Loosely coupled components

Stateful versus stateless

Service discovery

Routing

Load balancing

Defensive programming

Redundancy

Health checks

Circuit breaker pattern

Running in production

Logging

Tracing

Monitoring

Application updates

Summary

Further reading

Questions

Answers

10

Using Single-Host Networking

Technical requirements

Dissecting the container network model

Network firewalling

Working with the bridge network

The host and null networks

The host network

The null network

Running in an existing network namespace

Managing container ports

HTTP-level routing using a reverse proxy

Containerizing the monolith

Extracting the first microservice

Using Traefik to reroute traffic

Summary

Further reading

Questions

Answers

11

Managing Containers with Docker Compose

Technical requirements

Demystifying declarative versus imperative orchestration of containers

Running a multi-service app

Building images with Docker Compose

Running an application with Docker Compose

Scaling a service

Building and pushing an application

Using Docker Compose overrides

Summary

Further reading

Questions

Answers

12

Shipping Logs and Monitoring Containers

Technical requirements

Why is logging and monitoring important?

Shipping containers and Docker daemon logs

Shipping container logs

Shipping Docker daemon logs

Querying a centralized log

Step 1 – accessing Kibana

Step 2 – setting up an index pattern

Step 3 – querying the logs in Kibana

Step 4 – visualizing the logs

Collecting and scraping metrics

Step 1 – running cAdvisor in a Docker container

Step 2 – setting up and running Prometheus

Monitoring a containerized application

Step 1 – setting up Prometheus

Step 2 – instrumenting your application with Prometheus metrics

Step 3 – configuring Prometheus to scrape your application metrics

Step 4 – setting up Grafana for visualization

Step 5 – setting up alerting (optional)

Step 6 – monitoring your containerized application

Summary

Questions

Answers

13

Introducing Container Orchestration

What are orchestrators and why do we need them?

The tasks of an orchestrator

Reconciling the desired state

Replicated and global services

Service discovery

Routing

Load balancing

Scaling

Self-healing

Data persistence and storage management

Zero downtime deployments

Affinity and location awareness

Security

Introspection

Overview of popular orchestrators

Kubernetes

Docker Swarm

Apache Mesos and Marathon

Amazon ECS

AWS EKS

Microsoft ACS and AKS

Summary

Further reading

Questions

Answers

14

Introducing Docker Swarm

The Docker Swarm architecture

Swarm nodes

Stacks, services, and tasks

Services

Tasks

Stacks

Multi-host networking

Creating a Docker Swarm

Creating a local single-node swarm

Using PWD to generate a Swarm

Creating a Docker Swarm in the cloud

Deploying a first application

Creating a service

Inspecting the service and its tasks

Testing the load balancing

Logs of a service

Reconciling the desired state

Deleting a service or a stack

Deploying a multi-service stack

Removing the swarm in AWS

Summary

Questions

Answers

15

Deploying and Running a Distributed Application on Docker Swarm

The swarm routing mesh

Zero-downtime deployment

Popular deployment strategies

Rolling updates

Health checks

Rolling back

Blue-green deployments

Canary releases

Storing configuration data in the swarm

Protecting sensitive data with Docker secrets

Creating secrets

Using a secret

Simulating secrets in a development environment

Secrets and legacy applications

Updating secrets

Summary

Questions

Answers

Part 4: Docker, Kubernetes, and the Cloud

16

Introducing Kubernetes

Technical requirements

Understanding Kubernetes architecture

Kubernetes master nodes

Cluster nodes

Introduction to Play with Kubernetes

Kubernetes support in Docker Desktop

Introduction to pods

Comparing Docker container and Kubernetes pod networking

Sharing the network namespace

Pod life cycle

Pod specifications

Pods and volumes

Kubernetes ReplicaSets

ReplicaSet specification

Self-healing

Kubernetes Deployments

Kubernetes Services

Context-based routing

Comparing SwarmKit with Kubernetes

Summary

Further reading

Questions

Answers

17

Deploying, Updating, and Securing an Application with Kubernetes

Technical requirements

Deploying our first application

Deploying the web component

Deploying the database

Defining liveness and readiness

Kubernetes liveness probes

Kubernetes readiness probes

Kubernetes startup probes

Zero-downtime deployments

Rolling updates

Blue-green deployment

Kubernetes secrets

Manually defining secrets

Creating secrets with kubectl

Using secrets in a pod

Secret values in environment variables

Summary

Further reading

Questions

Answers

18

Running a Containerized Application in the Cloud

Technical requirements

Why choose a hosted Kubernetes service?

Running a simple containerized application on Amazon EKS

Exploring Microsoft’s AKS

Preparing the Azure CLI

Creating a container registry on Azure

Pushing our images to ACR

Creating a Kubernetes cluster

Deploying our application to the Kubernetes cluster

Understanding GKE

Summary

Questions

Answers

19

Monitoring and Troubleshooting an Application Running in Production

Technical requirements

Monitoring an individual service

Using OpenTracing for distributed tracing

A Java example

Instrumenting a Node.js-based service

Instrumenting a .NET service

Leveraging Prometheus and Grafana to monitor a distributed application

Architecture

Deploying Prometheus to Kubernetes

Deploying our application services to Kubernetes

Deploying Grafana to Kubernetes

Defining alerts based on key metrics

Metrics

Alerts

Defining alerts

Runbooks

Troubleshooting a service running in production

The netshoot container

Summary

Questions

Answers

Index

Other Books You May Enjoy

Preface

In today’s fast-paced world, developers are under constant pressure to build, modify, test, and deploy highly distributed applications quickly and efficiently. Operations engineers need a consistent deployment strategy that can handle their growing portfolio of applications, while stakeholders want to keep costs low. Docker containers, combined with a container orchestrator such as Kubernetes, provide a powerful solution to these challenges.

Docker containers streamline the process of building, shipping, and running highly distributed applications. They supercharge CI/CD pipelines and allow companies to standardize on a single deployment platform, such as Kubernetes. Containerized applications are more secure and can be run on any platform capable of running containers, whether on-premises or in the cloud. With Docker containers, developers, operations engineers, and stakeholders can achieve their goals and stay ahead of the curve.

Who this book is for

This book is designed for anyone who wants to learn about Docker and its capabilities. Whether you’re a system administrator, operations engineer, DevOps engineer, developer, or business stakeholder, this book will guide you through the process of getting started with Docker from scratch.

With clear explanations and practical examples, you’ll explore all the capabilities that this technology offers, ultimately providing you with the ability to deploy and run highly distributed applications in the cloud. If you’re looking to take your skills to the next level and harness the power of Docker, then this book is for you.

What this book covers

Chapter 1, What Are Containers and Why Should I Use Them? focuses on the software supply chain and the friction within it. It then presents containers as a means to reduce this friction and add enterprise-grade security on top of it. In this chapter, we also look into how containers and the ecosystem around them are assembled. We specifically point out the distinction between the upstream OSS components (Moby) that form the building blocks of the downstream products of Docker and other vendors.

Chapter 2, Setting Up a Working Environment, discusses in detail how to set up an ideal environment for developers, DevOps, and operators that can be used when working with Docker containers.

Chapter 3, Mastering Containers, teaches you how to start, stop, and remove containers. This chapter also teaches you how to inspect containers to retrieve additional metadata from them. Furthermore, it explains how to run additional processes and how to attach to the main process in an already-running container. It also shows how to retrieve logging information from a container that is produced by the processes running inside it. Finally, the chapter introduces the inner workings of a container, including such things as Linux namespaces and control groups (cgroups).

Chapter 4, Creating and Managing Container Images, presents different ways to create container images, which serve as templates for containers. It introduces the inner structure of an image and how it is built. This chapter also shows how to “lift and shift” an existing legacy application such that it runs in containers.

Chapter 5, Data Volumes and Configuration, discusses data volumes, which can be used by stateful components running in containers. This chapter also shows how you can define individual environment variables for the application running inside the container, as well as how to use files containing whole sets of configuration settings.

Chapter 6, Debugging Code Running in Containers, introduces techniques commonly used to allow you to evolve, modify, debug, and test your code while running in a container. With these techniques at hand, you will enjoy a frictionless development process for applications running in a container, similar to what you experience when developing applications that run natively.

Chapter 7, Testing Applications Running in Containers, discusses software testing for applications and application services running in containers. You will be introduced to the various test types that exist and understand how they can be optimally implemented and executed when using containers. The chapter explains how all tests can be run locally on a developer’s machine or as individual quality gates of a fully automated CI/CD pipeline.

Chapter 8, Increasing Productivity with Docker Tips and Tricks, shows miscellaneous tips, tricks, and concepts that are useful when containerizing complex distributed applications, or when using Docker to automate sophisticated tasks. You will also learn how to leverage containers to run your whole development environment in them.

Chapter 9, Learning about Distributed Application Architecture, introduces the concept of a distributed application architecture and discusses the various patterns and best practices that are required to run a distributed application successfully. Finally, it discusses the additional requirements that need to be fulfilled to run such an application in production.

Chapter 10, Using Single-Host Networking, presents the Docker container networking model and its single host implementation in the form of the bridge network. The chapter introduces the concept of Software-Defined Networks (SDNs) and how they are used to secure containerized applications. It also covers how container ports can be opened to the public and thus make containerized components accessible from the outside world. Finally, it introduces Traefik, a reverse proxy, to enable sophisticated HTTP application-level routing between containers.

Chapter 11, Managing Containers with Docker Compose, introduces the concept of an application consisting of multiple services, each running in a container, and explains how Docker Compose allows us to easily build, run, and scale such an application using a declarative approach.

Chapter 12, Shipping Logs and Monitoring Containers, shows how the container logs can be collected and shipped to a central location where the aggregated log can then be parsed for useful information. You will also learn how to instrument an application so that it exposes metrics and how those metrics can be scraped and shipped again to a central location. Finally, the chapter teaches you how to convert those collected metrics into graphical dashboards that can be used to monitor a containerized application.

Chapter 13, Introducing Container Orchestration, elaborates on the concept of container orchestrators. It explains why orchestrators are needed and how they conceptually work. The chapter will also provide an overview of the most popular orchestrators and name a few of their respective pros and cons.

Chapter 14, Introducing Docker Swarm, introduces Docker’s native orchestrator called SwarmKit. It elaborates on all the concepts and objects SwarmKit uses to deploy and run a distributed, resilient, robust, and highly available application in a cluster on-premises or in the cloud.

Chapter 15, Deploying and Running a Distributed Application on Docker Swarm, introduces routing mesh and demonstrates how to deploy a first application consisting of multiple services onto the Swarm.

Chapter 16, Introducing Kubernetes, presents the currently most popular container orchestrator, Kubernetes. It introduces the core Kubernetes objects that are used to define and run a distributed, resilient, robust, and highly available application in a cluster. Finally, it introduces minikube as a way to locally deploy a Kubernetes application and also covers the integration of Kubernetes with Docker Desktop.

Chapter 17, Deploying, Updating, and Securing an Application with Kubernetes, teaches you how to deploy, update, and scale applications in a Kubernetes cluster. It also shows you how to instrument your application services with liveness and readiness probes, to support Kubernetes in its health and availability checking. Furthermore, the chapter explains how zero downtime deployments are achieved to enable disruption-free updates and rollbacks of mission-critical applications. Finally, it introduces Kubernetes Secrets as a means to configure services and protect sensitive data.

Chapter 18, Running a Containerized Application in the Cloud, gives an overview of some of the most popular ways of running containerized applications in the cloud. Fully managed offerings on Microsoft Azure, Amazon AWS, and Google Cloud are discussed. We will create a hosted Kubernetes cluster on each cloud and deploy a simple distributed application to each of those clusters. We will also compare the ease of setup and use of the three offerings.

Chapter 19, Monitoring and Troubleshooting an Application Running in Production, covers different techniques used to instrument and monitor an individual service or a whole distributed application running on a Kubernetes cluster. You will be introduced to the concept of alerting based on key metrics. The chapter also shows how you can troubleshoot an application service that is running in production without altering the cluster or the cluster nodes on which the service is running.

To get the most out of this book

Software/hardware covered in the book    Operating system requirements

Docker v23.x                             Windows, macOS, or Linux

Docker Desktop                           Windows, macOS, or Linux

Kubernetes                               Windows, macOS, or Linux

Docker SwarmKit                          Windows, macOS, or Linux

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/The-Ultimate-Docker-Container-Book/. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Once Chocolatey has been installed, test it with the choco --version command.”

A block of code is set as follows:

while :
do
    curl -s http://jservice.io/api/random | jq '.[0].question'
    sleep 5
done

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

…
secrets:
  demo-secret: "<<demo-secret-value>>"
  other-secret: "<<other-secret-value>>"
  yet-another-secret: "<<yet-another-secret-value>>"
…

Any command-line input or output is written as follows:

$ docker version
$ docker container run hello-world

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “From the menu, select Dashboard.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read The Ultimate Docker Container Book, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application. 

The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781804613986

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly.

Part 1: Introduction

The objective of Part 1 is to introduce you to the concept of containers and explain why they are so extremely useful in the software industry. You will also be shown how to prepare your working environment for the use of Docker.

This section has the following chapters:

Chapter 1, What Are Containers and Why Should I Use Them?
Chapter 2, Setting Up a Working Environment

1

What Are Containers and Why Should I Use Them?

This first chapter will introduce you to the world of containers and their orchestration. This book starts from the very beginning, in that it assumes that you have limited prior knowledge of containers, and will give you a very practical introduction to the topic.

In this chapter, we will focus on the software supply chain and the friction within it. Then, we’ll present containers, which are used to reduce this friction and add enterprise-grade security on top of it. We’ll also look into how containers and the ecosystem around them are assembled. We’ll specifically point out the distinctions between the upstream Open Source Software (OSS) components, united under the code name Moby, that form the building blocks of the downstream products of Docker and other vendors.

The chapter covers the following topics:

What are containers?
Why are containers important?
What’s the benefit of using containers for me or for my company?
The Moby project
Docker products
Container architecture

After completing this chapter, you will be able to do the following:

Explain what containers are, using an analogy such as physical containers, in a few simple sentences to an interested layperson
Justify why containers are so important using an analogy such as physical containers versus traditional shipping, or apartment homes versus single-family homes, and so on, to an interested layperson
Name at least four upstream open source components that are used by Docker products, such as Docker Desktop
Draw a high-level sketch of the Docker container architecture

Let’s get started!

What are containers?

A software container is a pretty abstract thing, so it might help to start with an analogy that should be pretty familiar to most of you. The analogy is a shipping container in the transportation industry. Throughout history, people have transported goods from one location to another by various means. Before the invention of the wheel, goods would most probably have been transported in bags, baskets, or chests on the shoulders of humans themselves, or they might have used animals such as donkeys, camels, or elephants to transport them. With the invention of the wheel, transportation became a bit more efficient as humans built roads that they could move their carts along. Many more goods could be transported at a time. When the first steam-driven machines, and later gasoline-driven engines, were introduced, transportation became even more powerful. We now transport huge amounts of goods on planes, trains, ships, and trucks. At the same time, the types of goods became more and more diverse, and sometimes complex to handle.

In all these thousands of years, one thing hasn’t changed, and that is the necessity to unload goods at a target location and maybe load them onto another means of transportation. Take, for example, a farmer bringing a cart full of apples to a central train station where the apples are then loaded onto a train, together with all the apples from many other farmers. Or think of a winemaker bringing their barrels of wine on a truck to the port where they are unloaded, and then transferred to a ship that will transport those barrels overseas.

This unloading from one means of transportation and loading onto another means of transportation was a really complex and tedious process. Every type of product was packaged in its own way and thus had to be handled in its own particular way. Also, loose goods faced the risk of being stolen by unethical workers or damaged in the process of being handled.

Figure 1.1 – Sailors unloading goods from a ship

Then, containers came along, and they totally revolutionized the transportation industry. A container is just a metallic box with standardized dimensions. The length, width, and height of each container are the same. This is a very important point. Without the world agreeing on a standard size, the whole container thing would not have been as successful as it is now.

Now, with standardized containers, companies who want to have their goods transported from A to B package those goods into these containers. Then, they call a shipper, who uses a standardized means of transportation. This can be a truck that can load a container, or a train whose wagons can each transport one or several containers. Finally, we have ships that are specialized in transporting huge numbers of containers. Shippers never need to unpack and repackage goods. For a shipper, a container is just a black box, and they are not interested in what is in it, nor should they care in most cases. It is just a big iron box with standard dimensions.

Packaging goods into containers is now fully delegated to the parties who want to have their goods shipped, and they should know how to handle and package those goods. Since all containers have the same agreed-upon shape and dimensions, shippers can use standardized tools to handle containers; that is, cranes that unload containers, say from a train or a truck, and load them onto a ship and vice versa. One type of crane is enough to handle all the containers that come along over time. Also, the means of transportation can be standardized, such as container ships, trucks, and trains. Because of all this standardization, all the processes in and around shipping goods could also be standardized and thus made much more efficient than they were before the introduction of containers.

Figure 1.2 – Container ship being loaded in a port

Now, you should have a good understanding of why shipping containers are so important and why they revolutionized the whole transportation industry. I chose this analogy purposefully since the software containers that we are going to introduce here fulfill the exact same role in the so-called software supply chain as shipping containers do in the supply chain of physical goods.

Let’s then have a look at what this whole thing means when translated to the IT industry and software development, shall we? In the old days, developers would develop new applications. Once an application was completed in their eyes, they would hand that application over to the operations engineers, who were then supposed to install it on the production servers and get it running. If the operations engineers were lucky, they even got a somewhat accurate document with installation instructions from the developers. So far, so good, and life was easy.

But things got a bit out of hand when, in an enterprise, there were many teams of developers that created quite different types of applications, yet all of them needed to be installed on the same production servers and kept running there. Usually, each application has some external dependencies, such as which framework it was built on, what libraries it uses, and so on. Sometimes, two applications use the same framework but of different versions that might or might not be compatible with each other. Our operations engineers’ lives became much harder over time. They had to become really creative with how they loaded their ships, that is, their servers, with different applications without breaking something. Installing a new version of a certain application was now a complex project on its own, and often needed months of planning and testing beforehand. In other words, there was a lot of friction in the software supply chain.

But these days, companies rely more and more on software, and the release cycles need to become shorter and shorter. Companies cannot afford to just release application updates once or twice a year anymore. Applications need to be updated in a matter of weeks or days, or sometimes even multiple times per day. Companies that do not comply risk going out of business due to the lack of agility.

So, what’s the solution? One of the first approaches was to use virtual machines (VMs). Instead of running multiple applications all on the same server, companies would package and run a single application on each VM. With this, all the compatibility problems were gone, and life seemed to be good again. Unfortunately, that happiness didn’t last long. VMs are pretty heavy beasts on their own since they all contain a full-blown operating system such as Linux or Windows Server, and all that for just a single application. This is as if you used a whole ship just to transport a single truckload of bananas in the transportation industry. What a waste! That would never be profitable.

The ultimate solution to this problem was to provide something much more lightweight than VMs, yet able to perfectly encapsulate the goods it needed to transport. Here, the goods are the actual application that has been written by our developers, plus – and this is important – all the external dependencies of the application, such as its framework, libraries, configurations, and more. This holy grail of a software packaging mechanism is the Docker container.

Developers package their applications, frameworks, and libraries into Docker containers, and then they ship those containers to the testers or operations engineers. For testers and operations engineers, a container is just a black box. It is a standardized black box, though. All containers, no matter what application runs inside them, can be treated equally. The engineers know that if any container runs on their servers, then any other containers should run too. And this is actually true, apart from some edge cases, which always exist. Thus, Docker containers are a means to package applications and their dependencies in a standardized way. Docker then coined the phrase Build, ship, and run anywhere.
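
To make this packaging idea a bit more tangible, here is a minimal sketch of a Dockerfile, the recipe from which a container image is built. The base image, file names, and start command are purely illustrative; Dockerfiles are covered in depth in Chapter 4, Creating and Managing Container Images:

# illustrative Dockerfile: package an app together with its dependencies
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

Built once with docker image build, the resulting image runs unchanged on any Docker host, which is exactly the standardization the shipping analogy describes.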

Why are containers important?

These days, the time between new releases of an application becomes shorter and shorter, yet the software itself does not become any simpler. On the contrary, software projects increase in complexity. Thus, we need a way to tame the beast and simplify the software supply chain. Also, every day, we hear that cyber-attacks are on the rise. Many well-known companies are and have been affected by security breaches. Highly sensitive customer data gets stolen during such events, such as social security numbers, credit card information, health-related information, and more. But not only is customer data compromised – sensitive company secrets are stolen too.

Containers can help in many ways. In a published report, Gartner found that applications running in a container are more secure than their counterparts not running in a container. Containers use Linux security primitives such as Linux kernel namespaces to sandbox different applications running on the same computer, and control groups (cgroups) to avoid the noisy-neighbor problem, where one bad application uses all the available resources of a server and starves all other applications. Since container images are immutable, as we will learn later, it is easy to have them scanned for common vulnerabilities and exposures (CVEs), and in doing so, increase the overall security of our applications.

Another way to make our software supply chain more secure is to have our containers use content trust. Content trust ensures that the author of a container image is who they say they are and that the consumer of the container image has a guarantee that the image has not been tampered with in transit. The latter is known as a man-in-the-middle (MITM) attack.
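
As a quick illustration of content trust in practice, Docker clients honor the DOCKER_CONTENT_TRUST environment variable; when it is set, the client refuses to pull images that have not been signed by their publisher. The image name below is just an example:

$ export DOCKER_CONTENT_TRUST=1
$ docker image pull ubuntu:22.04

If the image has no signed trust data, the pull fails instead of silently delivering an untrusted artifact.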

Everything I have just said is, of course, technically also possible without using containers, but since containers introduce a globally accepted standard, they make it so much easier to implement these best practices and enforce them.

OK, but security is not the only reason containers are important. There are other reasons too. One is the fact that containers make it easy to simulate a production-like environment, even on a developer’s laptop. If we can containerize any application, then we can also containerize, say, a database such as Oracle, PostgreSQL, or MS SQL Server. Now, everyone who has ever had to install an Oracle database on a computer knows that this is not the easiest thing to do, and it takes up a lot of precious space on your computer. You would not want to do that to your development laptop just to test whether the application you developed really works end to end. With containers to hand, we can run a full-blown relational database in a container as easily as saying 1, 2, 3. And when we are done with testing, we can just stop and delete the container and the database will be gone, without leaving a single trace on our computer. Since containers are very lean compared to VMs, it is common to have many containers running at the same time on a developer’s laptop without overwhelming the laptop.

A third reason containers are important is that operators can finally concentrate on what they are good at – provisioning the infrastructure and running and monitoring applications in production. When the applications they must run on a production system are all containerized, then operators can start to standardize their infrastructure. Every server becomes just another Docker host. No special libraries or frameworks need to be installed on those servers – just an OS and a container runtime such as Docker. Furthermore, operators do not have to have intimate knowledge of the internals of applications anymore, since those applications run self-contained in containers that ought to look like black boxes to them, just as shipping containers look like black boxes to personnel in the transportation industry.
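
Coming back to the database example, here is a minimal sketch of what running and disposing of a containerized PostgreSQL instance might look like; the container name, password, and image tag are placeholder values:

$ docker container run --name test-db -d \
    -e POSTGRES_PASSWORD=s3cret \
    -p 5432:5432 postgres:15
$ # ... run your end-to-end tests against localhost:5432 ...
$ docker container rm --force test-db

The last command stops and removes the container, and with it the database, in one go.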

What is the benefit of using containers for me or for my company?

Somebody once said “...today every company of a certain size has to acknowledge that they need to be a software company...” In this sense, a modern bank is a software company that happens to specialize in the business of finance. Software runs all businesses, period. As every company becomes a software company, there is a need to establish a software supply chain. For the company to remain competitive, its software supply chain must be secure and efficient. Efficiency can be achieved through thorough automation and standardization. But in all three areas – security, automation, and standardization – containers have been shown to shine. Large and well-known enterprises have reported that when containerizing existing legacy applications (many call them traditional applications) and establishing a fully automated software supply chain based on containers, they can reduce the cost of maintaining those mission-critical applications by 50% to 60% and reduce the time between new releases of these traditional applications by up to 90%. In other words, the adoption of container technologies saves these companies a lot of money, and at the same time, it speeds up the development process and reduces the time to market.

The Moby project

Originally, when Docker (the company) introduced Docker containers, everything was open source. Docker did not have any commercial products then. Docker Engine, which the company developed, was a monolithic piece of software. It contained many logical parts, such as the container runtime, a network library, a RESTful API, a command-line interface, and much more. Other vendors or projects such as Red Hat or Kubernetes used Docker Engine in their own products, but most of the time, they were only using part of its functionality. For example, Kubernetes did not use the network library of Docker Engine but provided its own way of networking. Red Hat, in turn, did not update Docker Engine frequently and preferred to apply unofficial patches to older versions of Docker Engine, yet they still called it Docker Engine.

For all these reasons, and many more, the idea emerged that Docker had to do something to clearly separate Docker’s open source part from Docker’s commercial part. Furthermore, the company wanted to prevent competitors from using and abusing the name Docker for their own gains. This was the main reason the Moby project was born. It serves as an umbrella for most of the open source components Docker developed and continues to develop. These open source projects do not carry the name Docker anymore. The Moby project provides components used for image management, secret management, configuration management, and networking and provisioning. Also, part of the Moby project are special Moby tools that are, for example, used to assemble components into runnable artifacts. Some components that technically belong to the Moby project have been donated by Docker to the Cloud Native Computing Foundation (CNCF) and thus do not appear in the list of components anymore. The most prominent ones are notary, containerd, and runc, where the first is used for content trust and the latter two form the container runtime.

In the words of Docker, “... Moby is an open framework created by Docker to assemble specialized container systems without reinventing the wheel. It provides a “Lego set” of dozens of standard components and a framework for assembling them into custom platforms....”

Docker products

Up until 2019, Docker separated its product lines into two segments. There was the Community Edition (CE), which was closed source yet completely free, and then there was the Enterprise Edition (EE), which was also closed source and needed to be licensed yearly. These enterprise products were backed by 24/7 support and regular bug fixes.

In 2019, Docker felt that what they had were two very distinct and different businesses. Consequently, they split off the EE business and sold it to Mirantis. Docker itself wanted to refocus on developers and provide them with the optimal tools and support to build containerized applications.

Docker Desktop

Part of the Docker offering are products such as Docker Toolbox and Docker Desktop with its editions for Mac, Windows, and Linux. All these products are mainly targeted at developers. Docker Desktop is an easy-to-install desktop application that can be used to build, debug, and test dockerized applications or services on a macOS, Windows, or Linux machine. Docker Desktop is a complete development environment that is deeply integrated with the hypervisor framework, network, and filesystem of the respective underlying operating system. These tools are the fastest and most reliable ways to run Docker on a Mac, Windows, or Linux machine.

Note

Docker Toolbox has been deprecated and is no longer in active development. Docker recommends using Docker Desktop instead.

Docker Hub

Docker Hub is the most popular service for finding and sharing container images. It is possible to create individual, user-specific accounts and organizational accounts under which Docker images can be uploaded and shared inside a team, an organization, or with the wider public. Public accounts are free while private accounts require one of several commercial licenses. Later in this book, we will use Docker Hub to download existing Docker images and upload and share our own custom Docker images.
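
The basic Docker Hub workflow looks like the following sketch, where the account name and image tag are placeholders; we will walk through this for real later in the book:

$ docker login                                # authenticate against Docker Hub
$ docker image tag my-app:1.0 myaccount/my-app:1.0
$ docker image push myaccount/my-app:1.0      # share the image
$ docker image pull myaccount/my-app:1.0      # download it on any other host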

Docker Enterprise Edition

Docker EE – now owned by Mirantis – consists of the Universal Control Plane (UCP) and the Docker Trusted Registry (DTR), both of which run on top of Docker Swarm. Both are Swarm applications. Docker EE builds on top of the upstream components of the Moby project and adds enterprise-grade features such as role-based access control (RBAC), multi-tenancy, mixed clusters of Docker Swarm and Kubernetes, a web-based UI, and content trust, as well as image scanning on top.

Docker Swarm

Docker Swarm provides a powerful and flexible platform for deploying and managing containers in a production environment. It provides the tools and features you need to build, deploy, and manage your applications with ease and confidence.

Container architecture

Now, let us discuss how a system that can run Docker containers is designed at a high level. The following diagram illustrates what a computer that Docker has been installed on looks like. Note that a computer that has Docker installed on it is often called a Docker host because it can run or host Docker containers:

Figure 1.3 – High-level architecture diagram of Docker Engine

In the preceding diagram, we can see three essential parts:

At the bottom, we have the Linux Operating System
In the middle, we have the Container Runtime
At the top, we have Docker Engine

Containers are only possible because the Linux OS supplies some primitives, such as namespaces, control groups, layer capabilities, and more, all of which are used in a specific way by the container runtime and Docker Engine. Linux kernel namespaces, such as process ID (pid) namespaces or network (net) namespaces, allow Docker to encapsulate or sandbox processes that run inside the container. Control groups make sure that containers do not suffer from noisy-neighbor syndrome, where a single application running in a container can consume most or all the available resources of the whole Docker host. Control groups allow Docker to limit the resources, such as CPU time or the amount of RAM, that each container is allocated.

The container runtime on a Docker host consists of containerd and runc. runc provides the low-level functionality of the container runtime, such as container creation and management, while containerd, which is based on runc, provides higher-level functionality such as image management, networking capabilities, or extensibility via plugins. Both are open source and have been donated by Docker to the CNCF. The container runtime is responsible for the whole life cycle of a container. It pulls a container image (which is the template for a container) from a registry, if necessary, creates a container from that image, initializes and runs the container, and eventually stops and removes the container from the system when asked.

Docker Engine provides additional functionality on top of the container runtime, such as network libraries or support for plugins. It also provides a REST interface over which all container operations can be automated. The Docker command-line interface that we will use often in this book is one of the consumers of this REST interface.
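
Two of these building blocks can be observed directly from the command line. The resource limits enforced through control groups are exposed as flags of docker container run, and the REST interface of Docker Engine can be queried over its local Unix socket. This is a sketch only; the image and the API version in the URL are examples, and newer Engines also accept older API versions:

$ # cgroups in action: cap the container at half a CPU core and 256 MB of RAM
$ docker container run -d --cpus 0.5 --memory 256m nginx:alpine
$ # query Docker Engine's REST API directly via its Unix socket
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json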

Summary

In this chapter, we looked at how containers can massively reduce friction in the software supply chain and, on top of that, make the supply chain much more secure. In the next chapter, we will familiarize ourselves with containers. We will learn how to run, stop, and remove containers and otherwise manipulate them. We will also get a pretty good overview of the anatomy of containers. For the first time, we are really going to get our hands dirty and play with these containers. So, stay tuned!

Further reading

The following is a list of links that lead to more detailed information regarding the topics we discussed in this chapter:

Docker overview: https://docs.docker.com/engine/docker-overview/
The Moby project: https://mobyproject.org/
Docker products: https://www.docker.com/get-started
Docker Desktop: https://www.docker.com/products/docker-desktop/
Cloud-Native Computing Foundation: https://www.cncf.io/
containerd: https://containerd.io/
Getting Started with Docker Enterprise 3.1: https://www.mirantis.com/blog/getting-started-with-docker-enterprise-3-1/

Questions

Please answer the following questions to assess your learning progress:

1. Which statements are correct (multiple answers are possible)?

A. A container is kind of a lightweight VM
B. A container only runs on a Linux host
C. A container can only run one process
D. The main process in a container always has PID 1
E. A container is one or more processes encapsulated by Linux namespaces and restricted by cgroups

2. In your own words, using analogies, explain what a container is.

3. Why are containers considered to be a game-changer in IT? Name three or four reasons.

4. What does it mean when we claim, if a container runs on a given platform, then it runs anywhere? Name two to three reasons why this is true.

5. Is the following claim true or false: Docker containers are only useful for modern greenfield applications based on microservices? Please justify your answer.

6. How much does a typical enterprise save when containerizing its legacy applications?

A. 20%
B. 33%
C. 50%
D. 75%

7. Which two core concepts of Linux are containers based on?

8. On which operating systems is Docker Desktop available?

Answers

1. The correct answers are D and E.

2. A Docker container is to IT what a shipping container is to the transportation industry. It defines a standard on how to package goods. In this case, goods are the application(s) developers write. The suppliers (in this case, the developers) are responsible for packaging the goods into the container and making sure everything fits as expected. Once the goods are packaged into a container, it can be shipped. Since it is a standard container, the shippers can standardize their means of transportation, such as lorries, trains, or ships. The shipper does not really care what is in the container. Also, the loading and unloading process from one means of transportation to another (for example, train to ship) can be highly standardized. This massively increases the efficiency of transportation. Analogous to this is an operations engineer in IT, who can take a software container built by a developer and ship it to a production system and run it there in a highly standardized way, without worrying about what is in the container. It will just work.

3. Some of the reasons why containers are game-changers are as follows:

Containers are self-contained and thus if they run on one system, they run anywhere that a Docker container can run.
Containers run on-premises and in the cloud, as well as in hybrid environments. This is important for today’s typical enterprises since it allows a smooth transition from on-premises to the cloud.
Container images are built or packaged by the people who know best – the developers.
Container images are immutable, which is important for good release management.
Containers are enablers of a secure software supply chain based on encapsulation (using Linux namespaces and cgroups), secrets, content trust, and image vulnerability scanning.

4. A container runs on any system that can host containers. This is possible for the following reasons:

Containers are self-contained black boxes. They encapsulate not only an application but also all its dependencies, such as libraries and frameworks, configuration data, certificates, and so on.
Containers are based on widely accepted standards such as OCI.

5. The answer is false. Containers are useful for modern applications and to containerize traditional applications. The benefits for an enterprise when doing the latter are huge. Cost savings in the maintenance of legacy apps of 50% or more have been reported. The time between new releases of such legacy applications could be reduced by up to 90%. These numbers have been publicly reported by real enterprise customers.

6. 50% or more.

7. Containers are based on Linux namespaces (network, process, user, and so on) and cgroups. The former help isolate processes running on the same machine, while the latter are used to limit the resources a given process can access, such as memory or network bandwidth.

8. Docker Desktop is available for macOS, Windows, and Linux.

2

Setting Up a Working Environment

In the previous chapter, we learned what Docker containers are and why they’re important. We learned what kinds of problems containers solve in a modern software supply chain. In this chapter, we are going to prepare our personal or working environment to work efficiently and effectively with Docker. We will discuss in detail how to set up an ideal environment for developers, DevOps, and operators that can be used when working with Docker containers.

This chapter covers the following topics:

- The Linux command shell
- PowerShell for Windows
- Installing and using a package manager
- Installing Git and cloning the code repository
- Choosing and installing a code editor
- Installing Docker Desktop on macOS or Windows
- Installing Docker Toolbox
- Enabling Kubernetes on Docker Desktop
- Installing minikube
- Installing Kind

Technical requirements

For this chapter, you will need a laptop or a workstation with either macOS or Windows, preferably Windows 11, installed. You should also have free internet access to download applications and permission to install those applications on your laptop. It is also possible to follow along with this book if you have a Linux distribution as your operating system, such as Ubuntu 18.04 or newer. I will try to indicate where commands and samples differ significantly from the ones on macOS or Windows.

The Linux command shell

Docker containers were first developed on Linux for Linux. Hence, it is natural that the primary command-line tool used to work with Docker, also called a shell, is a Unix shell; remember, Linux derives from Unix. Most developers use the Bash shell. On some lightweight Linux distributions, such as Alpine, Bash is not installed and consequently, you must use the simpler Bourne shell, just called sh. Whenever we are working in a Linux environment, such as inside a container or on a Linux VM, we will use either /bin/bash or /bin/sh, depending on their availability.
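If you are ever unsure which shells a given Linux environment offers, the following commands, which work on most distributions, can help. The first lists the installed shells, and the second prints the name of the shell you are currently running:

$ cat /etc/shells
$ echo $0

Note that /etc/shells may be missing on very minimal images; in that case, /bin/sh is usually your only option.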

Although Apple’s macOS is not a Linux OS, Linux and macOS are both flavors of Unix and hence support the same set of tools. Among those tools are the shells. So, when working on macOS, you will probably be using the Bash or zsh shell.

In this book, we expect you to be familiar with the most basic scripting commands in Bash and PowerShell, if you are working on Windows. If you are an absolute beginner, then we strongly recommend that you familiarize yourself with the following cheat sheets:

- Linux Command Line Cheat Sheet by Dave Child, at http://bit.ly/2mTQr8l
- PowerShell Basic Cheat Sheet, at http://bit.ly/2EPHxze

PowerShell for Windows

On a Windows computer, laptop, or server, we have multiple command-line tools available. The most familiar is the command shell. It has been available on any Windows computer for decades. It is a very simple shell. For more advanced scripting, Microsoft has developed PowerShell. PowerShell is very powerful and very popular among engineers working on Windows. Finally, on Windows 10 or later, we have the so-called Windows Subsystem for Linux, which allows us to use any Linux tool, such as the Bash or Bourne shells. Apart from this, other tools install a Bash shell on Windows, such as the Git Bash shell. In this book, all commands will use Bash syntax. Most of the commands also run in PowerShell.

Therefore, we recommend that you use either PowerShell or one of the Bash options mentioned previously to work with Docker on Windows.
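To find out which of these tools you have at hand, the following commands can be helpful. The first prints the version of PowerShell you are running, and the second, available on recent Windows 10 and 11 builds, lists the Linux distributions installed in the Windows Subsystem for Linux:

PS> $PSVersionTable.PSVersion
PS> wsl --list --verbose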

Installing and using a package manager

The easiest way to install software on a Linux, macOS, or Windows laptop is to use a good package manager. On macOS, most people use Homebrew, while on Windows, Chocolatey is a good choice. If you’re using a Debian-based Linux distribution such as Ubuntu, then the package manager of choice for most is apt, which is installed by default.

Installing Homebrew on macOS

Homebrew is the most popular package manager on macOS, and it is easy to use and very versatile. Installing Homebrew on macOS is simple; just follow the instructions at https://brew.sh/:

1. In a nutshell, open a new Terminal window and execute the following command to install Homebrew:

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

2. Once the installation has finished, test whether Homebrew is working by entering brew --version in the Terminal. You should see something like this:

$ brew --version
Homebrew 3.6.16
Homebrew/homebrew-core (git revision 025fe79713b; last commit 2022-12-26)
Homebrew/homebrew-cask (git revision 15acb0b64a; last commit 2022-12-26)

3. Now, we are ready to use Homebrew to install tools and utilities. If we, for example, want to install the iconic Vi text editor (note that this is not a tool we will use in this book; it serves just as an example), we can do so like this:

$ brew install vim

This will download and install the editor for you.
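Two more Homebrew commands are worth remembering: brew info shows the details of a package (called a formula) before you install it, and brew update followed by brew upgrade keeps all of your installed tools current:

$ brew info vim
$ brew update && brew upgrade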

Installing Chocolatey on Windows

Chocolatey is a popular package manager for Windows, built on PowerShell. To install the Chocolatey package manager, please follow the instructions at https://chocolatey.org/ or open a new PowerShell window in admin mode and execute the following command:

PS> Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Note

It is important to run the preceding command as an administrator; otherwise, the installation will not succeed. It is also important to note that the preceding command is one single line and has only been broken into several lines here due to the limited line width.

Once Chocolatey has been installed, test it with the choco --version command. You should see output similar to the following:

PS> choco --version
0.10.15

To install an application such as the Vi editor, use the following command:

PS> choco install -y vim

The -y parameter makes sure that the installation happens without Chocolatey asking for a reconfirmation. As mentioned previously, we will not use Vim in our exercises; it has only been used as an example.

Note

Once Chocolatey has installed an application, you may need to open a new PowerShell window to use that application.
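In many cases, you can avoid opening a new window by running the refreshenv helper that ships with Chocolatey; it reloads the environment variables, including PATH, in the current session:

PS> refreshenv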

Installing Git and cloning the code repository

We will be using Git to clone the sample code accompanying this book from its GitHub repository. If you already have Git installed on your computer, you can skip this section:

1. To install Git on macOS, use the following command in a Terminal window:

$ brew install git

2. To install Git on Windows, open a PowerShell window and use Chocolatey to install it:

PS> choco install git -y

3. Finally, on a Debian or Ubuntu machine, open a Bash console and execute the following command:

$ sudo apt update && sudo apt install -y git

4. Once Git has been installed, verify that it is working. On all platforms, use the following command:

$ git --version

This should output the version of Git that’s been installed. On the author’s MacBook Air, the output is as follows:

git version 2.39.1

Note

If you see an older version, then you are probably using the version that came installed with macOS by default. Use Homebrew to install the latest version by running $ brew install git.

Now that Git is working, we can clone the source code accompanying this book from GitHub. Execute the following commands:

$ cd ~
$ git clone https://github.com/PacktPublishing/The-Ultimate-Docker-Container-Book

This will clone the content of the main branch into your local folder, ~/The-Ultimate-Docker-Container-Book. This folder will now contain all of the sample solutions for the labs we are going to do together in this book. Refer to these sample solutions if you get stuck.
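To double-check that the clone succeeded, navigate into the new folder and display the most recent commit:

$ cd ~/The-Ultimate-Docker-Container-Book
$ git log -1 --oneline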

Now that we have installed the basics, let’s continue with the code editor.

Choosing and installing a code editor

Using a good code editor is essential to working productively with Docker. Of course, which editor is the best is highly controversial and depends on your personal preference. Many people use Vim, while others prefer Emacs, Atom, Sublime, or Visual Studio Code (VS Code), to name just a few. VS Code is a completely free and lightweight editor, yet it is very powerful and is available for macOS, Windows, and Linux. According to the Stack Overflow Developer Survey, it is currently by far the most popular code editor. If you are not yet sold on another editor, I highly recommend that you give VS Code a try.
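If you want to install VS Code using the package managers we set up earlier, the following commands should work; note that the package names shown are the ones in use at the time of writing. On macOS, run this:

$ brew install --cask visual-studio-code

On Windows, run this:

PS> choco install vscode -y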