Hands-On Kubernetes on Azure - Nills Franssens - E-Book

Description

Kick-start your DevOps career by learning how to effectively deploy Kubernetes on Azure in an easy, comprehensive, and fun way with hands-on coding tasks

Key Features

  • Understand the fundamentals of Docker and Kubernetes
  • Learn to implement microservices architecture using the Kubernetes platform
  • Discover how you can scale your application workloads in Azure Kubernetes Service (AKS)

Book Description

From managing versioning efficiently to improving security and portability, technologies such as Kubernetes and Docker have greatly helped cloud deployments and application development.

Starting with an introduction to Docker, Kubernetes, and Azure Kubernetes Service (AKS), this book will guide you through deploying an AKS cluster in different ways. You'll then explore the Azure portal by deploying a sample guestbook application on AKS and installing complex Kubernetes apps using Helm. With the help of real-world examples, you'll also get to grips with scaling your application and cluster. As you advance, you'll understand how to overcome common challenges in AKS and secure your application with HTTPS and Azure AD (Active Directory). Finally, you'll explore serverless functions such as HTTP-triggered Azure Functions and queue-triggered functions.

By the end of this Kubernetes book, you’ll be well-versed with the fundamentals of Azure Kubernetes Service and be able to deploy containerized workloads on Microsoft Azure with minimal management overhead.

What you will learn

  • Plan, configure, and run containerized applications in production
  • Use Docker to build apps in containers and deploy them on Kubernetes
  • Improve the configuration and deployment of apps on the Azure Cloud
  • Store your container images securely with Azure Container Registry
  • Install complex Kubernetes applications using Helm
  • Integrate Kubernetes with multiple Azure PaaS services, such as databases, Event Hubs, and Functions

Who this book is for

This book is for aspiring DevOps professionals, system administrators, developers, and site reliability engineers looking to understand test and deployment processes and improve their efficiency. If you’re new to working with containers and orchestration, you’ll find this book useful.


Page count: 313

Year of publication: 2020




Hands-On Kubernetes on Azure

Second Edition

Automate management, scaling, and deployment of containerized applications

Nills Franssens, Shivakumar Gopalakrishnan, and Gunther Lenz

Hands-On Kubernetes on Azure - Second Edition

Copyright © 2020 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Authors: Nills Franssens, Shivakumar Gopalakrishnan, and Gunther Lenz

Technical Reviewers: Peter De Tender and Suleyman Akbas

Managing Editors: Afzal Shaikh and Priyanka Sawant

Acquisitions Editor: Rahul Hande

Production Editor: Deepak Chavan

Editorial Board: Ben Renow-Clarke and Ian Hough

First Published: March 2019

Second Edition Published: May 2020

Production Reference: 1120520

ISBN: 978-1-80020-967-1

Published by Packt Publishing Ltd.

Livery Place, 35 Livery Street

Birmingham B3 2PB, UK

To mama and papa. This book would not have been possible without everything you did for me. I love you both.

To Kelly. I wouldn’t be the person I am today without you.

- Nills Franssens

I dedicate this book to my parents. Without their support on everything from getting my first computer to encouraging me on whatever path I took, this book wouldn’t have happened.

- Shivakumar Gopalakrishnan

To Okson and Hugo.

To everyone reading this.

– Gunther Lenz

Table of Contents

Preface

Section 1: The Basics

1. Introduction to Docker and Kubernetes
  The software evolution that brought us here
  Microservices
  DevOps
  Fundamentals of Docker containers
  Docker images
  Kubernetes as a container orchestration platform
  Pods in Kubernetes
  Deployments in Kubernetes
  Services in Kubernetes
  Azure Kubernetes Service
  Summary

2. Kubernetes on Azure (AKS)
  Different ways to deploy an AKS cluster
  Getting started with the Azure portal
  Creating your first AKS cluster
  A quick overview of your cluster in the Azure portal
  Accessing your cluster using Azure Cloud Shell
  Deploying your first demo application
  Summary

Section 2: Deploying on AKS

3. Application deployment on AKS
  Deploying the sample guestbook application
  Introducing the application
  Deploying the Redis master
  Redis master with a ConfigMap
  Complete deployment of the sample guestbook application
  Exposing the Redis master service
  Deploying the Redis slaves
  Deploying and exposing the front end
  The guestbook application in action
  Installing complex Kubernetes applications using Helm
  Installing WordPress using Helm
  Summary

4. Building scalable applications
  Scaling your application
  Implementing scaling of your application
  Scaling the guestbook front-end component
  Using the HPA
  Scaling your cluster
  Manually scaling your cluster
  Scaling your cluster using the cluster autoscaler
  Upgrading your application
  Upgrading by changing YAML files
  Upgrading an application using kubectl edit
  Upgrading an application using kubectl patch
  Upgrading applications using Helm
  Summary

5. Handling common failures in AKS
  Handling node failures
  Solving out-of-resource failures
  Fixing storage mount issues
  Starting the WordPress installation
  Using persistent volumes to avoid data loss
  Summary

6. Securing your application with HTTPS and Azure AD
  HTTPS support
  Installing an Ingress controller
  Adding an Ingress rule for the guestbook application
  Getting a certificate from Let's Encrypt
  Authentication versus authorization
  Authentication and common authN providers
  Deploying the oauth2_proxy proxy
  Summary

7. Monitoring the AKS cluster and the application
  Commands for monitoring applications
  The kubectl get command
  The kubectl describe command
  Debugging applications
  Logs
  Readiness and liveness probes
  Building two web containers
  Experimenting with liveness and readiness probes
  Metrics reported by Kubernetes
  Node status and consumption
  Pod consumption
  Metrics reported from Azure Monitor
  AKS Insights
  Summary

Section 3: Leveraging advanced Azure PaaS services

8. Connecting an app to an Azure database
  Setting up OSBA
  The benefits of using a managed database service
  What is OSBA?
  Installing OSBA on the cluster
  Deploying OSBA
  Deploying WordPress
  Securing MySQL
  Connecting to the WordPress site
  Exploring advanced database operations
  Restoring from a backup
  Disaster Recovery (DR) options
  Reviewing audit logs
  Summary

9. Connecting to Azure Event Hubs
  Deploying a set of microservices
  Deploying the application using Helm
  Using Azure Event Hubs
  Creating the event hub
  Modifying the Helm files
  Summary

10. Securing your AKS cluster
  Role-based access control
  Creating a new cluster with Azure AD integration
  Creating users and groups in Azure AD
  Configuring RBAC in AKS
  Verifying RBAC
  Setting up secrets management
  Creating your own secrets
  Creating the Docker registry key
  Creating the TLS secret
  Using your secrets
  Secrets as environment variables
  Secrets as files
  Why secrets as files is the best method
  Using secrets stored in Key Vault
  Creating a Key Vault
  Setting up Key Vault FlexVolume
  Using Key Vault FlexVolume to mount a secret in a Pod
  The Istio service mesh at your service
  Describing the Istio service mesh
  Installing Istio
  Injecting Envoy as a sidecar automatically
  Enforcing mutual TLS
  Globally enabling mTLS
  Summary

11. Serverless functions
  Multiple functions platforms
  Setting up prerequisites
  Azure Container Registry
  Creating a development machine
  Creating an HTTP-triggered Azure function
  Creating a queue-triggered function
  Creating a queue
  Creating a queue-triggered function
  Scale testing functions
  Summary

Index


Preface

About

This section briefly introduces the authors, the coverage of this book, the technical skills you'll need to get started, and the hardware and software you'll need to complete all of the included activities and exercises.

About Hands-On Kubernetes on Azure, Second Edition

Kubernetes is the leading standard in container orchestration, used by start-ups and large enterprises alike. Microsoft is one of the largest contributors to the open source project, and it offers a managed service to run Kubernetes clusters at scale.

This book will walk you through what it takes to build and run applications on top of the Azure Kubernetes Service (AKS). It starts with an explanation of the fundamentals of Docker and Kubernetes, after which you will build a cluster and start deploying multiple applications. With the help of real-world examples, you'll learn how to deploy applications on top of AKS, implement authentication, monitor your applications, and integrate AKS with other Azure services such as databases, Event Hubs, and Functions.

By the end of this book, you'll have become proficient in running Kubernetes on Azure and leveraging the tools required for deployment.

About the authors

Nills Franssens is a technology enthusiast and a specialist in multiple open source technologies. He has been working with public cloud technologies since 2013.

In his current position as senior cloud solution architect at Microsoft, he works with Microsoft's strategic customers on their cloud adoption. He has enabled multiple customers in their migration to Azure. One of these migrations was the migration and replatforming of a major public website to Kubernetes.

Outside of Kubernetes, Nills's areas of expertise are networking and storage in Azure. 

He holds a master's degree in engineering from the University of Antwerp, Belgium.

When he's not working, you can find Nills playing board games with his wife Kelly and friends, or running one of the many trails in San Jose, California.

Gunther Lenz is senior director of the technology office at Varian. He is an innovative software R&D leader, architect, MBA, published author, public speaker, and strategic technology visionary with more than 20 years of experience.

He has a proven track record of successfully leading large, innovative, and transformational software development and DevOps teams of more than 50 people, with a focus on continuous improvement. 

He has defined and led distributed teams throughout the entire software product lifecycle by leveraging ground-breaking processes, tools, and technologies such as the cloud, DevOps, lean/agile, microservices architecture, digital transformation, software platforms, AI, and distributed machine learning.

He was awarded Microsoft Most Valuable Professional for Software Architecture (2005-2008).

Gunther has published two books, .NET – A Complete Development Cycle and Practical Software Factories in .NET.

Shivakumar Gopalakrishnan is DevOps architect at Varian Medical Systems. He has introduced Docker, Kubernetes, and other cloud-native tools to Varian product development to enable "Everything as Code".

He has years of software development experience in a wide variety of fields, including networking, storage, medical imaging, and currently, DevOps. He has worked to develop scalable storage appliances specifically tuned for medical imaging needs and has helped architect cloud-native solutions for delivering modular AngularJS applications backed by microservices. He has spoken at multiple events on incorporating AI and machine learning in DevOps to enable a culture of learning in large enterprises.

He has helped teams in highly regulated large medical enterprises adopt modern agile/DevOps methodologies, including the "You build it, you run it" model. He has defined and leads the implementation of a DevOps roadmap that transforms traditional teams into teams that seamlessly adopt security- and quality-first approaches using CI/CD tools.

He holds a bachelor of engineering degree from College of Engineering, Guindy, and a master of science degree from University of Maryland, College Park.

Learning Objectives

By the end of this book, you will be able to:

  • Understand the fundamentals of Docker and Kubernetes
  • Set up an AKS cluster
  • Deploy applications to AKS
  • Monitor applications on AKS and handle common failures
  • Set up authentication for applications on top of AKS
  • Integrate AKS with Azure Database for MySQL
  • Leverage Azure Event Hubs from an application in AKS
  • Secure your cluster
  • Deploy serverless functions to your cluster

Audience

If you're a cloud engineer, cloud solution provider, sysadmin, site reliability engineer, or developer who's interested in DevOps and is looking for an extensive guide to running Kubernetes in the Azure environment, then this book is for you.

Approach

This book provides a combination of practical and theoretical knowledge. It covers engaging real-world scenarios that demonstrate how Kubernetes-based applications run on the Azure platform. Each chapter is designed to enable you to apply everything you learn in a practical context. After Chapters 1 and 2, each chapter is self-contained and can be completed independently of the previous chapters.

Software Requirements

We recommend that you have the following software installed in advance:

  • A computer with a Linux, Windows 10, or macOS operating system
  • An internet connection and web browser so you can connect to Azure

All the examples in the book have been designed to work using the Azure Cloud Shell. You won't have to install additional software on your machine.

Conventions

Code words in the text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows:

"The following code snippet will use the kubectl command line tool to create an application that is defined in the file guestbook-all-in-one.yaml."

Here's a sample block of code:

kubectl create -f guestbook-all-in-one.yaml

We'll use a backslash, \, to indicate that a line of code will span multiple lines in the book. You can either copy the backslash and continue on the next line or ignore the backslash and type the complete multi-line code on a single line. For example:

az aks nodepool update --disable-cluster-autoscaler \
-g rg-handsonaks --cluster-name handsonaks --name agentpool

On many occasions, we have used angle brackets, <>. You need to replace the bracketed placeholder with the actual value, and not include the brackets in the command.

Download Resources

The code bundle for this book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Kubernetes-on-Azure---Second-Edition. You can find the YAML and other files used in this book, which are referred to at relevant instances.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Section 1: The Basics

In Section 1 of this book, we will cover the basic concepts that you need to understand in order to follow the examples in this book.

We will start this section by explaining the basics of these underlying concepts, such as Docker and Kubernetes. Then, we will explain how to create a Kubernetes cluster on Azure and deploy an example application.

When you finish this section, you will have a baseline foundational knowledge of Docker and Kubernetes and a Kubernetes cluster up and running in Azure that will allow you to follow the examples in this book.

This section contains the following chapters:

  • Chapter 1, Introduction to Docker and Kubernetes
  • Chapter 2, Kubernetes on Azure (AKS)

1. Introduction to Docker and Kubernetes

Kubernetes has become the leading standard in container orchestration. Since its inception in 2014, it has gained tremendous popularity. It has been adopted by start-ups as well as major enterprises, and the major public cloud vendors all offer a managed Kubernetes service.

Kubernetes builds upon the success of the Docker container revolution. Docker is both a company and the name of a technology. Docker as a technology is the standard way of creating and running software containers, often called Docker containers. A container itself is a way of packaging software that makes it easy to run that software on any platform, ranging from your laptop to a server in a data center, to a cluster running in the public cloud.
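Conceptually, that packaging is described in a Dockerfile. The following is a minimal sketch only; the base image, file names, and start command are illustrative assumptions, not an example from this book:

```dockerfile
# Build an image from a public base image and layer the application on top.
FROM python:3.8-slim                   # assumed base image
WORKDIR /app
COPY . /app                            # copy the application source into the image
RUN pip install -r requirements.txt    # install dependencies at build time
CMD ["python", "app.py"]               # command the container runs on start
```

Building this file with `docker build` produces an image that runs identically on a laptop, a data center server, or a cloud cluster.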

Docker is also the name of the company behind the Docker technology. Although the core technology is open source, the Docker company focuses on reducing complexity for developers through a number of commercial offerings.

Kubernetes takes Docker containers to the next level. Kubernetes is a container orchestrator. A container orchestrator is a software platform that makes it easy to run many thousands of containers on top of thousands of machines. It automates a lot of the manual tasks required to deploy, run, and scale applications. The orchestrator will take care of scheduling the right container to run on the right machine, and it will take care of health monitoring and failover, as well as scaling your deployed application.
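To make this concrete, here is a minimal sketch of how you ask an orchestrator like Kubernetes for that behavior; the application name and image below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical application name
spec:
  replicas: 3                # ask the orchestrator to keep three copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.17    # any container image would do here
```

Given this declaration, Kubernetes schedules the three containers onto suitable machines, restarts them if they fail, and reschedules them if a machine goes down.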

Docker and Kubernetes are both open-source software projects. Open-source software allows developers from many companies to collaborate on a single piece of software. Kubernetes itself has contributors from companies such as Microsoft, Google, Red Hat, VMware, and many others.

The three major public cloud platforms – Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) – all offer a managed Kubernetes service. This is attracting a lot of interest in the market since the virtually unlimited compute power and the ease of use of these managed services make it easy to build and deploy large-scale applications.

Azure Kubernetes Service (AKS) is Azure's managed service for Kubernetes. It manages the complexity of putting together all the preceding services for you. In this book, you will learn how to use AKS to run your applications. Each chapter will introduce new concepts, which you will apply through the many examples in this book.

As an engineer, however, it is still very useful to understand the technologies that underpin AKS. We will explore these foundations in this chapter. You will learn about Linux processes and how they are related to Docker. You will see how various processes fit nicely into Docker, and how Docker fits nicely into Kubernetes. Even though Kubernetes is technically a container runtime-agnostic platform, Docker is by far the most commonly used container technology.

This chapter introduces fundamental Docker concepts so that you can begin your Kubernetes journey. This chapter also briefly introduces the basics that will help you build containers, implement clusters, perform container orchestration, and troubleshoot applications on AKS. Having cursory knowledge of what's in this chapter will demystify much of the work needed to build your authenticated, encrypted, highly scalable applications on AKS. Over the next chapters, you will gradually build scalable and secure applications.

The following topics will be covered in this chapter:

  • The software evolution that brought us here
  • The fundamentals of Docker
  • The fundamentals of Kubernetes
  • The fundamentals of AKS

The aim of this chapter is to introduce the essentials rather than to provide a thorough information source describing Docker and Kubernetes. To begin with, we'll first take a look at how software has evolved to get us to where we are now.

The software evolution that brought us here

There are two major software development evolutions that enabled the popularity of Docker and Kubernetes. One is the adoption of a microservices architectural style. Microservices allow an application to be built from a collection of small services that each serve a specific function. The other evolution that enabled Docker and Kubernetes is DevOps. DevOps is a set of cultural practices that allows people, processes, and tools to build and release software faster, more frequently, and more reliably.

Although you can use both Docker and Kubernetes without using either microservices or DevOps, the technologies are most widely adopted for deploying microservices using DevOps methodologies.

In this section, we'll discuss both evolutions, starting with microservices.

Microservices

Software development has drastically evolved over time. Initially, software was developed and run on a single system, typically a mainframe. A client could connect to the mainframe through a terminal, and only through that terminal. This changed when computer networks became common and the client-server programming model emerged. A client could connect remotely to a server, and even run part of the application on its own system while connecting to the server to retrieve part of the data the application required.

The client-server programming model has evolved toward truly distributed systems. Distributed systems are different from the traditional client-server model as they have multiple different applications running on multiple different systems, all interconnected.

Nowadays, a microservices architecture is common when developing distributed systems. A microservices-based application consists of a group of services that work together to form the application, while the individual services themselves can be built, tested, deployed, and scaled independently from each other. The style has many benefits but also has several disadvantages.

A key part of a microservices architecture is the fact that each individual service serves one and only one core function. Each service serves a single bounded business function. Different services work together to form the complete application. Those services work together over network communication, commonly using HTTP REST APIs or gRPC.

This architectural approach is commonly adopted by applications run using Docker and Kubernetes. Docker is used as the packaging format for the individual services, while Kubernetes is the orchestrator that deploys and manages the different services running together.

Before we dive into the Docker and Kubernetes specifics, let's first explore the benefits and downsides of adopting microservices.

Advantages of running microservices

There are several advantages to running a microservices-based application. The first is the fact that each service is independent of the other services. The services are designed to be small enough (hence micro) to handle the needs of a business domain. As they are small, they can be made self-contained and independently testable, and so are independently releasable.

This leads to the fact that each microservice is independently scalable as well. If a certain part of the application is getting more demand, that part of the application can be scaled independently from the rest of the application.

The fact that services are independently scalable also means they are independently deployable. There are multiple deployment strategies when it comes to microservices. The most popular are rolling upgrades and blue/green deployments.

With a rolling upgrade, a new version of the service is deployed only to part of the end user community. This new version is carefully monitored and gradually gets more traffic if the service is healthy. If something goes wrong, the previous version is still running, and traffic can easily be cut over.

With a blue/green deployment, you would deploy the new version of the service in isolation. Once the new version of the service is deployed and tested, you would cut over 100% of the production traffic to the new version. This allows for a clean transition between service versions.
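In Kubernetes terms, a rolling upgrade maps onto a Deployment's update strategy. As a sketch (the numbers are illustrative, not taken from this book's examples), the following stanza tells Kubernetes to replace Pods gradually rather than all at once:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never drop more than one Pod below the desired count
      maxSurge: 1         # never run more than one extra Pod above it
```

Blue/green deployments, by contrast, are typically implemented by running two complete Deployments side by side and switching the Service's label selector from the old version to the new one.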

Another benefit of the microservices architecture is that each service can be written in a different programming language. This is described as being polyglot – able to understand and use multiple languages. For example, the front end service can be developed in a popular JavaScript framework, the back end can be developed in C#, while the machine learning algorithm can be developed in Python. This allows you to select the right language for the right service, and to have the developers use the languages they are most familiar with.

Disadvantages of running microservices

There's a flip side to every coin, and the same is true for microservices. While there are multiple advantages to a microservices-based architecture, this architecture has its downsides as well.

Microservices designs and architectures require a high degree of software development maturity in order to be implemented correctly. Architects who understand the domain very well must ensure that each service is bounded and that different services are cohesive. Since services are independent of each other and versioned independently, the software contract between these different services is important to get right.

Another common issue with a microservices design is the added complexity when it comes to monitoring and troubleshooting such an application. Since different services make up a single application, and those different services run on multiple servers, both logging and tracing such an application is a complicated endeavor.

Linked to the aforementioned disadvantages is that, typically, in microservices, you need to build more fault tolerance into your application. Due to the dynamic nature of the different services in an application, faults are more likely to happen. In order to guarantee application availability, it is important to build fault tolerance into the different microservices that make up an application. Implementing patterns such as retry logic or circuit breakers is critical to avoid a single fault causing application downtime.
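As an illustration of these patterns, here is a minimal sketch in plain Python; it is not tied to any particular microservices framework, and real services would typically use a library rather than hand-rolling this:

```python
import time

def retry(operation, attempts=3, base_delay=0.1):
    """Call operation(); on a transient failure, wait and try again."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the fault to the caller
            # Exponential backoff: 0.1s, 0.2s, 0.4s, ...
            time.sleep(base_delay * 2 ** attempt)

class CircuitBreaker:
    """After `threshold` consecutive failures, fail fast instead of calling out."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, operation):
        if self.failures >= self.threshold:
            # The circuit is open: don't even attempt the call.
            raise RuntimeError("circuit open: dependency assumed unhealthy")
        try:
            result = operation()
            self.failures = 0  # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            raise
```

Retry logic absorbs transient faults by trying again after increasing delays; the circuit breaker stops calling a dependency that keeps failing, so a single unhealthy service doesn't drag the whole application down with it.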

Often linked to microservices, but a separate transformation, is the DevOps movement. We will explore what DevOps means in the next section.

DevOps

DevOps literally means the combination of development and operations. More specifically, DevOps is the union of people, processes, and tools to deliver software faster, more frequently, and more reliably. DevOps is more about a set of cultural practices than about any specific tools or implementations. Typically, DevOps spans four areas of software development: planning, developing, releasing, and operating software.

Note

Many definitions of DevOps exist. The authors have adopted this definition, but you as a reader are encouraged to explore different definitions in the literature around DevOps.

The DevOps culture starts with planning. In the planning phase of a DevOps project, the goals of a project are outlined. These goals are outlined both at a high level (called an Epic) and at a lower level (in Features and Tasks). The different work items in a DevOps project are captured in the feature backlog. Typically, DevOps teams use an agile planning methodology working in programming sprints. Kanban boards are often used to represent project status and to track work. As a task changes status from to do to doing to done, it moves from left to right on a Kanban board.

When work is planned, actual development can be done. Development in a DevOps culture isn't only about writing code, but also about testing, reviewing, and integrating code with that of team members. A version control system such as Git allows different team members to share code with each other. A continuous integration (CI) tool is then used to automate manual tasks such as testing and building code.
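As an illustrative sketch only, a minimal CI pipeline in GitHub Actions (one of many CI tools) might run tests and a build on every push; the `make` targets below are hypothetical placeholders for a project's own test and build commands:

```yaml
# .github/workflows/ci.yml — hypothetical minimal CI pipeline
name: ci
on: push                          # trigger the pipeline on every push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # fetch the code from version control
      - run: make test            # run the automated tests (placeholder)
      - run: make build           # build the code (placeholder)
```

Real pipelines add more stages (linting, packaging, publishing artifacts), but the shape — check out code, then run automated steps — is the same.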

When a feature is code-complete, tested, and built, it is ready to be delivered. The next phase in a DevOps project can start: delivery. A continuous delivery (CD) tool is used to automate the deployment of software. Typically, software is deployed to different environments, such as testing, quality assurance, or production. A combination of automated and manual gates is used to ensure quality before moving to the next environment.

Finally, when a piece of software is running in production, the operations phase can start. This phase involves maintaining, monitoring, and supporting the application in production. The end goal is to operate an application reliably with as little downtime as possible. Any issues should be identified as proactively as possible. Bugs found in the software are tracked in the backlog.

The DevOps process is an iterative process. A single team is never in a single phase of the process. The whole team is continuously planning, developing, delivering, and operating software.

Multiple tools exist to implement DevOps practices. There are point solutions for a single phase, such as Jira for planning or Jenkins for CI and CD, as well as complete DevOps platforms, such as GitLab. Microsoft operates two solutions that enable customers to adopt DevOps practices: Azure DevOps and GitHub. Azure DevOps is a suite of services to support all phases of the DevOps process. GitHub is a separate platform that enables DevOps software development. GitHub is known as the leading open-source software development platform, hosting over 40 million open-source projects.

Both microservices and DevOps are commonly used in combination with Docker and Kubernetes. After this introduction to microservices and DevOps, we'll continue this first chapter with the fundamentals of Docker and containers and then the fundamentals of Kubernetes.

Fundamentals of Docker containers

Forms of container technology have existed in UNIX-like operating systems since the late 1970s. A key technology powering today's containers, called control groups (cgroups), was contributed to the Linux kernel by Google in 2006. The Docker company popularized container technology in 2013 by introducing an easy developer workflow. The company gave its name to the technology, so the name Docker can refer to both the company and the technology. Most commonly, though, we use Docker to refer to the technology.

Docker as a technology is both a packaging format and a container runtime. The packaging format defines how an application is bundled together with its dependencies, such as binaries and runtimes, into a container image. The runtime is the software that actually runs those container images as containers.

You can experiment with Docker by creating a free Docker account at Docker Hub (https://hub.docker.com/) and using that login to open Docker Labs (https://labs.play-with-docker.com/). This will give you access to an environment with Docker pre-installed that is valid for 4 hours. We will be using Docker Labs in this section as we build our own container and image.

Note

Although we are using the browser-based Docker Labs in this chapter to introduce Docker, you can also install Docker on your local desktop or server. For workstations, Docker has a product called Docker Desktop (https://www.docker.com/products/docker-desktop) that is available for Windows and Mac to create Docker containers locally. On servers – both Windows and Linux – Docker is also available as a runtime for containers.

Docker images

Docker uses an image to start a new container. An image contains all the software you need to run within your container. Container images can be stored locally on your machine, as well as in a container registry. There are public registries, such as the public Docker Hub (https://hub.docker.com/), and private registries, such as Azure Container Registry (ACR). When you don't have an image locally on your machine, you can pull it from a registry using the docker pull command.

In the following example, we will pull an image from the public Docker Hub repository and run the actual container. You can run this example in Docker Labs by following these instructions:

#First we will pull an image
docker pull docker/whalesay
#We can then look at which images we have locally
docker images
#Then we will run our container
docker run docker/whalesay cowsay boo

The output of these commands will look similar to Figure 1.1:

Figure 1.1: Example of running Docker in Docker Labs

What happened here is that Docker first pulled your image in multiple parts (the image layers) and stored it locally on the machine it was running on. When we ran the actual application, Docker used that local image to start a container. If we look at the commands in detail, you will see that docker pull took in a single parameter, docker/whalesay. If you don't specify a container registry, Docker will look for images in the public Docker Hub, which is where Docker pulled our image from. The docker run command took in a couple of arguments. The first argument was docker/whalesay, which is the reference to the image. The next two arguments, cowsay boo, form the command that was passed to the running container to execute.

In the previous example, we learned that it is possible to run a container without building an image first. It is, however, very common that you will want to build your own images. To do this, you use a Dockerfile. A Dockerfile contains steps that Docker will follow to start from a base image and build your image. These instructions can range from adding files to installing software or setting up networking. An example of a Dockerfile is provided in the following code snippet, which we'll create in our Docker playground:

FROM docker/whalesay:latest
RUN apt-get -y -qq update && apt-get install -qq -y fortunes
CMD /usr/games/fortune -a | cowsay

There are three lines in this Dockerfile. The first instructs Docker which image to use as the base image for this new image. The second is a command that runs during the build to add new functionality to our image: in this case, updating the apt repository and installing an application called fortunes. Finally, the CMD instruction tells Docker which command to execute when a container based on this image is run.

You typically save a Dockerfile in a file called Dockerfile, without an extension. To build the image, you execute the docker build command and point it to the Dockerfile you created. While building the image, Docker reads the Dockerfile and executes its steps one by one, outputting each step it took to build your image. Let's walk through a demo of building our own image.

In order to create this Dockerfile, open up a text editor via the vi Dockerfile command. vi is an advanced text editor in the Linux command line. If you are not familiar with it, let's walk through how you would enter the text in there:

1. After you've opened vi, hit the i key to enter insert mode.
2. Then, either copy-paste or type the three code lines.
3. Afterward, hit the Esc key, and type :wq! to write (w) your file and quit (q) the text editor.

The next step is to execute docker build to build our image. We will add a final bit to that command, namely adding a tag to our image so we can call it by a useful name. To build your image, you will use the docker build -t smartwhale . command (don't forget to add the final dot here).

You will now see Docker execute a number of steps – three in our case – in order to build our image. After your image is built, you can run your application. To run your container, you would run docker run smartwhale, and you should see an output similar to Figure 1.2. However, you will probably see a different smart quote. This is due to the fortunes application generating different quotes. If you run the container multiple times, you will see different quotes appear, as shown in Figure 1.2:

Figure 1.2: Example of running a custom container

That concludes our overview and demo of Docker. In this section, you started with an existing container image and launched that on Docker Labs. Afterward, you took that a step further and built your own container image and started containers using your own image. You have now learned what it takes to build and run a container. In the next section, we will cover Kubernetes. Kubernetes allows you to run multiple containers at scale.

Kubernetes as a container orchestration platform

Building and running a single container seems easy enough. However, things can get complicated when you need to run multiple containers across multiple servers. This is where a container orchestrator can help. A container orchestrator takes care of scheduling containers to be run on servers, restarting containers when they fail, moving containers to a new host when that host becomes unhealthy, and much more.

The current leading orchestration platform is Kubernetes (https://kubernetes.io/). Kubernetes was inspired by Google's internal Borg project, which itself ran millions of containers in production.

Kubernetes takes a declarative approach to orchestration; that is, you specify what you need and Kubernetes takes care of deploying the workload you specified. You don't need to start these containers manually yourself anymore, as Kubernetes will launch the Docker containers you specified.

Note

Although Kubernetes supports multiple container runtimes, Docker is the most popular runtime.

Throughout the book, we will build multiple examples that run containers in Kubernetes, and you will learn more about the different objects in Kubernetes. In this introductory chapter, we'll introduce three elementary objects in Kubernetes that you will likely see in every application: a Pod, a Deployment, and a Service.

Pods in Kubernetes

A Pod in Kubernetes is the basic scheduling unit. A Pod is a group of one or more containers. When creating a Pod with a single container, you can use the terms container and Pod interchangeably; however, the term Pod is still preferred.

When a Pod contains multiple containers, these containers share the same network namespace and can share volumes. This means that when a container that is part of a Pod writes a file to a shared volume, other containers in that same Pod can read that file. It also means that all containers in a Pod can communicate with each other using localhost networking.
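As a minimal sketch of what a Pod definition looks like, the following YAML manifest describes a single-container Pod; the names and the nginx image here are illustrative, not part of this book's examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image could be used here
      ports:
        - containerPort: 80
```

You would submit such a manifest to the cluster with kubectl apply -f pod.yaml.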

In terms of design, you should only put containers that need to be tightly integrated into the same Pod. Imagine the following situation: you have an old web application that does not support HTTPS. You want to upgrade that application to support HTTPS. You could create a Pod that contains your old web application together with another container that performs SSL offloading for that application, as described in Figure 1.3. Your users would connect to your application using HTTPS, while the container in the middle converts HTTPS traffic to HTTP:

Figure 1.3: Example of a multi-container Pod that does HTTPS offloading

Note

This design principle is known as a sidecar. Microsoft has a free e-book available that describes multiple multi-container Pod designs and designing distributed systems (https://azure.microsoft.com/resources/designing-distributed-systems/).
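As a sketch of the HTTPS-offloading scenario above, a multi-container Pod definition could look as follows; both image names are hypothetical, and a real proxy container would need its own TLS certificate configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-web              # illustrative name
spec:
  containers:
    - name: web                 # the legacy application, serving plain HTTP
      image: legacy-app:1.0     # hypothetical image
      ports:
        - containerPort: 8080
    - name: ssl-proxy           # sidecar terminating HTTPS
      image: tls-proxy:1.0      # hypothetical image; forwards to localhost:8080
      ports:
        - containerPort: 443
```

Because both containers share the Pod's network namespace, the sidecar can reach the application on localhost:8080 without any extra networking configuration.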

A Pod, whether it is a single-container or a multi-container Pod, is an ephemeral resource. This means that a Pod can be terminated at any point and restarted on another node. When this happens, any state that was stored in that Pod is lost. If your application needs to store state, you either need to run it as a StatefulSet, which we'll touch on in Chapter 3, Application deployment on AKS, or store the state outside of Kubernetes in an external database.

Deployments in Kubernetes

A Deployment in Kubernetes provides a layer of functionality around Pods. It allows you to create multiple Pods from the same definition and to easily perform updates to your deployed Pods. A Deployment also helps with scaling your application, and potentially even autoscaling your application.

Under the covers, a Deployment creates a ReplicaSet, which in turn creates the Pods you requested. A ReplicaSet is another object in Kubernetes whose purpose is to maintain a stable set of Pods running at any given time. If you perform updates to your Deployment, Kubernetes will create a new ReplicaSet that contains the updated Pods. By default, Kubernetes performs a rolling update to the new version: it starts a few new Pods, and if those are running correctly, it terminates a few old Pods, continuing this loop until only new Pods are running.

Figure 1.4: Relationship between Deployment, ReplicaSet, and Pods
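As an illustrative sketch, a Deployment that keeps three replicas of a Pod running could be defined as follows (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # illustrative name
spec:
  replicas: 3                # Kubernetes maintains three Pods at all times
  selector:
    matchLabels:
      app: web               # manage Pods carrying this label
  template:                  # the Pod template the ReplicaSet creates Pods from
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
```

Changing the image in this manifest and re-applying it is what triggers the rolling update described above.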

Services in Kubernetes

A Service in Kubernetes is a network-level abstraction. This allows you to expose the multiple Pods you have in your Deployment under a single IP address and a single DNS name.

Each Pod in Kubernetes has its own private IP address. You could theoretically connect your applications using these private IP addresses. However, as mentioned before, Kubernetes Pods are ephemeral, meaning they can be terminated and recreated elsewhere, which changes their IP address. By using a Service, you can connect your applications together using a single, stable IP address. When a Pod moves from one node to another, the Service ensures traffic is routed to the correct endpoint.
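As a sketch, a Service that exposes Pods labeled app: web under one IP address and DNS name could look like this (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service    # illustrative name; also becomes the DNS name
spec:
  selector:
    app: web           # route traffic to Pods carrying this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # port the Pods' containers listen on
```

By default, this creates a ClusterIP Service, which is reachable only from inside the cluster; other Service types, such as LoadBalancer, expose Pods to external traffic.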

In this section, we have introduced Kubernetes and three essential objects in Kubernetes. In the next section, we'll introduce AKS.

Azure Kubernetes Service

Azure Kubernetes Service (AKS) makes creating and managing Kubernetes clusters easier.

A typical Kubernetes cluster consists of a number of master nodes and a number of worker nodes. A node within Kubernetes is equivalent to a virtual machine (VM). The master nodes host the Kubernetes API server and a database that stores the cluster state. The worker nodes are the VMs that run your actual workloads.