Leverage Kubernetes and container architecture to successfully run production-ready workloads
Kubernetes is a popular open source orchestration platform for managing containers in a cluster environment. With this Kubernetes cookbook, you’ll learn how to implement Kubernetes using a recipe-based approach. The book will prepare you to create highly available Kubernetes clusters on multiple clouds such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure, Alibaba, and on-premises data centers.
Starting with recipes for installing and configuring Kubernetes instances, you’ll discover how to work with Kubernetes clients, services, and key metadata. You’ll then learn how to build continuous integration/continuous delivery (CI/CD) pipelines for your applications, and understand various methods to manage containers. As you advance, you’ll delve into Kubernetes’ integration with Docker and Jenkins, and even run batch processes and configure data volumes. You’ll get to grips with methods for scaling, security, monitoring, logging, and troubleshooting. Additionally, this book will take you through the latest updates in Kubernetes, including volume snapshots, creating highly available clusters with kops, running workload operators, new kubectl features, and more.
By the end of this book, you’ll have developed the skills required to implement Kubernetes in production and manage containers proficiently.
This Kubernetes book is for developers, IT professionals, and DevOps engineers and teams who want to use Kubernetes to manage, scale, and orchestrate applications in their organization. Basic understanding of Kubernetes and containerization is necessary.
Murat Karslioglu is a distinguished technologist with years of experience in the Agile and DevOps methodologies. Murat is currently a VP of Product at MayaData, a start-up building a data agility platform for stateful applications, and a maintainer of open source projects, namely OpenEBS and Litmus. In his free time, Murat is busy writing practical articles about DevOps best practices, CI/CD, Kubernetes, and running stateful applications on popular Kubernetes platforms on his blog, Containerized Me. Murat also runs a cloud-native news curator site, The Containerized Today, where he regularly publishes updates on the Kubernetes ecosystem.
Copyright © 2020 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Vijin Boricha
Acquisition Editor: Meeta Rajani
Content Development Editor: Alokita Amanna
Senior Editor: Rahul Dsouza
Technical Editor: Dinesh Pawar
Copy Editor: Safis Editing
Project Coordinator: Neil Dmello
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Production Designer: Deepika Naik
First published: March 2020
Production reference: 1130320
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-83882-804-2
www.packt.com
Packt.com
Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Fully searchable for easy access to vital information
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Murat Karslioglu is a distinguished technologist with years of experience in the Agile and DevOps methodologies. Murat is currently a VP of Product at MayaData, a start-up building a data agility platform for stateful applications, and a maintainer of open source projects, namely OpenEBS and Litmus. In his free time, Murat is busy writing practical articles about DevOps best practices, CI/CD, Kubernetes, and running stateful applications on popular Kubernetes platforms on his blog, Containerized Me. Murat also runs a cloud-native news curator site, The Containerized Today, where he regularly publishes updates on the Kubernetes ecosystem.
Scott Surovich, CKA, CKAD, Mirantis MKP, is the container engineering lead for a G-SIFI global bank, where he focuses on global design and standards for Kubernetes on-premises clusters. An evangelist for containers and Kubernetes, he has presented on GKE networking in the enterprise at Google Next and on multi-tenant Kubernetes clusters in the enterprise at KubeCon. He is an active member of the CNCF's Financial Services working group, has worked with the Kubernetes multi-tenancy working group, and has been a developer advocate for Tremolo Security's OIDC provider, OpenUnison. Recently, he also achieved the Google Cloud Certified Fellow: Hybrid Multi-Cloud certification.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Kubernetes – A Complete DevOps Cookbook
Dedication
About Packt
Why subscribe?
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Code in Action
Conventions used
Sections
Getting ready
How to do it…
How it works…
There's more…
See also
Get in touch
Reviews
Building Production-Ready Kubernetes Clusters
Technical requirements
Configuring a Kubernetes cluster on Amazon Web Services 
Getting ready
How to do it…
Installing the command-line tools to configure AWS services
Installing kops to provision a Kubernetes cluster
Provisioning a Kubernetes cluster on Amazon EC2
Provisioning a managed Kubernetes cluster on Amazon EKS
How it works...
There's more…
Using the AWS Shell
Using a gossip-based cluster
Using different regions for an S3 bucket
Editing the cluster configuration
Deleting your cluster
Provisioning an EKS cluster using the Amazon EKS Management Console
Deploying Kubernetes Dashboard
See also
Configuring a Kubernetes cluster on Google Cloud Platform
Getting ready
How to do it…
Installing the command-line tools to configure GCP services
Provisioning a managed Kubernetes cluster on GKE
Connecting to Google Kubernetes Engine (GKE) clusters
How it works…
There's more…
Using Google Cloud Shell
Deploying with a custom network configuration
Deleting your cluster
Viewing the Workloads dashboard
See also
Configuring a Kubernetes cluster on Microsoft Azure
Getting ready
How to do it…
Installing the command-line tools to configure Azure services
Provisioning a managed Kubernetes cluster on AKS
Connecting to AKS clusters
How it works…
There's more…
Deleting your cluster
Viewing Kubernetes Dashboard
See also
Configuring a Kubernetes cluster on Alibaba Cloud
Getting ready
How to do it…
Installing the command-line tools to configure Alibaba Cloud services
Provisioning a highly available Kubernetes cluster on Alibaba Cloud
Connecting to Alibaba Container Service clusters
How it works…
There's more…
Configuring and managing Kubernetes clusters with Rancher
Getting ready
How to do it…
Installing Rancher Server
Deploying a Kubernetes cluster
Importing an existing cluster
Enabling cluster and node providers
How it works…
There's more…
Bind mounting a host volume to keep data
Keeping user volumes persistent
Running Rancher on the same Kubernetes nodes
See also
Configuring Red Hat OpenShift 
Getting ready
How to do it…
Downloading OpenShift binaries
Provisioning an OpenShift cluster
Connecting to OpenShift clusters
How it works…
There's more…
Deleting your cluster
See also
Configuring a Kubernetes cluster using Ansible
Getting ready
How to do it…
Installing Ansible
Provisioning a Kubernetes cluster using an Ansible playbook
Connecting to the Kubernetes cluster
See also
Troubleshooting installation issues
How to do it…
How it works…
There's more…
Setting log levels
See also
Operating Applications on Kubernetes
Technical requirements
Deploying workloads using YAML files
Getting ready
How to do it…
Creating a Deployment
Verifying a Deployment
Editing a Deployment
Rolling back a Deployment
Deleting a Deployment
How it works...
See also
Deploying workloads using Kustomize
Getting ready
How to do it…
Validating the Kubernetes cluster version
Generating Kubernetes resources from files
Creating a base for a development and production Deployment
How it works...
See also
Deploying workloads using Helm charts
Getting ready
How to do it…
Installing Helm 2.x
Installing an application using Helm charts
Searching for an application in Helm repositories
Upgrading an application using Helm
Rolling back an application using Helm
Deleting an application using Helm
Adding new Helm repositories
Building a Helm chart
How it works...
See also
Deploying and operating applications using Kubernetes operators
Getting ready
How to do it…
Installing KUDO and the KUDO kubectl plugin
Installing the Apache Kafka Operator using KUDO
Installing Operator Lifecycle Manager
Installing the Zalando PostgreSQL Operator
See also
Deploying and managing the life cycle of Jenkins X
Getting ready
How to do it...
Installing the Jenkins X CLI
Creating a Jenkins X Kubernetes cluster
Verifying Jenkins X components
Switching Kubernetes clusters
Validating cluster conformance
How it works...
There's more…
Importing an application
Upgrading a Jenkins X application
Deleting a Jenkins X Kubernetes cluster
See also
Deploying and managing the life cycle of GitLab
Getting ready
How to do it...
Installing GitLab using Helm
Connecting to the GitLab dashboard
Creating the first GitLab user
Upgrading GitLab
How it works...
There's more…
Using your own wildcard certificate
Using autogenerated self-signed certificates
Enabling the GitLab Operator
Deleting GitLab
See also
Building CI/CD Pipelines
Technical requirements
Creating a CI/CD pipeline in Jenkins X
Getting ready
How to do it…
Connecting to Jenkins Pipeline Console
Importing an application as a pipeline
Checking application status
Promoting an application to production
Creating a pipeline using a QuickStart application
How it works...
Creating a CI/CD pipeline in GitLab
Getting ready
How to do it…
Creating a project using templates
Importing an existing project from GitHub
Enabling Auto DevOps
Enabling Kubernetes cluster integration
Creating a pipeline using Auto DevOps
Incrementally rolling out applications to production
How it works...
There's more...
GitLab Web IDE
Monitoring environments
See also
Creating a CI/CD pipeline in CircleCI
Getting ready
How to do it...
Getting started with CircleCI
Deploying changes to a Kubernetes cluster on EKS
How it works...
See also
Setting up a CI/CD pipeline using GitHub Actions
Getting ready
How to do it...
Creating a workflow file
Creating a basic Docker build workflow
Building and publishing images to Docker Registry
Adding a workflow status badge
See also
Setting up a CI/CD pipeline on Amazon Web Services
Getting ready
How to do it...
Creating an AWS CodeCommit code repository
Building projects with AWS CodeBuild
Creating an AWS CodeDeploy deployment
Building a pipeline with AWS CodePipeline
How it works...
See also
Setting up a CI/CD pipeline with Spinnaker on Google Cloud Build
Getting ready
How to do it...
Installing and configuring the Spin CLI
Configuring a service account for the CI/CD
Configuring events to trigger a pipeline
Deploying Spinnaker using Helm
Creating a Google Cloud Source code repository
Building projects with Google Cloud Build
Configuring a Spinnaker pipeline
Rolling out an application to production
See also
Setting up a CI/CD pipeline on Azure DevOps
Getting ready
How to do it...
Getting started with Azure DevOps
Configuring Azure Pipelines
Deploying changes to an AKS cluster
How it works...
See also
Automating Tests in DevOps
Technical requirements
Building event-driven automation with StackStorm
Getting ready
How to do it…
Installing StackStorm
Accessing the StackStorm UI
Using the st2 CLI
Defining a rule
Deploying a rule
See also
Automating tests with the Litmus framework
Getting ready
How to do it…
Installing the Litmus Operator
Using Chaos Charts for Kubernetes
Creating a pod deletion chaos experiment
Reviewing chaos experiment results
Viewing chaos experiment logs
How it works...
See also
Automating Chaos Engineering with Gremlin
Getting ready
How to do it…
Setting up Gremlin credentials
Installing Gremlin on Kubernetes
Creating a CPU attack against a Kubernetes worker
Creating a node shutdown attack against a Kubernetes worker
Running predefined scenario-based attacks
Deleting Gremlin from your cluster
How it works...
See also
Automating your code review with Codacy
Getting ready
How to do it…
Accessing the Project Dashboard
Reviewing commits and PRs
Viewing issues by category
Adding a Codacy badge to your repository
See also
Detecting bugs and anti-patterns with SonarQube
Getting ready
How to do it…
Installing SonarQube using Helm
Accessing the SonarQube Dashboard
Creating a new user and tokens
Enabling quality profiles
Adding a project
Reviewing a project's quality
Adding marketplace plugins
Deleting SonarQube from your cluster
How it works...
See also
Detecting license compliance issues with FOSSA
Getting ready
How to do it…
Adding projects to FOSSA
Triaging licensing issues
Adding a FOSSA badge to your project
Preparing for Stateful Workloads
Technical requirements
Managing Amazon EBS volumes in Kubernetes
Getting ready
How to do it…
Creating an EBS storage class
Changing the default storage class
Using EBS volumes for persistent storage
Using EBS storage classes to dynamically create persistent volumes
Deleting EBS persistent volumes
Installing the EBS CSI driver to manage EBS volumes
See also
Managing GCE PD volumes in Kubernetes
Getting ready
How to do it…
Creating a GCE persistent disk storage class
Changing the default storage class
Using GCE PD volumes for persistent storage
Using GCE PD storage classes to create dynamic persistent volumes
Deleting GCE PD persistent volumes
Installing the GCP Compute PD CSI driver to manage PD volumes
How it works...
See also
Managing Azure Disk volumes in Kubernetes
Getting ready
How to do it…
Creating an Azure Disk storage class
Changing the default storage class to ZRS
Using Azure Disk storage classes to create dynamic PVs
Deleting Azure Disk persistent volumes
Installing the Azure Disk CSI driver
See also
Configuring and managing persistent storage using Rook
Getting ready
How to do it…
Installing a Ceph provider using Rook
Creating a Ceph cluster
Verifying a Ceph cluster's health
Creating a Ceph block storage class
Using a Ceph block storage class to create dynamic PVs
See also
Configuring and managing persistent storage using OpenEBS
Getting ready
How to do it…
Installing iSCSI client prerequisites
Installing OpenEBS
Using ephemeral storage to create persistent volumes
Creating storage pools
Creating OpenEBS storage classes
Using an OpenEBS storage class to create dynamic PVs
How it works...
See also
Setting up NFS for shared storage on Kubernetes
Getting ready
How to do it…
Installing NFS prerequisites
Installing an NFS provider using a Rook NFS operator
Using a Rook NFS operator storage class to create dynamic NFS PVs
Installing an NFS provisioner using OpenEBS
Using the OpenEBS NFS provisioner storage class to create dynamic NFS PVs
See also
Troubleshooting storage issues
Getting ready
How to do it…
Persistent volumes in the pending state
A PV is stuck once a PVC has been deleted
Disaster Recovery and Backup
Technical requirements
Configuring and managing S3 object storage using MinIO
Getting ready
How to do it…
Creating a deployment YAML manifest
Creating a MinIO S3 service
Accessing the MinIO web user interface
How it works...
See also
Managing Kubernetes volume snapshots and restores
Getting ready
How to do it…
Enabling feature gates
Creating a volume snapshot via CSI
Restoring a volume from a snapshot via CSI
Cloning a volume via CSI
How it works...
See also
Application backup and recovery using Velero
Getting ready
How to do it…
Installing Velero
Backing up an application
Restoring an application
Creating a scheduled backup
Taking a backup of an entire namespace
Viewing backups with MinIO
Deleting backups and schedules
How it works...
See also
Application backup and recovery using Kasten
Getting ready
How to do it…
Installing Kasten
Accessing the Kasten Dashboard
Backing up an application
Restoring an application
How it works...
See also
Cross-cloud application migration
Getting ready
How to do it…
Creating an export profile in Kasten
Exporting a restore point in Kasten
Creating an import profile in Kasten
Migrating an application in Kasten
Importing clusters into OpenEBS Director
Migrating an application in OpenEBS Director
See also
Scaling and Upgrading Applications
Technical requirements
Scaling applications on Kubernetes
Getting ready
How to do it…
Validating the installation of Metrics Server
Manually scaling an application
Autoscaling applications using a Horizontal Pod Autoscaler
How it works...
See also
Assigning applications to nodes
Getting ready
How to do it…
Labeling nodes
Assigning pods to nodes using nodeSelector
Assigning pods to nodes using node and inter-pod Affinity
How it works...
See also
Creating an external load balancer
Getting ready
How to do it…
Creating an external cloud load balancer
Finding the external address of the service
How it works...
See also
Creating an ingress service and service mesh using Istio
Getting ready
How to do it…
Installing Istio using Helm
Verifying the installation
Creating an ingress gateway
How it works...
There's more…
Deleting Istio
See also
Creating an ingress service and service mesh using Linkerd
Getting ready
How to do it…
Installing the Linkerd CLI
Installing Linkerd
Verifying a Linkerd deployment
Adding Linkerd to a service
There's more…
Accessing the dashboard
Deleting Linkerd
See also
Auto-healing pods in Kubernetes
Getting ready
How to do it…
Testing self-healing pods
Adding liveness probes to pods
How it works...
See also
Managing upgrades through blue/green deployments
Getting ready
How to do it…
Creating the blue deployment
Creating the green deployment
Switching traffic from blue to green
See also
Observability and Monitoring on Kubernetes
Technical requirements
Monitoring in Kubernetes
Getting ready
How to do it…
Adding metrics using Kubernetes Metrics Server
Monitoring metrics using the CLI
Monitoring metrics using Kubernetes Dashboard
Monitoring node health
See also
Inspecting containers
Getting ready
How to do it…
Inspecting pods in Pending status
Inspecting pods in ImagePullBackOff status
Inspecting pods in CrashLoopBackOff status
See also
Monitoring using Amazon CloudWatch
Getting ready
How to do it…
Enabling Webhook authorization mode
Installing Container Insights Agents for Amazon EKS
Viewing Container Insights metrics
See also
Monitoring using Google Stackdriver
Getting ready
How to do it…
Installing Stackdriver Kubernetes Engine Monitoring support for GKE
Configuring a workspace on Stackdriver
Monitoring GKE metrics using Stackdriver
See also
Monitoring using Azure Monitor
Getting ready
How to do it…
Enabling Azure Monitor support for AKS using the CLI
Monitoring AKS performance metrics using Azure Monitor
Viewing live logs using Azure Monitor
See also
Monitoring Kubernetes using Prometheus and Grafana
Getting ready
How to do it…
Deploying Prometheus using Helm charts
Monitoring metrics using Grafana dashboards
Adding a Grafana dashboard to monitor applications
See also
Monitoring and performance analysis using Sysdig
Getting ready
How to do it…
Installing the Sysdig agent
Analyzing application performance
See also
Managing the cost of resources using Kubecost
Getting ready
How to do it…
Installing Kubecost
Accessing the Kubecost dashboard
Monitoring Kubernetes resource cost allocation
See also
Securing Applications and Clusters
Technical requirements
Using RBAC to harden cluster security
Getting ready
How to do it…
Viewing the default Roles
Creating user accounts
Creating Roles and RoleBindings
Testing the RBAC rules
How it works...
See also
Configuring Pod Security Policies
Getting ready
How to do it…
Enabling PSPs on EKS
Enabling PSPs on GKE
Enabling PodSecurityPolicy on AKS
Creating a restricted PSP
There's more…
Restricting pods to access certain volume types
Using Kubernetes PodSecurityPolicy advisor
See also
Using Kubernetes CIS Benchmark for security auditing
Getting ready
How to do it…
Running kube-bench on Kubernetes
Running kube-bench on managed Kubernetes services
Running kube-bench on OpenShift
How it works...
See also
Building DevSecOps into the pipeline using Aqua Security
Getting ready
How to do it…
Scanning images using Trivy
Building vulnerability scanning into GitLab
Building vulnerability scanning into CircleCI
See also
Monitoring suspicious application activities using Falco
Getting ready
How to do it…
Installing Falco on Kubernetes
Detecting anomalies using Falco
Defining custom rules
How it works...
See also
Securing credentials using HashiCorp Vault
Getting ready
How to do it…
Installing Vault on Kubernetes
Accessing the Vault UI
Storing credentials on Vault
See also
Logging with Kubernetes
Technical requirements
Accessing Kubernetes logs locally
Getting ready
How to do it…
Accessing logs through Kubernetes
Debugging services locally using Telepresence
How it works...
See also
Accessing application-specific logs
Getting ready
How to do it…
Getting shell access in a container
Accessing PostgreSQL logs inside a container
Building centralized logging in Kubernetes using the EFK stack
Getting ready
How to do it…
Deploying Elasticsearch Operator
Requesting the Elasticsearch endpoint
Deploying Kibana
Aggregating logs with Fluent Bit
Accessing Kubernetes logs on Kibana
See also
Logging Kubernetes using Google Stackdriver
Getting ready
How to do it…
Installing Stackdriver Kubernetes Engine Monitoring support for GKE
Viewing GKE logs using Stackdriver
See also
Using a managed Kubernetes logging service
Getting ready
How to do it…
Connecting clusters to Director Online
Accessing logs using Director Online
Logging for your Jenkins CI/CD environment
Getting ready
How to do it…
Installing the Fluentd plugin
Streaming Jenkins logs to Elasticsearch using Fluentd
There's more…
Installing the Logstash plugin
Streaming Jenkins logs to Elasticsearch using Logstash
See also
Other Books You May Enjoy
Leave a review - let other readers know what you think
Kubernetes is an open source container orchestration platform originally developed by Google and made available to the public in 2014. It has made the deployment of container-based, complex, distributed systems simpler for developers. Since its inception, the community has built a large ecosystem around Kubernetes with many open source projects. This book is specially designed to quickly help Kubernetes administrators and site reliability engineers (SREs) to find the right tools and get up to speed with Kubernetes. The book covers everything from getting Kubernetes clusters up on most popular cloud and on-premises solutions to recipes that help you automate testing and move your applications out to production environments.
Kubernetes – A Complete DevOps Cookbook gives you clear, step-by-step instructions to install and run your private Kubernetes clusters successfully. It is full of practical and applicable recipes that enable you to use the latest capabilities of Kubernetes, as well as other third-party solutions, and implement them.
This book targets developers, IT professionals, SREs, and DevOps teams and engineers looking to manage, scale, and orchestrate applications in their organizations using Kubernetes. A basic understanding of Linux, Kubernetes, and containerization is required.
Chapter 1, Building Production-Ready Kubernetes Clusters, teaches you how to configure Kubernetes services on different public clouds or on-premises using the popular options available today.
Chapter 2, Operating Applications on Kubernetes, teaches you how to deploy DevOps tools and continuous integration/continuous deployment (CI/CD) infrastructure on Kubernetes using the most popular life cycle management options.
Chapter 3, Building CI/CD Pipelines, teaches you how to build, push, and deploy applications from development to production and also ways to detect bugs, anti-patterns, and license concerns during the process.
Chapter 4, Automating Tests in DevOps, teaches you how to automate testing in a DevOps workflow to accelerate time to production, reduce loss-of-delivery risks, and detect service anomalies using known test automation tools in Kubernetes.
Chapter 5, Preparing for Stateful Workloads, teaches you how to protect the state of applications from node or application failures, as well as how to share data and reattach volumes.
Chapter 6, Disaster Recovery and Backup, teaches you how to handle backup and disaster recovery scenarios to keep applications in production highly available and quickly recover service during cloud-provider or basic Kubernetes node failures.
Chapter 7, Scaling and Upgrading Applications, teaches you how to dynamically scale containerized services running on Kubernetes to handle the changing traffic needs of your service.
Chapter 8, Observability and Monitoring on Kubernetes, teaches you how to monitor metrics for performance analysis and also how to monitor and manage the real-time cost of Kubernetes resources.
Chapter 9, Securing Applications and Clusters, teaches you how to build DevSecOps into CI/CD pipelines, detect metrics for performance analysis, and securely manage secrets and credentials.
Chapter 10, Logging with Kubernetes, teaches you how to set up a cluster to ingest logs, as well as how to view them using both self-managed and hosted solutions.
To use this book, you will need access to computers, servers, or cloud-provider services where you can provision virtual machine instances. To set up the lab environments, you may also need larger cloud instances that will require you to enable billing.
We assume that you are using an Ubuntu host (18.04, codename Bionic Beaver at the time of writing); the book provides steps for Ubuntu environments.
Software/Hardware covered in the book
OS Requirements
GitLab, Jenkins X, OpenShift, Rancher, kops, cURL, Python, Vim or Nano, kubectl, helm
Ubuntu/Windows/macOS
You will need AWS, GCP, and Azure credentials to perform some of the recipes in this book.
If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to copy/pasting of code.
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packt.com.
2. Select the Support tab.
3. Click on Code Downloads.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/k8sdevopscookbook/src and https://github.com/PacktPublishing/Kubernetes-A-Complete-DevOps-Cookbook. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781838828042_ColorImages.pdf.
Visit the following link to check out videos of the code being run: http://bit.ly/2U0Cm8x
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system."
A block of code is set as follows:
html, body, #map {
  height: 100%;
  margin: 0;
  padding: 0
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
[default]
exten => s,1,Dial(Zap/1|30)
exten => s,2,Voicemail(u100)
exten => s,102,Voicemail(b100)
exten => i,1,Voicemail(s0)
Any command-line input or output is written as follows:
$ mkdir css
$ cd css
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Select System info from the Administration panel."
In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).
To give clear instructions on how to complete a recipe, these sections are used as follows:
This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.
This section contains the steps required to follow the recipe.
This section usually consists of a detailed explanation of what happened in the previous section.
This section consists of additional information about the recipe in order to make you more knowledgeable about the recipe.
This section provides helpful links to other useful information for the recipe.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report it to us. Please visit www.packtpub.com/support/errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
This chapter covers the most common deployment methods used on popular cloud services as well as on-premises, although you will certainly find a number of other tutorials on the internet explaining other approaches. This chapter explains the differences between managed/hosted cloud services and self-managed cloud or on-premises Kubernetes deployments, as well as the advantages of one vendor over another.
In this chapter, we will be covering the following recipes:
Configuring a Kubernetes cluster on Amazon Web Services
Configuring a Kubernetes cluster on Google Cloud Platform
Configuring a Kubernetes cluster on Microsoft Azure
Configuring a Kubernetes cluster on Alibaba Cloud
Configuring and managing Kubernetes clusters with Rancher
Configuring Red Hat OpenShift
Configuring a Kubernetes cluster using Ansible
Troubleshooting installation issues
It is recommended that you have a fundamental knowledge of Linux containers and Kubernetes in general. For preparing your Kubernetes clusters, using a Linux host is recommended. If your workstation is Windows-based, then we recommend that you use Windows Subsystem for Linux (WSL). WSL gives you a Linux command line on Windows and lets you run ELF64 Linux binaries on Windows.
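If you are unsure whether a given shell is already running inside WSL, the kernel release string is a quick indicator: WSL kernels contain "Microsoft" or "microsoft". The following is a minimal sketch with illustrative sample strings; on a real host you would pass in the output of uname -r:

```shell
# Sketch: detect WSL from a kernel release string (pass "$(uname -r)" on a real host).
is_wsl() {
  case "$1" in
    *[Mm]icrosoft*) echo "yes" ;;
    *) echo "no" ;;
  esac
}

is_wsl "4.4.0-19041-Microsoft"   # a WSL 1-style release string
is_wsl "5.4.0-42-generic"        # a typical native Ubuntu kernel
```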
It's always good practice to develop using the same environment (which means the same distribution and the same version) as the one that will be used in production. This will avoid unexpected surprises such as It Worked on My Machine (IWOMM). If your workstation is using a different OS, another good approach is to set up a virtual machine on your workstation. VirtualBox (https://www.virtualbox.org/) is a free and open source hypervisor that runs on Windows, Linux, and macOS.
In this chapter, we'll assume that you are using an Ubuntu host (18.04, code name Bionic Beaver at the time of writing). There are no specific hardware requirements since all the recipes in this chapter will be deployed and run on cloud instances. Here is the list of software packages that will be required on your localhost to complete the recipes:
cURL
Python
Vim or Nano (or your favorite text editor)
The recipes in this section will take you through how to get a fully functional Kubernetes cluster with fully customizable master and worker nodes that you can use for the recipes in the following chapters or in production.
In this section, we will cover both Amazon EC2 and Amazon EKS recipes so that we can run Kubernetes on Amazon Web Services (AWS).
All the operations mentioned here require an AWS account and an AWS user with a policy that has permission to use the related services. If you don't have one, go to https://aws.amazon.com/account/ and create one.
AWS provides two main options when it comes to running Kubernetes on it. You can consider using the Amazon Elastic Compute Cloud (Amazon EC2) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to consider using managed services such as Amazon Elastic Container Service for Kubernetes (Amazon EKS).
Depending on whether you want to use the AWS EC2 service or EKS, you can follow these recipes to get your cluster up and running using either the kops or eksctl tool:
Installing the command-line tools to configure AWS services
Installing kops to provision a Kubernetes cluster
Provisioning a Kubernetes cluster on Amazon EC2
Provisioning a managed Kubernetes cluster on Amazon EKS
In this recipe, we will get the AWS Command-Line Interface (CLI) awscli and the Amazon EKS CLI eksctl to access and configure AWS services.
Let's perform the following steps:
Install awscli on your workstation:
$ sudo apt-get update && sudo apt-get install awscli
Configure the AWS CLI so that it uses your access key ID and secret access key:
$ aws configure
Download and install the Amazon EKS command-line interface, eksctl:
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
Verify its version and make sure eksctl is installed:
$ eksctl version
To be able to perform the following recipes, the eksctl version should be 0.13.0 or later.
In this recipe, we will get the Kubernetes Operations tool, kops, and Kubernetes command-line tool, kubectl, installed in order to provision and manage Kubernetes clusters.
Let's perform the following steps:
Download and install the Kubernetes Operations tool, kops:
$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops
Run the following command to make sure kops is installed and confirm that the version is 1.15.0 or later:
$ kops version
Download and install the Kubernetes command-line tool, kubectl:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
Verify its version and make sure kubectl is installed:
$ kubectl version --short
To be able to perform the following recipes, the kubectl version should be v1.15 or later.
This recipe will take you through how to get a fully functional Kubernetes cluster with fully customizable master and worker nodes that you can use for the recipes in the following chapters or in production.
Let's perform the following steps:
Create a domain for your cluster.
As an example, I will use the k8s.containerized.me subdomain as our hosted zone. Also, if your domain is registered with a registrar other than Amazon Route 53, you must update the name servers with your registrar and add Route 53 NS records for the hosted zone to your registrar's DNS records:
$ aws route53 create-hosted-zone --name k8s.containerized.me \
--caller-reference k8s-devops-cookbook \
--hosted-zone-config Comment="Hosted Zone for my K8s Cluster"
Create an S3 bucket to store the Kubernetes configuration and the state of the cluster. In our example, we will use s3.k8s.containerized.me as our bucket name:
$ aws s3api create-bucket --bucket s3.k8s.containerized.me \
--region us-east-1
Confirm your S3 bucket by listing the available bucket:
$ aws s3 ls
2019-07-21 22:02:58 s3.k8s.containerized.me
Enable bucket versioning:
$ aws s3api put-bucket-versioning --bucket s3.k8s.containerized.me \
--versioning-configuration Status=Enabled
Set environmental parameters for kops so that you can use the locations by default:
$ export KOPS_CLUSTER_NAME=useast1.k8s.containerized.me
$ export KOPS_STATE_STORE=s3://s3.k8s.containerized.me
Create an SSH key if you haven't done so already:
$ ssh-keygen -t rsa
Create the cluster configuration with the list of zones where you want your master nodes to run:
$ kops create cluster --node-count=6 --node-size=t3.large \
--zones=us-east-1a,us-east-1b,us-east-1c \
--master-size=t3.large \
--master-zones=us-east-1a,us-east-1b,us-east-1c
Create the cluster:
$ kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
Wait a couple of minutes for the nodes to launch, then validate the cluster:
$ kops validate cluster
Now, you can use kubectl to manage your cluster:
$ kubectl cluster-info
By default, kops creates and exports the Kubernetes configuration under ~/.kube/config. Therefore, no additional steps are required to connect your clusters using kubectl.
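What kops writes to ~/.kube/config is a standard kubeconfig entry: a cluster, a user, and a context named after your cluster. The following is a trimmed sketch of such a file, with all entries illustrative and credentials omitted:

```yaml
# Trimmed, illustrative sketch of the kubeconfig kops exports (credentials omitted).
apiVersion: v1
kind: Config
clusters:
- name: useast1.k8s.containerized.me
  cluster:
    server: https://api.useast1.k8s.containerized.me
contexts:
- name: useast1.k8s.containerized.me
  context:
    cluster: useast1.k8s.containerized.me
    user: useast1.k8s.containerized.me
current-context: useast1.k8s.containerized.me
users:
- name: useast1.k8s.containerized.me
  user: {}
```

Because kops also sets current-context, kubectl commands target the new cluster immediately.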
Perform the following steps to get your managed Kubernetes-as-a-service cluster up and running on Amazon EKS using eksctl:
Create a cluster using the default settings:
$ eksctl create cluster
...
[√] EKS cluster "great-outfit-123" in "us-west-2" region is ready
By default, eksctl deploys a cluster with workers on two m5.large instances using the AWS EKS AMI in the us-west-2 region. eksctl creates and exports the Kubernetes configuration under ~/.kube/config. Therefore, no additional steps are required to connect your clusters using kubectl.
Confirm the cluster information and workers:
$ kubectl cluster-info && kubectl get nodes
Kubernetes master is running at https://gr7.us-west-2.eks.amazonaws.com
CoreDNS is running at https://gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
NAME STATUS ROLES AGE VERSION
ip-1-2-3-4.us-west-2.compute.internal Ready <none> 5m42s v1.13.8-eks-cd3eb0
ip-1-2-3-4.us-west-2.compute.internal Ready <none> 5m40s v1.13.8-eks-cd3eb0
Now, you have a two-node Amazon EKS cluster up and running.
The first recipe on Amazon EC2 showed you how to provision multiple copies of master nodes that can survive a master node failure as well as single AZ outages. Although it is similar to what you get with the second recipe on Amazon EKS with Multi-AZ support, clusters on EC2 give you higher flexibility. When you run Amazon EKS instead, it runs a single-tenant Kubernetes control plane for each cluster, and the control plane consists of at least two API server nodes and three etcd nodes that run across three AZs within a region.
Let's take a look at the cluster options we used in step 7 with the kops create cluster command:
--node-count sets the number of worker nodes to create. In our example, this is 6. This configuration will deploy two worker nodes per zone defined with --zones=us-east-1a,us-east-1b,us-east-1c, for a total of three master nodes and six worker nodes.
--node-size and --master-size set the instance size for the worker and master nodes. In our example, t3.large is used for both the worker and master nodes. For larger clusters, a larger instance type is recommended for the workers.
--zones and --master-zones set the zones that the cluster will run in. In our example, we have used three zones: us-east-1a, us-east-1b, and us-east-1c.
For additional zone information, check the AWS Global Infrastructure link in the See also section.
When deploying multi-master clusters, an odd number of master instances should be created. Also, remember that Kubernetes relies on etcd, a distributed key/value store. An etcd quorum requires a majority (more than half) of the members to be available at any time. Therefore, with three master nodes, our control plane can only survive the failure of a single master node or a single AZ outage. If you need to tolerate more than that, you need to consider increasing the number of master instances.
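The majority rule above is simple arithmetic: an N-member etcd cluster needs floor(N/2)+1 members for quorum, so it tolerates floor((N-1)/2) failures. A quick sketch:

```shell
# Fault tolerance of an N-member etcd cluster: quorum is floor(N/2)+1,
# so the cluster survives floor((N-1)/2) member failures.
for members in 1 2 3 4 5; do
  quorum=$(( members / 2 + 1 ))
  tolerated=$(( (members - 1) / 2 ))
  echo "members=${members} quorum=${quorum} tolerated_failures=${tolerated}"
done
```

Note that 3 and 4 members both tolerate only one failure, which is why even member counts add cost without adding fault tolerance.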
It is also useful to have knowledge of the following information:
Using the AWS Shell
Using a gossip-based cluster
Using different regions for an S3 bucket
Editing cluster configuration
Deleting your cluster
Provisioning an EKS cluster using the Amazon EKS dashboard
Deploying Kubernetes Dashboard
Another useful tool worth mentioning here is aws-shell. It is an integrated shell that works with the AWS CLI. It uses the AWS CLI configuration and improves productivity with an autocomplete feature.
Install aws-shell using the following command and run it:
$ sudo apt-get install aws-shell && aws-shell
You can use AWS commands with aws-shell with less typing. Press the F10 key to exit the shell.
In this recipe, we created a domain (either purchased from Amazon or another registrar) and a hosted zone, because kops uses DNS for discovery. Although it needs to be a valid DNS name, starting with kops 1.6.2, DNS configuration became optional. Instead of an actual domain or subdomain, a gossip-based cluster can be easily created. By using a registered domain name, we make our clusters easier to share and accessible by others for production use.
If, for any reason, you prefer a gossip-based cluster, you can skip the hosted zone creation and use a cluster name that ends with k8s.local:
$ export KOPS_CLUSTER_NAME=devopscookbook.k8s.local
$ export KOPS_STATE_STORE=s3://devops-cookbook-state-store
Setting the environmental parameters for kops is optional but highly recommended since it shortens your CLI commands.
In order for kops to store cluster configuration, a dedicated S3 bucket is required.
An example for the eu-west-1 region would look as follows:
$ aws s3api create-bucket --bucket s3.k8s.containerized.me \
--region eu-west-1 --create-bucket-configuration \
LocationConstraint=eu-west-1
This S3 bucket will become the source of truth for our Kubernetes cluster configuration. For simplicity, it is recommended to use the us-east-1 region; otherwise, an appropriate LocationConstraint needs to be specified in order to create the bucket in the desired region.
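The us-east-1 special case is easy to get wrong in scripts, since passing a LocationConstraint of us-east-1 is rejected by the API. One way to handle it is to compute the extra arguments once; the helper name below is my own, not part of the AWS CLI:

```shell
# Sketch: build the extra create-bucket arguments for a given region.
# us-east-1 is the only region where LocationConstraint must be omitted.
bucket_location_args() {
  if [ "$1" = "us-east-1" ]; then
    echo ""
  else
    echo "--create-bucket-configuration LocationConstraint=$1"
  fi
}

bucket_location_args us-east-1    # nothing extra needed
bucket_location_args eu-west-1    # adds the LocationConstraint argument
```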
The kops create cluster command, which we used to create the cluster configuration, doesn't actually create the cluster itself and launch the EC2 instances; instead, it creates the configuration file in our S3 bucket.
After creating the configuration file, you can make changes to the configuration using the kops edit cluster command.
You can separately edit your node instance groups using the following command:
$ kops edit ig nodes
$ kops edit ig master-us-east-1a
The config file is fetched from the state store location in the S3 bucket. If you prefer a different editor, you can, for example, set $KUBE_EDITOR=nano to change it.
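Running kops edit ig nodes opens an InstanceGroup spec in your editor. The following is a trimmed sketch of what such a spec might look like for the cluster in this recipe; the field values are illustrative, and the file kops presents will contain additional fields such as the image:

```yaml
# Illustrative sketch of a kops InstanceGroup spec for the worker nodes.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t3.large   # instance size set by --node-size
  minSize: 6              # node count set by --node-count
  maxSize: 6
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
```

Remember that edits only change the stored configuration; you still need to run kops update cluster (and, for some changes, kops rolling-update cluster) to apply them.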
To delete your cluster, use the following command:
$ kops delete cluster --name ${KOPS_CLUSTER_NAME} --yes
This process may take a few minutes and, when finished, you will get a confirmation.
In the Provisioning a managed Kubernetes cluster on Amazon EKS recipe, we used eksctl to deploy a cluster. As an alternative, you can also use the AWS Management Console web user interface to deploy an EKS cluster.
Perform the following steps to get your cluster up and running on Amazon EKS:
Open your browser and go to the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
Enter a cluster name and click on the Next Step button.
On the Create Cluster page, select the Kubernetes Version, the Role name, at least two availability zones from the subnets list, and the Security groups.
Click on Create.
Cluster creation with EKS takes around 20 minutes. Refresh the page in 15-20 minutes and check its status.
Use the following command to update your kubectl configuration:
$ aws eks --region us-east-1 update-kubeconfig \
--name K8s-DevOps-Cookbook
Now, use kubectl to manage your cluster:
$ kubectl get nodes
Your cluster is now configured and can be managed with kubectl.
Last but not least, to deploy the Kubernetes Dashboard application on an AWS cluster, you need to follow these steps:
At the time of writing this recipe, Kubernetes Dashboard v2.0.0 was still in beta. Since the v1.x versions will be obsolete soon, I highly recommend that you install the latest version, that is, v2.0.0. The new version brings a lot of functionality and support for Kubernetes v1.16 and later versions. Before you deploy the Dashboard, make sure to remove any previous version. Check the latest release by following the link in the information box and deploy it, similar to the following:
$ kubectl delete ns kubernetes-dashboard
# Use the latest version link from https://github.com/kubernetes/dashboard/releases
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
By default, the kubernetes-dashboard service is exposed using the ClusterIP type. If you want to access it from outside, edit the service using the following command and replace the ClusterIP type with LoadBalancer; otherwise, use port forwarding to access it:
$ kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
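The edit only touches one field of the Service spec. The following sketch shows the relevant part of the kubernetes-dashboard Service after the change; the port numbers match the upstream v2.0.0-beta recommended manifest, but verify them against the release you actually deployed:

```yaml
# Relevant part of the kubernetes-dashboard Service after editing;
# only spec.type changes from ClusterIP to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer   # was: ClusterIP
  ports:
  - port: 443
    targetPort: 8443
```

On AWS, saving this change prompts Kubernetes to provision an ELB for the service, which is where the external address in the next step comes from.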
Get the external IP of your dashboard from the kubernetes-dashboard service:
$ kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 100.66.234.228 myaddress.us-east-1.elb.amazonaws.com 443:30221/TCP 5m46s
Open the external IP link in your browser. In our example, it is https://myaddress.us-east-1.elb.amazonaws.com.
We will use the token option to access Kubernetes Dashboard. Now, let's find the token in our cluster using the following command. In this example, the command returns kubernetes-dashboard-token-bc2w5 as the token name:
$ kubectl get secrets -A | grep dashboard-token
kubernetes-dashboard kubernetes-dashboard-token-bc2w5 kubernetes.io/service-account-token 3 17m
Replace the secret name with yours from the output of the previous command. Get the token details from the description of the Secret:
$ kubectl describe secrets kubernetes-dashboard-token-bc2w5 -n kubernetes-dashboard
Copy the token section from the output of the preceding command and paste it into Kubernetes Dashboard to sign in.
Now, you have access to Kubernetes Dashboard to manage your cluster.
Kops documentation for the latest version and additional create cluster parameters:
https://github.com/kubernetes/kops/blob/master/docs/aws.md
https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md
AWS Command Reference S3 Create Bucket API:
https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html
AWS Global Infrastructure Map:
https://aws.amazon.com/about-aws/global-infrastructure/
Amazon EKS FAQ:
https://aws.amazon.com/eks/faqs/
AWS Fargate, another AWS service, if you would prefer to run containers without managing servers or clusters:
https://aws.amazon.com/fargate/
A complete list of CNCF-certified Kubernetes installers:
https://landscape.cncf.io/category=certified-kubernetes-installer&format=card-mode&grouping=category
Other recommended tools for getting highly available clusters on AWS:
Konvoy:
https://d2iq.com/solutions/ksphere/konvoy
KubeAdm:
https://github.com/kubernetes/kubeadm
KubeOne:
https://github.com/kubermatic/kubeone
KubeSpray:
https://github.com/kubernetes-sigs/kubespray
This section will take you through step-by-step instructions to configure Kubernetes clusters on GCP. You will learn how to run a hosted Kubernetes cluster without needing to provision or manage master and etcd instances using GKE.
All the operations mentioned here require a GCP account with billing enabled. If you don't have one already, go to https://console.cloud.google.com and create an account.
On Google Cloud Platform (GCP), you have two main options when it comes to running Kubernetes. You can consider using Google Compute Engine (GCE) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to use the managed Google Kubernetes Engine (GKE).
This section is further divided into the following subsections to make this process easier to follow:
Installing the command-line tools to configure GCP services
Provisioning a managed Kubernetes cluster on GKE
Connecting to GKE clusters
In this recipe, we will get the primary CLI for Google Cloud Platform, gcloud, installed so that we can configure GCP services:
Run the following command to download the gcloud CLI:
$ curl https://sdk.cloud.google.com | bash
Initialize the SDK and follow the instructions given:
$ gcloud init
During the initialization, when asked, select either an existing project that you have permissions for or create a new project.
Enable the Compute Engine APIs for the project:
$ gcloud services enable compute.googleapis.com
Operation "operations/acf.07e3e23a-77a0-4fb3-8d30-ef20adb2986a" finished successfully.
Set a default zone:
$ gcloud config set compute/zone us-central1-a
Make sure you can start up a GCE instance from the command line:
$ gcloud compute instances create "devops-cookbook" \
--zone "us-central1-a" --machine-type "f1-micro"
Delete the test VM:
$ gcloud compute instances delete "devops-cookbook"
If all the commands are successful, you can provision your GKE cluster.
Let's perform the following steps:
Create a cluster:
$ gcloud container clusters create k8s-devops-cookbook-1 \
--cluster-version latest --machine-type n1-standard-2 \
--image-type UBUNTU --disk-type pd-standard --disk-size 100 \
--no-enable-basic-auth --metadata disable-legacy-endpoints=true \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--num-nodes "3" --enable-stackdriver-kubernetes \
--no-enable-ip-alias --enable-autoscaling --min-nodes 1 \
--max-nodes 5 --enable-network-policy \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade --enable-autorepair --maintenance-window "10:00"
Cluster creation will take 5 minutes or more to complete.
To get access to your GKE cluster, you need to follow these steps:
Configure kubectl to access your k8s-devops-cookbook-1 cluster:
$ gcloud container clusters get-credentials k8s-devops-cookbook-1
Verify your Kubernetes cluster:
$ kubectl get nodes
Now, you have a three-node GKE cluster up and running.
This recipe showed you how to quickly provision a GKE cluster using some default parameters.
In Step 1, we created a cluster with some default parameters. While all of the parameters are very important, I want to explain some of them here.
--cluster-version sets the Kubernetes version to use for the master and nodes. Only use it if you want to use a version that's different from the default. To get the available version information, you can use the gcloud container get-server-config command.
We set the instance type by using the --machine-type parameter. If it's not set, the default is n1-standard-1.
