Description

Are you facing challenges with developing, deploying, monitoring, clustering, storing, securing, and managing Kubernetes in production environments because you're not familiar with infrastructure technologies? MicroK8s – a zero-ops, lightweight, and CNCF-compliant Kubernetes with a small footprint – is the apt solution for you.
This book gets you up and running with production-grade, highly available (HA) Kubernetes clusters on MicroK8s using best practices and examples based on IoT and edge computing.
Beginning with an introduction to Kubernetes, MicroK8s, and IoT and edge computing architectures, this book shows you how to install, deploy sample apps, and enable add-ons (like DNS and dashboard) on the MicroK8s platform. You’ll work with multi-node Kubernetes clusters on Raspberry Pi and networking plugins (such as Calico and Cilium) and implement service mesh, load balancing with MetalLB and Ingress, and AI/ML workloads on MicroK8s. You’ll also understand how to secure containers, monitor infrastructure and apps with Prometheus, Grafana, and the ELK stack, manage storage replication with OpenEBS, resist component failure using a HA cluster, and more, as well as take a sneak peek into future trends.
By the end of this book, you'll be able to use MicroK8s to build and implement scenarios for IoT and edge computing workloads in a production environment.




IoT Edge Computing with MicroK8s

A hands-on approach to building, deploying, and distributing production-ready Kubernetes on IoT and Edge platforms

Karthikeyan Shanmugam

BIRMINGHAM—MUMBAI

IoT Edge Computing with MicroK8s

Copyright © 2022 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Rahul Nair

Publishing Product Manager: Surbhi Suman

Senior Editor: Shazeen Iqbal

Content Development Editor: Sujata Tripathi

Technical Editor: Rajat Sharma

Copy Editor: Safis Editing

Project Coordinator: Ashwin Dinesh Kharwa

Proofreader: Safis Editing

Indexer: Subalakshmi Govindhan

Production Designer: Aparna Bhagat

Marketing Coordinator: Nimisha Dua

First published: September 2022

Production reference: 2280922

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80323-063-4

www.packt.com

To God Almighty, for this wonderful opportunity and for allowing me to complete it successfully. To my wife, Ramya, and to my daughters, Nethra and Kanishka, for being loving and supportive.

To my parents, Shanmugam and Jayabarathi.

Contributors

About the author

Karthikeyan Shanmugam is an experienced solutions architect professional, with about 20+ years of experience in the design and development of enterprise applications across various industries. Currently, he is working as a senior solutions architect at Amazon Web Services, where he is responsible for designing scalable, adaptable, and resilient architectures that solve client business challenges. Prior to that, he worked for companies such as Ramco Systems, Infosys, Cognizant, and HCL Technologies.

He specializes in cloud, cloud-native, containers, and container orchestration tools, such as Kubernetes, IoT, digital twin, and microservices domains, and has obtained multiple certifications from various cloud providers.

He is also a contributing author for leading publications such as InfoQ, Container Journal, DevOps.com, The New Stack, and the Cloud Native Computing Foundation (CNCF.io) blog.

His articles on emerging technologies (including the cloud, Docker, Kubernetes, microservices, and cloud-native development) can be read on his blog at upnxtblog.com.

About the reviewers

Alex Chalkias is a senior product manager working with Kubernetes and cloud-native technologies, currently at Elastic. He was always most drawn to the intersection of business and technology, specifically aspiring to build amazing products and solve interesting problems using open source software. His professional background also includes Canonical, Amadeus, and Nokia, where he occupied the roles of software engineer, scrum master, business analyst, and product owner. Alex holds a master’s degree in electrical engineering and computer science from the University of Patras. During his studies, he focused on programming and new technologies, such as augmented reality and human-computer interaction. In his spare time, he is an avid tennis, music, and TV series fan.

Jimmy Song is a developer advocate at Tetrate, a CNCF ambassador, and a cloud-native community (China) founder. He mainly focuses on cloud-native fields, including Kubernetes and service meshes. He is one of the authors of the books Deeper Understanding of Istio and Future Architecture.

Meha Bhalodiya is a final-year computer science engineering student. A Google Summer of Code 2022 scholar, she started contributing to Keptn's integration in automating deployment states after the state has been synced. In the spring of 2022, she was an LFX mentee for the CNCF-K8s Gateway API, where she contributed to assessing the project documentation, the contributor documentation, and the website. She was involved with the Kubernetes 1.24 and 1.23 release teams as a documentation shadow. She is also qualified as a Linux Foundation Training (LiFT) scholar. Additionally, at Kubernetes Community Days Bengaluru 2022, she was selected as a speaker and delivered a session on running local Kubernetes clusters using minikube, KinD, and MicroK8s.

Preface

The idea for this book was born when one of my customers wanted to implement a minimal container orchestration engine for their apps on their resource-constrained edge device. Deploying the entirety of Kubernetes was not the solution, but then I encountered the realm of minimal Kubernetes distributions, and after much experimentation with several providers, I chose MicroK8s to successfully build various edge computing use cases and scenarios for them.

Canonical’s MicroK8s Kubernetes distribution is small, lightweight, and fully conformant. It’s a minimalistic distribution with a focus on performance and simplicity. MicroK8s can be easily deployed in IoT and edge devices due to its small footprint. By the end of this book, you will know how to effectively implement the following use cases and scenarios for edge computing using MicroK8s:

Getting your Kubernetes cluster up and running
Enabling core Kubernetes add-ons such as Domain Name System (DNS) and dashboards
Creating, scaling, and performing rolling updates on multi-node Kubernetes clusters
Working with various container networking options, such as Calico, Flannel, and Cilium
Setting up MetalLB and Ingress options for load balancing
Using OpenEBS storage replication for stateful applications
Configuring Kubeflow and running AI/ML use cases
Configuring service mesh integration with Istio and Linkerd
Running serverless applications using Knative and OpenFaaS
Configuring logging and monitoring options (Prometheus, Grafana, Elastic, Fluentd, and Kibana)
Configuring a multi-node, highly available Kubernetes cluster
Configuring Kata for secured containers
Configuring strict confinement for running in isolation

According to Canonical’s 2022 Kubernetes and cloud native operations report (https://juju.is/cloud-native-kubernetes-usage-report-2022), 48 percent of respondents indicated the biggest barriers to migrating to or using Kubernetes and containers are a lack of in-house capabilities and limited staff.

As indicated in the report, there is a skills deficit as well as a knowledge gap, which I believe this book will solve by covering crucial areas that are required to bring you up to speed in no time.

Who this book is for

The book is intended for DevOps and cloud engineers, Kubernetes Site Reliability Engineers (SREs), and application developers who want to implement efficient techniques for deploying their software solutions. It will also be useful for technical architects and technology leaders who are looking to adopt cloud-native technologies. A basic understanding of container-based application design and development, virtual machines, networking, databases, and programming will be helpful to get the most out of this book.

What this book covers

Chapter 1, Getting Started with Kubernetes, introduces Kubernetes and the various components of the Kubernetes system as well as the abstractions.

Chapter 2, Introducing MicroK8s, introduces MicroK8s and shows how to install it, how to verify its installation status, and how to monitor and manage a Kubernetes cluster. We will also learn how to use some of the add-ons and deploy a sample application.

Chapter 3, Essentials of IoT and Edge Computing, delves into how Kubernetes, edge computing, and the cloud can collaborate to drive intelligent business decisions. This chapter gives an overview of the Internet of Things (IoT), the Edge, and how they are related, as well as the advantages of edge computing.

Chapter 4, Handling the Kubernetes Platform for IoT and Edge Computing, examines how Kubernetes offers a compelling value proposition for edge computing, along with different architectural approaches that demonstrate how Kubernetes can be used for edge workloads and can support an enterprise application's requirements – low latency, resource constraints, data privacy, and bandwidth scalability.

Chapter 5, Creating and Implementing Updates on Multi-Node Raspberry Pi Kubernetes Clusters, explores how to set up a MicroK8s Raspberry Pi multi-node cluster, deploy a sample application, and execute rolling updates on the deployed application. We will also understand ways to scale the deployed application and touch upon some of the recommended practices for building a scalable, secure, and highly optimized Kubernetes cluster model.

Chapter 6, Configuring Connectivity for Containers, looks at how networking is handled in a Kubernetes cluster. Furthermore, we will understand how to use Calico, Cilium, and Flannel CNI plugins to network the cluster. We will go through the most important factors to consider when choosing a CNI service.

Chapter 7, Setting Up MetalLB and Ingress for Load Balancing, delves into techniques (MetalLB and Ingress) for exposing services outside a cluster.

Chapter 8, Monitoring the Health of Infrastructure and Applications, examines various choices for monitoring, logging, and alerting your cluster, and provides detailed steps on how to configure them. We will also go through the essential metrics that should be watched in order to successfully manage your infrastructure and apps.

Chapter 9, Using Kubeflow to Run AI/MLOps Workloads, covers how to develop and deploy a sample ML model using the Kubeflow ML platform. We will also go through some of the best practices for running AI/ML workloads on Kubernetes.

Chapter 10, Going Serverless with Knative and OpenFaaS Frameworks, examines two of the most popular serverless frameworks included with MicroK8s, Knative and OpenFaaS, both of which are Kubernetes-based platforms for developing, deploying, and managing modern serverless workloads. We will also go through the best practices for developing and deploying serverless applications.

Chapter 11, Managing Storage Replication with OpenEBS, looks at how to use OpenEBS to implement storage replication that synchronizes data across several nodes. We will go through the steps involved in configuring and implementing a PostgreSQL stateful application utilizing the OpenEBS Jiva storage engine. We will also look at the Kubernetes storage best practices as well as recommendations for data engines.

Chapter 12, Implementing Service Mesh for Cross-Cutting Concerns, walks you through the steps of deploying Istio and Linkerd service meshes. You will also learn how to deploy and run a sample application, as well as how to configure and access dashboards.

Chapter 13, Resisting Component Failure Using HA Clusters, walks you through the steps involved in setting up a highly available cluster that can withstand a component failure and continue to serve workloads without interruption. We will also discuss some of the best practices for implementing Kubernetes applications on your production-ready cluster.

Chapter 14, Hardware Virtualization for Securing Containers, looks at how to use Kata Containers, a secure container runtime, to provide stronger workload isolation, leveraging hardware virtualization technology. We also discuss the best practices for establishing container security on your production-grade cluster.

Chapter 15, Implementing Strict Confinement for Isolated Containers, shows you how to install the MicroK8s snap with a strict confinement option, monitor the installation’s progress, and manage a Kubernetes cluster running on Ubuntu Core. We will also deploy a sample application and examine whether the application is able to run on a strict confinement-enabled Kubernetes cluster.

Chapter 16, Diving into the Future, looks at how Kubernetes and MicroK8s are uniquely positioned for accelerating IoT and edge deployments, and also the key trends that are shaping our new future.

Frequently Asked Questions About MicroK8s

To get the most out of this book

A basic understanding of container-based application design and development, virtual machines, networking, databases, and programming will be helpful to get the most out of this book. The following are the prerequisites for building a MicroK8s Kubernetes cluster:

Software/hardware covered in the book:

A microSD card (4 GB minimum, with 8 GB recommended)
A computer with a microSD card drive
A Raspberry Pi 2, 3, or 4 (3 or more)
A micro-USB power cable (USB-C for the Pi 4)
A Wi-Fi network or an Ethernet cable with an internet connection
(Optional) A monitor with an HDMI interface
(Optional) An HDMI cable for the Pi 2 and 3 and a micro-HDMI cable for the Pi 4
(Optional) A USB keyboard
An SSH client such as PuTTY
A hypervisor such as Oracle VM VirtualBox 6.1 to create virtual machines

Operating system requirements:

Windows or Linux to run Ubuntu virtual machines

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/IoT-Edge-Computing-with-MicroK8s. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/HprZX.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “To check a list of available and installed add-ons, use the status command.”

A block of code is set as follows:

apiVersion: v1
kind: Service
metadata:
  name: metallb-load-balancer
spec:
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Any command-line input or output is written as follows:

kubectl apply -f loadbalancer.yaml

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Navigate to Monitoring under Namespaces on the Kubernetes dashboard, and then click Services.”

Tips or Important Notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read IoT Edge Computing with MicroK8s, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Part 1: Foundations of Kubernetes and MicroK8s

In this part, you will be introduced to MicroK8s and its ecosystem. You will also learn how to install a MicroK8s Kubernetes cluster and get it up and running.

This part of the book comprises the following chapters:

Chapter 1, Getting Started with Kubernetes
Chapter 2, Introducing MicroK8s

1

Getting Started with Kubernetes

Kubernetes is an open source container orchestration engine that automates how container applications are deployed, scaled, and managed. Since it was first released 7 years ago, it has made great strides in a short period. It has previously had to compete with and outperform container orchestration engines such as Cloud Foundry Diego, CoreOS's Fleet, Docker Swarm, Kontena, HashiCorp's Nomad, Apache Mesos, Rancher's Cattle, Amazon ECS, and more. Kubernetes is now operating in an entirely different landscape. This means that developers only need to master one container orchestration engine to be employable for 90% of container-related jobs.

The Kubernetes container orchestration framework is a ready-for-production open source platform built on Google's 15+ years of experience running production workloads, as well as community-contributed best-of-breed principles and concepts. Kubernetes divides an application's containers into logical units for easier administration and discovery. Containers (cgroups) have been around since early 2007, when they were first included in the mainline Linux kernel. A container's small size and portability allow a host to run an exponentially higher number of containers than VMs, lowering infrastructure costs and allowing more programs to be deployed faster. However, until Docker (2013) came along, containers didn't generate significant interest due to usability concerns.

Docker is different from standard virtualization; it is based on operating-system-level virtualization. Containers, unlike hypervisor virtualization, which uses an intermediation layer (hypervisor) to run virtual machines on physical hardware, run in user space on top of the kernel of an operating system. As a result, they're incredibly light and fast. This can be seen in the following diagram:

Figure 1.1 – Virtual machines versus containers

The Kubernetes container orchestration framework automates much of the operational effort that's necessary to run containerized workloads and services. This covers provisioning, deployment, scaling (up and down), networking, load balancing, and other tasks that software teams must perform to manage a container's life cycle. Some of the key benefits that Kubernetes brings to developers are as follows:

Declarative Application Topology: This describes how each service should be implemented, as well as their reliance on other services and resource requirements. Because we have all of this data in an executable format, we can test the application's deployment parts early on in development and treat it like programmable application infrastructure:

Figure 1.2 – Declarative application topology

Declarative Service Deployments: The update and rollback process for a set of containers is encapsulated, making it a repeatable and automated procedure.
Dynamically Placed Applications: This allows applications to be deployed in a predictable sequence on the cluster, based on application requirements, resources available, and governing policies.
Flexible Scheduler: There is a lot of flexibility in terms of defining conditions for assigning pods to a specific worker node, or a set of worker nodes, that meets those conditions (see the sketch after this list).
Application Resilience: Containers and management platforms help applications be more robust in a variety of ways, as follows:
  Resource consumption policies such as CPU and memory quotas
  Handling failures using a circuit breaker, timeout, retry, and so on
  Failover and service discovery
  Autoscaling and self-healing
Self-Service Environments: These allow teams and individuals to create secluded environments for CI/CD, experimentation, and testing purposes from the cluster in real time.
Service Discovery, Load Balancing, and Circuit Breaker: Without the use of application agents, services can discover and consume other services. There's more to this than what is listed here.
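To make the resource-policy and scheduling points concrete, the following is a minimal sketch of a pod specification that declares CPU and memory requests and limits and asks to be placed only on nodes carrying a particular label. The pod name, image, node label, and values here are illustrative placeholders, not examples taken from this book:

apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod
spec:
  nodeSelector:
    disktype: ssd            # only schedule on nodes labeled disktype=ssd
  containers:
  - name: app
    image: nginx:latest      # placeholder image
    resources:
      requests:
        cpu: 250m            # minimum CPU reserved for the container
        memory: 128Mi
      limits:
        cpu: 500m            # hard ceiling enforced at runtime
        memory: 256Mi

The scheduler uses the requests and the node label when picking a node, while the limits implement the resource consumption policies mentioned above.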

In this chapter, we're going to cover the following main topics:

The evolution of containersKubernetes overview – understanding Kubernetes componentsUnderstanding podsUnderstanding deploymentsUnderstanding StatefulSets and DaemonSetsUnderstanding jobs and CronJobsUnderstanding services

The evolution of containers

Container technology is a means of packaging an application so that it may run with separated dependencies, and its compartmentalization of a computer system has radically transformed software development today. In this section, we'll look at some of the key aspects, including where this technology originated and the background behind the container technology:

Figure 1.3 – A brief history of container technology

Early containers (chroot systems with Unix version 7), developed in the 1970s, offered an isolated environment in which services and applications could operate without interfering with other processes, thereby creating a sandbox for testing programs, services, and other processes. The original concept was to separate the workload of the container from that of production systems, allowing developers to test their apps and procedures on production hardware without disrupting other services. Containers have improved their abilities to isolate users, data, networking, and more throughout time.

With the release of FreeBSD jails in the 2000s, container technology finally gained traction. "Jails" are computer partitions, and a single system can host several jails/partitions at once. A similar jail architecture was brought to Linux in 2001 with Linux VServer, which included resource partitioning and was later linked to the Linux kernel with OpenVZ in 2005. Jails were merged with boundary separation to become Solaris Containers in 2004.

Container technology advanced substantially after the introduction of control groups in 2006. Control groups, or cgroups, were created to track and isolate resource utilization, such as CPU and memory. They were quickly adopted and improved upon in Linux Containers (LXC) in 2008, which was the most complete and stable version of any container technology at the time, since it functioned without changes having to be made to the Linux kernel. Many new technologies have sprung up because of LXC's reliability and stability, the first of which was Warden in 2011 and, more importantly, Docker in 2013.

Containers have gained a lot of usage since 2013 due to a slew of Linux distributions releasing new deployment and management tools. Containers running on Linux systems have been transformed into virtualization solutions at the operating system level, aiming to provide several isolated Linux environments on a single Linux host. Linux containers don't need their own guest operating systems; instead, they share the kernel of the host operating system. Containers spin up significantly faster than virtual machines since they don't require a specialized operating system.

Containers can employ Linux kernel technologies such as namespaces, AppArmor, SELinux profiles, chroot, and cgroups to create an isolated operational environment, while Linux security modules offer an extra degree of protection, ensuring that containers can't access the host machine or kernel. Containerization on Linux provided even more versatility by allowing containers based on various Linux distributions to run on a single host operating system, as long as both were running on the same CPU architecture.

Linux containers provided us with a way to build container images based on a variety of Linux distributions, as well as an API for managing the containers' lifespan. Linux distributions also included client tools for dealing with the API, as well as snapshot features and support for moving container instances from one container host to another.

However, while containers running on a Linux platform broadened their applicability, they still faced several fundamental hurdles, including unified management, real portability, compatibility, and scaling control.

The emergence of Apache Mesos, Google Borg, and Facebook Tupperware, all of which provided varying degrees of container orchestration and cluster management capabilities, marked a significant advancement in the use of containers on Linux platforms. These platforms allowed hundreds of containers to be created instantly, and also provided support for automated failover and other mission-critical features that are required for container management at scale. However, it wasn't until Docker, a variation of containers, that the container revolution began in earnest.

Because of Docker's popularity, several management platforms have emerged, including Marathon, Kubernetes, Docker Swarm, and, more broadly, the DC/OS environment that Mesosphere built on top of Mesos to manage not only containers but also a wide range of legacy applications and data services written in, for example, Java. Even though each platform has its unique approach to orchestration and administration, they all share one goal: to make containers more mainstream in the workplace.

The momentum of container technology accelerated in 2017 as Kubernetes, a highly effective container orchestration solution, took hold. Kubernetes became the industry norm after being adopted by the CNCF and receiving backing from Docker, and using a combination of Kubernetes and other container tools became the industry standard.

With the release of cgroups v2 (Linux version 4.5), several new features have been added, including rootless containers, enhanced management, and, most crucially, the simplicity of cgroup controllers.

Container usage has exploded in the last few years (https://juju.is/cloud-native-kubernetes-usage-report-2021), both in emerging "cloud-native" apps and in situations where IT organizations wish to "containerize" an existing legacy program to make it easier to lift and shift onto the cloud. Containers have now become the de facto standard for application delivery as acceptance of cloud-native development approaches matures.

We'll dive more into Kubernetes components in the next section.

Kubernetes overview – understanding Kubernetes components

In this section, we'll go through the various components of the Kubernetes system, as well as their abstractions.

The following diagram depicts the various components that are required for a fully functional Kubernetes cluster:

Figure 1.4 – A Kubernetes system and its abstractions

Let's describe the components of a Kubernetes cluster:

Nodes, which are worker machines that run containerized work units, make up a Kubernetes cluster. Every cluster has at least one worker node.
There is an API layer (the Kubernetes API) that can communicate with Kubernetes clusters, which may be accessed via a command-line interface called kubectl.

There are two types of resources in a Kubernetes cluster (as shown in the preceding diagram):

The control plane, which controls and manages the cluster
The nodes, which are the worker machines that run applications

All the operations in your cluster are coordinated by the control plane, including application scheduling, maintaining the intended state of applications, scaling applications, and deploying new updates.

A cluster's nodes might be virtual machines (VMs) or physical computers that serve as worker machines. A kubelet is a node-managing agent that connects each of the nodes to the Kubernetes control plane. Container management tools, such as Docker, should be present on the node as well.

The control plane executes a command to start the application containers whenever an application needs to be started on Kubernetes. Containers are scheduled to run on the cluster's nodes by the control plane.

The nodes connect to the control plane using the Kubernetes API that the control plane provides. The Kubernetes API allows end users to interface directly with the cluster. The master components offer the cluster's control plane capabilities.

API Server, Controller-Manager, and Scheduler are the three processes that make up the Kubernetes control plane. The Kubernetes API is exposed through the API Server. It is the Kubernetes control plane's frontend. Controller-Manager is in charge of the cluster's controllers, which are responsible for handling everyday activities. The Scheduler keeps an eye out for new pods that don't have a node assigned to them and assigns them one. Each worker node in the cluster is responsible for the following processes:

Kubelet: This handles all the communication with the Kubernetes control plane.
kube-proxy: This handles all the networking proxy services on each node.
The container runtime, such as Docker.

Control plane components are in charge of making global cluster decisions (such as application scheduling), as well as monitoring and responding to cluster events. For clusters, there is a web-based Kubernetes dashboard. This allows users to administer and debug cluster-based applications, as well as the cluster itself. Kubernetes clusters may run on a wide range of platforms, including your laptop, cloud-hosted virtual machines, and bare-metal servers.

MicroK8s is a simple, streamlined Kubernetes implementation that builds a Kubernetes cluster on your local workstation and deploys all the Kubernetes services on a tiny cluster that only includes one node. MicroK8s is compatible with Linux, macOS, Raspberry Pi, and Windows and can be used to experiment with local Kubernetes setups or for edge production use cases. Start, stop, status, and delete are all basic bootstrapping procedures that are provided by the MicroK8s CLI for working with your cluster. We'll learn how to install MicroK8s, check the status of the installation, monitor and control the Kubernetes cluster, and deploy sample applications and add-ons in the next chapter.
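As a quick preview of those bootstrapping commands (installation is covered step by step in Chapter 2), a typical first session on an Ubuntu machine with snap available might look like the following sketch:

sudo snap install microk8s --classic   # install the MicroK8s snap
microk8s status --wait-ready           # block until the cluster reports ready
microk8s kubectl get nodes             # query the single-node cluster
microk8s stop                          # stop all Kubernetes services
microk8s start                         # bring the cluster back up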

Other objects that indicate the state of the system exist in addition to the components listed in Figure 1.4. The following are some of the most fundamental Kubernetes objects:

Pods
Deployments
StatefulSets and DaemonSets
Jobs and CronJobs
Services

In the Kubernetes system, Kubernetes objects are persistent entities. These entities are used by Kubernetes to represent the state of your cluster. Once an object has been created, the Kubernetes system will work continually to ensure that the object exists. By building an object, you're simply telling the Kubernetes framework how your cluster's workloads should look; this is your cluster's ideal state. You must use the Kubernetes API to interact with Kubernetes objects, whether you want to create, update, or delete them. The CLI handles all Kubernetes API queries when you use the kubectl command-line interface, for example. You can also directly access the Kubernetes API in your apps by using any of the client libraries. The following diagram illustrates the various Kubernetes objects:

Figure 1.5 – Overview of Kubernetes objects

Kubernetes provides the preceding set of objects (such as pods, services, and controllers) to satisfy our application's requirements and drive its architecture. The guiding design principles and design patterns we employ to build any new services are determined by these new primitives and platform abilities. A deployment object, for example, is a Kubernetes object that can represent an application running on your cluster. When you build the deployment, you can indicate that three replicas of the application should be running in the deployment specification. The Kubernetes system parses the deployment specification and deploys three instances of your desired application, altering its status as needed. If any of those instances fail for whatever reason, the Kubernetes framework responds to the discrepancy between the specification and the status by correcting it – in this case, by establishing a new instance.

Understanding how Kubernetes works is essential, but understanding how to communicate with Kubernetes is just as important. We'll go over some of the ways to interact with a Kubernetes cluster in the next section.

Interacting with a Kubernetes cluster

In this section, we'll look at different ways to interface with a Kubernetes cluster.

Kubernetes Dashboard is a user interface that can be accessed via the web. It can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and control the cluster's resources. This dashboard can be used for a variety of purposes, including the following:

All the nodes and persistent storage volumes are listed in the Admin overview, along with aggregated metrics for each node.
The Workloads view displays a list of all running applications by namespace, as well as current pod memory utilization and the number of pods in a deployment that are currently ready.
The Discover view displays a list of services that have been made public and have enabled cluster discovery.
You can drill down through logs from containers that belong to a single pod using the Logs view.
For each clustered application and all the Kubernetes resources running in the cluster, the Storage view identifies any persistent volume claims.

Figure 1.6 – Kubernetes Dashboard

With the help of the Kubernetes command-line tool, kubectl, you can perform commands against Kubernetes clusters. kubectl is a command-line tool for deploying applications, inspecting and managing cluster resources, and viewing logs. kubectl can be installed on a variety of Linux, macOS, and Windows platforms.

The basic syntax for kubectl looks as follows:

kubectl [command] [type] [name] [flags]

Let's look at command, type, name, and flags in more detail:

command: This defines the action you want to perform on one or more resources, such as create, get, delete, and describe.
type: This defines the type of resource, such as pods and jobs.
name: This defines the name of the resource. Names are case-sensitive. If the name is omitted, details for all the resources are displayed; for example, kubectl get pods.
flags: This defines optional flags.
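Putting these four pieces together, a few representative invocations (the resource names reuse examples that appear later in this chapter) look like this:

kubectl get pods                                     # command=get, type=pods, no name: list all pods
kubectl get pod pod1 -o wide                         # add a name and a flag for extra detail
kubectl describe deployment nginx-sample-deployment # command=describe, type=deployment, name given
kubectl delete pod pod1 --now                        # delete a single pod with a short grace period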

We'll take a closer look at each of these Kubernetes objects in the upcoming sections.

Understanding pods

Pods are the minimal deployable computing units that can be built and managed in Kubernetes. They are made up of one or more containers that share storage and network resources, as well as running instructions. Pods have the following components:

An exclusive IP address that enables them to converse with one another
Persistent storage volumes, based on the application's needs
Configuration information that determines how a container should run

The following diagram shows the various components of a pod:

Figure 1.7 – The components of a pod

Workload resources known as controllers create pods and oversee the rollout, replication, and health of pods in the cluster.

The most common types of controllers are as follows:

Jobs for batch-type jobs that are short-lived and will run a task to completion
Deployments for applications that are stateless and persistent, such as web servers
StatefulSets for applications that are both stateful and persistent, such as databases

These controllers build pods using configuration information from a pod template, and they guarantee that the operating pods meet the deployment specification provided in the pod template by creating replicas in the number of instances specified in the deployment.

As we mentioned previously, the kubectl command-line interface includes various commands that allow users to build pods, deploy them, check on the status of operating pods, and delete pods that are no longer needed.

The following are the most commonly used kubectl commands concerning pods:

The create command creates the pod:

kubectl create -f FILENAME.

For example, the kubectl create -f ./mypod.yaml command will create a new pod from the mypod YAML file.
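The contents of mypod.yaml are not shown in this chapter; a minimal manifest for it might look like the following sketch, where the pod name, label, nginx image, and port are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  containers:
  - name: mypod
    image: nginx:latest    # placeholder container image
    ports:
    - containerPort: 80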

The get pod/pods command will display information about one or more resources. Information can be filtered using the respective label selectors:

kubectl get pod pod1

The delete command deletes the pod:

kubectl delete -f FILENAME.

For example, the kubectl delete -f ./mypod.yaml command will delete the mypod pod from the cluster.

With that, we've learned that a pod is the smallest unit of a Kubernetes application and is made up of one or more Linux containers. In the next section, we will look at deployments.

Understanding deployments

A deployment allows you to make declarative changes to pods and ReplicaSets. You can provide a desired state for the deployment, and the deployment controller will incrementally change the actual state to the desired state.

Deployments can be used to create new ReplicaSets or to replace existing deployments with new deployments. When a new version is ready to go live in production, the deployment can easily handle the upgrade with no downtime by using predefined rules. The following diagram shows an example of a deployment:

Figure 1.8 – A deployment

The following is an example of a deployment. It creates a ReplicaSet to bring up three nginx pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sample-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

In the preceding example, the following occurred:

A deployment called nginx-sample-deployment is created, as indicated by the metadata.name field.
The image for this deployment is set by the spec.containers.image field (nginx:1.21).
The deployment creates three replicated pods, as indicated by the replicas field.
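The no-downtime upgrades mentioned earlier are governed by the deployment's update strategy. As a sketch (these field values are illustrative and not part of the book's example), a RollingUpdate strategy could be added to the same spec as follows:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count during an update
      maxUnavailable: 0    # never drop below three ready pods while rolling out

With settings like these, Kubernetes replaces pods one at a time and only terminates an old pod once its replacement is ready.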

The most commonly used kubectl commands concerning deployment are as follows:

The apply command creates the deployment:

kubectl apply -f FILENAME.

For example, the kubectl apply -f ./nginx-deployment.yaml command will create a new deployment from the nginx-deployment.yaml YAML file.

The get deployments command checks the status of the deployment:

kubectl get deployments

This will produce the following output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE

nginx-sample-deployment   3/3     0            0           1s

The following fields are displayed:

NAME indicates the names of the deployments in the namespace.
READY shows how many replicas of the application are available.
UP-TO-DATE shows the number of replicas that have been updated to achieve the desired state.
AVAILABLE shows the number of available replicas.
AGE indicates the length of time the application has been running.

The describe deployments command indicates the details of the deployment:

kubectl describe deployments

The delete command removes the deployment that was made by the apply command:

kubectl delete -f FILENAME.

With that, we have learned that deployments are used to define the life cycle of an application, including which container images to use, how many pods you should have, and how they should be updated. In the next section, we will look at StatefulSets and DaemonSets.

Understanding StatefulSets and DaemonSets

In this section, we'll go over two distinct approaches to deploying our application on Kubernetes: using StatefulSets and DaemonSets.

StatefulSets

The StatefulSet API object is used to handle stateful applications. A StatefulSet, like a deployment, handles pods that are based on the same container specification. Unlike a deployment, however, a StatefulSet maintains a persistent identity for each of its pods. These pods are created from an identical specification, but they are not interchangeable: each has a unique identity that it keeps across any rescheduling.

The following example demonstrates the components of a StatefulSet:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 10Gi

In the preceding example, we have the following:

nginx is the headless service that is used to control the network domain.
web is the StatefulSet, whose specification indicates that three replicas of the nginx container will be launched in unique pods.
volumeClaimTemplates will use PersistentVolumes provisioned by a PersistentVolume provisioner to offer stable storage.
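Assuming the preceding StatefulSet has been applied, its pods carry stable ordinal identities (web-0, web-1, and web-2) rather than random suffixes, which you can confirm with a quick query; the output shown here is only indicative:

kubectl get pods -l app=nginx

NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          2m
web-1   1/1     Running   0          90s
web-2   1/1     Running   0          60s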

Now, let's move on to DaemonSets.

DaemonSets

A DaemonSet guarantees that all (or some) nodes have a copy of a pod running. As nodes are added to the cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage collected. When you delete a DaemonSet, the pods it produced are also deleted.

The following are some example use cases regarding DaemonSets:

Run a daemon for cluster storage on each node, such as glusterd or ceph.
Run a daemon for log collection on each node, such as Fluentd, Fluent Bit, or Logstash.
Run a daemon for monitoring on every node, such as Prometheus Node Exporter, collectd, or the Datadog agent.

The following code shows a DaemonSet that's running the fluent-bit Docker image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-system
  labels:
    k8s-app: fluent-bit
spec:
  selector:
    matchLabels:
      name: fluent-bit
  template:
    metadata:
      labels:
        name: fluent-bit
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:latest
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi

In the preceding example, the fluent-bit DaemonSet has a specification that tells fluent-bit to run on all the nodes.

The most commonly used kubectl commands concerning DaemonSets are as follows:

The create or apply command creates the DaemonSet:

kubectl apply -f FILENAME.

For example, the kubectl apply -f ./daemonset-deployment.yaml command will create a new DaemonSet from the daemonset-deployment.yaml YAML file.

The get daemonset command is used to monitor the status of the DaemonSet:

kubectl get daemonset

This will produce the following output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE

daemonset-deployment   3/3     0            0           1s

The following fields are displayed:

NAME indicates the names of the DaemonSets in the namespace.
READY shows how many replicas of the application are available.
UP-TO-DATE shows the number of replicas that have been updated to achieve the desired state.
AVAILABLE shows how many replicas of the application are available.
AGE indicates the length of time the application has been running.

The describe daemonset command indicates the details of the DaemonSet:

kubectl describe daemonset

The delete command removes the DaemonSet that was made by the apply command:

kubectl delete <<daemonset>>

With that, we've learned that a DaemonSet ensures that all or a set of nodes run a copy of a pod, while a StatefulSet is used to manage stateful applications. In the next section, we will look at jobs and CronJobs.

Understanding jobs and CronJobs

In this section, we will learn how to use Kubernetes jobs to build temporary pods that do certain tasks. CronJobs are similar to jobs, but they run tasks according to a set schedule.

Jobs

A job launches one or more pods and continues to try executing them until a specific number of them succeed. The job keeps track of how many pods have been completed successfully. The task (that is, the job) is completed when a certain number of successful completions is met.
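The number of successful completions a job waits for, and how many pods it may run at once, are controlled by the completions and parallelism fields. The following sketch uses illustrative values and a hypothetical worker name, not an example from this book:

apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-example-job
spec:
  completions: 5      # the job is complete once 5 pods have finished successfully
  parallelism: 2      # run at most 2 pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ['echo', 'processing one work item']
      restartPolicy: OnFailure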

When you delete a job, it also deletes all the pods it created. Suspending a job causes all its active pods to be deleted until the job is resumed. The following code shows a job config that prints example Job Pod is Running as its output:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: example-job
        image: busybox
        command: ['echo', 'example Job Pod is Running']
      restartPolicy: OnFailure

The most commonly used kubectl commands concerning jobs are as follows:

The create or apply command creates the job:

kubectl apply -f FILENAME.

For example, the kubectl apply -f ./jobs-deployment.yaml command will create new jobs from the jobs-deployment.yaml YAML file.

The describe jobs command indicates the details of the jobs:

kubectl describe jobs <<job name>>

CronJob

A CronJob is a job that is created on a regular schedule. It is equivalent to a single line in a crontab (cron table) file: it runs a job periodically according to a schedule written in Cron format.

CronJobs are used to automate common processes such as backups and report generation. You can set each of those jobs to repeat indefinitely (for example, once a day, week, or month) and decide when the work should begin within that period.
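The once-a-day, once-a-week, and once-a-month cadences mentioned above map directly to standard cron expressions; the following schedule values are illustrative:

schedule: "0 0 * * *"    # every day at midnight
schedule: "0 0 * * 0"    # every Sunday at midnight
schedule: "0 0 1 * *"    # on the first day of every month at midnight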

The following is an example of a CronJob that prints the example-cronjob Pod is Running output every minute:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: example-cronjob
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo example-cronjob Pod is Running ; sleep 5
          restartPolicy: OnFailure

Here, schedule: "*/1 * * * *" uses the crontab syntax that is familiar from Linux systems and indicates that the job runs every minute.

Jobs and CronJobs are critical components of Kubernetes, particularly for performing batch processes and other critical ad hoc tasks. We'll examine service abstraction in the next section.

Understanding services

In Kubernetes, a service is an abstraction that defines a logical set of pods, as well as a policy for accessing them. An example service definition is shown in the following code block, which includes a collection of pods that each listen on TCP port 9876 with the app=exampleApp label:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: exampleApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9876

In the preceding example, a new Service object named example-service was created that routes TCP port 9876 to any pod with the app=exampleApp label. This service is given an IP address by Kubernetes, which is utilized by the service proxies. A Kubernetes service, in simple terms, connects a group of pods to an abstracted service name and IP address. Discovery and routing between pods are provided by services. Services, for example, connect an application's frontend to its backend, which are both deployed in different cluster deployments. Labels and selectors are used by services to match pods with other applications.

The core attributes of a Kubernetes service are as follows:

A label selector that locates pods
The cluster IP address and the assigned port number
Port definitions
(Optional) Mapping for incoming ports to a targetPort
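A service can also be generated from an existing deployment with kubectl expose rather than a YAML manifest; in the sketch below, the deployment name reuses the earlier nginx example and the service name is only illustrative:

kubectl expose deployment nginx-sample-deployment --name=nginx-service --port=80 --target-port=80

This creates a Service object whose selector matches the deployment's pod labels, which is equivalent to writing the manifest by hand.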

Kubernetes