
End-to-End Automation with Kubernetes and Crossplane

Develop a control plane-based platform for unified infrastructure, services, and application automation

Arun Ramakani

BIRMINGHAM—MUMBAI

End-to-End Automation with Kubernetes and Crossplane

Copyright © 2022 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Rahul Nair

Publishing Product Manager: Niranjan Naikwadi

Senior Editor: Athikho Sapuni Rishana

Technical Editor: Rajat Sharma

Copy Editor: Safis Editing

Project Coordinator: Ajesh Devavaram

Proofreader: Safis Editing

Indexer: Rekha Nair

Production Designer: Shyam Sundar Korumilli

Marketing Coordinator: Nimisha Dua

Senior Marketing Coordinator: Sanjana Gupta

First published: July 2022

Production reference: 1060722

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80181-154-5

www.packt.com

To my parents, for their commitment to guiding me on the right career path at every little step. To my wife and son, for being very supportive and sacrificing their needs to assist me on this book journey. Finally, to my close friend and buddy, Prabu Soundarrajan, who helped shape my technical perception for this book and beyond.

– Arun Ramakani

Contributors

About the author

Arun Ramakani is passionate about distributed platform development and technology blogging. He lives in Dubai, a dynamic city where many cultures meet. He is currently working as a technology architect at PwC, specializing in evolutionary architecture practices, Kubernetes DevOps, cloud-native apps, and microservices. He has over a decade of experience working with a variety of technologies, domains, and teams, and has been part of many digital transformation journeys in the last few years. This book is inspired by one of his recent works. He is enthusiastic about learning in public and committed to helping individuals in their cloud-native learning journeys.

I want to thank the people who have been close to me and supported me, especially my wife, son, and parents. I would like to thank Kelsey Hightower, who inspired me to do some significant work around the Kubernetes ecosystem and mentored me during his office hours. Finally, I want to thank everyone in the Crossplane community for providing me with inspiration for lots of the ideas covered in the book.

About the reviewer

Werner Dijkerman is a freelance cloud, (certified) Kubernetes, and DevOps engineer, currently working on cloud-native solutions and tools such as AWS, Ansible, Kubernetes, and Terraform. He focuses on infrastructure as code and on monitoring the correct “thing” with tools such as Zabbix, Prometheus, and the ELK stack. Big thanks, hugs, and shoutouts to Anca Borodi, Theo Punter, and the rest of the team at COERA!

Table of Contents

Preface

Part 1: The Kubernetes Disruption

Chapter 1: Introducing the New Operating Model

The Kubernetes journey

Characteristics of the new operating model

Team collaboration and workflows

Control theory

Interoperability

Extensibility

Architecture focus

Open source, community, and governance

The next Kubernetes use case

Summary

Chapter 2: Examining the State of Infrastructure Automation

The history of infrastructure automation

The need for the next evolution

The limitations of IaC

A Kubernetes operating model for automation

Multi-cloud automation requirements

Crossplane as a cloud control plane

A universal control plane

Open standards for infrastructure vendors

Wider participation

The cloud provider partnerships

Other similar projects

Summary

Part 2: Building a Modern Infrastructure Platform

Chapter 3: Automating Infrastructure with Crossplane

Understanding Custom Resource Definitions and custom controllers

Adding a new CRD

Working with the CRD

Understanding the Crossplane architecture

Managed resources

Providers

Composite resources

Crossplane core

Installing Crossplane

Installing and configuring providers

Setting up a cloud account

Installing a provider

Configuring the provider

Multiple provider configuration

An example of POSTGRES provisioning

Summary

Chapter 4: Composing Infrastructure with Crossplane

Feeling like an API developer

How do XRs work?

XRD

Composition

Claim

Postprovisioning of an XR

Readiness check

Patch status

Propagating credentials back

Preprovisioned resources

Building an XR

The infrastructure API requirement

Creating the XRD

Providing implementation

Provisioning the resources with a claim

Troubleshooting

Summary

Chapter 5: Exploring Infrastructure Platform Patterns

Evolving the APIs

API implementation change

Hands-on journey with composition revision

API contract changes

Non-breaking changes

Version upgrade

Version upgrade with breaking changes

Nested and multi-resource XRs

PatchSets

XRD detailed

Naming the versions

The openAPIV3Schema structure

The additional parameter of an attribute

Printer columns

Managing external software resources

Unifying the automation

Summary

Chapter 6: More Crossplane Patterns

AWS provider setup

Creating an AWS account and IAM user

Creating the Kubernetes secret

AWS provider and ProviderConfig setup

Managing dependencies

Resource reference within and nested XR

Referring to an outside resource

Secret propagation hands-on

Helm provider hands-on

Defining API boundaries

Alerts and monitoring

Enabling Prometheus to scrape metrics

Setting up monitoring alerts

Enabling the Grafana dashboard

More troubleshooting patterns

Summary

Chapter 7: Extending and Scaling Crossplane

Building a new provider

XRM detailed

Configuration fidelity

Spec and status configuration

Naming the custom and external resource

Configuration ownership

Sensitive input and output fields

Framework to build a provider

Packaging and distribution of XR/Claim

Packaging and distribution

Installing and using the configuration

Testing the configurations

Installing KUTTL

KUTTL test setup

TDD

Multi-tenant control plane patterns

Multi-tenancy with a single cluster

Multi-tenancy with multiple clusters

Summary

Part 3: Configuration Management Tools and Recipes

Chapter 8: Knowing the Trade-offs

Unified automation scope

Complexity clock, requirements, and patterns

The configuration complexity clock

Configuration management requirements

Patterns and trade-off

Open Application Model

KubeVela, the OAM implementation

Specialized and extendable abstraction

Specialized abstraction

Extendable abstraction

Impact of change frequency

XRM change frequency

Summary

Chapter 9: Using Helm, Kustomize, and KubeVela

Application configuration management capabilities

Using Helm for application deployment

Working with an existing chart

Hands-on chart development

Chart generation

Customizing configurations with Kustomize

Deploying application workloads with KubeVela

Anatomy of a KubeVela application definition

Summary

Chapter 10: Onboarding Applications with Crossplane

The automation requirements

The solution

Preparing the control plane

The GCP provider

The GitLab provider

Helm and Kubernetes provider setup

Automating the application deployment environment

The repository and CI setup

GitLab configuration

The onboarding XR/claim API

The deployment dependencies

API boundary analysis

Summary

Chapter 11: Driving the Platform Adoption

Why we need an infrastructure platform as a product

Understanding customers’ needs

Product development practices

Self-service

Collaborative backlog management

The platform product life cycle and team interaction

The OAM personas

Summary

Why subscribe?

Other Books You May Enjoy

Preface

In the last few years, countless organizations have taken advantage of the disruptive application deployment operating model provided by Kubernetes. With the launch of Crossplane, the same benefits are coming to the world of infrastructure provisioning and management. The limitations of infrastructure as code, with respect to drift management, role-based access control, team collaboration, and weak contracts, have made people move toward control plane-based infrastructure automation, but setting it up requires a lot of know-how and effort.

This book will cover a detailed journey to building a control plane-based infrastructure automation platform with Kubernetes and Crossplane. The cloud-native landscape has an overwhelming list of configuration management tools, which can make it difficult to analyze and choose the right one. This book will guide cloud-native practitioners to select the right tools for Kubernetes configuration management that best suit the use case. You’ll learn about configuration management with hands-on modules built on popular configuration management tools, such as Helm, Kustomize, Argo, and KubeVela. The hands-on examples will be guides that you can directly use in your day-to-day work.

By the end of this DevOps book, you'll be well versed in building a modern infrastructure automation platform to unify application and infrastructure automation.

Who this book is for

This book is for cloud architects, platform engineers, infrastructure or application operators, and Kubernetes enthusiasts interested in simplifying infrastructure and application automation. A basic understanding of Kubernetes and its building blocks, such as Pod, Deployment, Service, and namespace, is needed before you can get started with this book.

What this book covers

Chapter 1, Introducing the New Operating Model, discusses how, for many people, Kubernetes is all about container orchestration, but Kubernetes is much more than that. Understanding the deciding factors behind why Kubernetes disrupted day 1 and day 2 IT operations is key to successful adoption and optimum usage.

Chapter 2, Examining the State of Infrastructure Automation, exposes the limitations of infrastructure as code and proposes control plane-based infrastructure automation as the new-age automation concept using Crossplane and Kubernetes.

Chapter 3, Automating Infrastructure with Crossplane, helps us to understand how to set up a Crossplane cluster, discusses its architecture, and explains how to use it as a vanilla flavor for infrastructure automation.

Chapter 4, Composing Infrastructure with Crossplane, helps us to understand composing, a powerful construct of Crossplane that can help us to create new infrastructure abstractions. These abstractions can be our custom Kubernetes-based cloud APIs with the organization policies, compliance requirements, and recipes baked into them.

Chapter 5, Exploring Infrastructure Platform Patterns, looks at how the success of running an infrastructure platform product within an organization requires a few key patterns that we can use with Crossplane. This chapter will explore these patterns in detail.

Chapter 6, More Crossplane Patterns, explores more Crossplane patterns that are useful for our day-to-day work. We will learn about most of these patterns with a hands-on journey.

Chapter 7, Extending and Scaling Crossplane, covers two unique aspects that make Crossplane extendable and scalable. The first part will deep dive into the Crossplane providers, and the second part will cover how Crossplane can work in a multi-tenant ecosystem.

Chapter 8, Knowing the Trade-Offs, discusses how managing configuration has many nuances to it. Understanding the configuration clock will help us to categorize tools and understand the trade-offs applicable for each category.

Chapter 9, Using Helm, Kustomize, and KubeVela, concentrates on explaining how to use different configuration management tools that are popular today, such as Helm, Kustomize, and KubeVela.

Chapter 10, Onboarding Applications with Crossplane, looks at how infrastructure provisioning and application onboarding involve a few cross-cutting concerns, such as setting up the source code repositories, the continuous integration workflow, and continuous deployment. This chapter will look at ways to approach application, services, and infrastructure automation with Crossplane in a unified way.

Chapter 11, Driving the Platform Adoption, explains that many organizations fail with their technology platform projects because they don’t apply the needed product development practices and team topology. This chapter aims to help understand the aspects required to build and adopt a successful infrastructure platform.

To get the most out of this book

Please go through the documentation at https://kubernetes.io/docs/concepts/overview/ to understand the basic concepts. All code examples are tested using the Kind Kubernetes cluster (https://kind.sigs.k8s.io/ - v1.21.1) and Crossplane version 1.5.0 as the control plane. However, they should work with future version releases too.

Note that for Crossplane installation, you should have a minimum Kubernetes version of v1.16.0.
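If you want to set up a matching local environment, the following is a minimal sketch, assuming Docker and Helm are already installed; the cluster name and node image tag are illustrative choices, not mandated by the book:

# Create a local Kind cluster (the name and node image are illustrative)
kind create cluster --name crossplane-book --image kindest/node:v1.21.1

# Install Crossplane 1.5.0 into its own namespace using Helm
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane crossplane-stable/crossplane \
  --namespace crossplane-system --create-namespace --version 1.5.0

# Verify that the Crossplane pods are running
kubectl get pods -n crossplane-system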

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/End-to-End-Automation-with-Kubernetes-and-Crossplane. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/1j9JK.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Resources such as Pods, Deployments, Jobs, and StatefulSets belong to the workload category.”

A block of code is set as follows:

# List all resources
kubectl api-resources

# List resources in the "apps" API group
kubectl api-resources --api-group=apps

# List resources in the "networking.k8s.io" API group
kubectl api-resources --api-group=networking.k8s.io

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

apiVersion: "book.imarunrk.com/v1"kind: "CloudDB"metadata: name: "aws_RDS"spec: type: "sql" cloud : "aws"

Any command-line input or output is written as follows:

% kubectl get all -n crossplane-system

helm delete crossplane --namespace crossplane-system

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Go to the IAM section in the AWS web console and click Add a user.”

Tips or Important Notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read End-to-End Automation with Kubernetes and Crossplane, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Part 1: The Kubernetes Disruption

This part of the book will cover the context of why Kubernetes won the war of application deployment automation and how it is evolving into a new trend in infrastructure automation.

This part comprises the following chapters:

Chapter 1, Introducing the New Operating Model

Chapter 2, Examining the State of Infrastructure Automation

Chapter 1: Introducing the New Operating Model

Many think that Kubernetes won the container orchestration war because of its outstanding ability to manage containers. But Kubernetes is much more than that. In addition to handling container orchestration at scale, Kubernetes introduced a new IT operating model. There is always a trap with anything new: we tend to use a new tool in the old way out of habit. Understanding how Kubernetes disrupted IT operations is critical to avoiding these traps and achieving successful adoption. This chapter will dive deep into the significant aspects of the new operating model.

We will cover the following topics in this chapter:

The Kubernetes journey

Characteristics of the new operating model

The next Kubernetes use case

The Kubernetes journey

The Kubernetes journey to becoming the leading container orchestration platform has seen many fascinating moments. Kubernetes began as an open source initiative by a few Google engineers, based on an internal project called Borg. From day one, Kubernetes had the advantage of heavy production usage at Google and more than a decade of active development as Borg. Soon, the project grew beyond a small set of Google engineers, attracting overwhelming community support. From 2015, the container orchestration war was a tough fight between Docker, Mesosphere DC/OS, Kubernetes, Cloud Foundry, and AWS Elastic Container Service (ECS). Slowly and steadily, Kubernetes outperformed its peers.

Initially, Docker, Mesosphere, and Cloud Foundry announced native support for Kubernetes. Finally, in 2017, AWS announced Elastic Container Service for Kubernetes (EKS). Eventually, all the cloud providers came up with a managed Kubernetes offering. The rivals had no choice but to provide native support for Kubernetes because of its efficacy and adoption. These were the winning moments for Kubernetes in the container orchestration war. Furthermore, it continued to grow to become the core of the cloud-native ecosystem, with many tools and patterns evolving around it. The following diagram illustrates the container orchestration war:

Figure 1.1 – The container orchestration war

Next, let's learn about the characteristics of the new operating model.

Characteristics of the new operating model

Understanding how Kubernetes can positively impact IT operations will provide a solid base for the efficient adoption of DevOps in application and infrastructure automation. The following are some of the significant characteristics of the Kubernetes operating model:

Team collaboration and workflows

Control theory

Interoperability

Extensibility

Architecture focus

Open source, community, and governance

Let's look at these characteristics in detail in the following sections.

Important Note

Before we dive deep, it's important to note that you are expected to have a basic understanding of Kubernetes architecture and its building block resources, such as Pods, Deployments, Services, and namespaces. If you are new to Kubernetes and looking for a guide to the basic concepts, please go through the documentation at https://kubernetes.io/docs/concepts/overview/.

Team collaboration and workflows

All Kubernetes resources, such as Pods, volumes, Services, Deployments, and Secrets, are persistent entities stored in etcd. Kubernetes has well-modeled RESTful APIs to perform CRUD operations over these resources. Create, update, and delete operations against the etcd persistence store are state change requests. The state change is realized asynchronously by the Kubernetes control plane. A couple of characteristics of these Kubernetes APIs are very useful for efficient team collaboration and workflows:

Declarative configuration management

Multi-persona collaboration

Declarative configuration management

We express our automation intent to the Kubernetes API as data points, known as the record of intent. The record does not carry any information about the steps to achieve the intention. This model enables a pure declarative configuration to automate workloads. It is easier to manage automation configuration as data points in Git than as code. Also, expressing the automation intention as data is less prone to bugs and easier to read and maintain. Provided we have a clear Git history, a simple intent expression, and release management, collaboration over the configuration is easy. The following is a simple record of intent for an NGINX Pod deployment:

apiVersion: v1
kind: Pod
metadata:
  name: proxy
spec:
  containers:
    - name: proxy-image
      image: nginx
      ports:
        - name: proxy-port
          containerPort: 80
          protocol: TCP

Even though many new-age automation tools are primarily declarative, they are weak in collaboration because they lack well-modeled RESTful APIs. The following Multi-persona collaboration section will discuss this aspect further. The combination of declarative configuration and multi-persona collaboration makes Kubernetes a unique proposition.

Multi-persona collaboration

With Kubernetes, as with other automation tools, we abstract the data center fully into a single window. Unlike other automation tools, however, Kubernetes has a separate API mapped to each infrastructure concern. Kubernetes groups these concerns under a construct called API groups, of which there are around 20. API groups break the monolithic set of infrastructure resources into smaller responsibilities, providing segregation for different personas to operate the infrastructure based on their responsibility. To simplify, we can logically divide the APIs into five sections:

Workloads are objects that help us manage and run containers in the Kubernetes cluster. Resources such as Pods, Deployments, Jobs, and StatefulSets belong to the workload category. These resources mainly come under the apps and core API groups.

Discovery and load balancers are a set of resources that help us stitch workloads to load balancers. People responsible for traffic management can have access to these sets of APIs. Resources such as Services, NetworkPolicy, and Ingress appear under this category. They fall under the core and networking.k8s.io API groups.

Config and storage resources, such as ConfigMaps, Secrets, and volumes, help us manage initialization and dependencies for our workloads. They fall under the core and storage.k8s.io API groups. The application operators can have access to these APIs.

Cluster resources help us manage the Kubernetes cluster configuration itself. Resources such as Nodes, Roles, RoleBindings, CertificateSigningRequests, ServiceAccounts, and namespaces fall under this category, and cluster operators should access these APIs. These resources come under many API groups, such as core, rbac.authorization.k8s.io, and certificates.k8s.io.

Metadata resources help specify the behavior of a workload and other resources within the cluster. A HorizontalPodAutoscaler is a typical example of a metadata resource defining workload behavior under different load conditions (a minimal sketch follows this list). These resources can fall under the core, autoscaling, and policy API groups. People responsible for application policies or automating architecture characteristics can access these APIs.
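As a hedged illustration of the metadata category, the following is a minimal HorizontalPodAutoscaler sketch that scales a hypothetical proxy Deployment based on CPU utilization; the names and thresholds are illustrative:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: proxy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: proxy
  minReplicas: 2
  maxReplicas: 5
  # Scale out when average CPU utilization crosses 70%
  targetCPUUtilizationPercentage: 70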

Note that the core API group holds resources from all the preceding categories. Explore all the Kubernetes resources yourself with the help of kubectl commands. A few command examples are as follows:

# List all resources
kubectl api-resources

# List resources in the "apps" API group
kubectl api-resources --api-group=apps

# List resources in the "networking.k8s.io" API group
kubectl api-resources --api-group=networking.k8s.io

The following screenshots give you a quick glimpse of resources under the apps and networking.k8s.io API groups, but I would highly recommend playing around to look at all resources and their API groups:

Figure 1.2 – Resources under the apps API group

The following are the resources under the networking.k8s.io API group:

Figure 1.3 – Resources under the networking.k8s.io API group

We can assign RBAC for teams based on individual resources or API groups. The following diagram represents the developers, application operators, and cluster operators collaborating over different concerns:

Figure 1.4 – Team collaboration

This representation may vary for you, based on your organization's structure, roles, and responsibilities. Traditional automation tools are template-based, making it difficult for teams to collaborate; this leads to situations where policies are determined and implemented by two different teams. Kubernetes changed this operating model by reducing the friction in collaboration, enabling different personas to collaborate directly.
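To make this concrete, here is a hedged sketch of how such segregation could look in practice: a Role limiting a hypothetical traffic-management group to the networking.k8s.io resources of one namespace, bound with a RoleBinding. All names are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: traffic-operator          # hypothetical role name
  namespace: team-a               # hypothetical namespace
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-operator-binding
  namespace: team-a
subjects:
  - kind: Group
    name: traffic-team            # hypothetical identity provider group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: traffic-operator
  apiGroup: rbac.authorization.k8s.io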

Control theory

Control theory is a concept from engineering and mathematics in which we maintain a desired state in a dynamic system. The state of a dynamic system changes over time with environmental changes. Control theory executes a continuous feedback loop to observe the output state, calculate the divergence, and then control the input to maintain the system's desired state. Many engineering systems around us work using control theory. An air conditioning system with a continuous feedback loop to maintain temperature is a typical example. The following illustration provides a simplistic view of the control theory flow:

Figure 1.5 – Control theory flow

Kubernetes has a state-of-the-art implementation of control theory. We submit our intention, the application's desired state, to the API. The rest of the automation flow is handled by Kubernetes; the human workflow ends once the request is submitted. Kubernetes controllers run a continuous reconciliation loop asynchronously to ensure that the desired state is maintained across all Kubernetes resources, such as Pods, Nodes, Services, Deployments, and Jobs. The controllers are the central brain of Kubernetes, with a collection of controllers responsible for managing different Kubernetes resources. Observe, analyze, and react are the three main functions of an individual controller:

Observe: Events relevant to the controller's resources are received by the observer. For example, a deployment controller will receive all the Deployment resource's create, delete, and update events.

Analyze: Once the observer receives the event, the analyzer jumps in to compare the current and desired state to find the delta.

React: Performs the needed action to bring the resources back into the desired state.

The control theory implementation in Kubernetes changed the way IT performs day one and day two operations. Once we express our intention as data points, the human workflow is over. The machine takes over the operations in asynchronous mode. Drift management is no longer part of the human workflow. In addition to the existing controllers, we can extend Kubernetes with new controllers. We can easily encode any operational knowledge required to manage our workload into a custom controller (an operator) and hand over custom day two operations to machines:

Figure 1.6 – The Kubernetes controller flow
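A quick, hedged way to see this reconciliation loop in action on any cluster is sketched below; the Deployment name is arbitrary:

# Submit the desired state: a Deployment with two replicas
kubectl create deployment proxy --image=nginx --replicas=2

# Introduce drift by deleting the Pods behind the Deployment
kubectl delete pods -l app=proxy

# Watch the controllers observe the drift and recreate the Pods,
# with no further human workflow involved
kubectl get pods -l app=proxy --watch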

Interoperability

The Kubernetes API is more than just an interface for our interaction with the cluster. It is the glue holding all the pieces together. kubectl, the schedulers, kubelet, and the controllers create and maintain resources with the help of kube-apiserver. kube-apiserver is the only component that talks to the etcd state store. kube-apiserver implements a well-defined API interface, providing state observability to any Kubernetes component and to clients outside the cluster. This architecture makes kube-apiserver interoperable with the ecosystem. Other infrastructure automation tools, such as Terraform, Ansible, and Puppet, do not have a well-defined API to observe the state.

Take observability as an example. Many observability tools evolved around Kubernetes because of the interoperable characteristic of kube-apiserver. For contemporary digital organizations, continuous observability of state, and a feedback loop based on it, are critical. End-to-end visibility into the infrastructure and applications from the perspective of different stakeholders provides a way to realize operational excellence. Another example of interoperability is using various configuration management tools, such as Helm, as an alternative to kubectl. As the record of intent is pure YAML or JSON data points, we can easily interchange one tool with another. The following diagram provides a view of kube-apiserver interactions with other Kubernetes components:

Figure 1.7 – Kubernetes API interactions
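As a hedged sketch of the tool interchangeability mentioned above, the same record of intent can be rendered by Helm and then applied with kubectl; the chart used here is just an illustrative public chart:

# Render a chart to plain YAML data points with Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm template my-proxy bitnami/nginx > rendered.yaml

# Apply exactly the same record of intent with kubectl instead
kubectl apply -f rendered.yaml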

Interoperability means many things to IT operations. Some of the benefits are as follows:

Easy co-existence with the organization's ecosystem.

Kubernetes itself will continue to evolve and be around for the long term.

Leveraging an existing skill set by choosing known ecosystem tools. For example, we can use Terraform for Kubernetes configuration management to take advantage of a team's knowledge of Terraform.

Hypothetically, keeping the option open to migrate away from Kubernetes in the future. (Kubernetes APIs are highly modular, and we can interchange the underlying components easily. Also, a pure declarative config is easy to migrate away from Kubernetes if required.)

Extensibility

Kubernetes' ability to add new functionalities is remarkable. We can look at the extensibility in three different ways:

Augmenting Kubernetes core components

Interchangeability of components

Adding new resource types

Augmenting Kubernetes core components

This extension model either adds functionality to the core components or alters core component behavior. We will look at a few examples of these extensions:

kubectl plugins are a way to attach sub-commands to the kubectl CLI. They are executables added to an operator's computer in a specific format, without changing the kubectl source in any form. These extensions can combine a process that takes several steps into a single sub-command to increase productivity (a minimal sketch follows this list).

Custom schedulers are a concept that allows us to modify Kubernetes' resource scheduling behavior. We can even register multiple schedulers to run parallel to each other and configure them for different workloads. The default scheduler covers most general use cases; custom schedulers are needed only if a workload requires unique scheduling behavior not available in the default scheduler.

Infrastructure plugins are concepts that help extend the underlying hardware. Device, storage, and network are the three different infrastructure plugin types. Let's say a device supports GPU processing – we require a mechanism to advertise the GPU usage details to schedule workloads based on the GPU.
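The following is a minimal, hedged sketch of the kubectl plugin mechanism: any executable named kubectl-<something> on the PATH becomes a sub-command. The plugin name and install path here are illustrative:

# Create a tiny plugin executable (name and path are illustrative)
cat <<'EOF' > /usr/local/bin/kubectl-hello
#!/usr/bin/env bash
# Print the current context as a trivial example of wrapping a multi-step task
echo "Hello from context: $(kubectl config current-context)"
EOF
chmod +x /usr/local/bin/kubectl-hello

# kubectl now discovers it as a sub-command
kubectl hello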

Interchangeability of components

The interoperability characteristics of Kubernetes provide the ability to interchange one core component with another. These types of extensions bring new capabilities to Kubernetes. For example, let's pick the virtual kubelet project (https://github.com/virtual-kubelet/virtual-kubelet). Kubelet is the interface between the Kubernetes control plane and the virtual machine nodes where the workloads are scheduled. Virtual Kubelet mimics a node in the Kubernetes cluster to enable resource management on infrastructure other than a virtual machine node, such as Azure Container Instances or AWS Fargate. Replacing the Docker runtime with another container runtime, such as rkt (Rocket), is another example of interchangeability.

Adding new resource types

We can expand the scope of the Kubernetes API and controllers by creating new custom resources, defined with a CustomResourceDefinition (CRD). It is one of the most powerful constructs for extending Kubernetes to manage resources other than containers. Crossplane, a platform for cloud resource management, falls under this category, and we will dive deep into it in the upcoming chapters. Another use case is to automate our custom IT day one and day two processes, also known as the operator pattern. For example, tasks such as deploying, upgrading, and responding to failure can be encoded into a new Kubernetes operator.
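As a taste of what Chapter 3 covers in detail, the following is a minimal, hedged CRD sketch. It reuses the CloudDB kind from the conventions example in the Preface, but the schema is illustrative only:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>
  name: clouddbs.book.imarunrk.com
spec:
  group: book.imarunrk.com
  names:
    kind: CloudDB
    plural: clouddbs
    singular: clouddb
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                type:
                  type: string   # for example, "sql"
                cloud:
                  type: string   # for example, "aws"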

People call Kubernetes a platform for building platforms because of its extensive extensibility. Extensions generally support new use cases or make Kubernetes fit into a specific ecosystem. Because it can be extended to support every complex deployment environment, Kubernetes presents itself to IT operations as a universal abstraction.

Architecture focus

One focus of architecture work is to make the application deployment architecture robust to various conditions, such as virtual machine failures, data center failures, and diverse traffic conditions. Also, resource utilization should be optimal, without cost wasted on over-provisioned infrastructure. Kubernetes simplifies and unifies how to achieve architecture characteristics such as reliability, scalability, availability, efficiency, and elasticity. It relieves architects from focusing on infrastructure. Architects can now focus on building the required characteristics into the application, as achieving them at the infrastructure level is not complex anymore. It is a significant shift in the way traditional IT operates. Designing for failure, observability, and chaos engineering practices are becoming more popular as areas for architects to concentrate on in the world of containers.

Portability is another architecture characteristic Kubernetes provides to workloads. Container workloads are generally portable, but their dependencies are not; we tend to introduce dependencies on other cloud components. Building portability into application dependencies is another architecture trend in recent times. It's visible in the 2021 InfoQ architecture trends (https://www.infoq.com/articles/architecture-trends-2021/). In the trend chart, design for portability, Dapr, the Open Application Model, and design for sustainability are some of the trends relevant to workload portability. We are slowly moving in the direction of portability across cloud providers.

With the deployment of workloads into Kubernetes, our focus on architecture in the new IT organization has changed forever.

Open source, community, and governance

Kubernetes almost relieves people from working directly with machines. Investing in such a high-level abstraction requires caution, and we should ask whether the change will be long-lasting. For any high-level abstraction to become a meaningful and long-lasting change, a few characteristics are required. Being