Securing Secrets in containerized apps poses a significant challenge for Kubernetes IT professionals. This book tackles the critical task of safeguarding sensitive data, addressing the limitations of Kubernetes encryption, and establishing a robust Secrets management system for heightened Kubernetes security.
Starting with the fundamental Kubernetes architecture principles and how they apply to the design of Secrets management, this book delves into advanced Kubernetes concepts such as hands-on security, compliance, risk mitigation, disaster recovery, and backup strategies. With the help of practical, real-world guidance, you’ll learn how to mitigate risks and establish robust Secrets management as you explore different types of external secret stores, configure them in Kubernetes, and integrate them with existing Secrets management solutions.
Further, you'll design, implement, and operate a secure method of managing sensitive payload by leveraging real use cases in an iterative process to enhance skills, practices, and analytical thinking, progressively strengthening the security posture with each solution.
By the end of this book, you'll have a rock-solid Secrets management solution to run your business-critical applications in a hybrid multi-cloud scenario, addressing operational risks, compliance, and controls.
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 384
Year of publication: 2024
Kubernetes Secrets Handbook
Design, implement, and maintain production-grade Kubernetes Secrets management solutions
Emmanouil Gkatziouras
Rom Adams
Chen Xi
Copyright © 2024 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Preet Ahuja
Publishing Product Manager: Suwarna Rajput
Senior Editor: Arun Nadar
Technical Editor: Irfa Ansari
Copy Editor: Safis Editing
Project Coordinator: Uma Devi
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Production Designer: Shankar Kalbhor
Marketing Coordinator: Rohan Dobhal
First published: January 2024
Production reference: 1120124
Published by
Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB
ISBN 978-1-80512-322-4
www.packtpub.com
To my father. A mentor for life and the best teacher I had. At every milestone reached, you have your own share of credit.
– Emmanouil Gkatziouras
To my grandmother for her kindness, my grandfather for his wisdom, and my partner and best friend, Mercedes Adams, for her love, patience, and continuous support.
– Rom Adams
To my wife. A beacon of love and strength in my life. Your support and care have shaped every success I’ve achieved. In every moment, your presence is a blessing beyond measure.
– Chen Xi
In today’s digital landscape, the orchestration of containers has revolutionized how we build, deploy, manage, monitor, and scale cloud-native applications. Among the myriad tools available, Kubernetes has emerged as the de facto platform for container orchestration, empowering teams to streamline development and deployment processes like never before.
However, as we venture deeper into this realm of agility and efficiency, the critical aspect of security often becomes a concern relegated to the background. The management of Secrets – those sensitive pieces of information such as credentials, API keys, and other confidential data – is a paramount challenge for organizations. Mismanagement of these Secrets can lead to substantial cyberattacks that jeopardize not just an organization’s data but also its reputation and trust. Even the accidental mismanagement of Secrets, such as Secrets being mistakenly committed to a code repository such as GitHub, can greatly increase the attack surface of both Kubernetes platforms and the applications that they host.
This book stands as a beacon in the sea of Kubernetes knowledge, guiding practitioners and enthusiasts alike through the intricate landscape of security and Secrets management within Kubernetes. It is a comprehensive guide that not only illuminates the potential vulnerabilities but also offers robust strategies and best practices to fortify your cloud-native applications and Kubernetes platforms.
With a meticulous approach, the authors delve into the core concepts of Kubernetes security, dissecting every layer of its architecture to unveil potential vulnerabilities and common pitfalls. Furthermore, they navigate the complex terrain of Secrets management, presenting battle-tested methodologies and tools to safeguard these invaluable assets.
From encryption in transit and encryption at rest to Secrets integration with CI/CD pipelines and mechanisms for identity and access management, this book thoroughly details the arsenal of security features Kubernetes offers, empowering you to craft and deliver a robust security strategy. It will arm you with practical insights and real-world examples, providing a hands-on approach to managing your Kubernetes Secrets against ever-evolving cyber threats.
As cloud-native application development continues its rapid evolution, the importance of securing our digital environments and artifacts cannot be overstated. This book is an indispensable companion, a guiding light for anyone navigating the Kubernetes ecosystem, ensuring that security and Secrets management remain at the forefront of their endeavors. It will cover Secrets management across multiple cloud providers and secure integration with other third-party vendors.
Prepare to embark on a journey that not only enhances your knowledge but also empowers you to fortify the foundation of your digital endeavors. When it comes to Kubernetes Secrets management, security should be built in, not bolted on, and this book will arm you with the tools, techniques, and processes to ensure that your Secrets remain just that…secret!
Chris Jenkins, Principal Chief Architect, Global CTO Organization, Red Hat Inc.
Emmanouil Gkatziouras started his career in software as a Java developer. Since 2015, he has worked daily with cloud providers such as GCP, AWS, and Azure, and container orchestration tools such as Kubernetes. He has fulfilled many roles, either in lead positions or as an individual contributor. He enjoys being a versatile engineer and collaborating with development, platform, and architecture teams. He loves to give back to the developer community by contributing to open source projects and blogging on various software topics. He is committed to continuous learning and is a holder of certifications such as CKA, CCDAK, PSM, CKAD, and PSO. He is the author of A Developer’s Essential Guide to Docker Compose.
Rom Adams (né Romuald Vandepoel) is an open source and C-Suite advisor with 20 years of experience in the IT industry. He is a cloud-native expert who helps organizations to modernize and transform with open source solutions. He is advising companies and lawmakers on their open and inner-source strategies. He has previously worked as a principal architect at Ondat, a cloud-native storage company acquired by Akamai, where he designed products and hybrid cloud solutions. He has also held roles at Tyco, NetApp, and Red Hat, becoming a subject matter expert in hybrid cloud. He has been a moderator and speaker for several events, sharing his insights on culture, process, and technology adoption, as well as his passion for open innovation.
Chen Xi is a highly skilled Uber platform engineer. As a tech leader, he contributed to the secret and key management platform service, leading and delivering Secrets as a service with a 99.99% SLA for thousands of Uber container services across hybrid environments. His cloud infrastructure prowess is evident from his work on Google Kubernetes Engine (GKE) and the integration of Spire-based PKI systems. Prior to joining Uber, he worked at VMware, where he developed microservices for VMware’s Hybrid Kubernetes management platform (Tanzu Mission Control) and VMware Kubernetes Engine for multi-cloud (Cloud PKS). Chen is also a contributing author to the Certified Kubernetes Security Specialist (CKS) exam.
Brad Blackard is an industry veteran with nearly 20 years of experience at companies such as Uber, Microsoft, and Boeing. At Uber, Brad led multiple technical initiatives as a leader in the Core Security organization, including Secrets management at scale. Most recently, Brad has served as head of engineering for DevZero, a start-up focused on securely improving developer experience and productivity, and he continues to serve there as an advisor.
Ethan Walton is a staff security engineer with a background in Kubernetes, DevOps, and cloud security. He has been active in the space since 2019, with work spanning platform engineering, cloud infrastructure consulting at Google, and leading cloud security initiatives within growing engineering organizations. Ethan is certified as a Google Cloud Professional Cloud Network Engineer and is an avid technology enthusiast. Outside of work, Ethan is also heavily invested in Venture Capital and helping to discover transformational technology start-up companies that will help shape the future.
I’d like to thank my family and especially my mother, father, and better half, Alexandra, for understanding the time and commitment it takes to continue pursuing my passion in the ever-changing world of technology. Day in and day out, this would not have been possible without them every step of the way. Thank you, and thanks to all the great technology trailblazers who continue to make every day an exciting day to work in this field.
James Skliros, a seasoned lead engineer, has shaped the digital landscape for over two decades, and he is renowned for spearheading projects and showcasing exceptional expertise in DevOps, the cloud, and Kubernetes. His adeptness at developing innovative initiatives and enhancing operational efficiency in DevOps is evident throughout his career. Evolving from a system administration background, he now focuses on architecture and solution design, emphasizing a passion for cloud security. Beyond his professional endeavors, he remains dedicated to technology, contributing insightful blogs and articles to his employer and personal platform.
I want to extend my deepest gratitude to my incredible wife, who has been my unwavering support during both the highs and lows of my career journey. Her steadfast encouragement has allowed me to persist in achieving my goals. Additionally, I appreciate Innablr for providing a growth-oriented workplace. Their support has played a key role in my career progression, and I am sincerely thankful for the opportunities they’ve offered.
Kubernetes Secrets management is a combination of practices and tools that help users securely store and manage sensitive information, such as passwords, tokens, and certificates, within a Kubernetes cluster. Securing Secrets such as passwords, API keys, and other sensitive information is critical for protecting applications and data from unauthorized access. Developers who understand Kubernetes Secrets management can help ensure that Secrets are managed securely and effectively, reducing the risk of security breaches. Many industries and regulatory frameworks have specific requirements for managing sensitive data. By learning Kubernetes Secrets management practices, developers can ensure that their applications comply with these requirements and avoid potential legal or financial penalties.
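One concrete reason these practices matter: values in a Kubernetes Secret manifest are only base64-encoded, not encrypted, so anyone who can read the manifest can recover the plaintext. The following is a minimal sketch using standard coreutils; the example password is made up for illustration:

```shell
# Base64 is an encoding, not encryption: it is trivially reversible.
# Encode a (hypothetical) password the way a Secret manifest stores it...
encoded=$(printf 'S3cr3tP@ss' | base64)
echo "$encoded"   # UzNjcjN0UEBzcw==

# ...and decode it back without needing any key or credential.
printf '%s' "$encoded" | base64 -d
```

This is why the encryption, access control, and external secret store techniques covered later in the book are needed on top of plain Secret objects.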
This book is for software and DevOps engineers and system administrators looking to deploy and manage Secrets on Kubernetes. Specifically, it is aimed at the following:
Developers who are already familiar with Kubernetes and are looking to understand how to manage Secrets effectively. This could include individuals who are already using Kubernetes for application deployment, as well as those who are new to the platform and looking to learn more about its capabilities.
Security professionals who are interested in learning how to securely manage Secrets within a Kubernetes environment. This could include individuals who are responsible for securing applications, infrastructure, or networks, as well as those who are responsible for compliance and regulatory requirements.
Anyone who is interested in using Kubernetes to deploy and manage applications securely, and who wants to understand how to effectively manage Secrets within that environment.

Chapter 1, Understanding Kubernetes Secrets Management, introduces you to Kubernetes and the importance of Secrets management in applications deployed on Kubernetes. It gives an overview of the challenges and risks associated with managing Secrets, the objectives, and the scope of the book.
Chapter 2, Walking through Kubernetes Secrets Management Concepts, covers the basics of Kubernetes Secrets management, including the different types of Secrets; their usage scenarios; how to create, modify, and delete Secrets in Kubernetes; and secure storage and access control. It also covers how to securely access Secrets with RBAC and Pod Security Standards, as well as auditing and monitoring secret usage.
Chapter 3, Encrypting Secrets the Kubernetes-Native Way, teaches you how to encrypt Secrets in transit and at rest in etcd, as well as key management and rotation in Kubernetes.
Chapter 4, Debugging and Troubleshooting Kubernetes Secrets, provides guidance on identifying and addressing common issues that arise when managing Secrets in Kubernetes. It covers best practices for debugging and troubleshooting Secrets, including the usage of monitoring and logging tools, ensuring the security and reliability of Kubernetes-based applications.
Chapter 5, Security, Auditing, and Compliance, focuses on the importance of compliance and security while managing Secrets in Kubernetes. It covers how to comply with security standards and regulations, mitigating security vulnerabilities, and ensuring secure Kubernetes Secrets management.
Chapter 6, Disaster Recovery and Backups, provides you with an understanding of disaster recovery and backups for Kubernetes Secrets. It also covers backup strategies and disaster recovery plans.
Chapter 7, Challenges and Risks in Managing Secrets, focuses on the challenges and risks associated with managing Secrets in hybrid and multi-cloud environments. It also covers strategies for mitigating security risks in Kubernetes Secrets management, guidelines for ensuring secure Kubernetes Secrets management, and the tools and technologies available for Kubernetes Secrets management.
Chapter 8, Exploring Cloud Secret Store on AWS, introduces you to AWS Secrets Manager and KMS and how they can be integrated with Kubernetes. It also covers monitoring and logging operations on Kubernetes Secrets with AWS CloudWatch.
Chapter 9, Exploring Cloud Secret Store on Azure, teaches you how to integrate Kubernetes with Azure Key Vault for secret storage, as well as the encryption of Secrets stored on etcd. It also covers monitoring and logging operations on Kubernetes Secrets through Azure’s observability tools.
Chapter 10, Exploring Cloud Secret Store on GCP, introduces you to GCP Secret Manager and GCP KMS and how they can be integrated with Kubernetes. It also covers monitoring and logging operations on Kubernetes Secrets with GCP monitoring and logs.
Chapter 11, Exploring External Secret Stores, explores different types of third-party external secret stores, such as HashiCorp Vault and CyberArk Secrets Manager. It teaches you how to use external secret stores to store sensitive data and the best practices for doing so. Additionally, the chapter also covers the security implications of using external secret stores and how they impact the overall security of a Kubernetes cluster.
Chapter 12, Integrating with Secret Stores, teaches you how to integrate third-party Secrets management tools with Kubernetes. It covers external secret stores in Kubernetes and the different types of external secret stores that can be used. You will also gain an understanding of the security implications of using external secret stores and how to use them to store sensitive data using different approaches such as init containers, sidecars, CSI drivers, operators, and sealed Secrets. The chapter also covers the best practices for using external secret stores and how they can impact the overall security of a Kubernetes cluster.
Chapter 13, Case Studies and Real-World Examples, covers real-world examples of how Kubernetes Secrets are used in production environments. It covers case studies of organizations that have implemented Secrets management in Kubernetes and lessons learned from real-world deployments. Additionally, you will learn about managing Secrets in CI/CD pipelines and integrating Secrets management into the CI/CD process. This chapter also covers Kubernetes tools to manage Secrets in pipelines and the best practices for secure CI/CD Secrets management.
Chapter 14, Conclusion and the Future of Kubernetes Secrets Management, gives an overview of the current state of Kubernetes Secrets management and future trends and developments in the field. It also covers how to stay up to date with the latest trends and best practices in Kubernetes Secrets management.
You should understand Bash scripting, containerization, and how Docker works. You should also understand Kubernetes and basic concepts of security. Knowledge of Terraform and cloud providers will also be beneficial.
Software covered in the book (all supported on Windows, macOS, or Linux):

Docker
Shell scripting
Podman and Podman Desktop
minikube
Helm
Terraform
GCP
Azure
AWS
OKD and Red Hat OpenShift
StackRox and Red Hat Advanced Cluster Security
Trivy from Aqua
HashiCorp Vault
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Kubernetes-Secrets-Handbook. If there’s an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The kms provider plugin connects kube-apiserver with an external KMS to leverage an envelope encryption principle.”
A block of code is set as follows:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: key-20230616
              secret: DlZbD9Vc9ADLjAxKBaWxoevlKdsMMIY68DxQZVabJM8=
      - identity: {}

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::11111:role/eks-secret-reader"
  name: service-token-reader
  namespace: default

Any command-line input or output is written as follows:
$ kubectl get events
...
11m  Normal  Pulled  pod/webpage  Container image "nginx:stable" already present on machine

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Another notable tool provided by GCP to improve the security posture of a GKE cluster is the GKE security posture dashboard.”
Tips or important notes
Appears like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you’ve read Kubernetes Secrets Handbook, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.
The perks don’t stop there: you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
Follow these simple steps to get the benefits:
Scan the QR code or visit the link below:
https://packt.link/free-ebook/9781805123224
Submit your proof of purchase
That’s it! We’ll send your free PDF and other benefits to your email directly.

In this part, you will be provided with a foundational understanding of Kubernetes Secrets and their importance in managing sensitive data in applications deployed on Kubernetes. By the end of this part, you will have learned the basics of the purpose, function, and usage of Kubernetes Secrets with real-world examples.
This part has the following chapters:
Chapter 1, Understanding Kubernetes Secrets Management
Chapter 2, Walking through Kubernetes Secrets Management Concepts
Chapter 3, Encrypting Secrets the Kubernetes-Native Way
Chapter 4, Debugging and Troubleshooting Kubernetes Secrets

This chapter will provide you with a refresher about containers, as well as a comprehensive overview of Kubernetes and its Secrets management implementation. By the end of this first walk-through, all personas (developers, platform, and security engineers) will know how to design and implement these topics with a set of hands-on examples. While going through these examples, we will highlight the respective security concerns that this book will address by covering a series of use cases that will lead to a production-grade solution for hybrid multi-cloud scenarios, including the business continuity perspective.
In this chapter, we will cover the following topics:
Understanding Kubernetes’ origins and design principles
Setting up our first Kubernetes testing environment
Exploring Kubernetes Secret and ConfigMap objects
Analyzing why Kubernetes Secrets are important
Unveiling the challenges and risks associated with Kubernetes Secrets management
Mapping the objectives and scope of this book

To complete the hands-on parts of this chapter, we will be leveraging a series of tools and platforms that are commonly used to interact with containers, Kubernetes, and Secrets management. For this first chapter, we will be setting up this environment together and ramping up with a friendly desktop graphical solution for the first set of examples. Don’t worry – we have you covered with our Code in Action and GitHub repository, which contains the macOS installation example. Here is the list of required tools:
Docker (https://docker.com) or Podman (https://podman.io) as a container engine. Both are OK, although I do have a personal preference for Podman as it offers benefits such as being daemonless for easy installation, rootless for added security, and fully Open Container Initiative (OCI)-compliant, as well as being Kubernetes ready and able to integrate with systemd at the user level to autostart containers/Pods.
Podman Desktop (https://podman-desktop.io) is open source software that provides a graphical user interface for building, starting, and debugging containers; running local Kubernetes instances; easing the migration from containers to Pods; and even connecting with remote platforms such as Red Hat OpenShift, Azure Kubernetes Engine, and more.
Golang (https://go.dev), or Go, is a programming language that will be used within our examples. Note that Kubernetes and most of its third-party components are written in Go.
Git (https://git-scm.com) is a version control system that we will be using to cover this book’s examples, and that we will also leverage in our discovery of Secrets management solutions.
This book’s GitHub repository contains the digital material linked to this book: https://github.com/PacktPublishing/Kubernetes-Secrets-Handbook.
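Before starting, it can help to confirm that the core command-line tools are on your PATH. The following is a quick sketch; the binary names used are the standard ones, so adjust the list to match what you actually installed:

```shell
# Report which of the book's core CLIs are installed locally.
checked=0
for tool in docker podman git go; do
  checked=$((checked + 1))
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
echo "checked $checked tools"
```

Only one of Docker or Podman is required; the loop simply reports on both.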
While the evolution from one platform to another might be obvious, the compelling event and inner mechanics might not be. To safely handle sensitive data within Kubernetes, we have to understand both its historical and architectural evolutions. This will help us implement a secure production-grade environment for our critical business applications.
The next few sections will describe a series of concepts, explore and practice them with a simple container runtime and Kubernetes cluster, and establish their direct relationships with security concerns that this handbook will address.
Important note
While we expect you to perform the hands-on examples while reading along, we understand that you might not have the opportunity to do so. As such, we have provided briefings and debriefings for each hands-on example.
Four decades ago, deploying applications was done on a physical server, usually referred to as a bare metal installation. This approach allowed workloads to have direct access to physical resources with the best native performance possible. Due to out-of-the-box limitations for resource management from a software perspective, deploying more than one application on a physical server has always been an operational challenge that has resulted in a suboptimal model with the following root causes:
Physical resource utilization: A reduced set of applications is deployed on a physical machine to limit the potential degradation of services due to the lack of proper resource management capabilities that would otherwise prevent applications from hogging all the compute resources.
Scalability, flexibility, and time to market: A lead time of weeks or even months to procure, rack and stack, and provision the physical machine and install the application, which impacts business growth.
The total cost of ownership (TCO) versus innovation: The procurement, integration, operations, and life cycle of physical servers, along with underutilized resources and limited prototyping due to high costs and lead times, slow down the organization’s innovation capabilities.

Then, in the early 2000s, virtualization, or hypervisors, became available for commoditized open systems. A hypervisor is a piece of software, installed on bare metal, that allows the IT department to create virtual machines. With this, operations teams were able to create and tailor these virtual machines to an application’s precise requirements, with the ability to adapt the compute resources during the application’s life cycle and its usage by the business. Thanks to proper resource management and isolation, multiple virtual machines could run on a single server without noisy neighbors causing potential service degradation.
This model provided tremendous optimizations that helped accelerate the digitalization of services and introduce a new market aside from the traditional data center business – cloud computing. However, the virtualization model created a new set of challenges:
The never-ending increase in virtual machines thanks to continuous innovation. This exponential growth of assets amplifies the operational burden of maintaining and securing operating systems, libraries, and applications.
The increasing need for automation to perform daily Create, Read, Update, and Delete (CRUD) operations at large scale, involving complex infrastructure and security components.
The need for well-thought-out, enforced governance to address the life cycle, security, and business continuity of the thousands of services underpinning the organization’s critical applications.

Finally, containers made their way in as the next layer of optimization. Although the construct of containers was not new, as with virtualization, it required a major player to invest in commoditized open systems to organically make it the next (r)evolution.
Let’s think about a container as a lightweight virtual machine but without the need for a full operating system, which reduces the overall footprint and the operational burden related to the software development life cycle and security management. Instead, multiple applications, as containers, share the underlying physical host at the software and hardware levels without the overhead of a hypervisor, benefiting from nearly machine-native performance. Containers provide you with the following benefits:
A well-defined standard by the OCI (https://opencontainers.org) that eases building, (re)distributing, and deploying containers to any platform compliant with the OCI specifications
A highly efficient, predictable, and immutable medium that’s application-centric and only includes the necessary libraries and the application runtime
Application portability, thanks to an infrastructure- and platform-agnostic solution
An organic separation of concerns between developers and platform engineers, as there is no need to access the physical or virtual host operating system to develop, build, test, and deploy applications
Embracing an automation-first approach and DevOps practices to address infrastructure, application, and security management

Not mentioning a few challenges would be wrong, so here are some:
Most IT organizations have difficulties embracing a new paradigm from both an architectural and a management perspective
Treating the organic separation of concerns between developers and platform engineers as a justification for silos
Overhype around microservices, which can lead to suboptimal application architectures with no performance gain but added complexity

The following diagram shows the bottom-up stack, which illustrates the potential application density per physical server for each deployment type:
Figure 1.1 – Layer comparison between bare metal, virtual machines, and containers
We’ve already cited a series of benefits, and yet, we should emphasize additional ones that help with rapid prototyping, faster deployment, easy live functional testing, and so on:
- A smaller code base to maintain and enrich per microservice, with easier rollout/rollback
- The capability to run in a degraded mode when one microservice fails but the others do not
- The ability to troubleshoot a misbehaving microservice without impacting the entire application
- Faster recovery from failure, as only the affected microservice must be rescheduled
- Granular compute resource allocation and scalability

Not only do microservices help decouple large monolithic applications, but they also introduce new design patterns that accelerate innovation.
This sounds fantastic, doesn’t it? It does, but we still have a major missing element here: container runtimes such as Docker or Podman do not provide any resiliency in case of failures. To do so, a container runtime requires an additional software layer providing the applications with high availability capabilities. Managing hundreds of microservices at scale demands a robust and highly resilient orchestrator to ensure the business continuity of the applications while guaranteeing a high level of automation and abstraction toward the underlying infrastructure. This will lead to frictionless build, deploy, and run operations, improving the day-to-day responsibilities of the IT staff involved with the workloads that are deployed on the application platforms.
This is a big ask and a challenge that many IT departments are facing and trying to solve, even more so with legacy patterns. The answer to this complex equation is Kubernetes, a container platform or, as we should call it, an application platform.
There are no better words to describe what Kubernetes is all about than the words from the Kubernetes project maintainers: “Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?
That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.” (https://kubernetes.io/docs/concepts/overview/#why-you-need-kubernetes-and-what-can-it-do)
The same page lists the following benefits of Kubernetes:
- Service discovery and load balancing
- Storage orchestration
- Automated rollouts and rollbacks
- Automatic bin packing
- Self-healing
- Secret and configuration management

While reading through this handbook, we will explore and practice all of these benefits while designing a production-grade Secrets management solution for critical workloads.
We have established the context regarding the evolution and adoption of containers with the need for Kubernetes to support our applications with resiliency, scalability, and deployment patterns in mind. But how is Kubernetes capable of such a frictionless experience?
Here is my attempt to answer this question, based on my experience as a former cloud architect within the Red Hat Professional Services organization:
- From a workload perspective, every infrastructure requirement that an application consumes is simply defined in a declarative way, without the need for a domain specialist in networking, storage, security, and so on. The YAML manifest describing the desired state of Pod, Service, and Deployment objects is then handled by Kubernetes as a service broker for every specific vendor that has a Kubernetes integration. In other words, application teams can safely write a manifest that is agnostic of the environment and Kubernetes distribution on which they will deploy the workloads.
- From an infrastructure perspective, every component of the stack has a corresponding Kubernetes API object. If not, the vendor can introduce their own with the standard Kubernetes API object called CustomResourceDefinition, also known as a CRD. This guarantees a common standard, even when interacting with third-party software, hardware, or cloud vendors.
- When Kubernetes receives a request with a valid object definition, the orchestrator applies the related CRUD operation. In other words, Kubernetes introduces native automation and orchestration. The same principles apply to every Kubernetes component running as a container, so that they benefit from self-healing, resiliency, and scalability while remaining agnostic of the underlying software, hardware, or cloud provider.
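The reconciliation loop mentioned above — a controller repeatedly nudging the current state toward the desired state — can be sketched in a few lines of Go. This is an illustrative toy, not Kubernetes source code; the one-step-per-pass policy and replica counts are assumptions made for the example:

```go
package main

import "fmt"

// reconcile moves the current replica count one step toward the
// desired count, mimicking a controller's reconciliation loop.
func reconcile(desired, current int) int {
	if current < desired {
		return current + 1 // scale up one replica per pass
	}
	if current > desired {
		return current - 1 // scale down one replica per pass
	}
	return current // already converged; nothing to do
}

func main() {
	desired, current := 3, 0
	// Loop until the observed state matches the declared state.
	for current != desired {
		current = reconcile(desired, current)
		fmt.Printf("current replicas: %d (desired: %d)\n", current, desired)
	}
}
```

Real controllers follow the same shape: observe, diff against the object definition stored in etcd, and act until there is no drift left.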
This approach supports the portability not only of containerized applications but of the entire application platform while reducing the need for technology domain specialists to be involved when deploying an application, maintaining the platform, and even enriching the Kubernetes project with new features or components.
The concept of a YAML manifest to define a Kubernetes API object has been floating around for a while. It is time to look at a simple example that shows the desired state of a Pod object (a logical grouping for one or multiple containers):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-world
      image: hello-path:0.1
      ports:
        - containerPort: 8080
```

This Pod object’s definition provides the necessary information for Kubernetes to do the following:
1. Define the desired state for a Pod object with the name hello-app.
2. Specify the containers: one of them is called hello-world and uses the container image hello-path, from which we want version 0.1 to be pulled from a container registry.
3. Accept incoming traffic to the hello-world application on port 8080 at the container level.

That’s it! This is our first Pod definition. It allows us to deploy a simple containerized application with no fuss and zero knowledge of the underlying infrastructure.
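Assuming access to a running cluster, the manifest above can be submitted and inspected with a handful of kubectl commands (the pod.yaml filename is arbitrary; pick any name you like):

```shell
kubectl apply -f pod.yaml        # submit the desired state to kube-apiserver
kubectl get pod hello-app        # watch the Pod converge to Running
kubectl logs hello-app           # read the container's console output
kubectl delete pod hello-app     # clean up when done
```

Note that these commands require a live cluster and the hello-path:0.1 image to be reachable from it; they are shown here as a usage transcript rather than a runnable script.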
There is no magic behind this orchestration; rather, the combined work of multiple components provides a fantastic level of resilience and abstraction, as well as a frictionless experience. The following diagram provides an overview of the components that run within a Kubernetes instance:
Figure 1.2 – Kubernetes components
A Kubernetes cluster can be divided into two logical groups – the control plane (some distributions refer to this as the master node) and the (worker) nodes. Let’s drill down into each logical group and discover their respective components:
Control plane:

- kube-apiserver: This component exposes the Kubernetes API and enables CRUD operations on object definitions and their state within etcd.
- etcd: This component is a key-value store and serves as the asset management service. A corrupted etcd results in a full disaster scenario.
- kube-scheduler: This component tracks the desired state of Pod objects and addresses any potential drift within the cluster. For example, if a Pod object definition is created or modified, kube-scheduler adjusts its state so that the containers only run on a healthy node.
- kube-controller-manager: This component runs a series of controllers responsible for handling the desired state of nodes, jobs, endpoints, and service accounts. Controllers are reconciliation loops that track the difference between the desired and current state of an object and adjust the latter so that it matches the latest object definition.
- cloud-controller-manager (optional): Similar to kube-controller-manager, this component, when deploying Kubernetes in the cloud, enriches the cluster with additional abstractions to interact with the related cloud provider's services.

Nodes (and the control plane too!):

- kubelet: This component interacts with kube-apiserver to verify and adjust the desired state of the Pods bound to the node.
- kube-proxy: This component provides the basic network plumbing on each node while maintaining the networking rules that allow (or deny) internal and external network traffic to Pods.
- Container runtime: This component runs the containers.

There are additional components that should be considered add-ons due to their direct dependency on the Kubernetes distribution. These add-ons handle services such as DNS, logging, metrics, the user interface, and more.
Important note
In a dev/test environment, a single node might be deployed to act as both a control plane and a worker node on which Pods are scheduled. However, for resiliency purposes, a production-grade environment should consider a minimum of three control plane nodes with dedicated worker nodes to improve resilience and separation of concerns, as well as dedicated compute resources for the applications.
The main benefits of containers are their portability and platform independence. Deploying the famous Hello World application within a container using Docker, Podman, or Kubernetes should not require us to modify the application code. I will even go a step further and say that we should not care about the underlying infrastructure. By contrast, a bare-metal or virtualization approach would bring a large umbrella of constraints when deploying an application.
Before we start, we assume that you have the following:
- All the technical requirements mentioned at the beginning of this chapter
- Access to this book’s GitHub repository (https://github.com/PacktPublishing/Kubernetes-Secrets-Handbook)
- This example at hand; it is available in the ch01/example01 folder

Let’s have a look at a simple example illustrating a basic software supply chain:
1. Building the application binary: The example is a simple Go application showcasing an HTTP service and console logging capabilities.
2. Building the container image, including the application binary: The application will be built using a Golang toolset container image; a second, small-footprint container image will carry the application binary.
3. Running the containerized application using Podman: This first run will leverage the graphical interface of Podman Desktop to illustrate the rather simple process of running a container.
4. Deploying the containerized application using Kubernetes: This first deployment will leverage the kubectl command line to showcase how to process our first YAML manifest to create a Kubernetes Pod object.

Note that this example is agnostic of the CPU architecture on which the overall process takes place. This means that you can safely perform the same exercise on different CPU targets without rewriting code or changing any of the configuration files.
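The repository's Go code is not reproduced here, but a minimal sketch of such an HTTP service with console logging might look like the following (the handler name and response text are illustrative assumptions, not the repository's exact code):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greeting returns the response body; kept as a separate function
// so it is trivial to exercise without a running server.
func greeting() string {
	return "Hello from Path!"
}

// helloHandler logs each incoming request to the console and
// writes the greeting back to the client.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	log.Printf("request: %s %s", r.Method, r.URL.Path)
	fmt.Fprintln(w, greeting())
}

func main() {
	http.HandleFunc("/", helloHandler)
	log.Println("listening on :8080") // matches the containerPort in the Pod manifest
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The service listens on port 8080, which is the same port exposed in the Dockerfile and declared as containerPort in the Pod manifest shown earlier.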
It is interesting to note that a container runtime such as Docker or Podman is used to build the application and the container image containing our application binary. This is done via a text file called a Dockerfile, which defines all the necessary steps to build our container image:
```dockerfile
FROM registry.access.redhat.com/ubi8/go-toolset@sha256:168ac23af41e6c5a6fc75490ea2ff9ffde59702c6ee15d8c005b3e3a3634fcc2 AS build
COPY ./hello/* .
RUN go mod init hello
RUN go mod tidy
RUN go build .

FROM registry.access.redhat.com/ubi8/ubi-micro@sha256:6a56010de933f172b195a1a575855d37b70a4968be8edb35157f6ca193969ad2
LABEL org.opencontainers.image.title "Hello from Path"
LABEL org.opencontainers.image.description "Kubernetes Secrets Handbook - Chapter 01 - Container Build Example"
COPY --from=build ./opt/app-root/src/hello .
EXPOSE 8080
ENTRYPOINT ["./hello"]
```

The Dockerfile build steps are as follows:
1. Fetch the go-toolset image for the build stage.
2. Copy all the application content into that image.
3. Run the Go build process.
4. Fetch the ubi-micro image as the target container image.
5. Set some container image metadata.
6. Copy the binary from the build image to the target image.
7. Expose the application's port; here, this is 8080.
8. Run the application binary.

That’s it! Once the application has been built and the container image has been successfully created, the image is available in the localhost container registry, after which the container can be started using either Docker or Podman. This can be done with one simple command line and a few parameters, though you can also leverage the Podman Desktop graphical interface.
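Assuming Podman is installed and reusing the image name and tag from the earlier Pod manifest (hello-path:0.1 is my assumption here, inferred from that manifest), the local build and run could look like this:

```shell
podman build -t hello-path:0.1 .     # executes every step of the Dockerfile
podman run -d -p 8080:8080 hello-path:0.1
curl http://localhost:8080           # the application answers on port 8080
```

These commands require Podman and network access to the Red Hat registry, so they are shown as a usage transcript rather than a runnable script; `docker` accepts the same syntax.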
On the other hand, running this container on an application platform such as Kubernetes requires a different approach – that is, declaratively using a YAML manifest. An example was supplied earlier in this chapter and can be found in this book’s GitHub repository. This YAML manifest is submitted to kube-apiserver via a tool such as kubectl.
Here is a transactional overview of a Kubernetes Pod object’s creation:
Figure 1.3 – Kubernetes Pod creation
As we can see, the etcd record is continuously updated during the Pod object’s creation. The desired state is saved; the current status of every component involved in the process is also saved, which generates a sort of audit trail. Such a design allows for easier debugging when the desired outcome is not achieved.
As soon as the Pod object is registered within etcd, all the Kubernetes components are on a mission to converge toward the desired state, regardless of potential issues such as network partitioning, node failure, and more. This is the difference between running containers on a single machine with a local container runtime such as Docker or Podman and orchestrating containers at scale with a container platform such as Kubernetes.
Here’s some food for thought:
- I wrote “running the containerized application” and “deploying the containerized application” to illustrate the difference between a container runtime such as Docker or Podman running a containerized application, and Kubernetes scheduling containers and orchestrating other resources such as networking, storage, Secrets, and more. Note that there is a Kubernetes object called Deployment that addresses release management and scalability capabilities. For more details, see https://kubernetes.io/docs/concepts/workloads/controllers/deployment/.
- Performing such an exercise, even in a non-production environment using virtual machines, could take days or even weeks.
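To make that last point about the Deployment object concrete: wrapping the same container in a Deployment adds replica management and rollout capabilities with only a few extra lines. The replica count and label names below are illustrative choices, not taken from the book's repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                # Kubernetes keeps three Pods running at all times
  selector:
    matchLabels:
      app: hello-app
  template:                  # the Pod template mirrors our earlier manifest
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-world
          image: hello-path:0.1
          ports:
            - containerPort: 8080
```

If a node fails, the Deployment's controller reschedules the lost replicas elsewhere, which is exactly the self-healing behavior a bare Pod definition does not provide on its own.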