Mastering Elastic Kubernetes Service on AWS - Malcolm Orr - E-Book

Description

Kubernetes has emerged as the de facto standard for container orchestration, with recent developments making it easy to deploy and handle a Kubernetes cluster. However, a few challenges such as networking, load balancing, monitoring, and security remain. To address these issues, Amazon EKS offers a managed Kubernetes service to improve the performance, scalability, reliability, and availability of AWS infrastructure and integrate with AWS networking and security services with ease.
You’ll begin by exploring the fundamentals of Docker, Kubernetes, Amazon EKS, and its architecture along with different ways to set up EKS. Next, you’ll find out how to manage Amazon EKS, encompassing security, cluster authentication, networking, and cluster version upgrades. As you advance, you’ll discover best practices and learn to deploy applications on Amazon EKS through different use cases, including pushing images to ECR and setting up storage and load balancing. With the help of several actionable practices and scenarios, you’ll gain the know-how to resolve scaling and monitoring issues. Finally, you will overcome the challenges in EKS by developing the right skill set to troubleshoot common issues with the right logic.
By the end of this Kubernetes book, you’ll be able to effectively manage your own Kubernetes clusters and other components on AWS.

You can read this e-book in Legimi apps or any other app that supports the following format:

EPUB

Page count: 452

Year of publication: 2023




Mastering Elastic Kubernetes Service on AWS

Deploy and manage EKS clusters to support cloud-native applications in AWS

Malcolm Orr

Yang-Xin Cao (Eason)

BIRMINGHAM—MUMBAI

Mastering Elastic Kubernetes Service on AWS

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Preet Ahuja

Publishing Product Manager: Niranjan Naikwadi

Senior Editor: Sayali Pingale

Technical Editor: Nithik Cheruvakodan

Copy Editor: Safis Editing

Project Coordinator: Deeksha Thakkar

Proofreader: Safis Editing

Indexer: Pratik Shirodkar

Production Designer: Shankar Kalbhor

Marketing Coordinator: Rohan Dobhal

First published: July 2023

Production reference: 1220623

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80323-121-1

www.packtpub.com

To my wife, Alison, and son, Thomas – thanks for all the love and support during the late nights and weekends!

– Malcolm Orr

To YC, Titan, and Tim, for their guidance and mentorship, which shaped my cloud journey and revealed the true essence of excellence. To my friend, Ren-Hen; Bill; and my parents, for their encouragement, support, and inspiration.

– Yang-Xin Cao (Eason)

Contributors

About the authors

Malcolm Orr is a principal engineer at AWS, who has spent close to two decades designing, building, and deploying systems and applications at telcos, banks, and media companies. He has published a number of books, with topics ranging from automating virtual data centers to site reliability engineering on AWS, and he works extensively with AWS customers, helping them adopt modern development practices for cloud-native applications.

I want to thank the people who have been close to me and supported me, especially my wife, Alison.

Yang-Xin Cao (Eason) has over five years of experience in AWS DevOps and the container field. He holds multiple accreditations, including the AWS Solutions Architect Professional, AWS DevOps Engineer Professional, and CNCF Certified Kubernetes Administrator certifications. Fascinated by cloud technology, Eason joined AWS even before graduating from college in 2017. During his time at AWS, he successfully handled numerous critical troubleshooting issues and implemented various product improvements. His expertise and contributions have earned him recognition as a subject-matter expert (SME) in AWS services such as ECS, EKS, and CodePipeline. Additionally, he is the creator of EasonTechTalk.com, a platform dedicated to disseminating tech knowledge to a wider audience. You can find out more about him at EasonCao.com.

I am deeply grateful to the key individuals behind this book – Malcolm, Niranjan, Tanya, Deeksha, Sangeeta, Nihar, Shagun, and everyone in the Packt team. Their efforts have made this book possible and helped spread this fascinating technology to the world.

About the reviewers

Andres Sacco has worked as a developer since 2007 in different languages, including Java, PHP, Node.js, and Android. Most of his background is in Java and the libraries and frameworks associated with that language. In most of the companies he has worked for, he has researched new technologies to improve the performance, stability, and quality of each company's applications. He has delivered internal courses to different audiences, such as developers, business analysts, and sales staff.

Werner Dijkerman is a freelance platform, Kubernetes (certified), and Dev(Sec)Ops engineer. He currently focuses on, and works with, cloud-native solutions and tools, including AWS, Ansible, Kubernetes, and Terraform. He focuses on infrastructure as code and monitoring the correct “thing,” with tools such as Zabbix, Prometheus, and the ELK Stack. He has a passion for automating everything and avoiding doing anything that resembles manual work. He is an active reader of comics and self-care/psychology and IT-related books, and he is a technical reviewer of various books about DevOps, CI/CD, and Kubernetes.

Table of Contents

Preface

Part 1: Getting Started with Amazon EKS

1

The Fundamentals of Kubernetes and Containers

A brief history of Docker, containerd, and runc

A deeper dive into containers

Getting to know union filesystems

How to use Docker

What is container orchestration?

What is Kubernetes?

Key Kubernetes API resources

Understanding Kubernetes deployment architectures

Developer deployment

Non-production deployments

Self-built production environments

Managed service environments

Summary

Further reading

2

Introducing Amazon EKS

Technical requirements

What is Amazon EKS?

Why use Amazon EKS?

Self-managed Kubernetes clusters versus Amazon EKS

Understanding the EKS architecture

Understanding the EKS control plane

Understanding cluster security

Understanding your cluster through the command line

Investigating the Amazon EKS pricing model

Fixed control plane costs

Variable costs

Estimating costs for an EKS cluster

Common mistakes when using EKS

Summary

Further reading

3

Building Your First EKS Cluster

Technical requirements

Understanding the prerequisites for building an EKS cluster

Configure your AWS CLI environment with temporary root credentials

Create the EKS Admin policy

Create the EKS Admin group

Create a new user

Understanding the different configuration options for an EKS cluster

Enumerating the automation options

Which automation tool/framework should I use?

Creating your first EKS cluster

Option 1: Creating your EKS cluster with the AWS console

Option 2: Creating your EKS cluster with the AWS CLI

Option 3: Creating your EKS cluster with Terraform

Option 4: Creating your EKS cluster with eksctl

Option 5: Creating your EKS cluster with the CDK

Summary

Further reading

4

Running Your First Application on EKS

Technical requirements

Understanding the different configuration options for your application

Introducing kubectl configuration

Verifying connectivity with kubectl

Creating your first EKS application

Deploying your first Pod on Amazon EKS using the kubectl command

Deploying a Pod using a Kubernetes Deployment

Modifying your Deployment

Exposing your Deployment

Visualizing your workloads

Summary

Further reading

5

Using Helm to Manage a Kubernetes Application

Technical requirements

Understanding Helm and its architecture

The benefit of Helm

Getting to know Helm charts

Installing the Helm binary

Deploying a sample Kubernetes application with Helm

Creating, deploying, updating, and rolling back a Helm chart

Deleting an application via the Helm command

Deploying a Helm chart with Lens

Summary

Further reading

Part 2: Deep Dive into EKS

6

Securing and Accessing Clusters on EKS

Understanding key Kubernetes concepts

Understanding the default EKS authentication method

Configuring the aws-auth ConfigMap for authorization

Accessing the cluster endpoint

Configuring EKS cluster access

Configuring .kube/config

Configuring the aws-auth Config Map

Protecting EKS endpoints

Summary

Further reading

7

Networking in EKS

Understanding networking in Kubernetes

Network implementation in Kubernetes

Getting to grips with basic AWS networking

Understanding EKS networking

Non-routable secondary addresses

Prefix addressing

IPv6

Configuring EKS networking using the VPC CNI

Managing the CNI plugin

Disabling CNI source NAT

Configuring custom networking

Common networking issues

Summary

Further reading

8

Managing Worker Nodes on EKS

Technical requirements

Launching a node with Amazon Linux

Prerequisites for launching a node with Amazon Linux

Putting it all together and creating a standalone worker node

Launching self-managed Amazon Linux nodes with CloudFormation

Launching self-managed Bottlerocket nodes with eksctl

Understanding managed nodes with eksctl

Building a custom AMI for EKS

Summary

Further reading

9

Advanced Networking with EKS

Technical requirements

Using IPv6 in your EKS cluster

Pod to external IPv6 address

VPC routing

Installing and using Calico network policies

Choosing and using different CNIs in EKS

Configuring multiple network interfaces for Pods

Summary

Further reading

10

Upgrading EKS Clusters

Technical requirements

Reasons for upgrading EKS and key areas to focus on

How to do in-place upgrades of the control plane

Upgrading nodes and their critical components

Upgrading managed node groups

Upgrading self-managed node groups

Updating core components

Creating a new cluster and migrating workloads

How do you move workloads?

How do you provide consistent ingress and egress network access?

How do you manage state?

Summary

Further reading

Part 3: Deploying an Application on EKS

11

Building Applications and Pushing Them to Amazon ECR

Technical requirements

Introducing Amazon ECR

Understanding repository authentication

Accessing ECR private repositories

Building and pushing a container image to ECR

Using advanced ECR features

Pull-through-cache explained

Cross-region replication

Using an ECR image in your EKS cluster

Summary

Further reading

12

Deploying Pods with Amazon Storage

Technical requirements

Understanding Kubernetes volumes, the CSI driver, and storage on AWS

EBS

EFS

Installing and configuring the AWS CSI drivers in your cluster

Installing and configuring the EBS CSI driver

Installing and configuring the EFS CSI driver

Using EBS volumes with your application

Using EFS volumes with your application

Creating the EFS instance and mount targets

Creating your EFS cluster objects

Summary

Further reading

13

Using IAM for Granting Access to Applications

Technical requirements

Understanding IRSA

Introducing IMDSv2

How IRSA works

Using IRSA in your application

How to deploy a Pod and use IRSA credentials

How to create an IRSA role programmatically

How to troubleshoot IAM issues on EKS

Summary

Further reading

14

Setting Load Balancing for Applications on EKS

Technical requirements

Choosing the right load balancer for your needs

Concept 1 – understanding Layer 4 and Layer 7 load balancer networking

Concept 2 – understanding proxy and DSR modes

Which load balancers are available in AWS?

Choosing the right ELB

Using EKS to create and use AWS LBs

Installing the ALBC in your cluster

Using an ALB with your application

Using an NLB with your application

Reusing an existing LB

Summary

Further reading

15

Working with AWS Fargate

Technical requirements

What is AWS Fargate?

Understanding the Fargate pricing model

Creating an AWS Fargate profile in EKS

Understanding how the AWS Fargate profile works

Creating and adjusting the Fargate profile

Deploying a Pod to a Fargate instance

Troubleshooting common issues on Fargate

Summary

Further reading

16

Working with a Service Mesh

Technical requirements

Exploring a service mesh and its benefits

Understanding different data plane solution options

Understanding AWS App Mesh

Installing AWS App Mesh Controller in a cluster

Integrating your application with AWS App Mesh

Deploying our standard application

Adding the basic AWS App Mesh components

Using a virtual router in AWS App Mesh

Using a virtual gateway in AWS App Mesh

Using AWS Cloud Map with EKS

Troubleshooting the Envoy proxy

Summary

Further reading

Part 4: Advanced EKS Service Mesh and Scaling

17

EKS Observability

Technical requirements

Monitoring clusters and Pods using native AWS tools

Creating a basic CloudWatch dashboard

Looking at the control plane logs

Exploring control plane and Pod metrics

Building dashboards with Managed Service for Prometheus and Grafana

Setting up AMP and AWS Distro for OpenTelemetry (ADOT)

Setting up AMG and creating a dashboard

Tracing with OpenTelemetry

Modifying our ADOT configuration

Using machine learning with DevOps Guru

Summary

Further reading

18

Scaling Your EKS Cluster

Technical requirements

Understanding scaling in EKS

EKS scaling technology

Scaling EC2 ASGs with Cluster Autoscaler

Installing the CA in your EKS cluster

Testing Cluster Autoscaler

Scaling worker nodes with Karpenter

Installing Karpenter in your EKS cluster

Testing Karpenter autoscaling

Scaling applications with Horizontal Pod Autoscaler

Installing HPA in your EKS cluster

Testing HPA autoscaling

Autoscaling applications with custom metrics

Installing the Prometheus components in your EKS cluster

Testing HPA autoscaling with custom metrics

Scaling with Kubernetes Event-Driven Autoscaling

Installing the KEDA components in your EKS cluster

Testing KEDA autoscaling

Summary

Further reading

19

Developing on EKS

Technical requirements

Different IT personas

Using Cloud9 as your integrated development environment

Creating and configuring your Cloud9 instance

Building clusters with EKS Blueprints and Terraform

Customizing and versioning EKS Blueprints for Terraform

Using CodePipeline and CodeBuild to build clusters

Setting up the CodeBuild project

Setting up CodePipeline

Using ArgoCD, Crossplane, and GitOps to deploy workloads

Setting up our application repository

Setting up the ArgoCD application

Adding AWS infrastructure with Crossplane

Summary

Further reading

Part 5: Overcoming Common EKS Challenges

20

Troubleshooting Common Issues

Technical requirements

Common K8s tools/techniques for troubleshooting EKS

Common EKS troubleshooting tools

Common cluster access problems

You cannot access your cluster using kubectl

Common Node/compute problems

Node/Nodes can’t join the cluster

Common Pod networking problems

Common workload problems

Summary

Further reading

Index

Other Books You May Enjoy

Preface

Welcome! This is a handy book on using Elastic Kubernetes Service (EKS) to effortlessly deploy and manage your Kubernetes clusters on AWS. With EKS, running Kubernetes on AWS becomes a breeze, as you no longer have to worry about the complexity of managing the underlying infrastructure. Kubernetes (K8s) is one of the fastest-growing open source projects in the world and is rapidly becoming the de facto container orchestration platform for cloud-native applications.

But for those not familiar with AWS, you might be wondering, “Why is running Kubernetes on AWS challenging?” There are a few factors that can make it difficult. One of the primary issues is configuring and managing the foundational AWS infrastructure, including virtual networks and security groups. Additionally, managing the resources required for a Kubernetes cluster can pose its own set of challenges. Integrating with other AWS services, such as load balancers and storage, can also introduce complexities. However, EKS has enabled many features to make these things easier, so rest assured that with time and effort, you can become proficient in managing a Kubernetes cluster on AWS – and the rewards will be well worth it.

This book looks at the AWS managed EKS service in detail, from its basic architecture and configuration through to advanced use cases such as GitOps or Service Mesh. The book aims to take the reader from a basic understanding of K8s and the AWS platform to being able to create EKS clusters and build and deploy production workloads on them.

Throughout the book, we will dive into various techniques that enable you to optimize your EKS clusters. The coverage spans a wide range of topics, including networking, security, storage, scaling, observability, service mesh, and cluster upgrade strategies. We have structured this book to provide you with a step-by-step guide to mastering EKS on AWS. Each chapter covers a specific topic and includes practical examples, tips, and best practices to help you understand and apply the concepts in real-world scenarios.

Our intention is not only to equip you with the technical skills required for success, but also to foster a deeper understanding of the underlying concepts so that you can apply them to your own unique situations.

Who this book is for

This book is aimed at engineers and developers with minimal experience of the AWS platform and K8s, who want to understand how to use EKS to run containerized workloads in their environments and integrate them with other AWS services. It's a practical guide with plenty of code examples, so familiarity with Linux, Python, Terraform, and YAML is recommended.

Overall, the target audience comprises three main roles that will gain practical insights from this book:

Developers and DevOps engineers: They will understand the Kubernetes environment on AWS, know how to configure the cluster to run cloud-native applications by using EKS, and learn CI/CD practices.

Cloud architects: They will gain a comprehensive understanding of how to design well-architected cloud infrastructure when running Kubernetes on AWS.

Kubernetes administrators: Cluster administrators will learn the practical operation methods for managing Kubernetes workloads on AWS. Additionally, they will gain a complete understanding of EKS features to enhance cluster scalability, availability, and observability.

Whether you are just getting started with cloud computing or are looking to expand your knowledge and skills, this book has something for everyone who owns an AWS account and wants to start their EKS journey.

What this book covers

Chapter 1, The Fundamentals of Kubernetes and Containers, covers an introduction to Kubernetes and container technology. It will also deep dive into the elements that constitute a container, the concept of the container orchestrator, and the Kubernetes architecture.

Chapter 2, Introducing Amazon EKS, provides a comprehensive guide explaining what Amazon EKS is, its architecture behind the scenes, its pricing model, and the common mistakes users make. This chapter also gives you a brief comparison of the options for running workloads on AWS: using EKS versus a self-managed Kubernetes cluster.

Chapter 3, Building Your First EKS Cluster, explores different options to create your first EKS cluster step by step and gives an overview of the automation process when building your workflow, including the AWS console, AWS CLI, eksctl, AWS CDK, and Terraform.

Chapter 4, Running Your First Application on EKS, covers the different ways you can deploy and operate a simple application on EKS, including how to implement and expose your application to make it accessible externally. It also touches on tools to visualize your workload.

Chapter 5, Using Helm to Manage a Kubernetes Application, focuses on how to install and use Helm to simplify your Kubernetes deployment experience. This chapter also covers the details of Helm charts, their architecture, and common scenarios for their use.

Chapter 6, Securing and Accessing Clusters on EKS, dives into the essential aspects of authentication and authorization in Kubernetes and how they apply to EKS. The chapter explains the significance of configuring client tools and accessing your EKS cluster securely.

Chapter 7, Networking in EKS, explains Kubernetes networking and demonstrates how EKS can be seamlessly integrated with AWS Virtual Private Cloud (VPC).

Chapter 8, Managing Worker Nodes on EKS, explores the configuration and effective management of EKS worker nodes. It highlights the benefits of using EKS-optimized images (AMIs) and managed node groups, offering insights into their advantages over self-managed alternatives.

Chapter 9, Advanced Networking with EKS, delves into advanced networking scenarios in EKS. It covers topics such as managing Pod IP addresses with IPv6, implementing network policies for traffic control, and attaching multiple network interfaces to Pods with Multus CNI.

Chapter 10, Upgrading EKS Clusters, focuses on the strategies for upgrading EKS clusters to leverage new features and ensure continued support. It provides guidance on key areas to consider, including in-place and blue/green upgrades of the control plane, critical components, node groups, and migrating workloads to new clusters.

Chapter 11, Building Applications and Pushing Them to Amazon ECR, examines the process of building and storing container images on Amazon ECR for EKS deployments. It covers topics such as repository authentication, pushing container images, utilizing advanced ECR features, and integrating ECR into EKS clusters.

Chapter 12, Deploying Pods with Amazon Storage, explains Kubernetes volumes, Container Storage Interface (CSI), and the need for persistent storage in Kubernetes Pods, and demonstrates the usage of EBS and EFS on EKS. It also covers the details for installing and configuring AWS CSI drivers for utilizing EBS and EFS volumes with your application.

Chapter 13, Using IAM for Granting Access to Applications, discusses Pod security with a scenario on integrating IAM with your containerized applications. It includes defining IAM permissions for Pods, utilizing IAM Roles for Service Accounts (IRSA), and troubleshooting IAM issues specific to EKS deployments.

Chapter 14, Setting Load Balancing for Applications on EKS, explores the concept of load balancing for EKS applications. It also expands the discussion of scalability and resilience and provides insights into the Elastic Load Balancer (ELB) options available in AWS.

Chapter 15, Working with AWS Fargate, introduces AWS Fargate as an alternative serverless option for hosting Pods in EKS. It examines the benefits of using Fargate, provides guidance on creating Fargate profiles, deploying Pods to Fargate environments seamlessly, and troubleshooting common issues that may arise.

Chapter 16, Working with a Service Mesh, explores the use of service mesh technology to enhance control, visibility, and security in microservices-based ecosystems on EKS. The chapter covers the installation of the AWS App Mesh Controller, integration with Pods, leveraging AWS Cloud Map, and troubleshooting the Envoy proxy.

Chapter 17, EKS Observability, describes the importance of observability in EKS deployments and provides insights into monitoring, logging, and tracing techniques. The chapter covers native AWS tools for monitoring EKS clusters and Pods, building dashboards with Managed Prometheus and Grafana, leveraging OpenTelemetry, and utilizing machine learning capabilities to capture cluster status with DevOps Guru.

Chapter 18, Scaling Your EKS Cluster, discusses the challenges of capacity planning in EKS and explores various strategies and tools for scaling your cluster to meet application demands while optimizing cost. The chapter walks through topics such as scaling node groups with Cluster Autoscaler and Karpenter, scaling applications with Horizontal Pod Autoscaler (HPA), describing the use case of custom metrics, and utilizing KEDA to optimize event-driven autoscaling.

Chapter 19, Developing on EKS, explores ways to improve efficiency for developers and DevOps engineers when building EKS clusters. The chapter focuses on different automation tools and CI/CD practices to streamline these activities, including Cloud9, EKS Blueprints, Terraform, CodePipeline, CodeBuild, ArgoCD, and GitOps for workload deployment.

Chapter 20, Troubleshooting Common Issues, provides an EKS troubleshooting checklist and discusses common problems and their solutions.

To get the most out of this book

You will need an AWS account and an operating system on which to run the applications listed in the table below. To ensure a smooth reading experience, knowledge of basic AWS concepts such as Virtual Private Cloud (VPC), Elastic Block Store (EBS), EC2, Elastic Load Balancer (ELB), and Identity and Access Management (IAM), as well as Kubernetes, is recommended.

Software covered in the book | Prerequisite

Amazon Elastic Kubernetes Service (EKS) | An AWS account

AWS Command Line Interface (AWS CLI) | Windows/macOS/Linux

kubectl | Windows/macOS/Linux

eksctl | Windows/macOS/Linux

Helm | Windows/macOS/Linux

Lens (Kubernetes IDE) | Windows/macOS/Linux

In this book, we will explore various tools and learn how to manage Kubernetes clusters on AWS. You can find the latest versions and download the required software by following these guides:

AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

kubectl: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

eksctl: https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html

Helm: https://docs.aws.amazon.com/eks/latest/userguide/helm.html

Lens: https://k8slens.dev/

We have tried to make this a practical book with plenty of code examples. To get the most out of this book, you should have basic familiarity with AWS, Linux, YAML, and K8s architecture.

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/g2oZN.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The MAINTAINER and CMD commands don’t generate layers.”

A block of code is set as follows:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx

Any command-line input or output is written as follows:

$ docker run hello-world

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “The most common example of this is OverlayFS, which is included in the Linux kernel and used by default by Docker.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Mastering Elastic Kubernetes Service on AWS, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there: you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781803231211

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly

Part 1: Getting Started with Amazon EKS

In this part, you will gain a comprehensive overview of Kubernetes and containers, along with insights into Amazon EKS and its architecture. You will also get your EKS cluster ready by following a step-by-step guide. By the end of this section, you will have learned the basics of deploying and operating an application on EKS, and will know how to utilize Helm to simplify your Kubernetes application deployments.

This section contains the following chapters:

Chapter 1, The Fundamentals of Kubernetes and Containers

Chapter 2, Introducing Amazon EKS

Chapter 3, Building Your First EKS Cluster

Chapter 4, Running Your First Application on EKS

Chapter 5, Using Helm to Manage a Kubernetes Application

1

The Fundamentals of Kubernetes and Containers

As more organizations adopt agile development and modern (cloud-native) application architectures, the need for a platform that can deploy, scale, and provide reliable container services has become critical for many medium-sized and large companies. Kubernetes has become the de facto platform for hosting container workloads but can be complex to install, configure, and manage.

Elastic Kubernetes Service (EKS) is a managed service that enables users of the AWS platform to focus on using a Kubernetes cluster rather than spending time on installation and maintenance.

In this chapter, we will review the basic building blocks of Kubernetes. Specifically, however, we will be covering the following topics:

A brief history of Docker, containerd, and runc

A deeper dive into containers

What is container orchestration?

What is Kubernetes?

Understanding Kubernetes deployment architectures

For a deeper understanding of the chapter, it is recommended that you have some familiarity with Linux commands and architectures.

Important note

The content in this book is intended for IT professionals who have experience building and/or running Kubernetes on-premises or on other cloud platforms. We recognize that not everyone with the prerequisite experience is aware of the background of Kubernetes, so this first (optional) chapter is included to provide a consistent view of where Kubernetes has come from and the supporting technology it leverages. If you already have a clear understanding of these topics, feel free to skip ahead to the next chapter.

A brief history of Docker, containerd, and runc

The IT industry has gone through a number of changes: from large, dedicated mainframes and UNIX systems in the 1970s and 1980s to the virtualization movement with Solaris Zones, VMware, and the development of cgroups and namespaces in the Linux kernel in the early 2000s. In 2008, LXC was released. It provided a way to manage cgroups and namespaces consistently, allowing virtualization natively in the Linux kernel. The host system has no concept of a container, so LXC orchestrates the underlying technology to create an isolated set of processes, that is, the container.

Docker, launched in 2013, was initially built on top of LXC and introduced a whole ecosystem around container management. This includes a packaging format (the Dockerfile), which leverages a union filesystem to allow developers to build lightweight container images; a runtime (the Docker daemon) that manages containers, their storage, and resource limits such as CPU and RAM, pulls and pushes images, and exposes an Application Programming Interface (API) consumed by the Docker CLI; and a set of registries (Docker Hub) that allows operating system, middleware, and application vendors to build and distribute their code in containers.

In 2016, Docker extracted these runtime capabilities into a separate engine called containerd and donated it to the Cloud Native Computing Foundation (CNCF), allowing other container ecosystems such as Kubernetes to deploy and manage containers. Kubernetes initially used Docker as its container runtime, but in Kubernetes 1.5, the Container Runtime Interface (CRI) was introduced, which allows Kubernetes to use different runtimes such as containerd.

The Open Container Initiative (OCI) was founded by Docker and the container industry to help provide lower-level standards for managing containers. Among the first standards it developed were the OCI Runtime Specification and the OCI Image Specification, the latter of which adopted the Docker image format as its basis. The runc tool implements the OCI Runtime Specification and has been adopted by most runtime engines, such as containerd, as a low-level interface to manage containers and images.

The following diagram illustrates how all the concepts we have discussed in this section fit together:

Figure 1.1 – Container runtimes

In this section, we discussed the history of containers and the various technologies used to create and manage them. In the next section, we will dive deeper into what a container actually consists of.

A deeper dive into containers

The container is a purely logical construction and consists of a set of technologies glued together by the container runtime. This section will provide a more detailed view of the technologies used in a Linux kernel to create and manage containers. The two foundational Linux services are namespaces and control groups:

Namespaces (in the context of Linux): A namespace is a feature of the Linux kernel used to partition kernel resources, allowing the processes running within a namespace to be isolated from other processes. Each namespace has its own process IDs (PIDs), hostname, network access, and so on.

Control groups: A control group (cgroup) is used to limit the resources, such as CPU, RAM, disk I/O, or network I/O, that a process or set of processes can use. Originally developed by Google, this technology has been incorporated into the Linux kernel.

The combination of namespaces and control groups in Linux allows a container to be defined as a set of isolated processes (namespace) with resource limits (cgroups):

Figure 1.2 – The container as a combination of cgroup and namespace
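On a Linux machine you can observe these primitives directly. The following sketch (assuming a Linux host with the /proc filesystem mounted) lists the namespaces and cgroup membership of the current process:

```shell
# Each entry under /proc/<pid>/ns identifies one namespace the process
# belongs to (pid, net, uts, mnt, ipc, and so on)
ls -l /proc/self/ns

# The cgroup(s) the process is attached to, where resource limits live
cat /proc/self/cgroup
```

Two processes share a namespace when the corresponding entries point to the same kernel object; a container runtime simply creates a fresh set of these entries, plus a cgroup, for each container it starts.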

The way the container runtime image is created is important as it has a direct bearing on how that container works and is secured. A union filesystem (UFS) is a special filesystem used in container images and will be discussed next.

Getting to know union filesystems

A UFS is a type of filesystem that can merge/overlay multiple directories/files into a single view. It gives the appearance of a single writable filesystem, but the underlying layers are read-only and the original content is never modified; any changes are written to a separate layer. The most common example of this is OverlayFS, which is included in the Linux kernel and used by default by Docker.

A UFS is a very efficient way to merge content for a container image. Each set of discrete content is considered a layer, and layers can be reused between container images. Docker, for example, will use the Dockerfile to create a layered file based on a base image. An example is shown in the following diagram:

Figure 1.3 – Sample Docker image

In Figure 1.3, the FROM command creates an initial layer from the ubuntu 18.04 image. The output of the two RUN commands creates discrete layers, while the final step is for Docker to add a thin read/write layer where all changes to the running container are written. The MAINTAINER and CMD commands don’t generate layers.
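A Dockerfile along the lines of Figure 1.3 might look like the following (the packages and commands are illustrative, not the book’s exact example):

```dockerfile
# Layer 1: base image
FROM ubuntu:18.04
# Layer 2: created from the filesystem changes made by this RUN command
RUN apt-get update && apt-get install -y nginx
# Layer 3: another discrete layer
RUN echo "hello from a container" > /var/www/html/index.html
# Metadata only - neither instruction below generates a layer
MAINTAINER someone@example.com
CMD ["nginx", "-g", "daemon off;"]
```

Because each RUN instruction produces its own layer, two images built from the same base and the same early instructions share those layers on disk and in the registry.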

Docker is the most prevalent container runtime environment and can be used on Windows, macOS, and Linux, so it provides an easy way to learn how to build and run containers (although please note that the Windows and Linux operating systems are fundamentally different, so, at present, you can’t run Windows containers on Linux). While the Docker binaries have been removed from the current version of Kubernetes, the concepts and techniques in the next section will help you understand how containers work at a fundamental level.

How to use Docker

The simplest way to get started with containers is to use Docker on your development machine. As the OCI has developed standardization for Docker images, images created locally can be used anywhere. If you have already installed Docker, the following command will run a simple container with the official hello-world sample image and show its output:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
...
Status: Downloaded newer image for hello-world:latest

Hello from Docker!

The preceding message shows that your installation appears to be working correctly. You can see that the hello-world image is “pulled” from a repository; this defaults to the public Docker Hub repositories at https://hub.docker.com/. We will discuss repositories, and in particular AWS Elastic Container Registry (ECR), in Chapter 11, Building Applications and Pushing Them to Amazon ECR.

Important note

If you would like to know how to install and run with Docker, you can refer to the Get Started guide in the Docker official documentation: https://docs.docker.com/get-started/.

Meanwhile, you can use the following command to list containers on your host:

$ docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS     NAMES
39bad0810900   hello-world   "/hello"   10 minutes ago   Exited (0) 10 minutes ago             distracted_tereshkova
...

Although the preceding commands are simple, they demonstrate how easy it is to build and run containers. When you use the Docker CLI (client), it interacts with the runtime engine, the Docker daemon. When the daemon receives a request from the CLI, it performs the corresponding action; in the docker run example, this means creating a container from the hello-world image. If the image is stored on your machine, it will be used; otherwise, Docker will try to pull the image from a public repository such as Docker Hub.

As discussed in the previous section, Docker now leverages containerd and runc. You can use the docker info command to view the versions of these components:

$ docker info
...
  buildx: Docker Buildx (Docker Inc., v0.8.1)
  compose: Docker Compose (Docker Inc., v2.3.3)
  scan: Docker Scan (Docker Inc., v0.17.0)
...
containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc version: v1.0.3-0-gf46b6ba
init version: de40ad0
...

In this section, we looked at the underlying technology used in Linux to support containers. In the following sections, we will look at container orchestration and Kubernetes in more detail.

What is container orchestration?

Docker works well on a single machine, but what if you need to deploy thousands of containers across many different machines? This is what container orchestration aims to do: to schedule, deploy, and manage hundreds or thousands of containers across your environment. There are several platforms that attempt to do this:

Docker Swarm: A cluster management and orchestration solution from Docker (https://docs.docker.com/engine/swarm/).

Kubernetes (K8s): An open source container orchestration system, originally designed by Google and now maintained by CNCF. Thanks to active contributions from the open source community, Kubernetes has a strong ecosystem for a series of solutions regarding deployment, scheduling, scaling, monitoring, and so on (https://kubernetes.io/).

Amazon Elastic Container Service (ECS): A highly secure, reliable, and scalable container orchestration solution provided by AWS. With a similar concept as many other orchestration systems, ECS also makes it easy to run, stop, and manage containers and is integrated with other AWS services such as CloudFormation, IAM, and ELB, among others (see more at https://ecs.aws/).

The control/data plane, a common architecture for container orchestrators, is shown in the following diagram:

Figure 1.4 – An overview of container orchestration

Container orchestration usually consists of the brain or scheduler/orchestrator that decides where to put the containers (control plane), while the worker runs the actual containers (data plane). The orchestrator offers a number of additional features:

Maintains the desired state for the entire cluster system

Provisions and schedules containers

Reschedules containers when a worker becomes unavailable

Recovers from failures

Scales containers in or out based on workload metrics, time, or some external event

We’ve spoken about container orchestration at the conceptual level; now let’s take a look at Kubernetes to make this concept real.

What is Kubernetes?

Kubernetes is an open source container orchestrator originally developed by Google but now seen as the de facto container platform for many organizations. Kubernetes is deployed as clusters containing a control plane that provides an API that exposes the Kubernetes operations, a scheduler that schedules containers (Pods are discussed next) across the worker nodes, a datastore to store all cluster data and state (etcd), and a controller that manages jobs, failures, and restarts.

Figure 1.5 – An overview of Kubernetes

The cluster is also composed of many worker nodes that make up the data plane. Each node runs the kubelet agent, which makes sure that containers are running on a specific node, and kube-proxy, which manages the networking for the node.

One of the major advantages of Kubernetes is that all the resources are defined as objects that can be created, read, updated, and deleted. The next section will review the major K8s objects, or “kinds” as they are called, that you will typically be working with.

Key Kubernetes API resources

Containerized applications are deployed and launched on one or more worker nodes using the API. The API provides an abstract object called a Pod, which is defined as one or more containers sharing the same Linux namespace, cgroups, network, and storage resources. Let’s look at a simple example of a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

In this example, kind defines the API object, a single Pod, and metadata contains the name of the Pod, in this case, nginx. The spec section contains one container, which will use the nginx 1.14.2 image and expose a port (80).

In most cases, you want to deploy multiple Pods across multiple nodes and maintain that number of Pods even if you have node failures. To do this, you use a Deployment, which will keep your Pods running. A Deployment is a Kubernetes kind that allows you to define the number of replicas or Pods you want, along with the Pod specification we saw previously. Let’s look at an example that builds on the nginx Pod we discussed previously:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Finally, you want to expose your Pods outside the cluster! This is because, by default, Pods and Deployments are only accessible from the cluster’s other Pods. There are various Service types; here, let’s discuss the NodePort Service, which exposes a port on every node in the cluster.

To do this, you will use the Service kind, an example of which is shown here:

kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30163

In the preceding example, the Service exposes port 30163 on every host in the cluster and maps it back to any Pod that has the label app=nginx (set in the Deployment), even if a given host is not running one of those Pods. It translates the port value to port 80, which is what the nginx Pod is listening on.
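If the Deployment and Service manifests above were saved to files and applied to a running cluster, the flow might look like the following (the file names and node IP are illustrative):

```
$ kubectl apply -f nginx-deployment.yaml
$ kubectl apply -f nginx-service.yaml
$ kubectl get service nginx-service
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.100.71.142   <none>        80:30163/TCP   5s
$ curl http://192.168.1.10:30163    # any worker node IP; forwarded to an nginx Pod
```

The PORT(S) column shows the internal port (80) paired with the node port (30163), matching the mapping described previously.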

In this section, we’ve looked at the basic Kubernetes architecture and some basic API objects. In the final section, we will review some standard deployment architectures.

Understanding Kubernetes deployment architectures

There are a multitude of ways to deploy Kubernetes, depending on whether you are developing on your laptop/workstation or deploying to non-production or production environments, and whether you are building the cluster yourself or using a managed service such as EKS.

The following sections will discuss how Kubernetes can be deployed for different development environments such as locally on your laptop for testing or for production workloads.

Developer deployment

For local development, you may want to use a simple deployment such as minikube or Kind. These deploy a full control plane on a virtual machine (minikube) or Docker container (Kind) and allow you to deploy API resources on your local machine, which acts as both the control plane and data plane. The advantage of this approach is that everything runs on your development machine, so you can easily build and test your app and your Deployment manifests. However, you only have one worker node, which means that complex, multi-node application scenarios are not possible.
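As a sketch, creating a local Kind cluster (assuming Docker and the kind CLI are installed; the node version shown is illustrative) takes only a couple of commands:

```
$ kind create cluster --name dev
$ kubectl cluster-info --context kind-dev
$ kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
dev-control-plane   Ready    control-plane   1m    v1.25.3
```

Note that the single node acts as both control plane and worker, which is exactly the limitation described above.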

Non-production deployments

In most cases, non-production deployments have a non-resilient control plane. This typically means having a single master node hosting the control plane components (API server, etcd, and so on) and multiple worker nodes. This helps test multi-node application architectures but without the overhead of a complex control plane.

The one exception is integration and/or operational non-production environments where you want to test cluster or application operations in the case of a control plane failure. In this case, you may want to have at least two master nodes.

Self-built production environments

In production environments, you will need a resilient control plane, typically deploying an odd number of control plane nodes (3, 5, or 7) so that a majority (quorum) can still be maintained during a failure event. The control plane components are mainly stateless, while configuration is stored in etcd. A load balancer can be deployed across the API controllers to provide resilience for K8s API requests; however, a key design decision is how to provide a resilient etcd layer.
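The odd-numbered sizing follows from etcd’s quorum rule: writes need a majority of members, that is, floor(n/2) + 1. A quick sketch shows why even cluster sizes add no extra fault tolerance:

```shell
# quorum = floor(n/2) + 1; failures tolerated = n - quorum
# Note that 4 members tolerate no more failures than 3, and 6 no more than 5
for n in 3 4 5 6 7; do
  echo "members=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( n - (n / 2 + 1) ))"
done
```

This prints tolerated_failures of 1, 1, 2, 2, and 3 for 3 to 7 members, which is why 3-, 5-, or 7-node control planes are the standard choices.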

In the first model, stacked etcd, etcd is deployed directly on the master nodes, making the etcd and Kubernetes topologies tightly coupled (see https://d33wubrfki0l68.cloudfront.net/d1411cded83856552f37911eb4522d9887ca4e83/b94b2/images/kubeadm/kubeadm-ha-topology-stacked-etcd.svg).

This means that if one node fails, both the API layer and the data persistence (etcd) layer are affected. A solution to this problem is to use an external etcd cluster hosted on machines separate from the other Kubernetes components, effectively decoupling them (see https://d33wubrfki0l68.cloudfront.net/ad49fffce42d5a35ae0d0cc1186b97209d86b99c/5a6ae/images/kubeadm/kubeadm-ha-topology-external-etcd.svg).

In the case of the external etcd model, failure in either the API or etcd clusters will not impact the other. It does mean, however, that you will have twice as many machines (virtual or physical) to manage and maintain.

Managed service environments

AWS EKS is a managed service where AWS provides the control plane and you connect worker nodes to it using either self-managed or AWS-managed node groups (see Chapter 8, Managing Worker Nodes on EKS). You simply create a cluster and AWS will provision and manage at least two API servers (in two distinct Availability Zones) and a separate etcd autoscaling group spread over three Availability Zones.

The cluster supports a service level of 99.95% uptime and AWS will fix any issues with your control plane. This model means that you don’t have any flexibility in the control plane architecture but, at the same time, you won’t be required to manage it. EKS can be used for test, non-production, and production workloads, but remember there is a cost associated with each cluster (this will be discussed in Chapter 2, Introducing Amazon EKS).

Now you’ve learned about several architectures that can be implemented when building a Kubernetes cluster from development to production. In this book, you don’t have to know how to build an entire Kubernetes cluster by yourself, as we will be using EKS.

Summary

In this chapter, we explored the basic concepts of containers and Kubernetes. We discussed the core technical concepts used by Docker, containerd, and runc on Linux systems, as well as scaling deployments using a container orchestration system such as Kubernetes.

We also looked at what Kubernetes is, reviewed several components and API resources, and discussed different deployment architectures for development and production.

In the next chapter, let’s talk about the managed Kubernetes service, Amazon Elastic Kubernetes Service (Amazon EKS), in more detail and learn what its key benefits are.

Further reading

Understanding the EKS SLA

https://aws.amazon.com/eks/sla/

Understanding the Kubernetes API

https://kubernetes.io/docs/concepts/overview/kubernetes-api/

Getting started with minikube

https://minikube.sigs.k8s.io/docs/start/

Getting started with Kind

https://kind.sigs.k8s.io/docs/user/quick-start/

EKS control plane best practice

https://aws.github.io/aws-eks-best-practices/reliability/docs/controlplane/

Open Container Initiative document

https://opencontainers.org/

2

Introducing Amazon EKS

In the previous chapter, we talked about the basic concepts of a container, container orchestration, and Kubernetes. Building and managing a Kubernetes cluster by yourself can be a very complex and time-consuming task, but using a managed Kubernetes service can remove all that heavy lifting and allow users to focus on application development and deployment.

In this chapter, we are going to explore Elastic Kubernetes Service (EKS) and its technical architecture at a high level to get a good understanding of its benefits and drawbacks.

To sum up, this chapter covers the following topics:

What is Amazon EKS?

Understanding the EKS architecture

Investigating the Amazon EKS pricing model

Common mistakes when using EKS

Technical requirements

You should have some familiarity with the following:

What Kubernetes is and how it works (refer to Chapter 1, The Fundamentals of Kubernetes and Containers)

AWS foundational services, including Virtual Private Cloud (VPC), Elastic Compute Cloud (EC2), Elastic Block Store (EBS), and Elastic Load Balancing (ELB)

A general appreciation of standard Kubernetes deployment tools

What is Amazon EKS?

According to data from the Cloud Native Computing Foundation (CNCF), at the end of 2017, nearly 57% of Kubernetes environments were running on AWS. Initially, if you wanted to run Kubernetes on AWS, you had to build the cluster using tools such as Rancher or Kops on top of EC2 instances. You would also be required to constantly monitor and manage the cluster, deploy open source tools such as Prometheus or Grafana, and have a team of operational staff making sure the cluster was available and managing the upgrade process. Kubernetes also has a regular release cadence: three releases per year as of June 2021! This leads to constant operational pressure to upgrade the cluster.

As the AWS service roadmap is predominately driven by customer requirements, the effort needed to build and run Kubernetes on AWS led to the AWS service teams releasing EKS in June 2018.

Amazon EKS is Kubernetes! AWS takes the open source code, adds AWS-specific plugins for identity and networking (discussed later in this book), and allows you to deploy it in your AWS account. AWS will then manage the control plane and allow you to connect compute and storage resources to it, allowing you to run Pods and store Pod data.

Today, Amazon EKS has been adopted by many leading organizations worldwide, including Snap Inc., HSBC, Delivery Hero, and Fidelity Investments. It simplifies the process of building, securing, and operating Kubernetes according to best practices on AWS, allowing organizations to focus on building container-based applications instead of creating Kubernetes clusters from scratch.

Cloud Native Computing Foundation

CNCF is a Linux Foundation project that was founded in 2015 and is responsible for driving Kubernetes development along with other cloud-native projects. CNCF has over 600 members including AWS, Google, Microsoft, Red Hat, SAP, Huawei, Intel, Cisco, IBM, Apple, and VMware.

Why use Amazon EKS?

The main advantage of using EKS is that you no longer have to manage the control plane; even upgrades are a single-click operation. As simple as this sounds, the operational savings of having AWS deploy, scale, fix, and upgrade your control plane should not be underestimated for production environments or when you have many clusters.

As EKS is a managed service, it is also heavily integrated into the AWS ecosystem. This means the following:

Pods are first-class network citizens, have VPC network addresses, and can be managed and controlled like any other AWS resource

Pods can be assigned specific Identity and Access Management (IAM) roles, simplifying how Kubernetes-based applications connect to and use AWS services such as DynamoDB

Kubernetes control and data plane logs and metrics can be sent to AWS CloudWatch, where they can be reported on, managed, and visualized without any additional servers or software

Operational and development teams can mix compute (EC2 and/or Fargate) and storage services (EBS and/or EFS) to support a variety of performance, cost, and security requirements

Important note

It’s important to understand that EKS is predominantly a managed control plane. The data plane uses standard AWS services such as EC2 and Fargate to provide the runtime environment for Pods and is, in most cases, managed by the operational or development teams.

In subsequent chapters, we will dive deep into these areas and illustrate how they are used and configured. But for now, let’s move on to the differences between a self-managed K8s cluster and EKS.

Self-managed Kubernetes clusters versus Amazon EKS

The following table compares the two approaches of self-built clusters versus EKS:

|  | Self-managed Kubernetes cluster | EKS |
| --- | --- | --- |
| Full control | Yes | Mostly (no direct access to underlying control plane servers) |
| Kubernetes version | Community release | Community release |
| Version support | The Kubernetes project maintains release branches for the most recent three minor releases. From Kubernetes 1.19 onward, releases receive approximately 1 year of patch support; Kubernetes 1.18 and older received approximately 9 months. | A Kubernetes version is supported for 14 months after first becoming available on Amazon EKS, even if it is no longer supported by the Kubernetes project/community. |
| Network access control | Manually set up and configure VPC controls | EKS creates standard security groups and supports public IP whitelisting |
| Authentication | Manually set up and configure Kubernetes RBAC controls | Integrated with AWS IAM |
| Scalability | Manually set up and configure scaling | Managed control plane and standard compute/storage scaling |
| Security | Manually patched | Control plane patching is done by AWS |
| Upgrade | Manually update and replace components | Single-click upgrade for the control plane; managed node groups support simpler worker upgrades |
| Monitoring | Monitor by yourself and support the monitoring platform | EKS monitors and replaces unhealthy master nodes, integrated with CloudWatch |

Table 2.1 – Comparing self-managed Kubernetes and EKS

In the next section, we will dive deeper into the EKS architecture so you can begin to really understand the differences between a self-managed cluster and EKS.

Understanding the EKS architecture

Every EKS cluster has a single endpoint URL used by tools such as kubectl, the main Kubernetes client. This URL hides all the control plane servers, which are deployed in an AWS-managed VPC across multiple Availability Zones in the region you have selected for the cluster; the servers that make up the control plane are not accessible to cluster users or administrators.

The data plane is typically composed of EC2 workers that are deployed across multiple Availability Zones and have the kubelet and kube-proxy agents configured to point to the cluster endpoint. The following diagram illustrates the standard EKS architecture:

Figure 2.1 – High-level overview of EKS architecture

The next sections will look into how AWS configures and secures the EKS control plane along with specific commands you can use to interact with it.

Understanding the EKS control plane

When a new cluster is created, a new control plane is created in an AWS-owned VPC in a separate account. There are a minimum of two API servers per control plane, spread across two Availability Zones for resilience, which are then exposed through a public Network Load Balancer (NLB). The etcd servers are spread across three Availability Zones and configured in an autoscaling group, again for resilience.

The cluster’s administrators and/or users have no direct access to the cluster’s servers; they can only access the K8s API through the load balancer. The API servers are integrated with the worker nodes, which run in a different account/VPC owned by the customer, through Elastic Network Interfaces (ENIs) created in two Availability Zones. The kubelet agent running on the worker nodes uses a Route 53 private hosted zone, attached to the worker node VPC, to resolve the IP addresses associated with the ENIs. The following diagram illustrates this architecture:

Figure 2.2 – Detailed EKS architecture

Important note

One key gotcha with this architecture is that, as there is currently no private (VPC) endpoint for the EKS service API, worker nodes need internet access to be able to get the cluster details through the AWS EKS DescribeCluster API. This generally means that subnets with worker nodes need either an internet/NAT gateway or a route to the internet.
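For instance, node bootstrap tooling typically resolves the cluster endpoint with a call like the following (the cluster name and returned endpoint are illustrative):

```
$ aws eks describe-cluster --name my-cluster \
    --query 'cluster.endpoint' --output text
https://ABCD1234EFGH5678.gr7.eu-west-1.eks.amazonaws.com
```

It is this call, made from the worker node, that requires a route to the public EKS service API.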

Understanding cluster security

When a new cluster is created, a new security group is also created and controls access to the API server ENIs. The cluster security group must be configured to allow any network addresses that need to access the API servers. In the case of a public cluster (discussed in Chapter 7, Networking in EKS), these ENIs are only used by the worker nodes. When the cluster is private, these ENIs are also used for client (kubectl) access to the API servers; otherwise, all API connectivity is through the public endpoint.