
Das E-Book können Sie in Legimi-Apps oder einer beliebigen App lesen, die das folgende Format unterstützen:

EPUB

Veröffentlichungsjahr: 2022

Bewertungen
0,0
0
0
0
0
0
Mehr Informationen
Mehr Informationen
Legimi prüft nicht, ob Rezensionen von Nutzern stammen, die den betreffenden Titel tatsächlich gekauft oder gelesen/gehört haben. Wir entfernen aber gefälschte Rezensionen.



The KCNA Book

Pass the Kubernetes and Cloud Native Associate Exam in style 

Nigel Poulton

© 2022 Nigel Poulton

About the author

Nigel Poulton (@nigelpoulton)

Hi, I’m Nigel. I live in the UK and I’m a techoholic. In fact, working with technologies like the cloud, containers, and WebAssembly is living the dream for me!

My early career was massively influenced by a book called Mastering Windows Server 2000 by Mark Minasi. This gave me a passion to write my own books and influence people’s lives and careers the way Mark’s book influenced mine. Since then, I’ve authored several best-selling books, including Data Storage Networking, Docker Deep Dive, and The Kubernetes Book. I feel immensely privileged to have reached so many people, and I genuinely appreciate all the feedback I receive.

I’m also the author of best-selling video training courses on Docker, Kubernetes, and WebAssembly. My videos are always entertaining, and occasionally laugh-out-loud funny (not my words).

At my website, nigelpoulton.com, you’ll find all my books, videos, blog, newsletter, and other resources to help you learn.

When I’m not working with tech, I’m dreaming about it. When I’m not dreaming about it, I’m spending time with my family. I also like American muscle cars, coaching youth soccer, and reading sci-fi.

You can find me at all the following places, and I’m always happy to connect.

• Twitter: twitter.com/nigelpoulton

• LinkedIn: linkedin.com/in/nigelpoulton/

• Mastodon: @[email protected]

• Web: nigelpoulton.com

• Email: [email protected]

Table of Contents

Getting started

Who is the book for

How is the book organised

About the author

Feedback

1: Setting the scene

Virtualisation

Containerisation

Monolithic vs microservices

Chapter summary

Exam essentials

Recap questions

2: Cloud native architecture

Defining cloud native architecture

Resiliency

Autoscaling

Serverless

Community and governance

Roles and personas

Open standards

Chapter summary

Exam essentials

Recap questions

3: Container orchestration

Primer

Container runtimes

Container orchestration fundamentals

Container security

Container networking

Service meshes

Container storage

Chapter summary

Exam essentials

Recap Questions

4: Kubernetes Fundamentals

Primer

Simple Kubernetes workflow

Containers and pods

Augmenting pods

Kubernetes architecture

Scheduling

Kubernetes namespaces

The Kubernetes API and API server

Kubernetes networking

Chapter summary

Exam essentials

Recap questions

5: Cloud native application delivery

Primer

CI/CD

GitOps

Chapter summary

Exam essentials

Recap questions

6: Cloud native observability

Primer

Telemetry and observability

Prometheus

Cost management

Chapter summary

Exam essentials

Recap questions

7: The exam

Exam domains and competencies

About the exam

Booking the exam

Taking the exam

Getting your result

Staying connected

8: Sample test

Appendix A: Chapter quiz answers

Chapter 1: Setting the scene

Chapter 2: Cloud native architecture

Chapter 3: Container orchestration

Chapter 4: Kubernetes fundamentals

Chapter 5: Cloud native application delivery

Chapter 6: Cloud native observability

Appendix B: Sample Test answers

What next

Other exams

Books

Video courses

Let’s connect

Guide

Begin Reading

Getting started

Kubernetes and cloud native technologies are all the rage and are shaping the world we work in. Building apps as small, specialised, single-purpose services that can self-heal, autoscale, and be regularly updated without downtime brings huge benefits. Beyond the business benefits, possessing the knowledge and skills to leverage these technologies is a huge career boost for you as an individual. For example, knowing how to design, build, and troubleshoot cloud native microservices applications running on Kubernetes can get you the best roles, on the best projects, at the best organisations. It can even earn you more money.

With all of this in mind, the Cloud Native Computing Foundation (CNCF) designed the KCNA exam and certification as a way for you to prove your competence with these technologies.

This book covers all of the exam objectives in one place in a well-organised and concise format. It’s useful as both a revision guide and a place to start learning new technologies and concepts. For example, if you already know the basics of Kubernetes, the book will reinforce everything you already know, as well as test your knowledge with its extensive quizzes and explanations. However, if you’re new to any of the topics on the exam, the book will get you up-to-speed quickly.

Who is the book for

The book is for anyone wanting to gain the KCNA certification.

As the exam tests your understanding of core technologies and concepts, it’s applicable to anyone working in technology. Examples include:

• Architects
• Management
• Technical marketing
• Developers
• Operations
• DevOps, DevSecOps, CloudOps, SREs, etc.
• Data engineers
• More…

The book and exam are particularly useful if you come from a traditional IT background and want to learn the fundamentals of Kubernetes and cloud native.

If you’re brand new to Kubernetes, you should consider reading Quick Start Kubernetes. It’s only 100 pages long and will get you up-to-speed and 100% comfortable with the fundamentals of Kubernetes. It also has very easy hands-on examples that really help you grasp concepts that might be new to you. It’s available on Amazon and Leanpub, and is published in several languages, including French, Italian, Portuguese, Russian, Simplified Chinese, and Spanish, with more on the way.

How is the book organised

The technical content of the book is organised with one chapter per exam domain. There’s a chapter dedicated to preparing you to take the exam, and there’s a full practice exam with 60 questions just like the real exam.

Each technical chapter is organised as follows:

• Technical content
• Chapter summary
• Exam essentials
• Practice questions

The exam essentials section recaps the major topics covered and can be used like flashcards during final revision and exam prep.

The practice questions test your mastery of the topics covered and are similar in style to the questions in the exam. However, they are not actual exam questions.

The chapter dedicated to preparing for, and taking, the exam explains exactly what the exam experience is like so you don’t have any surprises on the day.

The practice exam is a great opportunity for you to test your readiness for the real exam. Again, the questions are like the questions you’ll see in the exam, but they’re not actual exam questions.

About the author

OK, so I’m Nigel and I’ve trained over one million people on cloud and container technologies. I’ve authored several best-selling books and video training courses and dedicated my working life to helping people take their first steps with containers and Kubernetes. I’m also passionate about explaining things as clearly as possible so that you love my books and videos.

I actually wrote this entire book in draft form, took the exam, then came back and fine-tuned the content to better prepare you for the exam. I even considered deliberately failing the exam so I could re-take it multiple times and get an even better feel for the style of questions and the level of detail being tested. However, that felt wrong, and I wanted to get the book into your hands as quickly as possible.

I’d love to connect and would love to hear about your exam experience. You can reach me at all of the following:

• LinkedIn: https://www.linkedin.com/in/nigelpoulton/
• Web: nigelpoulton.com
• Twitter: @nigelpoulton

Feedback

Writing books is hard, and I worked tirelessly over holiday periods to get this book into your hands. With this in mind, I’d consider it a personal favour if you took a minute or two to write an Amazon review.

Also, if you have any feedback on the book, or your exam experience, ping me an email at [email protected] and I’ll do my best to respond.

Enjoy the book, and good luck with your exam!

1: Setting the scene

This chapter doesn’t map directly to an exam objective. However, the things you’ll learn are in the exam and are important in setting the scene for why we have technologies like containers and Kubernetes. If you already know this, you can skip to the next chapter. Otherwise, stick around while we set the scene for the rest of the book.

We’ll cover all of the following at a high level.

• Virtualisation
• Containerisation
• The transition from monolithic apps to microservices

Don’t worry if you think we’re not covering things in enough detail. This is just an introductory chapter and we’ll get into the detail in later chapters.

Virtualisation

In the distant past we deployed one application per physical server. This was a huge waste of physical resources and company capital. It also delayed the rollout of applications while physical servers were procured, racked, patched into the network, and had an operating system installed.

Virtualisation technologies like VMware came along and opened the door for us to run multiple applications on a single physical server. This meant we didn’t have to buy a new server for every new app. We could deploy apps very quickly to virtual machines on existing servers and avoid all of the following:

• No more waiting for finance to approve server purchases
• No more waiting for the datacenter team to rack and cable servers
• No more waiting for the network team to authorise servers on the network
• No more waiting for sysadmins to install operating systems

Almost immediately we went from wasting money on over-powered physical servers that took ages to purchase and install… to a world where we could quickly provision virtual machines on existing servers.

However, the industry never sleeps, and innovation never stops.

Containerisation

In the early 2010s, Docker gave the world the gift of easy-to-use containers.

At a high level, containers are another form of virtualisation that allows us to run even more apps on fewer servers and deploy them even faster.

Figure 1.1 shows a side-by-side comparison of server virtualisation and container virtualisation.

Figure 1.1

As the image shows, server virtualisation slices a physical server into multiple virtual machines (VMs). Each VM looks, smells, and feels like a physical server, meaning each one has virtual CPUs, virtual memory, virtual hard drives, and virtual network cards. You install an operating system (OS) on each one and then install one app per VM. If a single physical server is sliced into 10 virtual machines, there will be 10 operating systems and you can install 10 apps.

Container virtualisation slices operating systems into virtual operating systems called containers. Each container looks, smells, and feels like a normal OS. This means each container has its own process tree, root filesystem, eth0 interface and more. You then run one app per container, meaning if a single server and OS is sliced into 50 containers, you can run 50 apps.

That’s the view from 40K feet.
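
If you have Docker installed, you can see this isolation for yourself. The commands below are a minimal sketch, the alpine image is just a convenient example, and your exact output will differ slightly.

  $ docker run --rm alpine ps
  PID   USER     TIME  COMMAND
      1 root      0:00 ps

  $ docker run --rm alpine ls /
  bin    dev    etc    home   lib    ...

The container sees only its own process tree (the ps command runs as PID 1) and its own root filesystem, even though many other processes are running on the host.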

Containers vs VMs

Containers and virtual machines are both virtual constructs for running applications. However, containers have several important advantages.

One advantage is that containers are a lot more lightweight and efficient than virtual machines. This means businesses using containers can run even more apps on the same number of physical servers.

As an example, an organisation with 10 physical servers might be able to run 100 virtual machines and apps. However, if the same organisation chose containers instead of virtual machines, they might be able to run 500 containers and apps. One of the reasons is that every VM needs its own dedicated operating system (OS). This means a single physical server sliced into 10 VMs requires 10 installations of Windows or Linux (other operating systems exist). Each OS consumes CPU, memory, and disk space that can’t be used for apps. In the container model, every container shares the OS of the host it’s running on. This means there’s only one OS consuming CPU, memory, and disk space, resulting in more resources being available to run applications.

Containers are also faster to deploy and start than VMs. Again, this is because every VM contains a full OS as well as an application. Operating systems can be large, making VM images bigger and bulkier to package. It also means that starting a VM bootstraps a full OS before the app can start, which can be time-consuming.

To recap, when packaging an application as a container, you only package the application and dependencies. You do not package a full OS. This makes container images smaller and easier to share. It also makes them faster to start – you only start the app.

The following might help if the above is a little unclear.

Application developers write applications as they always have. The application and dependencies are then packaged into a container image. Dependencies are things like shared library files. Once the container image is created, you can host it in a shared repository where it can be accessed by the required systems and teams. A container host, which is just a server running a container runtime such as Docker, grabs a copy of the image and starts a container from it. The container host has a single copy of Windows or Linux that is already up and running, and the container runtime on the host quickly creates a container and executes the app that’s inside the image.
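
As a rough sketch of that workflow, the hypothetical commands below build an image containing just the app and its dependencies, push it to a registry (the shared repository), and then start a container from it on a container host. The image and registry names are made up for illustration, and Docker is assumed as the runtime.

  $ docker build -t registry.example.com/shop/web:1.0 .   # package the app and its dependencies as an image
  $ docker push registry.example.com/shop/web:1.0         # share the image via a registry
  $ docker run -d registry.example.com/shop/web:1.0       # the container host pulls the image and starts the app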

The smaller packaging used by the container model enables other benefits such as microservices, automated pipelines and more. We’ll cover all of these later in the book.

Before moving on, it’s important to acknowledge an advantage VMs have over containers.

The fact that every VM requires a dedicated OS is a disadvantage when it comes to packaging and application start times. However, it can be an advantage when it comes to security. As a quick example, if the shared OS on a container host gets compromised, every container on that host is also compromised. This is because every container shares the OS kernel of the container host. In the VM model, every VM has its own OS kernel, so compromising one kernel has no impact on other VMs.
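
A quick way to convince yourself of the shared kernel is to compare kernel versions on the host and inside a container. This is just a sketch that assumes a Linux host running Docker; the alpine image is only an example.

  $ uname -r                          # kernel version reported by the container host
  $ docker run --rm alpine uname -r   # a container on the same host reports the exact same kernel version

Both commands print the same version string because there is only one kernel, which is exactly why a compromised host kernel affects every container on that host.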

Despite this security challenge, containers are generally considered the best solution for modern business applications.

So far, we’ve focussed mainly on physical infrastructure, such as servers, and how best to utilise it. Now let’s shift focus to application development and management.

Monolithic vs microservices

In the past, we built monolithic applications that ran on dedicated physical servers.

“Monolithic application” is jargon for a large, complicated application that does lots of things. For example, a monolithic application may have all of the following services bundled and shipped as a single program.

• Web front-end
• Authentication
• Shopping basket
• Catalog
• Persistent store
• Reporting

The important thing to understand is that all of these services were developed by a single team, shipped as a single program, installed as a single program, and patched and updated as a single program. This meant they were complex and difficult to work with. For example, patching, updating, or scaling the reporting service of a monolithic app meant you had to patch, update, or scale the entire app. This made almost all changes high-risk, often resulting in updates being rolled up into a single very high-risk update performed once a year over a long stressful weekend.

While this model was OK in the past, it’s not OK in the modern cloud era, where businesses are entirely reliant on technology and need their applications to react in line with fast-changing markets and situations.

Enter microservices applications…

The modern answer to the problem of monolithic applications is cloud native microservices apps.

At the highest level, “cloud native” and “microservices” are two separate things. Microservices is an architecture for designing and developing applications, and cloud native is a set of application features. We’ll get into detail soon.

Consider the same application with the web front-end, authentication, shopping basket, catalog, persistent store, and reporting requirements. To make this a microservices app you develop each feature independently, ship each one independently, install each one independently, patch, update, and scale each one independently. However, they all communicate and work together to provide the same overall application experience for users and clients.

We call each of these features a “service”, and as each one is small, we call them microservices (“micro” simply means small). With this in mind, the same application will have the following six microservices, each of which is its own small, independent application.

• Web front-end
• Authentication
• Shopping basket
• Catalog
• Persistent store
• Reporting

This microservices architecture changes a lot of things, including:

• Each microservice can have its own small and agile development team
• Each microservice can have its own release and update cycle
• You can scale any microservice without impacting others
• You can patch and update any microservice without impacting others

Generally speaking, adopting a microservices architecture means you can deploy, patch, update, scale, and self-heal your business applications far more easily and far more frequently than you can with monolithic apps – this is the cloud native element. Many organisations running microservices applications push multiple safe, reliable updates per day.
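
For example, on Kubernetes you can scale a single microservice without touching any of the others. The command below is a hedged sketch that assumes the reporting feature is deployed as a Kubernetes Deployment named "reporting", which is an illustrative name only.

  $ kubectl scale deployment reporting --replicas=5   # scale just the reporting microservice to 5 copies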

However, the distributed nature of microservices apps brings some challenges.

For example, the individual microservices of an application can all run on different servers and virtual machines. This means they need to communicate over the network, increasing network complexity as well as introducing security risks. It also becomes vital for the development teams of different microservices to communicate with each other and be aware of the overall app. All that said, the benefits of the microservices architecture far outweigh the challenges it brings.

In quick summary, microservices is the architecture pattern where you split application features into their own small, single-purpose apps. Cloud native is a set of features and behaviours. The features include self-healing, autoscaling, zero-downtime updates, and more. The behaviours include agile development and frequent, predictable releases.

Chapter summary

In this chapter, you learned that we used to deploy one application per physical server. This was wasteful of capital, servers, and environmental resources. It also caused long delays in application rollouts while new servers had to be procured, delivered, and installed. VMware came along and let us run multiple applications per physical server. It reduced capital expenditure and allowed more efficient use of server and environmental resources. It also allowed us to ship applications a lot faster by deploying them to virtual machines on servers we already owned.

Containers are also a form of virtualisation. They virtualise at the operating system layer, and each container is a virtual operating system. Containers are faster and more efficient than virtual machines; however, out-of-the-box they’re usually less secure. The advantages of containers made it possible for us to re-think the way we develop, deploy, and manage applications.

A major innovation, enabled by containers, is the microservices design pattern, where large complex applications are broken up and each application feature is developed and deployed as its own small, single-purpose application called a microservice. This architecture enables cloud native features such as self-healing, scaling, regular and repeatable rollouts, and more.

All of this can be backed by cloud and cloud-like infrastructure that can scale on-demand.

Today, we live in a world where applications and infrastructure can self-heal, and individual application features (microservices) can automatically scale and be updated without impacting any other application features.

Exam essentials

This chapter doesn’t map directly to an exam domain. However, the following exam topics were mentioned and will be covered in more detail later in the book.

• Container runtimes: A server that runs containers is called a container host. Container hosts use a low-level tool called a container runtime to start and stop containers. Docker is the best-known container runtime and was the first container runtime supported by Kubernetes. However, Kubernetes is replacing it with a lighter-weight runtime called containerd (pronounced “container dee”). Many other container runtimes exist, and not all of them work the same way. Some offer better performance at the expense of security, whereas others offer better security at the expense of size and performance. You’ll learn more later in the book.

• Container security: All containers running on a single host share the host’s OS kernel. This makes them small, portable, and fast to start. However, if the host’s kernel is compromised, all containers running on that host are also compromised. There are many container runtimes available, and not all work the same way. For example, the Docker and containerd runtimes implement the shared kernel model, making them lightweight and fast but susceptible to shared-kernel attacks, whereas gVisor implements a more secure model. You’ll learn more later in the book.

• Container networking: Microservices architectures implement application features as small standalone services called microservices. The different microservices making up an app have to use the network to communicate and share data. This increases the number of network entities, increases the complexity of networking in general, and increases network traffic. Communicating and sharing data across the network is also slower than doing so on the same server.

• Autoscaling: Microservices architectures allow individual application microservices to scale independently. For example, platforms like Kubernetes can automatically scale up the reporting microservice of an application at month-end and year-end when it’s busier than normal. Again, you’ll learn more later in the book; there’s also a short command sketch after this list.
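
The sketch below ties a couple of these essentials to kubectl commands. It’s illustrative rather than exam content: the Deployment name "reporting" is hypothetical, and the autoscale command assumes a metrics pipeline (such as the Kubernetes metrics server) is running so CPU-based autoscaling has data to work with.

  $ kubectl get nodes -o wide
      # the CONTAINER-RUNTIME column shows which runtime each node is using (e.g. containerd)

  $ kubectl autoscale deployment reporting --cpu-percent=80 --min=2 --max=10
      # automatically scale the hypothetical reporting microservice between 2 and 10 replicas based on CPU usage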

Recap questions

See Appendix A for answers. Page 157 in the paperback edition.

1. Which of the following are advantages of containers compared to virtual machines? Choose all correct answers.

• Smaller size
• Faster start times
• More secure out-of-the-box
• More apps per physical server

2. Which of the following is an advantage of virtual machines compared to containers?

• Virtual machines start faster than containers
• Virtual machines are more secure out-of-the-box
• Virtual machines are smaller than containers
• Virtual machines enable microservices design patterns

3. Which layer does container virtualisation work at?

• The hardware layer
• The infrastructure layer
• The application layer
• The operating system layer

4. Which of the following are components of a container? Choose all correct answers.

• Virtual process tree
• Virtual CPUs
• Virtual hard disks
• Virtual filesystems

5. Which of the following is true?

• You can normally run more containers than virtual machines on a server
• You can normally run more VMs than containers on a server

6. Why are containers potentially less secure than VMs?

• Container networking is always in plain text
• Container images are always available on the public internet
• You cannot encrypt container filesystems
• All containers on a host share the same OS kernel

7. Which of the following are part of a container image? Choose all correct answers.

• Application code
• Application dependencies
• Application snapshots
• A dedicated operating system

8. Which of the following best describes a monolithic application?

• An application that only runs on on-premises datacenters
• An application with all features coded in a single binary
• An application that implements cloud native features such as autoscaling and rolling updates
• An application with very small container images

9. Which of the following are disadvantages of monolithic applications? Choose all correct answers.

• Every feature has to communicate over the network
• Updates are high risk and complex
• You cannot scale individual application features
• You cannot deploy them to the public cloud

10. Which of the following are advantages of microservices applications? Choose all correct answers.

• Each microservice can have its own small agile development team
• Each microservice can have its own dedicated OS kernel
• Each microservice can be scaled independently
• Each microservice can be patched independently

11. Which of the following are potential disadvantages of microservices applications? Choose all correct answers.

• Container images can be very small
• They cannot run on public clouds
• Increased networking complexity
• Increased network traffic

2: Cloud native architecture

In this chapter, you’ll learn everything needed to pass the Cloud Native Architecture section of the exam. These topics account for 16% of the exam and provide a foundation for the topics covered in later chapters.

The chapter is divided as follows.

• Defining cloud native architecture
• Resiliency
• Autoscaling
• Serverless
• Community and governance
• Roles and personas
• Open standards
• Chapter summary
• Exam essentials
• Recap questions

Defining cloud native architecture

The first thing to understand is that cloud native is a set of capabilities and practices. That’s right, cloud native isn’t about running in the public cloud; it’s a set of capabilities and practices.