
Kubernetes for Developers E-Book

Joseph Heck

Description

Kubernetes is documented and typically approached from the perspective of someone running software that has already been built. Kubernetes may also be used to enhance the development process, enabling more consistent testing and analysis of code to help developers verify not only its correctness, but also its efficiency. This book introduces key Kubernetes concepts, coupled with examples of how to deploy and use them with a bit of Node.js and Python example code, so that you can quickly replicate and use that knowledge.

You will begin by setting up Kubernetes to help you develop and package your code. We walk you through the setup and installation process before working with Kubernetes in the development environment. We then delve into concepts such as automating your build process, autonomic computing, debugging, and integration testing. This book covers all the concepts required for a developer to work with Kubernetes.

By the end of this book, you will be in a position to use Kubernetes in development ecosystems.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 379

Year of publication: 2018




Kubernetes for Developers


Use Kubernetes to develop, test, and deploy your applications with the help of containers


Joseph Heck


BIRMINGHAM - MUMBAI

Kubernetes for Developers

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Gebin George
Acquisition Editor: Rahul Nair
Content Development Editor: Sharon Raj
Technical Editor: Prashant Chaudhari
Copy Editor: Safis Editing
Project Coordinator: Virginia Dias
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Graphics: Tom Scaria
Production Coordinator: Deepika Naik

First edition: April 2018

Production reference: 1050418

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78883-475-9

www.packtpub.com

mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Joseph Heck has broad development and management experience across start-ups and large companies. He has architected, developed, and deployed a wide variety of solutions, ranging from mobile and desktop applications to cloud-based distributed systems.

He builds and directs teams and mentors individuals to improve the way they build, validate, deploy, and run software. He also works extensively with and in open source, collaborating across many projects, including Kubernetes.

About the reviewers

Paul Adamson has worked as an Ops engineer, a developer, a DevOps engineer, and all variations and mixes of these. When not reviewing this book, he keeps busy helping companies embrace the AWS infrastructure. His language of choice is PHP for all the good reasons and even some of the bad ones, but mainly habit. Apart from reviewing this book, he has been working for Healthy Performance Ltd., helping to apply cutting-edge technology to a cutting-edge approach to wellbeing.

Jakub Pavlik is a co-founder, former CTO, and chief architect of tcp cloud who has worked several years on the IaaS cloud platform based on OpenStack-Salt and OpenContrail projects, which were deployed and operated for global large service providers. Currently as the director of product engineering, he collaborates on a new Mirantis Cloud Platform for NFV/SDN, IoT, and big data use cases based on Kubernetes, containerized OpenStack, and OpenContrail. He is a member of the OpenContrail Advisory Board and is also an enthusiast of Linux OS, ice hockey, and films. He loves his wife, Hanulka.


Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Kubernetes for Developers

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Conventions used

Get in touch

Reviews

Setting Up Kubernetes for Development

What you need for development

Optional tools

Getting a local cluster up and running

Resetting and restarting your cluster

Looking at what's built-in and included with Minikube

Verifying Docker

Clearing and cleaning Docker images

Kubernetes concept – container

Kubernetes resource – Pod

Namespaces

Writing your code for Pods and Containers

Kubernetes resource – Node

Networks

Controllers

Kubernetes resource – ReplicaSet

Kubernetes resource – Deployment

Representing Kubernetes resources

Summary

Packaging Your Code to Run in Kubernetes

Container images

Container registries

Making your first container

Dockerfile commands

Example – Python/Flask container image

Building the container

Running your container

Pod name

Port forwarding

Proxy

How did the proxy know to connect to port 5000 on the container?

Getting logs from your application

Example – Node.js/Express container image

Building the container

Running your container

Port forwarding

Proxy

Getting logs from your application

Tagging your container images

Summary

Interacting with Your Code in Kubernetes

Practical notes for writing software to run in a container

Getting options for your executable code

Practical notes for building container images

Sending output from your program

Logs

Pods with more than one container

Streaming the logs

Previous logs

Timestamps

More debugging techniques

Interactive deployment of an image

Attaching to a running Pod

Running a second process in a container

Kubernetes concepts – labels

Organization of labels

Kubernetes concepts – selectors

Viewing labels

Listing resources with labels using kubectl

Automatic labels and selectors

Kubernetes resources – service

Defining a service resource

Endpoints

Service type – ExternalName

Headless service

Discovering services from within your Pod

DNS for services

Exposing services outside the cluster

Service type – LoadBalancer

Service type – NodePort

Minikube service

Example service – Redis

Finding the Redis service

Using Redis from Python

Updating the Flask deployment

Deployments and rollouts

Rollout history

Rollout undo

Updating with the kubectl set command

Summary

Declarative Infrastructure

Imperative versus declarative commands

A wall of YAML

Creating a simple deployment

Declaring your first application

ImagePullPolicy

Audit trail

Kubernetes resource – Annotations

Exposing labels and annotations in Pods

Kubernetes resource – ConfigMap

Creating a ConfigMap

Managing ConfigMaps

Exposing the configuration into your container images

Environment variables

Exposing ConfigMap as files inside the container

Dependencies on ConfigMaps

Kubernetes resource – Secrets

Exposing Secrets into a container

Secrets and security – how secret are the secrets?

Example – Python/Flask deployment with ConfigMap

SIDEBAR – JSONPATH

Using the ConfigMap within Python/Flask

Summary

Pod and Container Lifecycles

Pod lifecycle

Container lifecycle

Deployments, ReplicaSets, and Pods

Getting a snapshot of the current state

Probes 

Liveness probe

Readiness probe

Adding a probe to our Python example

Running the Python probes example

Adding a probe to our Node.js example

Container lifecycle hooks

Initialization containers

Quick interactive testing

Handling a graceful shutdown

SIGTERM in Python

SIGTERM in Node.js

Summary

Background Processing in Kubernetes

Job

CronJob

A worker queue example with Python and Celery

Celery worker example

RabbitMQ and configuration

Celery worker

Persistence with Kubernetes

Volumes

PersistentVolume and PersistentVolumeClaim

Stateful Sets

A Node.js example using Stateful Set

Custom Resource Definition

Summary

Monitoring and Metrics

Built-in metrics with Kubernetes

Kubernetes concept – Quality of Service

Choosing requests and limits for your containers

Capturing metrics with Prometheus

Installing Helm

Installing Prometheus using Helm

Viewing metrics with Prometheus 

Installing Grafana

Using Prometheus to view application metrics

Flask metrics with Prometheus

Node.js metrics with Prometheus

Service signals in Prometheus

Summary

Logging and Tracing

A Kubernetes concept – DaemonSet

Installing and using Elasticsearch, Fluentd, and Kibana

Log aggregation with EFK

Viewing logs using Kibana

Filtering by app

Lucene query language

Running Kibana in production

Distributed tracing with Jaeger

Spans and traces

Architecture of Jaeger distributed tracing

Trying out Jaeger

Example – adding tracing to your application

Adding a tracing collector to your pod

Add the libraries and code to generate traces

Considerations for adding tracing

Summary

Integration Testing

Testing strategies using Kubernetes

Reviewing resources needed for testing

Patterns of using Kubernetes with testing

Tests local and system-under-test in Kubernetes

Tests local and system-under-test in Kubernetes namespaces

Tests in Kubernetes and system-under-test in Kubernetes namespaces

Simple validation with Bats

Example – integration testing with Python

PyTest and pytest-dependency

PyTest fixtures and the python-kubernetes client

Waiting for state changes

Accessing the deployment

Example – integration testing with Node.js

Node.js tests and dependencies with mocha and chai

Validating the cluster health

Deploying with kubectl

Waiting for the pods to become available

Interacting with the deployment

Continuous integration with Kubernetes

Example – using Minikube with Travis.CI

Next steps

Example – using Jenkins and the Kubernetes plugin

Installing Jenkins using Helm

Accessing Jenkins

Updating Jenkins

Example pipeline

Next steps with pipelines

Summary

Troubleshooting Common Problems and Next Steps

Common errors and how to resolve them

Error validating data

Navigating the documentation

ErrImagePull

CrashLoopBackOff

Starting and inspecting the image

Adding your own setup to the container 

No endpoints available for service

Stuck in PodInitializing

Missing resources

Emerging projects for developers

Linters

Helm

ksonnet

Brigade

skaffold

img

Draft

ksync

Telepresence

Interacting with the Kubernetes project

Slack

YouTube

Stack Overflow

Mailing lists and forums

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

It's getting more common to find yourself responsible for running the code you've written as well as developing the features. While many companies still have an operations group (generally retitled to SRE or DevOps) that help with expert knowledge, developers (like you) are often being asked to expand your knowledge and responsibility scope.

There's been a shift toward treating infrastructure like code for some time. Several years ago, I might have described the boundary as: Puppet is used by operations folks, and Chef by developers. All of that changed with the advent and growth of clouds in general, and more recently with the growth of Docker. Containers provide a level of control and isolation, as well as development flexibility, that is very appealing. When using containers, you quickly get to where you want to use more than one container at a time, for isolation of responsibility as well as for horizontal scaling.

Kubernetes is a project open sourced by Google, now hosted by the Cloud Native Computing Foundation. It distills many of the lessons from Google's experience of running software in containers and makes them available to you. It encompasses not only running containers, but also grouping them into services, scaling them horizontally, and providing means to control how these containers interact with each other and how they are exposed to the outside world.

Kubernetes provides a declarative structure backed by an API and command-line tools. Kubernetes can be used on your laptop, or leveraged from one of the many cloud providers. The benefit of using Kubernetes is being able to use the same set of tools with the same expectations, regardless of whether it runs locally, in a small lab at your company, or in any number of larger cloud providers. It's not exactly the write-once, run-anywhere promise of Java from days gone by; it's more a promise of a consistent set of tools, whether you're running on your laptop, in your company's data center, or on a cloud provider such as AWS, Azure, or Google.

This book is your guide to leveraging Kubernetes and its capabilities for developing, validating, and running your code.

This book focuses on examples and samples that take you through how to use Kubernetes and integrate it into your development workflow. Through the examples, we focus on common tasks that you may want to use to take advantage of running your code with Kubernetes.

Who this book is for

If you are a full-stack or backend software developer who's interested in, curious about, or being asked to be responsible for testing and running the code you're developing, you can leverage Kubernetes to make that process simpler and consistent. If you're looking for developer-focused examples in Node.js and Python for how to build, test, deploy, and run your code with Kubernetes, this book is perfect for you.

What this book covers

Chapter 1, Setting Up Kubernetes for Development, covers the installation of kubectl, minikube, and Docker, and running kubectl with minikube to validate your installation. This chapter also provides an introduction to the concepts in Kubernetes of Nodes, Pods, Containers, ReplicaSets, and Deployments.

Chapter 2, Packaging Your Code to Run in Kubernetes, explains how to package your code within containers in order to use Kubernetes with examples in Python and Node.js.

Chapter 3, Interacting with Your Code in Kubernetes, covers how to run containers in Kubernetes, how to access these containers, and introduces the Kubernetes concepts of Services, Labels, and Selectors.

Chapter 4, Declarative Infrastructure, covers expressing your application in a declarative structure, and how to extend that to utilize the Kubernetes concepts of ConfigMaps, Annotations, and Secrets.

Chapter 5, Pod and Container Lifecycles, looks at the life cycle of containers and Pods within Kubernetes, and how to expose hooks from your application to influence how Kubernetes runs your code, and how to terminate your code gracefully.

Chapter 6, Background Processing in Kubernetes, explains the batch processing concepts in Kubernetes of Job and CronJob, and introduces how Kubernetes handles persistence with Persistent Volumes, Persistent Volume Claims, and Stateful Sets.

Chapter 7, Monitoring and Metrics, covers monitoring in Kubernetes, and how to utilize Prometheus and Grafana to capture and display metrics and simple dashboards about Kubernetes in general, as well as your applications.

Chapter 8, Logging and Tracing, explains how to collect logs with Kubernetes using ElasticSearch, FluentD, and Kibana, and how you can set up and use distributed tracing with Jaeger.

Chapter 9, Integration Testing, covers testing strategies that take advantage of Kubernetes, and how to leverage Kubernetes in integration and end-to-end tests.

Chapter 10, Troubleshooting Common Problems and Next Steps, reviews a number of common pain points you may encounter when getting started with Kubernetes and explains how to resolve them, and provides an overview of a number of projects within the Kubernetes ecosystem that may be of interest to developers and the development process.

To get the most out of this book

You need to have the following software and hardware requirements:

Kubernetes 1.8

Docker Community Edition

kubectl 1.8 (part of Kubernetes)

VirtualBox v5.2.6 or higher

minikube v0.24.1

MacBook or Linux machine with 4 GB of RAM or more

Download the example code files

You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

Log in or register at www.packtpub.com.

Select the SUPPORT tab.

Click on Code Downloads & Errata.

Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at the following URLs:

https://github.com/kubernetes-for-developers/kfd-nodejs

https://github.com/kubernetes-for-developers/kfd-flask

https://github.com/kubernetes-for-developers/kfd-celery

In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

 

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system."

A block of code is set as follows:

import signal
import sys

def sigterm_handler(_signo, _stack_frame):
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

import signal

import sys

def sigterm_handler(_signo, _stack_frame):
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)

Any command-line input or output is written as follows:

kubectl apply -f simplejob.yaml

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Select System info from the Administration panel."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

Setting Up Kubernetes for Development

Welcome to Kubernetes for Developers! This chapter starts off by helping you get the tools installed that will allow you to take advantage of Kubernetes in your development. Once installed, we will interact with those tools a bit to verify that they are functional. Then, we will review some of the basic concepts that you will want to understand to effectively use Kubernetes as a developer. We will cover the following key resources in Kubernetes:

Container

Pod

Node

Deployment

ReplicaSet

What you need for development

In addition to your usual editing and programming tools, you will want to install the software to leverage Kubernetes. The focus of this book is to let you do everything on your local development machine, while also allowing you to expand and leverage a remote Kubernetes cluster in the future if you need more resources. One of Kubernetes' benefits is how it treats one or one hundred computers in the same fashion, allowing you to take advantage of the resources you need for your software, and do it consistently, regardless of where they're located.

The examples in this book will use command-line tools in a Terminal on your local machine. The primary one will be kubectl, which communicates with a Kubernetes cluster. We will use a tiny Kubernetes cluster of a single machine running on your own development system with Minikube. I recommend installing the community edition of Docker, which makes it easy to build containers for use within Kubernetes:

kubectl: kubectl (how to pronounce that is an amusing diversion within the Kubernetes community) is the primary command-line tool used to work with a Kubernetes cluster. To install kubectl, go to https://kubernetes.io/docs/tasks/tools/install-kubectl/ and follow the instructions relevant to your platform.

minikube: To install Minikube, go to https://github.com/kubernetes/minikube/releases and follow the instructions for your platform.

docker: To install the community edition of Docker, go to https://www.docker.com/community-edition and follow the instructions for your platform.

Optional tools

In addition to kubectl, minikube, and docker, you may want to take advantage of additional helpful libraries and command-line tools.

jq is a command-line JSON processor that makes it easy to parse results in more complex data structures. I would describe it as grep's cousin that's better at dealing with JSON results. You can install jq by following the instructions at https://stedolan.github.io/jq/download/. More details on what jq does and how to use it can also be found at https://stedolan.github.io/jq/manual/.
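As a quick, hypothetical illustration (the JSON and pod name here are made up, and this assumes jq is on your PATH), jq can pluck individual fields out of the kind of JSON that kubectl emits:

```shell
# jq works on any JSON stream, so you can try it without a cluster.
# Against a live cluster you might run something like:
#   kubectl get pods --all-namespaces -o json | jq -r '.items[].metadata.name'
echo '{"items":[{"metadata":{"name":"demo-pod"}}]}' \
  | jq -r '.items[].metadata.name'
```

The -r flag prints raw strings rather than JSON-quoted values, which is handy when feeding the results into other shell commands.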

Getting a local cluster up and running

Once Minikube and kubectl are installed, get a cluster up and running. It is worthwhile to know the versions of the tools you're using; Kubernetes is a fairly fast-moving project, and if you need assistance from the community, knowing which versions of these common tools you are running will be important.

The versions of Minikube and kubectl I used while writing this are:

Minikube: version 0.22.3

kubectl: version 1.8.0

You can check the version of your copy with the following commands:

minikube version

This will output a version:

minikube version: v0.22.3

If you haven't already done so while following the installation instructions, start a Kubernetes cluster with Minikube. The simplest way is to use the following command:

minikube start

This will download a virtual machine image, start it, and run Kubernetes on it as a single-machine cluster. The output will look something like the following:

Downloading Minikube ISO

106.36 MB / 106.36 MB [============================================] 100.00% 0s

Getting VM IP address...

Moving files into cluster...

Setting up certs...

Connecting to cluster...

Setting up kubeconfig...

Starting cluster components...

Kubectl is now configured to use the cluster.

Minikube will automatically create the files needed for kubectl to access the cluster and control it. Once this is complete, you can get information about the cluster to verify it is up and running.
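As a rough sketch, the generated configuration (stored by default in ~/.kube/config) looks something like the following; the server address and file paths here are illustrative, and yours will differ:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    certificate-authority: /Users/you/.minikube/ca.crt
    server: https://192.168.64.2:8443
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
current-context: minikube
users:
- name: minikube
  user:
    client-certificate: /Users/you/.minikube/client.crt
    client-key: /Users/you/.minikube/client.key
```

You can view the configuration kubectl is actually using at any time with kubectl config view.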

First, you can ask minikube about its status directly:

minikube status

minikube: Running

cluster: Running

kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.2

And if we ask kubectl about its version, it will report both the version of the client and the version of the cluster that it is communicating with:

kubectl version

The first output is the version of the kubectl client:

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}

Immediately after, it will communicate and report the version of Kubernetes on your cluster:

Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-09-11T21:52:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

And we can use kubectl to ask for information about the cluster as well:

kubectl cluster-info

And see something akin to the following:

Kubernetes master is running at https://192.168.64.2:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

This command primarily lets you know the API server that you're communicating with is up and running. We can ask for the specific status of the key internal components using an additional command:

kubectl get componentstatuses

NAME STATUS MESSAGE ERROR

scheduler Healthy ok

etcd-0 Healthy {"health": "true"}

controller-manager Healthy ok

Kubernetes also reports and stores a number of events that you can request to see. These show what is happening within the cluster:

kubectl get events

LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE

2m 2m 1 minikube Node Normal Starting kubelet, minikube Starting kubelet.

2m 2m 2 minikube Node Normal NodeHasSufficientDisk kubelet, minikube Node minikube status is now: NodeHasSufficientDisk

2m 2m 2 minikube Node Normal NodeHasSufficientMemory kubelet, minikube Node minikube status is now: NodeHasSufficientMemory

2m 2m 2 minikube Node Normal NodeHasNoDiskPressure kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure

2m 2m 1 minikube Node Normal NodeAllocatableEnforced kubelet, minikube Updated Node Allocatable limit across pods

2m 2m 1 minikube Node Normal Starting kube-proxy, minikube Starting kube-proxy.

2m 2m 1 minikube Node Normal RegisteredNode controllermanager Node minikube event: Registered Node minikube in NodeController

Resetting and restarting your cluster

If you want to wipe out your local Minikube cluster and restart, it is very easy to do so. Issuing a command to delete and then start Minikube will wipe out the environment and reset it to a blank slate:

minikube delete

Deleting local Kubernetes cluster...
Machine deleted.

minikube start

Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Looking at what's built-in and included with Minikube

With Minikube, you can bring up a web-based dashboard for the Kubernetes cluster with a single command:

minikube dashboard

This will open a browser and show you a web interface to the Kubernetes cluster. If you look at the URL address in the browser window, you'll see that it's pointing to the same IP address that was returned from the kubectl cluster-info command earlier, running on port 30000. The dashboard is running inside Kubernetes, and it is not the only thing that is.

Kubernetes is self-hosting, in that supporting pieces for Kubernetes to function such as the dashboard, DNS, and more, are all run within Kubernetes. You can see the state of all these components by asking about the state of all Pods in the cluster:

kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system kube-addon-manager-minikube 1/1 Running 0 6m

kube-system kube-dns-910330662-6pctd 3/3 Running 0 6m

kube-system kubernetes-dashboard-91nmv 1/1 Running 0 6m

Notice that we used the --all-namespaces option in this command. By default, kubectl will only show you Kubernetes resources that are in the default namespace. Since we haven't run anything ourselves, if we invoked kubectl get pods, we would just get an empty list. Pods aren't the only Kubernetes resource, though; you can ask about quite a number of different resources, some of which I'll describe later in this chapter, and more in further chapters.

For the moment, invoke one more command to get the list of services:

kubectl get services --all-namespaces

This will output all the services:

NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE

default kubernetes 10.0.0.1 <none> 443/TCP 3m

kube-system kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 2m

kube-system kubernetes-dashboard 10.0.0.147 <nodes> 80:30000/TCP 2m

Note that the service named kubernetes-dashboard has a Cluster-IP value and the ports 80:30000. That port configuration indicates that requests arriving on port 30000 of the node are forwarded to port 80 within the containers backing the kubernetes-dashboard service. You may have noticed that the Cluster IP address is very different from the IP address reported for the Kubernetes master that we saw previously in the kubectl cluster-info command.
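To make that mapping concrete, here is a minimal sketch of a Service manifest that would produce an 80:30000 style mapping; the name and selector are illustrative, not the actual dashboard definition:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: example-dashboard
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: example-dashboard
  ports:
  - port: 80        # the port the service exposes on its Cluster-IP
    targetPort: 80  # the port the container is listening on
    nodePort: 30000 # the port opened on each node for outside access
```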

It is important to know that everything within Kubernetes is run on a private, isolated network that is not normally accessible from outside the cluster. We will get into more detail on this in future chapters. For now, just be aware that minikube has some additional, special configuration within it to expose the dashboard.

Verifying Docker

Kubernetes supports multiple ways of running containers, Docker being the most common, and the most convenient. In this book, we will use Docker to help us create images that we will run within Kubernetes.

You can see what version of Docker you have installed and verify it is operational by running the following command:

docker version

Like kubectl, it will report the docker client version as well as the server version, and your output may look something like the following:

Client:

Version: 17.09.0-ce

API version: 1.32

Go version: go1.8.3

Git commit: afdb6d4

Built: Tue Sep 26 22:40:09 2017

OS/Arch: darwin/amd64

Server:

Version: 17.09.0-ce

API version: 1.32 (minimum version 1.12)

Go version: go1.8.3

Git commit: afdb6d4

Built: Tue Sep 26 22:45:38 2017

OS/Arch: linux/amd64

Experimental: false

By using the docker images command, you can see what container images are available locally, and using the docker pull command, you can request specific images. In our examples in the next chapter, we will be building upon the alpine container image to host our software, so let's go ahead and pull that image to verify that your environment is working:

docker pull alpine

Using default tag: latest

latest: Pulling from library/alpine

Digest: sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d

Status: Image is up to date for alpine:latest

You can then see the images using the following command:

docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

alpine latest 76da55c8019d 3 weeks ago 3.97MB

If you get an error when trying to pull the alpine image, it may mean that you are required to work through a proxy, or otherwise have constrained access to the internet to pull images as you need. You may need to review Docker's information on how to set up and use a proxy if you are in this situation.
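If you are behind a proxy, one documented approach on Linux systems using systemd is to pass the proxy settings to the Docker daemon through a systemd drop-in file. The following is a sketch only; the proxy address shown is a hypothetical example, and you should substitute your own:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# Proxy environment for the Docker daemon (hypothetical proxy address).
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After creating the file, reload systemd and restart Docker (systemctl daemon-reload, then systemctl restart docker) so the daemon picks up the new environment.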

Clearing and cleaning Docker images

Since we will be using Docker to build container images, it will be useful to know how to get rid of images. You have already seen the list of images with the docker images command. There are also intermediate images that are maintained by Docker that are hidden in that output. To see all the images that Docker is storing, use the following command:

docker images -a

If you have only pulled the alpine image as per the preceding text, you likely won't see any additional images, but as you build images in the next chapter, this list will grow.

You can remove images with the docker rmi command followed by the name of the image. By default, Docker will refuse to remove images that are still referenced by containers, even recently stopped ones. Because of this, you may need to force the removal to clean up the images.

If you want to reset and remove all the images and start afresh, there is a handy command that will do that. By tying together docker images and docker rmi, we can ask it to force remove all the images it knows about:

docker rmi -f $(docker images -a -q)

Kubernetes concept – container

Kubernetes (and other technologies in this space) are all about managing and orchestrating containers. A container is really a name wrapped around a set of Linux technologies, the two most prominent being the container image format and the way Linux can isolate processes from one another, leveraging kernel features such as cgroups and namespaces.

For all practical purposes, when someone is speaking of a container, they are generally implying that there is an image with everything needed to run a single process. In this context, a container is not only the image, but also the information about what to invoke and how to run it. Containers also act like they have their own network access. In reality, it's being shared by the Linux operating system that's running the containers.

When we want to write code to run under Kubernetes, we will always be talking about packaging it up and preparing it to run within a container. The more complex examples later in the book will utilize multiple containers all working together.

It is quite possible to run more than a single process inside a container, but that's generally frowned upon as a container is ideally suited to represent a single process and how to invoke it, and shouldn't be considered the same thing as a full virtual machine.

If you usually develop in Python, then you are likely familiar with using something like pip to download libraries and modules that you need, and you invoke your program with a command akin to python your_file. If you're a Node developer, then it is more likely you're familiar with npm or yarn to install the dependencies you need, and you run your code with node your_file.

If you wanted to wrap that all up and run it on another machine, you would likely either redo all the instructions for downloading the libraries and running the code, or perhaps ZIP up the whole directory and move it where you want to run it. A container is a way to collect all the information together into a single image so that it can be easily moved around, installed, and run on a Linux operating system. Originally created by Docker, the specifications are now maintained by the Open Container Initiative (OCI) (https://www.opencontainers.org).
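The instructions for building such an image are captured in a Dockerfile. As a minimal sketch, assuming a hypothetical Python script named your_file.py, an image built on alpine might look like this:

```dockerfile
# A minimal Dockerfile sketch; the file name here is a hypothetical example.
# Start from the alpine base image we pulled earlier in this chapter.
FROM alpine
# Install the Python runtime from Alpine's package repository.
RUN apk add --no-cache python3
# Copy the application code into the image.
COPY your_file.py /app/your_file.py
# Record how to invoke the single process this container represents.
CMD ["python3", "/app/your_file.py"]
```

Running docker build with this file produces an image that carries the runtime, the code, and the invocation together, ready to be moved and run anywhere.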

While a container is the smallest building block of what goes into Kubernetes, the smallest unit that Kubernetes works with is a Pod.

Kubernetes resource – Pod

A Pod is the smallest unit that Kubernetes manages and is the fundamental unit that the rest of the system is built on. The team that created Kubernetes found it worthwhile to let a developer specify what processes should always be run together on the same OS, and that the combination of processes running together should be the unit that's scheduled, run, and managed.

Earlier in this chapter, you saw that a basic instance of Kubernetes has some of its software running in Pods. Much of Kubernetes is run using these same concepts and abstractions, allowing Kubernetes to self-host its own software. Some of the software to run a Kubernetes cluster is managed outside the cluster itself, but more and more leverage the concept of Pods, including the DNS services, dashboard, and controller manager, which coordinate all the control operations through Kubernetes.

A Pod is made up of one or more containers and information associated with those containers. When you ask Kubernetes about a Pod, it will return a data structure that includes a list of one or more containers, along with a variety of metadata that Kubernetes uses to coordinate the Pod with other Pods, and policies of how Kubernetes should act and react if the program fails, is asked to be restarted, and so forth. The metadata can also define things such as affinity, which influences where a Pod can be scheduled in a cluster, expectations around how to get the container images, and more. It is important to know that a Pod is not intended to be treated as a durable, long-lived entity.

They are created and destroyed and essentially meant to be ephemeral. This allows separate logic, contained in controllers, to manage responsibilities such as scale and availability. It is this separation of duties that enables Kubernetes to provide a means for self-healing in the event of failures, and provide some auto-scaling capabilities.
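As a sketch of the data structure described above, a minimal Pod can be declared in YAML. The names here are hypothetical examples; the image is the alpine image pulled earlier in this chapter:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: example
spec:
  containers:
  - name: main
    image: alpine:latest   # the image pulled earlier in this chapter
    command: ["sleep", "3600"]   # what to invoke inside the container
```

Saving this to a file and running kubectl create -f against it would ask Kubernetes to schedule and run the Pod.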

A Pod being run by Kubernetes has a few specific guarantees:

All the containers for a Pod will be run on the same Node

Any container running within a Pod will share a single network namespace with any other containers in the same Pod, meaning they share an IP address and port space

Containers within a Pod can share files through volumes, attached to the containers

A Pod has an explicit life cycle, and will always remain on the Node on which it was started

For all practical purposes, when you want to know what's running on a Kubernetes cluster, you are generally going to want to know about the Pods running within Kubernetes and their state.

Kubernetes maintains and reports on the Pod's status, as well as the state of each of the containers that make up the Pod. The states for a container are Running, Terminated, and Waiting. The life cycle of a Pod is a bit more complicated, consisting of a strictly defined Phase and a set of PodStatus. Phase is one of Pending, Running, Succeeded, Failed, or Unknown, and the specific details of what's included in a Phase is documented at https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase.

A Pod can also contain Probes, which actively check the container for some status information. Two common probes that are deployed and used by Kubernetes controllers are a livenessProbe and a readinessProbe. The livenessProbe defines whether the container is up and running. If it isn't, the infrastructure in Kubernetes kills the relevant container and then applies the restart policy defined for the Pod. The readinessProbe is meant to indicate whether the container is ready to service requests. The results of the readinessProbe are used in conjunction with other Kubernetes mechanisms such as services (which we will detail later) to forward traffic to the relevant container. In general, the probes are set up to allow the software in a container to provide a feedback loop to Kubernetes. You can find more detail on Probes, how to define them, and how they are used at https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes. We will dig into probes in detail in a future chapter.
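As a sketch of how probes are declared, the following Pod manifest defines both a livenessProbe and a readinessProbe using HTTP checks. The image name, port, and paths are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod             # hypothetical name
spec:
  containers:
  - name: web
    image: example/web:1.0     # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:             # if this fails, Kubernetes restarts the container
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10  # give the process time to start
      periodSeconds: 5
    readinessProbe:            # while this fails, services withhold traffic
      httpGet:
        path: /ready           # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5
```

The two probes serve different purposes: failing a livenessProbe causes a restart, while failing a readinessProbe only removes the Pod from service endpoints until it recovers.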

Namespaces

Pods are collected into namespaces, which are used to group Pods together for a variety of purposes. You already saw one example of namespaces when we asked for the status of all the Pods in the cluster with the --all-namespaces option earlier.

Namespaces can be used to provide quotas and limits around resource usage, have an impact on DNS names that Kubernetes creates internal to the cluster, and in the future may impact access control policies. If no namespace is specified when interacting with Kubernetes through kubectl, the command assumes you are working with the default namespace, named default.
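A namespace is itself a resource that can be declared in YAML. As a minimal sketch (the namespace name is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development   # hypothetical namespace name
```

Once created, you can scope kubectl commands to it with the --namespace option, for example kubectl get pods --namespace development.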

Writing your code for Pods and Containers

One of the keys to successfully using Kubernetes is to consider how you want your code to operate, and to structure it so that it fits cleanly into a structure of Pods and Containers. By structuring your software solutions to break problems down into components that operate with the constraints and guarantees that Kubernetes provides, you can easily take advantage of parallelism and container orchestration to use many machines as seamlessly as you would use a single machine.

The guarantees and abstractions that Kubernetes provides are reflective of years of experience that Google (and others) have had in running their software and services at a massive scale, reliably, and redundantly, leveraging the pattern of horizontal scaling to tackle massive problems.

Kubernetes resource – Node

A Node is a machine, typically running Linux, that has been added to the Kubernetes cluster. It can be a physical machine or a virtual machine. In the case of minikube, it is a single virtual machine that is running all the software for Kubernetes. In larger Kubernetes clusters, you may have one or several machines dedicated to just managing the cluster and separate machines where your workloads run. Kubernetes manages its resources across Nodes by tracking their resource usage, scheduling, starting (and if needed, restarting) Pods, as well as coordinating the other mechanisms that connect Pods together or expose them outside the cluster.

Nodes can (and do) have metadata associated with them so that Kubernetes can be aware of relevant differences, and can account for those differences when scheduling and running Pods. Kubernetes can support a wide variety of machines working together, and run software efficiently across all of them, or limit scheduling Pods to only machines that have the required resources (for example, a GPU).
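One common way to use that metadata is a nodeSelector in a Pod specification, which restricts scheduling to Nodes carrying a matching label. The label and image names below are hypothetical examples; such a label could be applied with a command like kubectl label nodes <node-name> hardware=gpu:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                  # hypothetical name
spec:
  nodeSelector:
    hardware: gpu                # only schedule on Nodes with this label
  containers:
  - name: trainer
    image: example/trainer:1.0   # hypothetical image
```

Kubernetes will leave such a Pod in the Pending phase until a Node with the requested label and sufficient resources is available.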

Networks

We previously mentioned that all the containers in a Pod share the Pod's network. In addition, all Nodes in a Kubernetes cluster are expected to be connected to each other and share a private cluster-wide network. When Kubernetes runs containers within a Pod, it does so within this isolated network. Kubernetes is responsible for handling IP addresses, creating DNS entries, and making sure that a Pod can communicate with another Pod in the same Kubernetes cluster.

Another resource, Services, which we will dig into later, is what Kubernetes uses to expose Pods to one another over this private network or handle connections in and out of the cluster. By default, a Pod running in this private, isolated network is not exposed outside of the Kubernetes cluster. Depending on how your Kubernetes cluster was created, there are multiple avenues for opening up access to your software from outside the cluster, which we'll detail later with Services that include LoadBalancer, NodePort, and Ingress.
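As a preview, the kind of port mapping we saw on the kubernetes-dashboard service can be declared with a Service of type NodePort. This is a sketch only; the names, labels, and ports are hypothetical examples:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # hypothetical name
spec:
  type: NodePort
  selector:
    app: example           # routes to Pods carrying this label
  ports:
  - port: 80               # the port the service exposes inside the cluster
    targetPort: 8080       # the port the container actually listens on
    nodePort: 30000        # exposed on this port of every Node
```

This would render as 80:30000/TCP in the kubectl get services output, just as the dashboard service did earlier.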

Controllers

Kubernetes is built with the notion that you tell it what you want, and it knows how to do it. When you interact with Kubernetes, you are asserting you want one or more resources to be in a certain state, with specific versions, and so forth. Controllers are where the brains exist for tracking those resources and attempting to run your software as you described. These descriptions can include how many copies of a container image are running, updating the software version running within a Pod, and handling the case of a Node failure where you unexpectedly lose part of your cluster.

There are a variety of controllers used within Kubernetes, and they are mostly hidden behind two key resources that we will dig into further: Deployments and ReplicaSets.

Kubernetes resource – ReplicaSet

A ReplicaSet wraps Pods, defining how many need to run in parallel. A ReplicaSet is commonly wrapped in turn by a deployment. ReplicaSets are not often used directly, but are critical to represent horizontal scaling—to represent the number of parallel Pods to run.

A ReplicaSet is associated with a Pod and indicates how many instances of that Pod should be running within the cluster. A ReplicaSet also implies that Kubernetes has a controller that watches the ongoing state and knows how many copies of your Pod to keep running. This is where Kubernetes really starts to do work for you: if you specified three Pods in a ReplicaSet and one fails, Kubernetes will automatically schedule and run another Pod for you.
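A ReplicaSet manifest embeds a Pod template along with the desired replica count. As a sketch (the names and image are hypothetical examples):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs             # hypothetical name
spec:
  replicas: 3                  # keep three copies of this Pod running
  selector:
    matchLabels:
      app: example             # must match the template's labels
  template:                    # the Pod to replicate
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
```

If one of the three Pods dies, the ReplicaSet controller notices the mismatch between desired and actual state and creates a replacement.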

Kubernetes resource – Deployment

The most common and recommended way to run code on Kubernetes is with a deployment, which is managed by a deployment controller. We will explore deployments in the next and further chapters, both specifying them directly and creating them implicitly with commands such as kubectl run.

A Pod by itself is interesting, but limited, specifically because it is intended to be ephemeral. If a Node were to die (or get powered down), all the Pods on that Node would stop running. ReplicaSets provide self-healing capabilities: they work within the cluster to recognize when a Pod is no longer available and will attempt to schedule another Pod, typically to bring a service back online, or otherwise continue doing work.

The deployment controller wraps around and extends the ReplicaSet controller, and is primarily responsible for rolling out software updates and managing the process of that rollout when you update your deployment resource with new versions of your software. The deployment controller includes metadata settings to know how many Pods to keep running so that you can enable a seamless rolling update of your software by adding new versions of a container, and stopping old versions when you request it.
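A Deployment manifest looks much like a ReplicaSet, with the deployment controller layering rollout behavior on top. As a sketch (the names and image are hypothetical examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # hypothetical name
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
```

Updating the image tag in this resource and re-submitting it is what triggers a rolling update: the deployment controller brings up Pods with the new version and retires the old ones as they become ready.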

Representing Kubernetes resources

Kubernetes resources can generally be represented as either a JSON or YAML data structure. Kubernetes is specifically built so that you can save these files, and when you want to run your software, you can use a command such as kubectl apply and provide the definitions you've created previously, and it uses that to run your software. In our next chapter, we will start to show specific examples of these resources and build them up for our use.