Learn how to automate and manage your containers and reduce the overall operational burden on your system.
Kubernetes is an open source orchestration platform for managing containers in a cluster environment. With Kubernetes, you can configure and deploy containerized applications easily. This book gives you a quick brush-up on how Kubernetes works with containers, and an overview of the main Kubernetes concepts, such as Pods, Deployments, and Services.
This book explains how to create Kubernetes clusters and run applications with proper authentication and authorization configurations. With real-world recipes, you'll learn how to create highly available Kubernetes clusters on AWS, GCP, and in on-premises datacenters with a proper logging and monitoring setup. You'll also learn some useful tips about how to build a continuous delivery pipeline for your application. Upon completion of this book, you will be able to use Kubernetes in production and will have a better understanding of how to manage containers using Kubernetes.
This book is for system administrators, developers, DevOps engineers, or any stakeholder who wants to understand how Kubernetes works using a recipe-based approach. Basic knowledge of Kubernetes and containers is required.
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Gebin George
Acquisition Editor: Divya Poojari
Content Development Editor: Dattatraya More
Technical Editor: Sayali Thanekar
Copy Editor: Safis Editing
Project Coordinator: Shweta H Birwatkar
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Graphics: Jisha Chirayil
Production Coordinator: Deepika Naik
First published: June 2016
Second edition: May 2018
Production reference: 1290518
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78883-760-6
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Hideto Saito has around 20 years of experience in the computer industry. In 1998, while working for Sun Microsystems Japan, he was impressed by Solaris OS, OPENSTEP, and the Sun Ultra Enterprise 10000 (also known as StarFire). He then decided to pursue UNIX and macOS operating systems. In 2006, he relocated to southern California as a software engineer to develop products and services running on Linux and Mac OS X. He was especially renowned for his quick Objective-C code when he was drunk. He is also an enthusiast of Japanese anime, drama, and motorsports, and he loves Japanese Otaku culture.
Hui-Chuan Chloe Lee is a DevOps and software developer. She has worked in the software industry on a wide range of projects for over five years. As a technology enthusiast, she loves trying and learning about new technologies, which makes her life happier and more fulfilling. In her free time, she enjoys reading, traveling, and spending time with the people she loves.
Ke-Jou Carol Hsu has three years of experience working as a software engineer and is currently a PhD student in the area of computer systems. Not only is she involved in programming, she also enjoys getting multiple applications and machines working perfectly together to solve big problems. In her free time, she loves movies, music, cooking, and working out.
Stefan Lapers started his career almost 20 years ago as a support engineer and quickly grew into Linux/Unix system engineering, security, and network positions. Over the years, he accumulated experience in developing, deploying, and maintaining hosted applications while working for great customers, such as MTV and TMF. In his spare time, he enjoys spending time with his family, tinkering with electronics, and flying model helicopters.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Kubernetes Cookbook Second Edition
Packt Upsell
Why subscribe?
PacktPub.com
Contributors
About the authors
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Sections
Getting ready
How to do it...
How it works...
There's more...
See also
Get in touch
Reviews
Building Your Own Kubernetes Cluster
Introduction
Exploring the Kubernetes architecture
Getting ready
How to do it...
Kubernetes master
API server (kube-apiserver)
Scheduler (kube-scheduler)
Controller manager (kube-controller-manager)
Command-line interface (kubectl)
Kubernetes node
kubelet
Proxy (kube-proxy)
How it works...
etcd
Kubernetes network
See also
Setting up the Kubernetes cluster on macOS by minikube
Getting ready
How to do it...
How it works...
See also
Setting up the Kubernetes cluster on Windows by minikube
Getting ready
How to do it...
How it works...
See also
Setting up the Kubernetes cluster on Linux via kubeadm
Getting ready
How to do it...
Package installation
Ubuntu
CentOS
System configuration prerequisites
CentOS system settings
Booting up the service
Network configurations for containers
Getting a node involved
How it works...
See also
Setting up the Kubernetes cluster on Linux via Ansible (kubespray)
Getting ready
Installing pip
Installing Ansible
Installing python-netaddr
Setting up ssh public key authentication
How to do it...
Maintaining the Ansible inventory
Running the Ansible ad hoc command to test your environment
Ansible troubleshooting
Need to specify a sudo password
Need to specify different ssh logon user
Need to change ssh port
Common Ansible issues
How it works...
See also
Running your first container in Kubernetes
Getting ready
How to do it...
Running an HTTP server (nginx)
Exposing the port for external access
Stopping the application
How it works…
See also
Walking through Kubernetes Concepts
Introduction
An overview of Kubernetes
Linking Pods and containers
Getting ready
How to do it...
How it works...
See also
Managing Pods with ReplicaSets 
Getting ready
How to do it...
Creating a ReplicaSet
Getting the details of a ReplicaSet
Changing the configuration of a ReplicaSet
Deleting a ReplicaSet
How it works...
There's more...
See also
Deployment API
Getting ready
How to do it...
How it works...
Using kubectl set to update the container image
Updating the YAML and using kubectl apply
See also
Working with Services
Getting ready
How to do it...
Creating a Service for different resources
Creating a Service for a Pod
Creating a Service for a Deployment with an external IP
Creating a Service for an Endpoint without a selector
Creating a Service for another Service with session affinity
Deleting a Service
How it works...
There's more...
See also
Working with volumes
Getting ready
How to do it...
emptyDir
hostPath
NFS
glusterfs
downwardAPI
gitRepo
There's more...
PersistentVolumes
Using storage classes
gcePersistentDisk
awsElasticBlockStore
See also
Working with Secrets
Getting ready
How to do it...
Creating a Secret
Working with kubectl create command line
From a file
From a directory
From a literal value
Via configuration file
Using Secrets in Pods
By environment variables
By volumes
Deleting a Secret
How it works...
There's more...
Using ConfigMaps
Mounting Secrets and ConfigMap in the same volume
See also
Working with names
Getting ready
How to do it...
How it works...
See also
Working with Namespaces
Getting ready
How to do it...
Creating a Namespace
Changing the default Namespace
Deleting a Namespace
How it works…
There's more...
Creating a LimitRange
Deleting a LimitRange
See also
Working with labels and selectors
Getting ready
How to do it...
How it works...
Equality-based label selector
Set-based label selector
There's more...
Linking Service to Pods or ReplicaSets using label selectors
Linking Deployment to ReplicaSet using the set-based selector
See also
Playing with Containers
Introduction
Scaling your containers
Getting ready
How to do it...
Scale up and down manually with the kubectl scale command
Horizontal Pod Autoscaler (HPA)
How it works...
There is more…
See also
Updating live containers
Getting ready
How to do it...
Deployment update strategy – rolling-update
Rollback the update
Deployment update strategy – recreate
How it works...
There's more...
See also
Forwarding container ports
Getting ready
How to do it...
Container-to-container communication
Pod-to-Pod communication
Working with NetworkPolicy
Pod-to-Service communication
External-to-internal communication
Working with Ingress
There's more...
See also
Ensuring flexible usage of your containers
Getting ready
How to do it...
Pod as DaemonSets
Running a stateful Pod
How it works...
Pod recovery by DaemonSets
Pod recovery by StatefulSet
There's more...
See also
Submitting Jobs on Kubernetes
Getting ready
How to do it...
Pod as a single Job
Create a repeatable Job
Create a parallel Job
Schedule to run Job using CronJob
How it works...
See also
Working with configuration files
Getting ready
YAML
JSON
How to do it...
How it works...
Pod
Deployment
Service
See also
Building High-Availability Clusters
Introduction
Clustering etcd 
Getting ready
How to do it...
Static mechanism
Discovery mechanism
kubeadm
kubespray
Kops
See also
Building multiple masters
Getting ready
How to do it...
Setting up the first master
Setting up the other master with existing certifications
Adding nodes in a HA cluster
How it works...
See also
Building Continuous Delivery Pipelines
Introduction
Moving monolithic to microservices
Getting ready
How to do it...
Microservices
Frontend WebUI
How it works...
Microservices
Frontend WebUI
Working with the private Docker registry
Getting ready
Using Kubernetes to run a Docker registry server
Using Amazon elastic container registry
Using Google cloud registry
How to do it...
Launching a private registry server using Kubernetes
Creating a self-signed SSL certificate
Creating HTTP secret
Creating the HTTP basic authentication file
Creating a Kubernetes secret to store security files
Configuring a private registry to load a Kubernetes secret
Create a repository on the AWS elastic container registry
Determining your repository URL on Google container registry
How it works...
Push and pull an image from your private registry
Push and pull an image from Amazon ECR
Push and pull an image from Google cloud registry
Using gcloud to wrap the Docker command
Using the GCP service account to grant a long-lived credential
Integrating with Jenkins
Getting ready
How to do it...
Setting up a custom Jenkins image
Setting up Kubernetes service account and ClusterRole
Launching the Jenkins server via Kubernetes deployment
How it works...
Using Jenkins to build a Docker image
Deploying the latest container image to Kubernetes
Building Kubernetes on AWS
Introduction
Playing with Amazon Web Services
Getting ready
Creating an IAM user
Installing AWS CLI on macOS
Installing AWS CLI on Windows
How to do it...
How it works...
Creating VPC and Subnets
Internet gateway
NAT-GW
Security group
EC2
Setting up Kubernetes with kops
Getting ready
How to do it...
How it works...
Working with kops-built AWS cluster
Deleting kops-built AWS cluster
See also
Using AWS as Kubernetes Cloud Provider
Getting ready
How to do it...
Elastic load balancer as LoadBalancer service
Elastic Block Store as StorageClass
There's more...
Managing Kubernetes cluster on AWS by kops
Getting ready
How to do it...
Modifying and resizing instance groups
Updating nodes
Updating masters
Upgrading a cluster
There's more...
See also
Building Kubernetes on GCP
Playing with GCP
Getting ready
Creating a GCP project
Installing Cloud SDK
Installing Cloud SDK on Windows
Installing Cloud SDK on Linux and macOS
Configuring Cloud SDK
How to do it...
Creating a VPC
Creating subnets
Creating firewall rules
Adding your ssh public key to GCP
How it works...
Launching VM instances
Playing with Google Kubernetes Engine
Getting ready
How to do it…
How it works…
See also
Exploring CloudProvider on GKE
Getting ready
How to do it…
StorageClass
Service (LoadBalancer)
Ingress 
There's more…
See also
Managing Kubernetes clusters on GKE
Getting ready
How to do it…
Node pool
Multi-zone and regional clusters
Multi-zone clusters
Regional clusters
Cluster upgrades
See also
Advanced Cluster Administration
Introduction
Advanced settings in kubeconfig
Getting ready
How to do it...
Setting new credentials
Setting new clusters
Setting contexts and changing current-context
Cleaning up kubeconfig
There's more...
See also
Setting resources in nodes
Getting ready
How to do it...
Configuring a BestEffort pod
Configuring a Guaranteed pod
Configuring a Burstable pod
How it works...
See also
Playing with WebUI
Getting ready
How to do it...
Relying on the dashboard created by minikube
Creating a dashboard manually on a system using other booting tools
How it works...
Browsing your resource by dashboard
Deploying resources by dashboard
Removing resources by dashboard
See also
Working with the RESTful API
Getting ready
How to do it...
How it works...
There's more...
See also
Working with Kubernetes DNS
Getting ready
How to do it...
DNS for pod
DNS for Kubernetes Service
DNS for StatefulSet
How it works...
Headless service when pods scale out
See also
Authentication and authorization
Getting ready
How to do it...
Authentication
Service account token authentication
X509 client certs
OpenID connect tokens
Authorization
Role and RoleBinding
ClusterRole and ClusterRoleBinding
Role-based access control (RBAC)
Admission control
NamespaceLifecycle
LimitRanger
ServiceAccount
PersistentVolumeLabel (deprecated from v1.8)
DefaultStorageClass
DefaultTolerationSeconds
ResourceQuota
DenyEscalatingExec
AlwaysPullImages
There's more…
Initializers (alpha)
Webhook admission controllers (beta in v1.9)
See also
Logging and Monitoring
Introduction
Working with EFK
Getting ready
How to do it...
Setting up EFK with minikube
Setting up EFK with kubespray
Setting up EFK with kops
How it works...
There's more...
See also
Working with Google Stackdriver
Getting ready
How to do it...
How it works...
See also
Monitoring master and node
Getting ready
How to do it...
How it works...
Introducing the Grafana dashboard
Creating a new metric to monitor Pod
There's more...
Monitoring your Kubernetes cluster on AWS
Monitoring your Kubernetes cluster on GCP
See also
Other Books You May Enjoy
Leave a review - let other readers know what you think
With the trend toward microservices architecture in recent years, monolithic applications are being refactored into multiple microservices. Containers simplify the deployment of applications built from microservices. Container management, automation, and orchestration have become crucial problems, and Kubernetes is here to solve them.
This book is a practical guide that provides step-by-step tips and examples to help you build and run your own Kubernetes cluster in both private and public clouds. Following along with the book will lead you to an understanding of how to deploy and manage your applications and services in Kubernetes. You will also gain a deep understanding of how to scale and update live containers, and how to do port forwarding and network routing in Kubernetes. You will learn how to build a robust, highly available cluster with the book's hands-on examples. Finally, you will build a Continuous Delivery pipeline by integrating Jenkins, Docker registry, and Kubernetes.
If you've been playing with Docker containers for a while and want to orchestrate your containers in a modern way, this book is the right choice for you. This book is for those who already understand Docker and container technology, and want to explore further to find better ways to orchestrate, manage, and deploy containers. This book is perfect for going beyond a single container and working with container clusters, learning how to build your own Kubernetes, and making it work seamlessly with your Continuous Delivery pipeline.
Chapter 1, Building Your Own Kubernetes Cluster, explains how to build your own Kubernetes cluster with various deployment tools and run your first container on it.
Chapter 2, Walking through Kubernetes Concepts, covers both basic and advanced concepts we need to know about Kubernetes. Then, you will learn how to combine them to create Kubernetes objects by writing and applying configuration files.
Chapter 3, Playing with Containers, explains how to scale your containers up and down and perform rolling updates without affecting application availability. Furthermore, you will learn how to deploy containers to deal with different application workloads. It will also walk you through best practices for configuration files.
Chapter 4, Building High-Availability Clusters, provides information on how to build High Availability Kubernetes master and etcd. This will prevent Kubernetes components from being the single point of failure.
Chapter 5, Building Continuous Delivery Pipelines, talks about how to integrate Kubernetes into an existing Continuous Delivery pipeline with Jenkins and private Docker registry.
Chapter 6, Building Kubernetes on AWS, walks you through AWS fundamentals. You will learn how to build a Kubernetes cluster on AWS in a few minutes.
Chapter 7, Building Kubernetes on GCP, leads you to the Google Cloud Platform world. You will learn the GCP essentials and how to launch a managed, production-ready Kubernetes cluster with just a few clicks.
Chapter 8, Advanced Cluster Administration, talks about important resource management in Kubernetes. This chapter also goes through other important cluster administration, such as Kubernetes dashboard, authentication, and authorization.
Chapter 9, Logging and Monitoring, explains how to collect both system and application logs in Kubernetes by using Elasticsearch, Fluentd, and Kibana (EFK). You will also learn how to leverage Heapster, InfluxDB, and Grafana to monitor your Kubernetes cluster.
Throughout the book, we use at least three servers with a Linux-based OS to build all of the components in Kubernetes. At the beginning of the book, you could use one machine, whether it is Linux or Windows, to learn about the concepts and basic deployment. From a scalability point of view, we recommend you start with three servers in order to scale out the components independently and push your cluster to the production level.
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Log in or register at www.packtpub.com.
Select the SUPPORT tab.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Kubernetes-Cookbook-Second-Edition. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/KubernetesCookbookSecondEdition_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Prepare the following YAML file, which is a simple Deployment that launches two nginx containers."
A block of code is set as follows:
# cat 3-1-1_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
Annotations: deployment.kubernetes.io/revision=1
Selector: env=test,project=My-Happy-Web,role=frontend
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
Any command-line input or output is written as follows:
//install kubectl command by "kubernetes-cli" package
$ brew install kubernetes-cli
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Installation is straightforward, so we can just choose the default options and click Next."
In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.
This section contains the steps required to follow the recipe.
This section usually consists of a detailed explanation of what happened in the previous section.
This section consists of additional information about the recipe in order to make you more knowledgeable about the recipe.
This section provides helpful links to other useful information for the recipe.
Feedback from our readers is always welcome.
General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
In this chapter, we will cover the following recipes:
Exploring the Kubernetes architecture
Setting up a Kubernetes cluster on macOS by minikube
Setting up a Kubernetes cluster on Windows by minikube
Setting up a Kubernetes cluster on Linux by kubeadm
Setting up a Kubernetes cluster on Linux by Ansible (kubespray)
Running your first container in Kubernetes
Welcome to your journey into Kubernetes! In this very first section, you will learn how to build your own Kubernetes cluster. Along with understanding each component and connecting them together, you will learn how to run your first container on Kubernetes. Having a Kubernetes cluster will help you continue your studies in the chapters ahead.
Kubernetes is an open source container management tool. It is a Go language-based (https://golang.org), lightweight and portable application. You can set up a Kubernetes cluster on a Linux-based OS to deploy, manage, and scale Docker container applications on multiple hosts.
Kubernetes is made up of the following components:
Kubernetes master
Kubernetes nodes
etcd
Kubernetes network
These components are connected via a network, as shown in the following diagram:
The preceding diagram can be summarized as follows:
Kubernetes master: it connects to etcd via HTTP or HTTPS to store the cluster data
Kubernetes nodes: they connect to the Kubernetes master via HTTP or HTTPS to receive commands and report their status
Kubernetes network: an L2, L3, or overlay network that connects the container applications
In this section, we are going to explain how to use the Kubernetes master and nodes to realize the main functions of the Kubernetes system.
The Kubernetes master is the main component of the Kubernetes cluster. It serves several functionalities, such as the following:
Authorization and authentication
RESTful API entry point
Container deployment scheduler to Kubernetes nodes
Scaling and replicating controllers
Reading the configuration to set up a cluster
The following diagram shows how master daemons work together to fulfill the aforementioned functionalities:
There are several daemon processes that form the Kubernetes master's functionality, such as kube-apiserver, kube-scheduler, and kube-controller-manager. The hyperkube wrapper binary can launch all of these daemons.
In addition, the Kubernetes command-line interface, kubectl, can control the Kubernetes master functionality.
The API server provides an HTTP- or HTTPS-based RESTful API, which is the hub between Kubernetes components, such as kubectl, the scheduler, the replication controller, the etcd data store, and the kubelet and kube-proxy processes that run on the Kubernetes nodes.
The scheduler helps to choose which node each container runs on. It uses a simple algorithm that defines the priority for dispatching and binding containers to nodes, based on factors such as the following:
CPU
Memory
The number of containers already running on the node
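As a concrete illustration of these factors, a Pod can declare resource requests that the scheduler takes into account when choosing a node with enough free CPU and memory. The following minimal manifest is a sketch; the names are illustrative and not taken from this book's code bundle:

```yaml
# A Pod that requests 100 millicores of CPU and 128 MiB of memory.
# The scheduler will only bind this Pod to a node whose remaining
# allocatable CPU and memory can satisfy these requests.
apiVersion: v1
kind: Pod
metadata:
  name: requests-demo
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
```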
The controller manager performs cluster operations. For example:
Manages Kubernetes nodes
Creates and updates the Kubernetes internal information
Attempts to change the current status to the desired status
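To illustrate the desired-state idea, consider a Deployment manifest such as the following minimal sketch (the names are illustrative). It declares that three nginx replicas should exist; if a Pod dies, the controller manager notices the gap between the current status and this desired status and creates a replacement:

```yaml
# Desired state: three replicas of nginx. The controller manager
# keeps creating or removing Pods until the current state matches.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```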
After you install the Kubernetes master, you can use the Kubernetes command-line interface, kubectl, to control the Kubernetes cluster. For example, kubectl get cs returns the status of each component. Also, kubectl get nodes returns a list of Kubernetes nodes:
//see the Component Statuses
# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok nil
scheduler Healthy ok nil
etcd-0 Healthy {"health": "true"} nil
//see the nodes
# kubectl get nodes
NAME LABELS STATUS AGE
kub-node1 kubernetes.io/hostname=kub-node1 Ready 26d
kub-node2 kubernetes.io/hostname=kub-node2 Ready 26d
The Kubernetes node is a slave node in the Kubernetes cluster. It is controlled by the Kubernetes master to run container applications using Docker (http://docker.com) or rkt (http://coreos.com/rkt/docs/latest/). In this book, we will use the Docker container runtime as the default engine.
Node or slave?
The term slave is used in the computer industry to represent the cluster worker node; however, it is also associated with discrimination. The Kubernetes project used minion in early versions and uses node in the current version.
The following diagram displays the role and tasks of daemon processes in the node:
The node also has two daemon processes, named kubelet and kube-proxy, to support its functionalities.
kubelet is the main process on the Kubernetes node that communicates with the Kubernetes master to handle the following operations:
Periodically accesses the API server to check and report the node's status
Performs container operations
Runs the HTTP server to provide simple APIs
kube-proxy handles the network proxying and load balancing for each container. It changes the Linux iptables rules (the nat table) to control TCP and UDP packets across the containers.
After starting the kube-proxy daemon, it configures iptables rules; you can use iptables -t nat -L or iptables -t nat -S to check the nat table rules, as follows:
//the result will vary and is dynamically changed by kube-proxy
# sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N FLANNEL
-N KUBE-NODEPORT-CONTAINER
-N KUBE-NODEPORT-HOST
-N KUBE-PORTALS-CONTAINER
-N KUBE-PORTALS-HOST
-A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST
-A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.90.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/16 -j FLANNEL
-A FLANNEL -d 192.168.0.0/16 -j ACCEPT
-A FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
There are two more components that complement the Kubernetes node's functionality: the data store etcd and the inter-container network. You can learn how they support the Kubernetes system in the following subsections.
etcd (https://coreos.com/etcd/) is a distributed key-value data store. It can be accessed via a RESTful API to perform CRUD operations over the network. Kubernetes uses etcd as its main data store.
You can explore the Kubernetes configuration and status in etcd (/registry) using the curl command, as follows:
//example: etcd server is localhost and default port is 4001
# curl -L http://127.0.0.1:4001/v2/keys/registry
{"action":"get","node":{"key":"/registry","dir":true,"nodes":[{"key":"/registry/namespaces","dir":true,"modifiedIndex":6,"createdIndex":6},{"key":"/registry/pods","dir":true,"modifiedIndex":187,"createdIndex":187},{"key":"/registry/clusterroles","dir":true,"modifiedIndex":196,"createdIndex":196},{"key":"/registry/replicasets","dir":true,"modifiedIndex":178,"createdIndex":178},{"key":"/registry/limitranges","dir":true,"modifiedIndex":202,"createdIndex":202},{"key":"/registry/storageclasses","dir":true,"modifiedIndex":215,"createdIndex":215},{"key":"/registry/apiregistration.k8s.io","dir":true,"modifiedIndex":7,"createdIndex":7},{"key":"/registry/serviceaccounts","dir":true,"modifiedIndex":70,"createdIndex":70},{"key":"/registry/secrets","dir":true,"modifiedIndex":71,"createdIndex":71},{"key":"/registry/deployments","dir":true,"modifiedIndex":177,"createdIndex":177},{"key":"/registry/services","dir":true,"modifiedIndex":13,"createdIndex":13},{"key":"/registry/configmaps","dir":true,"modifiedIndex":52,"createdIndex":52},{"key":"/registry/ranges","dir":true,"modifiedIndex":4,"createdIndex":4},{"key":"/registry/minions","dir":true,"modifiedIndex":58,"createdIndex":58},{"key":"/registry/clusterrolebindings","dir":true,"modifiedIndex":171,"createdIndex":171}],"modifiedIndex":4,"createdIndex":4}}
Network communication between containers is the most difficult part. Because Kubernetes manages multiple nodes (hosts) running several containers, those containers on different nodes may need to communicate with each other.
If container network communication happens only within a single node, you can use Docker networking or Docker Compose to discover peers. However, across multiple nodes, Kubernetes uses an overlay network or the Container Network Interface (CNI) to achieve communication between containers.
This recipe describes the basic architecture and methodology of Kubernetes and the related components. Understanding Kubernetes is not easy, but a step-by-step learning process on how to set up, configure, and manage Kubernetes is really fun.
Kubernetes consists of a combination of multiple open source components. These are developed by different parties, making it difficult to find and download all the related packages and install, configure, and make them work from scratch.
Fortunately, some solutions and tools have been developed to set up Kubernetes clusters effortlessly. Therefore, it is highly recommended that you use such a tool to set up Kubernetes in your environment.
The following tools are categorized by the type of solution used to build your own Kubernetes:
Self-managed solutions that include:
minikube
kubeadm
kubespray
kops
Enterprise solutions that include:
OpenShift (https://www.openshift.com)
Tectonic (https://coreos.com/tectonic/)
Cloud-hosted solutions that include:
Google Kubernetes Engine (https://cloud.google.com/kubernetes-engine/)
Amazon Elastic Container Service for Kubernetes (Amazon EKS, https://aws.amazon.com/eks/)
Azure Container Service (AKS, https://azure.microsoft.com/en-us/services/container-service/)
A self-managed solution is suitable if we just want to build a development environment or do a proof of concept quickly.
By using minikube (https://github.com/kubernetes/minikube) and kubeadm (https://kubernetes.io/docs/admin/kubeadm/), we can easily build the desired environment on our machine locally; however, it is not practical if we want to build a production environment.
By using kubespray (https://github.com/kubernetes-incubator/kubespray) and kops (https://github.com/kubernetes/kops), we can also build a production-grade environment quickly from scratch.
An enterprise solution or cloud-hosted solution is the easiest starting point if we want to create a production environment. In particular, Google Kubernetes Engine (GKE), which has been used by Google for many years, comes with comprehensive management, meaning that users don't need to care much about installation and settings. Also, Amazon EKS is a new managed Kubernetes service on AWS that was introduced at AWS re:Invent 2017.
Kubernetes can also run on different clouds and on-premise VMs via custom solutions. To get started, we will build Kubernetes using minikube on a macOS desktop machine in this chapter.
minikube runs Kubernetes in a Linux VM on macOS. It relies on a hypervisor (virtualization technology), such as VirtualBox (https://www.virtualbox.org), VMware Fusion (https://www.vmware.com/products/fusion.html), or hyperkit (https://github.com/moby/hyperkit). In addition, we will need the Kubernetes command-line interface (CLI), kubectl, which connects through the hypervisor to control Kubernetes.
With minikube, you can run the entire Kubernetes stack on your macOS machine, including the Kubernetes master, node, and CLI. It is recommended that the machine has enough memory to run Kubernetes. By default, minikube uses VirtualBox as the hypervisor.
In this chapter, however, we will demonstrate how to use hyperkit, which is the most lightweight solution. As the Linux VM consumes 2 GB of memory, at least 4 GB is recommended. Note that hyperkit is built on top of the Hypervisor framework (https://developer.apple.com/documentation/hypervisor) on macOS; therefore, macOS 10.10 Yosemite or later is required.
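Since minikube's --memory flag takes a value in MB, a 4 GB VM is written as 4096. The following is a minimal sketch of starting minikube with explicit resources (flag names as of minikube v0.26; the values are examples, not requirements):

```shell
# 4 GB expressed in MB for the --memory flag
MEM_MB=$((4 * 1024))

# Start the Kubernetes VM with hyperkit, 4 GB RAM, and 2 vCPUs
minikube start --vm-driver=hyperkit --memory "$MEM_MB" --cpus 2

# Confirm the VM and cluster components are up
minikube status
```

If the defaults turn out to be too small for your workload, delete the VM with minikube delete and start again with larger values.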
The following diagram shows the relationship between kubectl, the hypervisor, minikube, and macOS:
macOS doesn't have an official package management tool, such as yum or apt-get on Linux, but there are some useful tools available. Homebrew (https://brew.sh) is the most popular package management tool and manages many open source tools, including minikube.
In order to install Homebrew on macOS, perform the following steps:
Open the Terminal and then type the following command:
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Once installation is completed, you can type /usr/local/bin/brew help to see the available command options.
If you have just installed or upgraded Xcode on your macOS machine, the Homebrew installation may stop. In that case, open Xcode to accept the license agreement, or type sudo xcodebuild -license beforehand.
Next, install the hyperkit driver for minikube. At the time of writing (February 2018), Homebrew does not support hyperkit; therefore, type the following command to install it:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
&& chmod +x docker-machine-driver-hyperkit \
&& sudo mv docker-machine-driver-hyperkit /usr/local/bin/ \
&& sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit \
&& sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit
Next, let's install the Kubernetes CLI. Use Homebrew with the following command to install kubectl on your macOS machine:
//install kubectl command by "kubernetes-cli" package
$ brew install kubernetes-cli
Finally, you can install minikube. It is not managed by Homebrew; however, Homebrew has an extension called homebrew-cask (https://github.com/caskroom/homebrew-cask) that supports minikube.
In order to install minikube via homebrew-cask, simply type the following command:
//add "cask" option
$ brew cask install minikube
If you have never installed Docker for Mac on your machine, you need to install it via homebrew-cask as well:
//only if you don't have Docker for Mac
$ brew cask install docker

//start Docker
$ open -a Docker.app
Now you are all set! The following command shows whether the required packages have been installed on your macOS or not:
//check installed package by homebrew
$ brew list
kubernetes-cli
//check installed package by homebrew-cask
$ brew cask list
minikube
minikube sets up Kubernetes on your macOS machine with the following command, which downloads and starts a Kubernetes VM and then writes the kubectl configuration file (~/.kube/config):
//use --vm-driver=hyperkit to specify to use hyperkit
$ /usr/local/bin/minikube start --vm-driver=hyperkit
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
//check whether .kube/config is configured or not
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/saito/.minikube/ca.crt
    server: https://192.168.64.26:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    as-user-extra: {}
    client-certificate: /Users/saito/.minikube/client.crt
    client-key: /Users/saito/.minikube/client.key
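Once ~/.kube/config exists, you can confirm which context kubectl will talk to. As a sketch, the awk fallback simply reads the current-context line straight from the file, without needing kubectl at all:

```shell
# Ask kubectl which context is active; it should print: minikube
kubectl config current-context

# Or read it straight from the kubeconfig file without kubectl
awk '/^current-context:/ {print $2}' ~/.kube/config
```

If the active context is not minikube (for example, because you also manage other clusters), switch with kubectl config use-context minikube before continuing.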
After getting all the necessary packages, perform the following steps:
Wait for a few minutes for the Kubernetes cluster setup to complete.
Use kubectl version to check the Kubernetes master version and kubectl get cs to see the component status.
Also, use the kubectl get nodes command to check whether the Kubernetes node is ready or not:
//it shows kubectl (Client) is 1.10.1, and Kubernetes master (Server) is 1.10.0
$ /usr/local/bin/kubectl version --short
Client Version: v1.10.1
Server Version: v1.10.0
//get cs shows the component status
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
//Kubernetes node (minikube) is ready
$ /usr/local/bin/kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube   Ready     master    2m        v1.10.0
Now you can start to use Kubernetes on your machine. The following sections describe how to use the kubectl command to manipulate Docker containers.
Note that, in some cases, you may need to maintain the Kubernetes cluster, such as starting/stopping the VM or completely deleting it. The following commands maintain the minikube environment:
Command                                Purpose
minikube start --vm-driver=hyperkit    Starts the Kubernetes VM using the hyperkit driver
minikube stop                          Stops the Kubernetes VM
minikube delete                        Deletes the Kubernetes VM image
minikube ssh                           ssh to the Kubernetes VM guest
minikube ip                            Shows the Kubernetes VM (node) IP address
minikube update-context                Checks and updates ~/.kube/config if the VM IP address has changed
minikube dashboard                     Opens the web browser and connects to the Kubernetes UI
For example, minikube starts a dashboard (the Kubernetes UI) by default. If you want to access the dashboard, type minikube dashboard; it then opens your default browser and connects to the Kubernetes UI, as illustrated in the following screenshot:
This recipe describes how to set up a Kubernetes cluster on your macOS using minikube. It is the easiest way to start using Kubernetes. We also learned how to use kubectl, the Kubernetes command-line interface tool, which is the entry point to control our Kubernetes cluster!
By nature, Docker and Kubernetes are based on a Linux-based OS. Although it is not ideal to use the Windows OS to explore Kubernetes, many people are using the Windows OS as their desktop or laptop machine. Luckily, there are a lot of ways to run the Linux OS on Windows using virtualization technologies, which makes running a Kubernetes cluster on Windows machines possible. Then, we can build a development environment or do a proof of concept on our local Windows machine.
You can run the Linux VM by using any hypervisor on Windows to set up Kubernetes from scratch, but using minikube (https://github.com/kubernetes/minikube) is the fastest way to build a Kubernetes cluster on Windows. Note that this recipe is not ideal for a production environment because it will set up a Kubernetes on Linux VM on Windows.
To set up minikube on Windows requires a hypervisor, either VirtualBox (https://www.virtualbox.org) or Hyper-V, because, again, minikube uses a Linux VM on Windows. This means that you cannot do this inside a Windows virtual machine (for example, a Windows VM running on macOS via Parallels).
However, kubectl, the Kubernetes CLI, supports a native Windows binary that can connect to Kubernetes over a network. So, you can set up a portable suite of the Kubernetes stack on your Windows machine.
The following diagram shows the relationship between kubectl, Hypervisor, minikube, and Windows:
Hyper-V requires Windows 8 Pro or later. As many users still use Windows 7, we will use VirtualBox as the minikube hypervisor in this recipe.
First of all, VirtualBox for Windows is required:
Go to the VirtualBox website (https://www.virtualbox.org/wiki/Downloads) to download the Windows installer.
Installation is straightforward, so we can just choose the default options and click Next:
Next, create the Kubernetes folder, which is used to store the minikube and kubectl binaries. Let's create the k8s folder at the top of the C: drive, as shown in the following screenshot:
This folder must be in the command search path, so open System Properties, then move to the Advanced tab.
Click the Environment Variables... button, then choose Path, and then click the Edit... button, as shown in the following screenshot:
Then, append c:\k8s, as follows:
After clicking the OK button, log off and log on to Windows again (or reboot) to apply this change.
Next, download minikube for Windows. It is a single binary, so use any web browser to download https://github.com/kubernetes/minikube/releases/download/v0.26.1/minikube-windows-amd64 and then copy it to the c:\k8s folder, but change the filename to minikube.exe.
Next, download kubectl for Windows, which can communicate with Kubernetes. Like minikube, it is also a single binary. Download https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/windows/amd64/kubectl.exe and then copy it to the c:\k8s folder as well.
Eventually, you will see two binaries in the c:\k8s folder, as shown in the following screenshot:
Let's get started!
Open Command Prompt and then type minikube start, as shown in the following screenshot:
minikube downloads the Linux VM image and then sets up Kubernetes on the Linux VM; now if you open VirtualBox, you can see that the minikube guest has been registered, as illustrated in the following screenshot:
Wait for a few minutes to complete the setup of the Kubernetes cluster.
As per the following screenshot, type kubectl version to check the Kubernetes master version.
Use the kubectl get nodes command to check whether the Kubernetes node is ready or not:
Now you can start to use Kubernetes on your machine! Again, Kubernetes is running on the Linux VM, as shown in the next screenshot.
Using minikube ssh allows you to access the Linux VM that runs Kubernetes:
Therefore, any Linux-based Docker image is capable of running on your Windows machine.
Type minikube ip to verify which IP address the Linux VM uses, and minikube dashboard to open your default web browser and navigate to the Kubernetes UI, as shown in the following screenshot:
If you don't need to use Kubernetes anymore, type minikube stop, or open VirtualBox to stop the Linux guest and release the resources, as shown in the following screenshot:
This recipe describes how to set up a Kubernetes cluster on your Windows OS using minikube. It is the easiest way to start using Kubernetes. It also describes kubectl, the Kubernetes command-line interface tool, which is the entry point from which to control your Kubernetes cluster.
In this recipe, we are going to show how to create a Kubernetes cluster with kubeadm (https://github.com/kubernetes/kubeadm) on Linux servers. kubeadm is a command-line tool that simplifies the procedure of creating and managing a Kubernetes cluster. kubeadm leverages the fast deployment feature of Docker, running the system services of the Kubernetes master and the etcd server as containers. When triggered by the kubeadm command, the container services contact kubelet on the Kubernetes node directly; kubeadm also checks whether every component is healthy. The kubeadm setup steps save you from running a pile of installation and configuration commands to build everything from scratch.
We will provide instructions for two types of OS:
Ubuntu Xenial 16.04 (LTS)
CentOS 7.4
Make sure the OS version matches before continuing. Furthermore, the software dependencies and network settings should also be verified before you proceed to the next step. Check the following items to prepare the environment:
Every node has a unique MAC address and product UUID: Some plugins use the MAC address or product UUID as a unique machine ID to identify nodes (for example, kube-dns). If they are duplicated in the cluster, kubeadm may not work when starting the plugin:
// check MAC address of your NIC
$ ifconfig -a
// check the product UUID on your host
$ sudo cat /sys/class/dmi/id/product_uuid
Every node has a different hostname: If hostnames are duplicated, the Kubernetes system may collect logs or statuses from multiple nodes into the same one.
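A quick way to compare hostnames across machines is sketched below; the node name kube-node3 in the comment is just a placeholder, not a name from this cluster:

```shell
# Print this node's hostname (uname -n works on any Linux); compare the
# output across all nodes -- every node must report a different name.
uname -n

# On systemd-based distros, a duplicated name can be fixed with
# hostnamectl (the name 'kube-node3' is only an example):
#   sudo hostnamectl set-hostname kube-node3
```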
Docker is installed: As mentioned previously, the Kubernetes master will run its daemons as containers, and every node in the cluster should have Docker installed. For the Docker installation steps, you can follow the official website (Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/, CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/). Here we have Docker CE 17.06 installed on our machines; however, only Docker versions 1.11.2 to 1.13.1 and 17.03.x are verified with Kubernetes version 1.10.
Network ports are available: The Kubernetes system services need network ports for communication. The ports in the following table should not be occupied, according to the role of the node:
Node role   Ports               System service
Master      6443                Kubernetes API server
            10248/10250/10255   kubelet local healthz endpoint/Kubelet API/Heapster (read-only)
            10251               kube-scheduler
            10252               kube-controller-manager
            10249/10256         kube-proxy
            2379/2380           etcd client/etcd server communication
Node        10250/10255         Kubelet API/Heapster (read-only)
            30000~32767         Port range reserved for exposing container services to the outside world
The Linux command netstat can help check whether a port is in use or not:
// list every listening port
$ sudo netstat -tulpn | grep LISTEN
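For example, before installing the master you can verify that the API server port, 6443, is still free. This is a sketch; netstat output columns vary slightly between distributions:

```shell
# Grep the listening sockets for the API server port; if nothing
# matches, report the port as free.
sudo netstat -tulpn | grep -w ':6443' || echo "port 6443 is free"
```

The same one-liner works for any port in the preceding table; just substitute the port number.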
Network tool packages are installed: ethtool and ebtables are two required utilities for kubeadm. They can be downloaded and installed with the apt-get or yum package management tools.
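A sketch of installing and verifying the two utilities follows; pick the install line matching your distribution:

```shell
# Ubuntu
sudo apt-get install -y ethtool ebtables
# CentOS
# sudo yum install -y ethtool ebtables

# Report any required tool still missing from the PATH
for t in ethtool ebtables; do
  command -v "$t" >/dev/null || echo "$t missing"
done
```

No output from the loop means both utilities are in place.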
The installation procedures for two Linux OSes, Ubuntu and CentOS, are going to be introduced separately in this recipe as they have different setups.
Let's get the Kubernetes packages first! The repository for downloading needs to be set in the source list of the package management system. Then, we are able to get them installed easily through the command-line.
To install Kubernetes packages in Ubuntu perform the following steps:
Some repositories use HTTPS URLs. The apt-transport-https package must be installed to access the HTTPS endpoints:
$ sudo apt-get update && sudo apt-get install -y apt-transport-https
Download the public key for accessing packages on Google Cloud, and add it as follows:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
Next, add a new source list for the Kubernetes packages:
$ sudo bash -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
Finally, it is good to install the Kubernetes packages:
// on Kubernetes master
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl

// on Kubernetes node
$ sudo apt-get update && sudo apt-get install -y kubelet
To install Kubernetes packages in CentOS perform the following steps:
As with Ubuntu, new repository information needs to be added:
$ sudo vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Now, we are ready to pull the packages from the Kubernetes source base via the yum command:
// on Kubernetes master
$ sudo yum install -y kubelet kubeadm kubectl
// on Kubernetes node
$ sudo yum install -y kubelet
Whichever OS you use, check the version of the packages you get!
// take it easy! the server connection failed because there is no server running yet
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.122.101:6443 was refused - did you specify the right host or port?
Before bringing up the whole system with kubeadm, please check that Docker is running on your machine for Kubernetes. Moreover, in order to avoid critical errors while executing kubeadm, we will show the necessary service configuration on both the system and kubelet. Please set the following configurations on the Kubernetes nodes as well as the master to get kubelet working well with kubeadm.
Now we can start the service. First enable and then start kubelet on your Kubernetes master machine:
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
While checking the status of kubelet, you may be worried to see the status displayed as activating (auto-restart), and you may get further frustrated when you see the detailed logs via the journalctl command, as follows:
error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Don't worry. kubeadm takes care of creating the certificate authority file. It is defined in the service configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the argument KUBELET_AUTHZ_ARGS. The kubelet service won't be healthy without this file, so it keeps trying to restart itself.
Go ahead and start all the master daemons via kubeadm. It is worth noting that kubeadm requires root permission to achieve service-level privileges. For any sudoer, each kubeadm command should be prefixed with sudo:
$ sudo kubeadm init
And you will see the sentence Your Kubernetes master has initialized successfully! showing on the screen. Congratulations! You are almost done! Just follow the information about the user environment setup below the greeting message:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
The preceding commands ensure that every Kubernetes instruction fired by your account executes with the proper credentials and connects to the correct server portal:
// Your kubectl command works great now
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
More than that, kubelet now goes into a healthy state:
// check the status of kubelet
$ sudo systemctl status kubelet
...
Active: active (running) since Mon 2018-04-30 18:46:58 EDT; 2min 43s ago
...
After the master of the cluster is ready to handle jobs and the services are running, we need to set up the network for container communication so that containers can reach each other. This is even more important when building a Kubernetes cluster with kubeadm, since the master daemons all run as containers. kubeadm supports the CNI (https://github.com/containernetworking/cni). We are going to attach a CNI plugin via a Kubernetes network add-on.
There are many third-party CNI solutions that supply secure and reliable container network environments. Calico (https://www.projectcalico.org) is one CNI provider offering stable container networking. Calico is light and simple, yet well implemented according to the CNI standard and integrated with Kubernetes:
$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Here, whatever your host OS is, the kubectl command can fire any subcommand for utilizing resources and managing systems. We use kubectl to apply the Calico configuration to our newborn Kubernetes cluster.
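After applying the manifest, you can watch the add-on pods come up; nodes usually stay NotReady until the network plugin is running. As a sketch (the awk column position matches default kubectl output), the filter below lists any kube-system pod that is not yet Running:

```shell
# List kube-system pods whose STATUS column is not Running yet
kubectl get pods -n kube-system --no-headers | awk '$3 != "Running" {print $1, $3}'

# Nodes should turn Ready once the network pods are up
kubectl get nodes
```

An empty first listing means every system pod, including Calico, has started.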
More advanced management of networking and Kubernetes add-ons will be discussed in Chapter 7, Building Kubernetes on GCP.
Let's log in to your Kubernetes node to join the group controlled by kubeadm:
First, enable and start the kubelet service. Every Kubernetes machine should have kubelet running on it:
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
After that, fire the kubeadm join command with the token and the IP address of the master as inputs, notifying the master that the node is secured and authorized. You can get the token on the master node via the kubeadm command:
// on master node, list the token you have in the cluster
$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
da3a90.9a119695a933a867 6h 2018-05-01T18:47:10-04:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
In the preceding output, if kubeadm init succeeded, the default token was generated. Copy the token, paste it onto the node, and then compose the following command:
// The master IP is 192.168.122.101, token is da3a90.9a119695a933a867, 6443 is the port of api server.
$ sudo kubeadm join --token da3a90.9a119695a933a867 192.168.122.101:6443 --discovery-token-unsafe-skip-ca-verification
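Bootstrap tokens have the form [a-z0-9]{6}.[a-z0-9]{16}, and by default they expire after 24 hours. The following sketch sanity-checks a pasted token and mints a fresh one; the --print-join-command flag is assumed to be available in your kubeadm release:

```shell
# Token copied from 'kubeadm token list' on the master
TOKEN=da3a90.9a119695a933a867

# Validate the expected token format before using it in kubeadm join
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"

# If the token has expired, create a new one on the master; this also
# prints the complete join command to run on the node
sudo kubeadm token create --print-join-command
```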
Please make sure that the master's firewall doesn't block any traffic to port 6443, which is used for API server communication. Once you see the words Successfully established connection on the screen, it is time to check with the master whether the group has got its new member:
// fire kubectl subcommand on master
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready master 11h v1.10.2
ubuntu02 Ready <none> 26s v1.10.2
Well done! Whether your OS is Ubuntu or CentOS, kubeadm is now installed and kubelet is running. You can easily go through the preceding steps to build your Kubernetes cluster.
You may be wondering about the flag discovery-token-unsafe-skip-ca-verification
