Hands-On Microservices with Kubernetes - Gigi Sayfan - E-Book

Description

Enhance your skills in building scalable infrastructure for your cloud-based applications




Key Features





  • Learn to design a scalable architecture by building continuous integration (CI) pipelines with Kubernetes


  • Get an in-depth understanding of role-based access control (RBAC), continuous deployment (CD), and observability


  • Monitor a Kubernetes cluster with Prometheus and Grafana



Book Description



Kubernetes is among the most popular open-source platforms for automating the deployment, scaling, and operations of application containers across clusters of hosts, providing a container-centric infrastructure.






Hands-On Microservices with Kubernetes starts by providing you with in-depth insights into the synergy between Kubernetes and microservices. You will learn how to use Delinkcious, which will serve as a live lab throughout the book to help you understand microservices and Kubernetes concepts in the context of a real-world application. Next, you will get up to speed with setting up a CI/CD pipeline and configuring microservices using Kubernetes ConfigMaps. As you progress through the later chapters, you will gain hands-on experience in securing microservices, and in implementing REST and gRPC APIs and a Delinkcious data store. In addition to this, you'll explore the Nuclio project, run a serverless task on Kubernetes, and manage and implement data-intensive tests. Toward the concluding chapters, you'll deploy microservices on Kubernetes and learn to maintain a well-monitored system. Finally, you'll discover the importance of service meshes and how to incorporate Istio into the Delinkcious cluster.






By the end of this book, you'll have gained the skills you need to implement microservices on Kubernetes with the help of effective tools and best practices.





What you will learn



  • Understand the synergy between Kubernetes and microservices


  • Create a complete CI/CD pipeline for your microservices on Kubernetes


  • Develop microservices on Kubernetes with the Go kit framework using best practices


  • Manage and monitor your system using Kubernetes and open-source tools


  • Expose your services through REST and gRPC APIs


  • Implement and deploy serverless functions as a service


  • Externalize authentication, authorization and traffic shaping using a service mesh


  • Run a Kubernetes cluster in the cloud on Google Kubernetes Engine



Who this book is for



This book is for developers, DevOps engineers, or anyone who wants to develop large-scale microservice-based systems on top of Kubernetes. If you are looking to use Kubernetes on live production projects or want to migrate existing systems to a modern containerized microservices system, then this book is for you. Coding skills, together with some knowledge of Docker, Kubernetes, and cloud concepts, will be useful.

The e-book can be read in Legimi apps or in any app that supports the following format:

EPUB

Page count: 549

Year of publication: 2019




Hands-On Microservices with Kubernetes

 

 

 

Build, deploy, and manage scalable microservices on Kubernetes

 

 

 

 

 

 

 

 

Gigi Sayfan

 

 

 

 

 

 

 

 

 

BIRMINGHAM - MUMBAI

Hands-On Microservices with Kubernetes

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

Commissioning Editor: Pavan Ramchandani
Acquisition Editor: Rohit Rajkumar
Content Development Editor: Amitendra Pathak
Senior Editor: Rahul Dsouza
Technical Editor: Prachi Sawant
Copy Editor: Safis Editing
Project Coordinator: Jagdish Prabhu
Proofreader: Safis Editing
Indexer: Manju Arasan
Production Designer: Jayalaxmi Raja

First published: July 2019

Production reference: 1050719

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78980-546-8

www.packtpub.com

 

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the author

Gigi Sayfan is a principal software architect at Helix – a bioinformatics and genomics start-up – and author of Mastering Kubernetes, published by Packt. He has been developing software professionally for more than 20 years in domains as diverse as instant messaging, morphing, chip-fabrication process control, embedded multimedia applications for games consoles, and brain-inspired machine learning. He has written production code in many programming languages including Go, Python, C#, Java, Delphi, JavaScript, and even COBOL and PowerBuilder, for operating systems such as Windows, Linux, macOS, Lynx, and Sony PlayStation. His technical expertise covers databases, low-level networking, unorthodox user interfaces, and the general SDLC.

About the reviewers

Guang Ya Liu is a senior technical staff member for IBM Cloud Private and is currently focused on cloud computing, container technology, and distributed computing. He is also a member of the IBM Academy of Technology. He was an OpenStack Magnum Core member from 2015 to 2017, and now serves as an Istio maintainer, Kubernetes member, Kubernetes Federation V2 maintainer, Apache Mesos committer, and PMC member.

 

 

Shashidhar Soppin is a senior software architect with over 18 years' experience in IT. He has worked on virtualization, storage, the cloud and cloud architecture, OpenStack, machine learning, deep learning, and Docker container technologies. Primarily, his focus is on building new approaches and solutions for enterprise customers. He is an avid author of open source technologies (OSFY), a blogger (LinuxTechi), and a holder of patents. He graduated from BIET, Davangere, India. In his free time, he loves to travel and read books.

 

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Hands-On Microservices with Kubernetes

About Packt

Why subscribe?

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Introduction to Kubernetes for Developers

Technical requirements

Installing Docker

Installing kubectl

Installing Minikube

The code

Kubernetes in a nutshell

Kubernetes – the container orchestration platform

The history of Kubernetes

The state of Kubernetes

Understanding the Kubernetes architecture

The control plane

The API server

The etcd store

The scheduler

The controller manager

The data plane

The kubelet

The kube proxy

The container runtime

Kubectl

Kubernetes and microservices – a perfect match

Packaging and deploying microservices

Exposing and discovering microservices

Securing microservices

Namespaces

Service accounts

Secrets

Secure communication

Network policies

Authenticating and authorizing microservices

Role-based access control

Upgrading microservices

Scaling microservices

Monitoring microservices

Logging

Metrics

Creating a local cluster

Installing Minikube

Troubleshooting Minikube

Verifying your cluster

Playing with your cluster

Installing Helm

Summary

Further reading

Getting Started with Microservices

Technical requirements

Installing Go with Homebrew on macOS

Installing Go on other platforms

The code

Programming in the small – less is more

Making your microservice autonomous

Employing interfaces and contracts

Exposing your service via APIs

Using client libraries

Managing dependencies

Coordinating microservices

The uniformity versus flexibility trade-off

Taking advantage of ownership

Understanding Conway's law

Vertical

Horizontal

Matrix

Troubleshooting across multiple services

Utilizing shared service libraries

Choosing a source control strategy

Monorepo

Multiple repos

Hybrid

Creating a data strategy

One data store per microservice

Running distributed queries

Employing Command Query Responsibility Segregation

Employing API composition

Using sagas to manage transactions across multiple services

Understanding ACID

Understanding the CAP theorem

Applying the saga pattern to microservices

Summary

Further reading

Delinkcious - the Sample Application

Technical requirements

Visual Studio Code

GoLand

LiteIDE

Other options

The code

Choosing Go for Delinkcious

Getting to know Go kit

Structuring microservices with Go kit

Understanding transports

Understanding endpoints

Understanding services

Understanding middleware

Understanding clients

Generating the boilerplate

Introducing the Delinkcious directory structure

The cmd subdirectory

The pkg subdirectory

The svc subdirectory

Introducing the Delinkcious microservices

The object model

The service implementation

Implementing the support functions

Invoking the API via a client library

Storing data

Summary

Further reading

Setting Up the CI/CD Pipeline

Technical requirements

The code

Understanding a CI/CD pipeline

Options for the Delinkcious CI/CD pipeline

Jenkins X

Spinnaker

Travis CI and CircleCI

Tekton

Argo CD

Rolling your own

GitOps

Building your images with CircleCI

Reviewing the source tree

Configuring the CI pipeline

Understanding the build.sh script

Dockerizing a Go service with a multi-stage Dockerfile

Exploring the CircleCI UI

Considering future improvements

Setting up continuous delivery for Delinkcious

Deploying a Delinkcious microservice

Understanding Argo CD

Argo CD is built on Argo

Argo CD utilizes GitOps

Getting started with Argo CD

Configuring Argo CD

Using sync policies

Exploring Argo CD

Summary

Further reading

Configuring Microservices with Kubernetes

Technical requirements

The code

What is configuration all about?

Configuration and secrets

Managing configuration the old-fashioned way

Convention over configuration

Command-line flags

Environment variables

Configuration files

INI format

XML format

JSON format

YAML format

TOML format

Proprietary formats

Hybrid configuration and defaults

Twelve factor app configuration

Managing configuration dynamically

Understanding dynamic configuration

When is dynamic configuration useful?

When should you avoid dynamic configuration?

Remote configuration store

Remote configuration service

Configuring microservices with Kubernetes

Working with Kubernetes ConfigMaps

Creating and managing ConfigMaps

Applying advanced configuration

Kubernetes custom resources

Service discovery

Summary

Further reading

Securing Microservices on Kubernetes

Technical requirements

The code

Applying sound security principles

Differentiating between user accounts and service accounts

User accounts

Service accounts

Managing secrets with Kubernetes

Understanding the three types of Kubernetes secret

Creating your own secrets

Passing secrets to containers

Building a secure pod

Managing permissions with RBAC

Controlling access with authentication, authorization, and admission

Authenticating microservices

Authorizing microservices

Admitting microservices

Hardening your Kubernetes cluster using security best practices

Securing your images

Always pull images

Scan for vulnerabilities

Update your dependencies

Pinning the versions of your base images

Using minimal base images

Dividing and conquering your network

Safeguarding your image registry

Granting access to Kubernetes resources as needed

Using quotas to minimize the blast radius

Units for requests and limits

Implementing security contexts

Hardening your pods with security policies

Hardening your toolchain

Authentication of admin user via JWT tokens

Authorization via RBAC

Secure communication over HTTPS

Secret and credentials management

Audits

Cluster RBAC

Summary

Further reading

Talking to the World - APIs and Load Balancers

Technical requirements

The code

Getting familiar with Kubernetes services

Service types in Kubernetes

East-west versus north-south communication

Understanding ingress and load balancing

Providing and consuming a public REST API

Building a Python-based API gateway service

Implementing social login

Routing traffic to internal microservices

Utilizing base Docker images to reduce build time

Adding ingress

Verifying that the API gateway is available outside the cluster

Finding the Delinkcious URL

Getting an access token

Hitting the Delinkcious API gateway from outside the cluster

Providing and consuming an internal gRPC API

Defining the NewsManager interface

Implementing the news manager package

Exposing NewsManager as a gRPC service

Defining the gRPC service contract

Generating service stubs and client libraries with gRPC

Using Go-kit to build the NewsManager service

Implementing the gRPC transport

Sending and receiving events via a message queue

What is NATS?

Deploying NATS in the cluster

Sending link events with NATS

Subscribing to link events with NATS

Handling link events

Understanding service meshes

Summary

Further reading

Working with Stateful Services

Technical requirements

The code

Abstracting storage

The Kubernetes storage model

Storage classes

Volumes, persistent volumes, and provisioning

Persistent volume claims

In-tree and out-of-tree storage plugins

Understanding CSI

Standardizing on CSI

Storing data outside your Kubernetes cluster

Storing data inside your cluster with StatefulSets

Understanding a StatefulSet

StatefulSet components

Pod identity

Orderliness

When should you use a StatefulSet?

Comparing deployment and StatefulSets

Reviewing a large StatefulSet example

A quick introduction to Cassandra

Deploying Cassandra on Kubernetes using StatefulSets

Achieving high performance with local storage

Storing your data in memory

Storing your data on a local SSD

Using relational databases in Kubernetes

Understanding where the data is stored

Using a deployment and service

Using a StatefulSet

Helping the user service locate StatefulSet pods

Managing schema changes

Using non-relational data stores in Kubernetes

An introduction to Redis

Persisting events in the news service

Summary

Further reading

Running Serverless Tasks on Kubernetes

Technical requirements

The code

Serverless in the cloud

Microservices and serverless functions

Modeling serverless functions in Kubernetes

Functions as code

Functions as containers

Building, configuring, and deploying serverless functions

Invoking serverless functions

Link checking with Delinkcious

Designing link checks

Implementing link checks

Serverless link checking with Nuclio

A quick introduction to Nuclio

Creating a link checker serverless function

Deploying the link checker function with nuctl

Deploying a function using the Nuclio dashboard

Invoking the link-checker function directly

Triggering link checking in LinkManager

Other Kubernetes serverless frameworks

Kubernetes Jobs and CronJobs

KNative

Fission

Kubeless

OpenFaas

Summary

Further reading

Testing Microservices

Technical requirements

Unit testing

Unit testing with Go

Unit testing with Ginkgo and Gomega

Delinkcious unit testing

Designing for testability

The art of mocking

Bootstrapping your test suite

Implementing the LinkManager unit tests

Should you test everything?

Integration testing

Initializing a test database

Running services

Running the actual test

Implementing database test helpers

Implementing service test helpers

Checking errors

Running a local service

Stopping a local service

Local testing with Kubernetes

Writing a smoke test

Running the test

Telepresence

Installing Telepresence

Running a local link service via Telepresence

Attaching to the local link service with GoLand for live debugging

Isolating tests

Test clusters

Cluster per developer

Dedicated clusters for system tests

Test namespaces

Writing multi-tenant systems

Cross namespace/cluster

End-to-end testing

Acceptance testing

Regression testing

Performance testing

Managing test data

Synthetic data

Manual test data

Production snapshot

Summary

Further reading

Deploying Microservices

Technical requirements

The code

Kubernetes deployments

Deploying to multiple environments

Understanding deployment strategies

Recreating deployment

Rolling updates

Blue-green deployment

Adding deployment – the blue label

Updating the link-manager service to match blue pods only

Prefixing the description of each link with [green]

Bumping the version number

Letting CircleCI build the new image

Deploying the new (green) version

Updating the link-manager service to use the green deployment

Verifying that the service now uses the green pods to serve requests

Canary deployments

Employing a basic canary deployment for Delinkcious

Using canary deployments for A/B testing

Rolling back deployments

Rolling back standard Kubernetes deployments

Rolling back blue-green deployments

Rolling back canary deployments

Dealing with a rollback after a schema, API, or payload change

Managing versions and dependencies

Managing public APIs

Managing cross-service dependencies

Managing third-party dependencies

Managing your infrastructure and toolchain

Local development deployments

Ko

Ksync

Draft

Skaffold

Tilt

Summary

Further reading

Monitoring, Logging, and Metrics

Technical requirements

The code

Self-healing with Kubernetes

Container failures

Node failure

Systemic failures

Autoscaling a Kubernetes cluster

Horizontal pod autoscaling

Using the horizontal pod autoscaler

Cluster autoscaling

Vertical pod autoscaling

Provisioning resources with Kubernetes

What resources should you provision?

Defining container limits

Specifying resource quotas

Manual provisioning

Utilizing autoscaling

Rolling your own automated provisioning

Getting performance right

Performance and user experience 

Performance and high availability

Performance and cost

Performance and security

Logging

What should you log?

Logging versus error reporting

The quest for the perfect Go logging interface

Logging with Go-kit

Setting up a logger with Go-kit

Using a logging middleware

Centralized logging with Kubernetes

Collecting metrics on Kubernetes

Introducing the Kubernetes metrics API

Understanding the Kubernetes metrics server

Using Prometheus

Deploying Prometheus into the cluster

Recording custom metrics from Delinkcious

Alerting

Embracing component failure

Grudgingly accepting system failure

Taking human factors into account

Warnings versus alerts

Considering severity levels

Determining alert channels

Fine-tuning noisy alerts

Utilizing the Prometheus alert manager

Configuring alerts in Prometheus

Distributed tracing

Installing Jaeger

Integrating tracing into your services

Summary

Further reading

Service Mesh - Working with Istio

Technical requirements

The code

What is a service mesh?

Comparing monoliths to microservices

Using a shared library to manage the cross-cutting concerns of microservices

Using a service mesh to manage the cross-cutting concerns of microservices

Understanding the relationship between Kubernetes and a service mesh

What does Istio bring to the table?

Getting to know the Istio architecture

Envoy

Pilot

Mixer

Citadel

Galley

Managing traffic with Istio

Routing requests

Load balancing

Handling failures

Injecting faults for testing

Doing canary deployments

Securing your cluster with Istio

Understanding Istio identity

Authenticating users with Istio

Authorizing requests with Istio

Enforcing policies with Istio

Collecting metrics with Istio

When should you avoid Istio?

Delinkcious on Istio

Removing mutual authentication between services

Utilizing better canary deployments

Automatic logging and error reporting

Accommodating NATS

Examining the Istio footprint

Alternatives to Istio

Linkerd 2.0

Envoy

HashiCorp Consul

AWS App Mesh

Others

The no mesh option

Summary

Further reading

The Future of Microservices and Kubernetes

The future of microservices

Microservices versus serverless functions

Microservices, containers, and orchestration

gRPC and gRPC-Web

GraphQL

HTTP/3 is coming

The future of Kubernetes

Kubernetes extensibility

Abstracting the container runtime

Abstracting networking

Abstracting storage

The cloud provider interface

Service mesh integration

Serverless computing on Kubernetes

Kubernetes and VMs

gVisor

Firecracker

Kata containers

Cluster autoscaling

Using operators

Federation

Summary

Further reading

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Hands-On Microservices with Kubernetes is the book you have been waiting for. It will walk you through the parallel paths of developing microservices and deploying them on Kubernetes. The synergy between microservice-based architecture and Kubernetes is very powerful. This book covers all angles. It explains the concepts behind microservices and Kubernetes, discusses real-world concerns and trade-offs, takes you through the development of fully fledged microservice-based systems, shows you best practices, and provides ample recommendations.

This book covers an amazing amount of ground in great depth and with working code to illustrate. You will learn how to design a microservice-based architecture, build microservices, test the microservices you've built, and package them as Docker images. Then, you will learn how to deploy your system as a collection of Docker images to Kubernetes and manage it there.

Along the way, you will become familiar with the most important trends to be aware of, such as automated continuous integration/continuous delivery (CI/CD), gRPC-based microservices, serverless computing, and service meshes.

By the end of this book, you will have gained a lot of knowledge and hands-on experience with planning, developing, and operating large-scale cloud-native systems using microservice-based architecture deployed on Kubernetes.

Who this book is for

This book is targeted at software developers and DevOps engineers who want to be at the forefront of large-scale software engineering. It will help if you have experience with large-scale software systems that are deployed using containers on more than one machine and are developed by several teams.

What this book covers

Chapter 1, Introduction to Kubernetes for Developers, introduces you to Kubernetes. You will receive a whirlwind tour of Kubernetes and get an idea of how well it aligns with microservices.

Chapter 2, Getting Started with Microservices, discusses various aspects, patterns, and approaches to common problems in microservice-based systems and how they compare to other common architectures, such as monoliths and large services.

Chapter 3, Delinkcious – the Sample Application, explores why we should choose Go as the programming language of Delinkcious; then we will look at Go kit.

Chapter 4, Setting Up the CI/CD Pipeline, teaches you about the problem the CI/CD pipeline solves, covers the different options for CI/CD pipelines for Kubernetes, and finally looks at building a CI/CD pipeline for Delinkcious.

Chapter 5, Configuring Microservices with Kubernetes, moves you into the practical and real-world area of microservices configuration. Also, we will discuss Kubernetes-specific options and, in particular, ConfigMaps.

Chapter 6, Securing Microservices on Kubernetes, examines how to secure your microservices on Kubernetes in depth. We will also discuss the pillars that act as the foundation of microservice security on Kubernetes.

Chapter 7, Talking to the World – APIs and Load Balancers, sees us open Delinkcious to the world and let users interact with it from outside the cluster. Also, we will add a gRPC-based news service that users can hit up to get news about other users they follow. Finally, we will add a message queue that lets services communicate in a loosely coupled manner.

Chapter 8, Working with Stateful Services, delves into the Kubernetes storage model. We will also extend the Delinkcious news service to store its data in Redis, instead of in memory.

Chapter 9, Running Serverless Tasks on Kubernetes, dives into one of the hottest trends in cloud-native systems: serverless computing (also known as Function as a Service, or FaaS). Also, we'll cover other ways to do serverless computing in Kubernetes.

Chapter 10, Testing Microservices, covers the topic of testing and its various flavors: unit testing, integration testing, and all kinds of end-to-end testing. We also delve into how Delinkcious tests are structured.

Chapter 11, Deploying Microservices, deals with two related, yet separate, themes: production deployments and development deployments. 

Chapter 12, Monitoring, Logging, and Metrics, focuses on the operational side of running a large-scale distributed system on Kubernetes, as well as on how to design the system and what to take into account to ensure a top-notch operational posture. 

Chapter 13, Service Mesh – Working with Istio, reviews the hot topic of service meshes and, in particular, Istio. This is exciting because service meshes are a real game changer.

Chapter 14, The Future of Microservices and Kubernetes, covers the topics of Kubernetes and microservices, and will help us learn how to decide when it's the right time to adopt and invest in newer technologies.

To get the most out of this book

Any software requirements are either listed at the beginning of each chapter in the Technical requirements section, or, if the installation of a particular piece of software is part of the material of the chapter, then any instructions you need will be contained within the chapter itself. Most of the installations are software components that are installed into the Kubernetes cluster. This is an important part of the hands-on nature of the book. 

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

Log in or register at www.packt.com.

Select the SUPPORT tab.

Click on Code Downloads & Errata.

Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Microservices-with-Kubernetes. In case there's an update to the code, it will be updated on the existing GitHub repository.
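If you prefer working from the command line, you can fetch the same code bundle directly from the GitHub repository mentioned above. This is a minimal sketch that assumes Git is installed; the directory name simply follows the repository name:

```shell
# Clone the book's code bundle from GitHub
git clone https://github.com/PacktPublishing/Hands-On-Microservices-with-Kubernetes.git

# Enter the repository; chapter code lives in per-chapter subdirectories
cd Hands-On-Microservices-with-Kubernetes
```

Cloning (rather than downloading a ZIP) also lets you pull any future updates to the code with a simple `git pull`.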

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781789805468_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Note that I made sure it's executable via chmod +x."

A block of code is set as follows:

version: 2
jobs:
  build:
    docker:
      - image: circleci/golang:1.11
      - image: circleci/postgres:9.6-alpine

Any command-line input or output is written as follows:

$ tree -L 2

.

├── LICENSE

├── README.md

├── build.sh

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "We can sync it by selecting Sync from the ACTIONS dropdown."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Introduction to Kubernetes for Developers

In this chapter, we will introduce you to Kubernetes. Kubernetes is a big platform and it's difficult to do justice to it in just one chapter. Luckily, we have a whole book to explore it. Don't worry if you feel a little overwhelmed. I'll mention many concepts and capabilities briefly. In later chapters, we will cover many of these in detail, as well as the connections and interactions between those Kubernetes concepts. To spice things up and get hands-on early, you will also create a local Kubernetes cluster (Minikube) on your machine. This chapter will cover the following topics:

Kubernetes in a nutshell

The Kubernetes architecture

Kubernetes and microservices

Creating a local cluster

Technical requirements

In this chapter, you will need the following tools:

Docker

Kubectl

Minikube

Installing Docker

To install Docker, follow the instructions here: https://docs.docker.com/install/#supported-platforms. I will use Docker for macOS.

Installing kubectl

To install kubectl, follow the instructions here: https://kubernetes.io/docs/tasks/tools/install-kubectl/.

Kubectl is the Kubernetes CLI and we will use it extensively throughout the book.

Installing Minikube

To install Minikube, follow the instructions here: https://kubernetes.io/docs/tasks/tools/install-minikube/.

Note that you need to install a hypervisor too. On macOS, I find VirtualBox the most reliable. You may prefer another hypervisor, such as HyperKit. There will be more detailed instructions later when you get to play with Minikube.

The code

The code for the chapter is available here: 

https://github.com/PacktPublishing/Hands-On-Microservices-with-Kubernetes/tree/master/Chapter01

There is another Git repository for the Delinkcious sample application that we will build together: 

https://github.com/the-gigi/delinkcious

Kubernetes in a nutshell

In this section, you'll get a sense of what Kubernetes is all about, its history, and how it became so popular.

Kubernetes – the container orchestration platform

The primary function of Kubernetes is deploying and managing a large number of container-based workloads on a fleet of machines (physical or virtual). This means that Kubernetes provides the means to deploy containers to the cluster. It makes sure to comply with various scheduling constraints and pack the containers efficiently into the cluster nodes. In addition, Kubernetes automatically watches your containers and restarts them if they fail. Kubernetes will also relocate workloads off problematic nodes to other nodes. Kubernetes is an extremely flexible platform. It relies on a provisioned infrastructure layer of compute, memory, storage, and networking, and, with these resources, it works its magic.

The history of Kubernetes

Kubernetes and the entire cloud-native scene is moving at breakneck speed, but let's take a moment to reflect on how we got here. It will be a very short journey because Kubernetes came out of Google in June 2014, just a few years ago. When Docker became popular, it changed how people package, distribute, and deploy software. But, it soon became apparent that Docker doesn't scale on its own for large distributed systems. A few orchestration solutions became available, such as Apache Mesos, and later, Docker's own Swarm. But, they never measured up to Kubernetes. Kubernetes was conceptually based on Google's Borg system. It brought together the design and technical excellence of a decade of Google engineering, but it was a new open source project. At OSCON 2015, Kubernetes 1.0 was released and the floodgates opened. The growth of Kubernetes, its ecosystem, and the community behind it, was as impressive as its technical excellence.

Kubernetes means helmsman in Greek. You'll notice many nautical terms in the names of Kubernetes-related projects.

The state of Kubernetes

Kubernetes is now a household name. The DevOps world pretty much equates container orchestration with Kubernetes. All major cloud providers offer managed Kubernetes solutions. It is ubiquitous in enterprise and in startup companies. While Kubernetes is still young and innovation keeps happening, it is all happening in a very healthy way. The core is rock solid, battle tested, and used in production across lots and lots of companies. There are very big players collaborating and pushing Kubernetes forward, such as Google (obviously), Microsoft, Amazon, IBM, and VMware.

The Cloud Native Computing Foundation (CNCF) open source organization offers certification. Every 3 months, a new Kubernetes release comes out, which is the result of a collaboration between hundreds of volunteers and paid engineers. There is a large ecosystem surrounding the main project of both commercial and open source projects. You will see later how Kubernetes' flexible and extensible design encourages this ecosystem and helps in integrating Kubernetes into any cloud platform.

Understanding the Kubernetes architecture

Kubernetes is a marvel of software engineering. The architecture and design of Kubernetes are a big part of its success. Each cluster has a control plane and a data plane. The control plane consists of several components, such as an API server, a metadata store for keeping the state of the cluster, and multiple controllers that are responsible for managing the nodes in the data plane and providing access to users. The control plane in production will be distributed across multiple machines for high availability and robustness. The data plane consists of multiple nodes, or workers. The control plane will deploy and run your pods (groups of containers) on these nodes, and then watch for changes and respond.

Here is a diagram that illustrates the overall architecture:

Let's review in detail the control plane and the data plane, as well as kubectl, which is the command-line tool you use to interact with the Kubernetes cluster.

The control plane

The control plane consists of several components:

API server

The etcd metadata store

Scheduler

Controller manager

Cloud controller manager

Let's examine the role of each component.

The API server

The kube-apiserver is a massive REST server that exposes the Kubernetes API to the world. You can have multiple instances of the API server in your control plane for high availability. The API server keeps the cluster state in etcd.

The etcd store

The complete cluster state is stored in etcd (https://coreos.com/etcd/), a consistent, reliable, distributed key-value store. The etcd store is an open source project (originally developed by CoreOS).

It is common to have three or five instances of etcd for redundancy. If you lose the data in your etcd store, you lose your cluster.

The scheduler

The kube-scheduler is responsible for scheduling pods to worker nodes. It implements a sophisticated scheduling algorithm that takes a lot of information into account, such as resource availability on each node, various constraints specified by the user, types of available nodes, resource limits and quotas, and other factors, such as affinity, anti-affinity, tolerations, and taints.

The controller manager

The kube-controller-manager is a single process that contains multiple controllers for simplicity. These controllers watch for events and changes to the cluster and respond accordingly:

Node controller: Responsible for noticing and responding when nodes go down.

Replication controller: This makes sure that there is the correct number of pods for each ReplicaSet or replication controller object.

Endpoints controller: This assigns to each service an endpoints object that lists the service's pods.

Service account and token controllers: These initialize new namespaces with default service accounts and corresponding API access tokens.

The data plane

The data plane is the collection of the nodes in the cluster that run your containerized workloads as pods. The data plane and control plane can share physical or virtual machines. This happens, of course, when you run a single node cluster, such as Minikube. But, typically, in a production-ready deployment, the data plane will have its own nodes. There are several components that Kubernetes installs on each node in order to communicate, watch, and schedule pods: kubelet, kube-proxy, and the container runtime (for example, the Docker daemon).

The kubelet

The kubelet is a Kubernetes agent. It's responsible for talking to the API server and for running and managing the pods on the node. Here are some of the responsibilities of the kubelet:

Downloading pod secrets from the API server

Mounting volumes

Running the pod containers via the Container Runtime Interface (CRI)

Reporting the status of the node and each pod

Probing container liveness

The kube proxy

The kube proxy is responsible for the networking aspects of the node. It operates as a local front for services and can forward TCP and UDP packets. It discovers the IP addresses of services via DNS or environment variables.

The container runtime

Kubernetes eventually runs containers, even if they are organized in pods. Kubernetes supports different container runtimes. Originally, only Docker was supported. Now, Kubernetes runs containers through an interface called CRI, which is based on gRPC. 

Each container runtime that implements CRI can be used on a node controlled by the kubelet, as shown in the preceding diagram.

Kubectl

Kubectl is a tool you should get very comfortable with. It is your command-line interface (CLI) to your Kubernetes cluster. We will use kubectl extensively throughout the book to manage and operate Kubernetes. Here is a short list of the capabilities kubectl puts literally at your fingertips:

Cluster management

Deployment

Troubleshooting and debugging

Resource management (Kubernetes objects)

Configuration and metadata

Just type kubectl to get a complete list of all the commands and kubectl <command> --help for more detailed info on specific commands.

Kubernetes and microservices – a perfect match

Kubernetes is a fantastic platform with amazing capabilities and a wonderful ecosystem. How does it help you with your system? As you'll see, there is a very good alignment between Kubernetes and microservices. The building blocks of Kubernetes, such as namespaces, pods, deployments, and services, map directly to important microservices concepts and an agile software development life cycle (SDLC). Let's dive in.

Packaging and deploying microservices

When you employ a microservice-based architecture, you'll have lots of microservices. Those microservices, in general, may be developed independently, and deployed independently. The packaging mechanism is simply containers. Every microservice you develop will have a Dockerfile. The resulting image represents the deployment unit for that microservice. In Kubernetes, your microservice image will run inside a pod (possibly alongside other containers). But an isolated pod, running on a node, is not very resilient. The kubelet on the node will restart the pod's container if it crashes, but if something happens to the node itself, the pod is gone. Kubernetes has abstractions and resources that build on the pod.

ReplicaSets are sets of pods with a certain number of replicas. When you create a ReplicaSet, Kubernetes will make sure that the correct number of pods you specify always run in the cluster. The deployment resource takes it a step further and provides an abstraction that exactly aligns with the way you consider and think about microservices. When you have a new version of a microservice ready, you will want to deploy it. Here is a Kubernetes deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

The file can be found at https://github.com/the-gigi/hands-on-microservices-with-kubernetes-code/blob/master/ch1/nginx-deployment.yaml.

This is a YAML file (https://yaml.org/) that has some fields that are common to all Kubernetes resources, and some fields that are specific to deployments. Let's break this down piece by piece. Almost everything you learn here will apply to other resources:

The apiVersion field marks the Kubernetes resource version. A specific version of the Kubernetes API server (for example, v1.13.0) can work with different versions of different resources. Resource versions have two parts: an API group (in this case, apps) and a version number (v1). The version number may include alpha or beta designations:

apiVersion: apps/v1

The kind field specifies what resource or API object we are dealing with. You will meet many kinds of resources in this chapter and later:

kind: Deployment

The metadata section contains the name of the resource (nginx) and a set of labels, which are just key-value string pairs. The name is used to refer to this particular resource. The labels allow for operating on a set of resources that share the same label. Labels are very useful and flexible. In this case, there is just one label (app: nginx):

metadata:
  name: nginx
  labels:
    app: nginx

Next, we have a spec field. This is a ReplicaSet spec. You could create a ReplicaSet directly, but it would be static. The whole purpose of a deployment is to manage its set of replicas. What's in a ReplicaSet spec? Obviously, it contains the number of replicas (3). It has a selector with a set of matchLabels (also app: nginx), and it has a pod template. The ReplicaSet will manage pods that have labels that match matchLabels:

spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    ...

Let's have a look at the pod template. The template has two parts: metadata and a spec. The metadata is where you specify the labels. The spec describes the containers in the pod. There may be one or more containers in a pod. In this case, there is just one container. The key field for a container is the image (often a Docker image), where you packaged your microservice. That's the code we want to run. There is also a name (nginx) and a set of ports:

metadata:
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15.4
    ports:
    - containerPort: 80

There are more fields that are optional. If you want to dive in deeper, check out the API reference for the deployment resource at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#deployment-v1-apps.

Exposing and discovering microservices

We deployed our microservice with a deployment. Now, we need to expose it, so that it can be used by other services in the cluster and possibly also made visible outside the cluster. Kubernetes provides the Service resource for that purpose. Kubernetes services are backed by pods, identified by labels:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

Services discover each other inside the cluster, using DNS or environment variables. This is the default behavior. But, if you want to make a service accessible to the world, you will normally set up an ingress object or a load balancer. We will explore this topic in detail later.
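To give a flavor of external exposure, here is a minimal sketch of an ingress object that routes outside traffic to the nginx service. This is illustrative only: the host name is hypothetical, and the extensions/v1beta1 API version matches the Kubernetes 1.13 era used in this chapter (later versions use networking.k8s.io/v1), and it assumes an ingress controller is running in the cluster:

```yaml
# Sketch: route external HTTP traffic for a hypothetical host to the nginx service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com   # hypothetical domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx  # the service we created above
          servicePort: 80
```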

Securing microservices

Kubernetes was designed for running large-scale critical systems, where security is of paramount concern. Microservices are often more challenging to secure than monolithic systems because there is so much internal communication across many boundaries. Also, microservices encourage agile development, which leads to a constantly changing system. There is no steady state you can secure once and be done with it. You must constantly adapt the security of the system to the changes. Kubernetes comes prepackaged with several concepts and mechanisms for secure development, deployment, and operation of your microservices. You still need to employ best practices, such as the principle of least privilege, defense in depth, and minimizing blast radius. Here are some of the security features of Kubernetes.

Namespaces

Namespaces let you isolate different parts of your cluster from each other. You can create as many namespaces as you want and scope many resources and operations to their namespace, including limits and quotas. Pods running in a namespace can only directly access their own namespace. To access other namespaces, they must go through public APIs.
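Creating a namespace is a one-liner. Here is a minimal sketch (the staging name is just an example):

```yaml
# Sketch: a namespace for isolating, say, a staging environment.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```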

Service accounts

Service accounts provide identity to your microservices. Each service account will have certain privileges and access rights associated with its account. Service accounts are pretty simple:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-service-account

You can associate service accounts with a pod (for example, in the pod spec of a deployment) and the microservices that run inside the pod will have that identity and all the privileges and restrictions associated with that account. If you don't assign a service account, then the pod will get the default service account of its namespace. Each service account is associated with a secret used to authenticate it.
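To illustrate the association, here is a sketch of a pod spec fragment that attaches the custom-service-account from above; the surrounding deployment fields are omitted, and the container is just the nginx example from earlier:

```yaml
# Fragment of a pod spec; serviceAccountName attaches the identity to the pod.
spec:
  serviceAccountName: custom-service-account
  containers:
  - name: nginx
    image: nginx:1.15.4
```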

Secure communication

Kubernetes utilizes client-side certificates to fully authenticate both sides of any external communication (for example, kubectl). All communication to the Kubernetes API from outside should be over HTTPS. Internal cluster communication between the API server and the kubelet on the node is over HTTPS too (the kubelet endpoint). But, it doesn't use a client certificate by default (you can enable it).

Communication between the API server and nodes, pods, and services is, by default, over HTTP and is not authenticated. You can upgrade them to HTTPS, but note that the client certificate isn't checked, so don't run your worker nodes on public networks.

Network policies

In a distributed system, beyond securing each container, pod, and node, it is critical to also control communication over the network. Kubernetes supports network policies, which give you full flexibility to define and shape the traffic and access across the cluster.
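As a taste of what a network policy looks like, here is a sketch that only allows pods labeled access: allowed in the same namespace to reach the nginx pods on port 80; all the label values here are hypothetical:

```yaml
# Sketch: restrict ingress to the nginx pods to explicitly labeled clients.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow
spec:
  podSelector:
    matchLabels:
      app: nginx          # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: allowed # hypothetical label for permitted clients
    ports:
    - protocol: TCP
      port: 80
```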

Authenticating and authorizing microservices

Authentication and authorization are closely related to security, limiting access to trusted users and to specific aspects of Kubernetes. Organizations have a variety of ways to authenticate their users. Kubernetes supports many of the common authentication schemes, such as X.509 certificates, and HTTP basic authentication (not very secure), as well as an external authentication server via webhook that gives you ultimate control over the authentication process. The authentication process just matches the credentials of a request with an identity (either the original or an impersonated user). What that user is allowed to do is controlled by the authorization process. Enter RBAC.

Role-based access control

Role-based access control (RBAC) is not required! You can perform authorization using other mechanisms in Kubernetes. However, it is a best practice. RBAC is based on two concepts: role and binding. A role is a set of permissions on resources defined as rules. There are two types of roles: Role, which applies to a single namespace, and ClusterRole, which applies to all namespaces in a cluster.

Here is a role in the default namespace that allows the getting, watching, and listing of all pods. Each role has three components: API groups, resources, and verbs:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Cluster roles are very similar, except there is no namespace field because they apply to all namespaces.

A binding associates a list of subjects (users, user groups, or service accounts) with a role. There are two types of binding, RoleBinding and ClusterRoleBinding, which correspond to Role and ClusterRole:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
subjects:
- kind: User
  name: gigi # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # must be Role or ClusterRole
  name: pod-reader # must match the name of the Role or ClusterRole you bind to
  apiGroup: rbac.authorization.k8s.io

It's interesting that you can bind a ClusterRole to a subject in a single namespace. This is convenient for defining roles that should be used in multiple namespaces, once as a cluster role, and then binding them to specific subjects in specific namespaces.

The cluster role binding is similar, but must bind a cluster role and always applies to the whole cluster.
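For comparison, here is a sketch of a cluster role binding. It assumes a ClusterRole named pod-reader-cluster and a group named ops-team exist; both names are hypothetical:

```yaml
# Sketch: grant a hypothetical group cluster-wide pod-reading rights.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-global   # no namespace; applies to the whole cluster
subjects:
- kind: Group
  name: ops-team           # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole        # must be a ClusterRole
  name: pod-reader-cluster # hypothetical ClusterRole name
  apiGroup: rbac.authorization.k8s.io
```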

Note that RBAC is used to grant access to Kubernetes resources. It can regulate access to your service endpoints, but you may still need fine-grained authorization in your microservices.

Upgrading microservices

Deploying and securing microservices is just the beginning. As you develop and evolve your system, you'll need to upgrade your microservices. There are many important considerations regarding how to go about it that we will discuss later (versioning, rolling updates, blue-green, and canary deployments). Kubernetes provides direct support for many of these concepts out of the box, and the ecosystem built on top of it provides many flavors and opinionated solutions.

The goal is often zero downtime and safe rollback if a problem occurs. Kubernetes deployments provide the primitives, such as updating a deployment, pausing a roll-out, and rolling back a deployment. Specific workflows are built on these solid foundations. The mechanics of upgrading a service typically involve upgrading its image to a new version and sometimes changes to its support resources and access: volumes, roles, quotas, limits, and so on.
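To make this concrete, the rollout behavior of a deployment can be tuned declaratively. Here is a sketch of the strategy fields you could add to the nginx deployment spec from earlier to control a rolling update; the numbers are just example values:

```yaml
# Fragment of a deployment spec controlling how a rollout proceeds.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```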

Scaling microservices

There are two aspects to scaling a microservice with Kubernetes. The first aspect is scaling the number of pods backing a particular microservice. The second aspect is the total capacity of the cluster. You can easily scale a microservice explicitly by updating the number of replicas of a deployment, but that requires constant vigilance on your part. For services that have large variations in the volume of requests they handle over long periods (for example, business hours versus off hours or weekdays versus weekends), it might take a lot of effort. Kubernetes provides horizontal pod autoscaling, which is based on CPU, memory, or custom metrics, and can scale your service up and down automatically.

Here is how to scale our nginx deployment that is currently fixed at three replicas to go between 2 and 5, depending on the average CPU usage across all instances:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 2
  targetCPUUtilizationPercentage: 90
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx

The outcome is that Kubernetes will watch CPU utilization of the pods that belong to the nginx deployment. When the average CPU over a certain period of time (5 minutes, by default) exceeds 90%, it will add more replicas until the maximum of 5, or until utilization drops below 90%. The HPA can scale down too, but will always maintain a minimum of two replicas, even if the CPU utilization is zero.

Monitoring microservices

Your microservices are deployed and running on Kubernetes. You can update the version of your microservices whenever it is needed. Kubernetes takes care of healing and scaling automatically. However, you still need to monitor your system and keep track of errors and performance. This is important for addressing problems, but also for informing you on potential improvements, optimizations, and cost cutting.

There are several categories of information that are relevant and that you should monitor:

Third-party logs

Application logs

Application errors

Kubernetes events

Metrics

When considering a system composed of multiple microservices and multiple supporting components, the number of logs will be substantial. The solution is central logging, where all the logs go to a single place where you can slice and dice at your will. Errors can be logged, of course, but often it is useful to report errors with additional metadata, such as stack traces, and review them in their own dedicated environment (for example, Sentry or Rollbar). Metrics are useful for detecting performance and system health problems or trends over time.

Kubernetes provides several mechanisms and abstractions for monitoring your microservices. The ecosystem provides a number of useful projects too.

Logging

There are several ways to implement central logging with Kubernetes:

Have a logging agent that runs on every node

Inject a logging sidecar container to every application pod

Have your application send its logs directly to a central logging service

There are pros and cons to each approach. But, the main thing is that Kubernetes supports all approaches and makes container and pod logs available for consumption.

Refer to https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures for an in-depth discussion.
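As an illustration of the sidecar approach from the list above, here is a sketch of a pod in which a second container tails the application's log file and streams it to its own stdout, where node-level log collection can pick it up. The image name and file paths are hypothetical:

```yaml
# Sketch: a logging sidecar sharing a volume with the application container.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0            # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app    # the app writes its log file here
  - name: log-tailer
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app    # same volume, read by the sidecar
  volumes:
  - name: logs
    emptyDir: {}                 # shared scratch volume for the pod
```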

Metrics

Kubernetes comes with cAdvisor (https://github.com/google/cadvisor), a tool for collecting container metrics that is integrated into the kubelet binary. Kubernetes used to provide a metrics solution called Heapster that required additional backends and a UI. But, these days, the best-in-class metrics server is the open source Prometheus project. If you run Kubernetes on Google's GKE, then Google Cloud Monitoring is a great option that doesn't require additional components to be installed in your cluster. Other cloud providers also have integrations with their monitoring solutions (for example, CloudWatch on EKS).

Creating a local cluster

One of the strengths of Kubernetes as a deployment platform is that you can create a local cluster and, with relatively little effort, have a realistic environment that is very close to your production environment. The main benefit is that developers can test their microservices locally and collaborate with the rest of the services in the cluster. When your system is composed of many microservices, the more significant tests are often integration tests and even configuration and infrastructure tests, as opposed to unit tests. Kubernetes makes that kind of testing much easier and requires much less brittle mocking.

In this section, you will install a local Kubernetes cluster and some additional projects, and then have some fun exploring it using the invaluable kubectl command-line tool.

Installing Minikube

Minikube is a single node Kubernetes cluster that you can install anywhere. I used macOS here, but, in the past, I used it successfully on Windows too. Before installing Minikube itself, you must install a hypervisor. I prefer HyperKit:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \

&& chmod +x docker-machine-driver-hyperkit \

&& sudo mv docker-machine-driver-hyperkit /usr/local/bin/ \

&& sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit \

&& sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit

But, I've run into trouble with HyperKit from time to time. If you can't overcome the issues, I suggest using VirtualBox as the hypervisor instead. Run the following command to install VirtualBox via Homebrew:

$ brew cask install virtualbox

Now, you can install Minikube itself. Homebrew is the best way to go again:

$ brew cask install minikube

If you're not on macOS, follow the official instructions here: https://kubernetes.io/docs/tasks/tools/install-minikube/.

You must turn off any VPN before starting Minikube with HyperKit. You can restart your VPN after Minikube has started.

Minikube supports multiple versions of Kubernetes. At the moment, the default version is 1.10.0, but 1.13.0 is already out and supported, so let's use that version:

$ minikube start --vm-driver=hyperkit --kubernetes-version=v1.13.0

If you're using VirtualBox as your hypervisor, you don't need to specify --vm-driver:

$ minikube start --kubernetes-version=v1.13.0

You should see the following:

$ minikube start --kubernetes-version=v1.13.0

Starting local Kubernetes v1.13.0 cluster...

Starting VM...

Downloading Minikube ISO

178.88 MB / 178.88 MB [============================================] 100.00% 0s

Getting VM IP address...

E0111 07:47:46.013804 18969 start.go:211] Error parsing version semver: Version string empty

Moving files into cluster...

Downloading kubeadm v1.13.0

Downloading kubelet v1.13.0

Finished Downloading kubeadm v1.13.0

Finished Downloading kubelet v1.13.0

Setting up certs...

Connecting to cluster...

Setting up kubeconfig...

Stopping extra container runtimes...

Starting cluster components...

Verifying kubelet health ...

Verifying apiserver health ...

Kubectl is now configured to use the cluster.

Loading cached images from config file.

Everything looks great. Please enjoy minikube!

Minikube will automatically download the Minikube VM (178.88 MB) if it's the first time you are starting your Minikube cluster.

At this point, your Minikube cluster is ready to go.

Troubleshooting Minikube

If you run into some trouble (for example, if you forgot to turn off your VPN), try to delete your Minikube installation and restart it with verbose logging:

$ minikube delete

$ rm -rf ~/.minikube

$ minikube start --vm-driver=hyperkit --kubernetes-version=v1.13.0 --logtostderr --v=3

If your Minikube installation just hangs (maybe waiting for SSH), you might have to reboot to unstick it. If that doesn't help, try the following:

sudo mv /var/db/dhcpd_leases /var/db/dhcpd_leases.old

sudo touch /var/db/dhcpd_leases

Then, reboot again.

Verifying your cluster

If everything is OK, you can check your Minikube version:

$ minikube version

minikube version: v0.31.0

Minikube has many other useful commands. Just type minikube to see the list of commands and flags.

Playing with your cluster

Minikube is running, so let's have some fun. Your kubectl is going to serve you well in this section. Let's start by examining our node:

$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

minikube Ready master 4m v1.13.0

Your cluster already has some pods and services running. It turns out that Kubernetes is dogfooding and many of its own services are plain services and pods. But, those pods and services run in namespaces. Here are all the namespaces:

$ kubectl get ns

NAME STATUS AGE

default Active 18m

kube-public Active 18m

kube-system Active 18m

To see all the services in all the namespaces, you can use the --all-namespaces flag:

$ kubectl get svc --all-namespaces

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m

kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 19m

kube-system kubernetes-dashboard ClusterIP 10.111.39.46 <none> 80/TCP 18m

The Kubernetes API server itself is running as a service in the default namespace, and then we have kube-dns and the kubernetes-dashboard running in the kube-system namespace.

To explore the dashboard, you can run the dedicated Minikube command, minikube dashboard. You can also use kubectl, which is more universal and will work on any Kubernetes cluster:

$ kubectl port-forward deployment/kubernetes-dashboard 9090

Then, browse to http://localhost:9090 and you will see the following dashboard:

Installing Helm

Helm is the Kubernetes package manager. It doesn't come with Kubernetes, so you have to install it. Helm has two components: a server-side component called tiller, and a CLI called helm.

Let's install helm locally first, using Homebrew:

$ brew install kubernetes-helm

Then, initialize both the client and the server-side component:

$ helm init

$HELM_HOME has been configured at /Users/gigi.sayfan/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.

To prevent this, run `helm init` with the --tiller-tls-verify flag.

For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Happy Helming!
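You can verify that both sides are up by checking the versions and the Tiller deployment (Tiller runs as a regular deployment called tiller-deploy in the kube-system namespace):

```shell
# Report both the helm client version and the Tiller server version
helm version

# Confirm the Tiller deployment is running in kube-system
kubectl get deployment tiller-deploy -n kube-system
```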

With Helm in place, you can easily install all kinds of goodies in your Kubernetes cluster. There are currently 275 charts (the Helm term for a package) in the stable chart repository:

$ helm search | wc -l

275

For example, search for all the charts that match db:

$ helm search db

NAME CHART VERSION APP VERSION DESCRIPTION

stable/cockroachdb 2.0.6 2.1.1 CockroachDB is a scalable, survivable, strongly-consisten...

stable/hlf-couchdb 1.0.5 0.4.9 CouchDB instance for Hyperledger Fabric (these charts are...

stable/influxdb 1.0.0 1.7 Scalable datastore for metrics, events, and real-time ana...

stable/kubedb 0.1.3 0.8.0-beta.2 DEPRECATED KubeDB by AppsCode - Making running production...

stable/mariadb 5.2.3 10.1.37 Fast, reliable, scalable, and easy to use open-source rel...

stable/mongodb 4.9.1 4.0.3 NoSQL document-oriented database that stores JSON-like do...

stable/mongodb-replicaset 3.8.0 3.6 NoSQL document-oriented database that stores JSON-like do...

stable/percona-xtradb-cluster 0.6.0 5.7.19 free, fully compatible, enhanced, open source drop-in rep...

stable/prometheus-couchdb-exporter 0.1.0 1.0 A Helm chart to export the metrics from couchdb in Promet...

stable/rethinkdb 0.2.0 0.1.0 The open-source database for the realtime web

jenkins-x/cb-app-slack 0.0.1 A Slack App for CloudBees Core

stable/kapacitor 1.1.0 1.5.1 InfluxDB's native data processing engine. It can process ...

stable/lamp 0.1.5 5.7 Modular and transparent LAMP stack chart supporting PHP-F...

stable/postgresql 2.7.6 10.6.0 Chart for PostgreSQL, an object-relational database manag...

stable/phpmyadmin 2.0.0 4.8.3 phpMyAdmin is an mysql administration frontend

stable/unifi 0.2.1 5.9.29 Ubiquiti Network's Unifi Controller

We will use Helm a lot throughout the book.
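As a quick taste, here is how you might install one of those charts. This is a sketch using stable/mariadb as an example; the release name my-db is arbitrary, and the flags shown are Helm 2 syntax:

```shell
# Install the stable/mariadb chart under the release name my-db
helm install --name my-db stable/mariadb

# List the installed releases
helm ls

# Remove the release and all its resources when you're done
helm delete --purge my-db
```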

Summary