Transform yourself into a Kubernetes specialist in serverless applications.
Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. It has been adopted by companies such as Google, its original developer, and Microsoft as an integral part of their public cloud platforms, meaning you can develop for Kubernetes without worrying about being locked into a single vendor.
This book starts by introducing serverless functions. Then you will configure tools such as Minikube to run Kubernetes. Once you are up and running, you will install and configure Kubeless, your first step towards running Functions as a Service (FaaS) on Kubernetes. You will then gradually move on to Fission, a framework used for managing serverless functions in Kubernetes environments. Towards the end of the book, you will also work with Kubernetes functions on public and private clouds.
By the end of this book, you will have mastered using Functions as a Service in Kubernetes environments.
If you are a DevOps engineer, cloud architect, or a stakeholder keen to learn about serverless functions in Kubernetes environments, then this book is for you.
Russ McKendrick is an experienced solution architect who has been working in IT and related industries for over 25 years. During his career, he has had varied responsibilities, from looking after an entire IT infrastructure to providing first-line, second-line, and senior support in both client-facing and internal teams for large organizations. Russ supports open source systems and tools on public and private clouds at Node4 Limited, where he heads up the Open Source Solutions team.
Page count: 257
Year of publication: 2018
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Gebin George
Acquisition Editor: Prateek Bharadwaj
Content Development Editor: Nithin Varghese
Technical Editor: Khushbu Sutar
Copy Editor: Safis Editing
Project Coordinator: Virginia Dias
Proofreader: Safis Editing
Indexer: Francy Puthiry
Graphics: Tania Dutta
Production Coordinator: Aparna Bhagat
First published: January 2018
Production reference: 1170118
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78862-037-6
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Russ McKendrick is an experienced solution architect who has been working in IT and related industries for over 25 years. During his career, he has had varied responsibilities, from looking after an entire IT infrastructure to providing first-line, second-line, and senior support in both client-facing and internal teams for large organizations.
Russ supports open source systems and tools on public and private clouds at Node4 Limited, where he heads up the Open Source Solutions team.
Paul Adamson has worked as an Ops engineer, a developer, a DevOps Engineer, and all variations and mixes of all of these. When not reviewing this book, Paul keeps busy helping companies embrace the AWS infrastructure. His language of choice is PHP for all the good reasons and even some of the bad.
Jeeva S. Chelladhurai has been working as a DevOps specialist at the IBM GTS Labs for the last 10 years. He is the coauthor of Learning Docker by Packt Publishing. He has more than 20 years of IT industry experience and has technically managed and mentored diverse teams across the globe in envisaging and building pioneering telecommunication products. He is also a strong proponent of the agile methodologies, DevOps, and IT automation. He holds a master's degree in computer science from Manonmaniam Sundaranar University and a graduation certificate in project management from Boston University.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
The Serverless Landscape
Serverless and Functions as a Service
Pets, cattle, chickens, insects, and snowflakes
Pets
Cattle
Chickens
Insects
Snowflakes
Summing up
Serverless and insects
Public cloud offerings
AWS Lambda
Prerequisites
Creating a Lambda function
Microsoft Azure Functions
Prerequisites
Creating a Function app
The serverless toolkit
Problems solved by serverless and Functions as a Service
Summary
An Introduction to Kubernetes
A brief history of Kubernetes
Control groups
lmctfy
Borg
Project Seven
An overview of Kubernetes
Components
Pods and services
Workloads
ReplicaSet
Deployments
StatefulSets
Kubernetes use cases
References
Summary
Installing Kubernetes Locally
About Minikube
Installing Minikube
macOS 10.13 High Sierra
Windows 10 Professional
Ubuntu 17.04
Hypervisors
Starting Minikube
Minikube commands
Stop and delete
Environment
Virtual machine access and logs
Hello world
The dashboard
The command line
References
Summary
Introducing Kubeless Functioning
Installing Kubeless
The Kubeless Kubernetes cluster
The command-line client
macOS 10.13 High Sierra
Windows 10 Professional
Ubuntu 17.04
The Kubeless web interface
Kubeless overview
So what is Kubeless?
Who made Kubeless?
Kubeless commands
Hello world
The basic example
An example of reading data
Twitter example
The Twitter API
Adding secrets to Kubernetes
The Twitter function
The Kubeless serverless plugin
Summary
Using Funktion for Serverless Applications
Introducing Funktion
Installing and configuring Funktion
The command-line client
macOS 10.13 High Sierra
Windows 10 Professional
Ubuntu 17.04
Launching a single-node Kubernetes cluster
Bootstrapping Funktion
Deploying a simple function
Twitter streams
Summary
Installing Kubernetes in the Cloud
Launching Kubernetes in DigitalOcean
Creating Droplets
Deploying Kubernetes using kubeadm
Removing the cluster
Launching Kubernetes in AWS
Getting set up
Launching the cluster using kube-aws
The Sock Shop
Removing the cluster
Launching Kubernetes in Microsoft Azure
Preparing the Azure command-line tools
Launching the AKS cluster
The Sock Shop
Removing the cluster
Launching Kubernetes on the Google Cloud Platform
Installing the command-line tools
Launching the Google container cluster
The Sock Shop
Running Kubeless
Removing the cluster
Summary
Apache OpenWhisk and Kubernetes
Apache OpenWhisk overview
Running Apache OpenWhisk locally
Installing Vagrant
Downloading and configuring Apache OpenWhisk
Installing the Apache OpenWhisk client
Hello world
Running Apache OpenWhisk on Kubernetes
Deploying OpenWhisk
CouchDB
Redis
API Gateway
ZooKeeper
Kafka
Controller
Invoker
NGINX
Configuring OpenWhisk
Hello world
Summary
Launching Applications Using Fission
Fission overview
Installing the prerequisites
Installing Helm
Installing the Fission CLI
Running Fission locally
Launching Fission using Helm
Working through the output
Launching our first function
A guestbook
Fission commands
The fission function command
The create command
The get option
The list and getmeta commands
The logs command
The update command
The delete command
The fission environment command
The create command
The list and get command
The delete command
Running Fission in the cloud
Launching the Kubernetes cluster
Installing Fission
The guestbook
Some more examples
Weather
Slack
Whales
Summary
Looking at OpenFaaS
An introduction to OpenFaaS
Running OpenFaaS locally
The OpenFaaS command-line client
Docker
Starting the Minikube cluster
Installing OpenFaaS using Helm
Hello world!
The OpenFaaS UI and store
Prometheus
Summary
Serverless Considerations
Security best practices
Securing Kubernetes
Securing serverless services
OpenFaaS
Kubeless
Funktion
Apache OpenWhisk
Fission
Conclusions
Monitoring Kubernetes
The dashboard
Google Cloud
Microsoft Azure
Summary
Running Serverless Workloads
Evolving software and platforms
Kubernetes
Serverless tools
Kubeless
Apache OpenWhisk
Fission
OpenFaaS
Funktion
Future developments
Why Functions as a Service on Kubernetes
Fixed points
Databases
Storage
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
Kubernetes has been one of the standout technologies of the last few years; it has been adopted as a container clustering and orchestration platform by all the major public cloud providers, and it has quickly become the standard across the industry.
Add to this that Kubernetes is open source, and you have the perfect base for hosting your own Platform as a Service (PaaS) across multiple public and private providers; you can even run it on a laptop and, due to its design, you will get a consistent experience across all of your platforms.
Its design also makes it the perfect platform for running serverless functions. In this book, we will look at several platforms that can be both deployed on and integrated with Kubernetes, meaning that you will have not only a PaaS but also a robust Functions as a Service platform running in your Kubernetes environment.
This book is primarily for operations engineers, cloud architects, and developers who want to host their serverless functions on a Kubernetes cluster.
Chapter 1, The Serverless Landscape, explains what is meant by serverless. Also, we will get some practical experience of running serverless functions on public clouds using AWS Lambda and Azure Functions.
Chapter 2, An Introduction to Kubernetes, discusses what Kubernetes is, what problems it solves, and also takes a look at its backstory, from internal engineering tool at Google to an open source powerhouse.
Chapter 3, Installing Kubernetes Locally, explains how to get hands-on experience with Kubernetes. We will install a local single node Kubernetes cluster using Minikube and interact with it using the command-line client.
Chapter 4, Introducing Kubeless Functioning, explains how to launch your first serverless function using Kubeless once Kubernetes is up and running locally.
Chapter 5, Using Funktion for Serverless Applications, explains the use of Funktion for a slightly different take on calling serverless functions.
Chapter 6, Installing Kubernetes in the Cloud, covers launching a cluster in DigitalOcean, AWS, Google Cloud, and Microsoft Azure after getting some hands-on experience using Kubernetes locally.
Chapter 7, Apache OpenWhisk and Kubernetes, explains how to launch, configure, and use Apache OpenWhisk, the serverless platform originally developed by IBM, using our newly launched cloud Kubernetes cluster.
Chapter 8, Launching Applications Using Fission, covers the deploying of Fission, the popular serverless framework for Kubernetes, along with a few example functions.
Chapter 9, Looking at OpenFaaS, covers OpenFaaS. While it's, first and foremost, a Functions as a Service framework for Docker, it is also possible to deploy it on top of Kubernetes.
Chapter 10, Serverless Considerations, discusses security best practices along with how you can monitor your Kubernetes cluster.
Chapter 11, Running Serverless Workloads, explains how quickly the Kubernetes ecosystem is evolving and how you can keep up. We also discuss which tools you should use, and why you would want your serverless functions on Kubernetes.
Operating Systems:
macOS High Sierra
Ubuntu 17.04
Windows 10 Professional
Software: We will be installing several command-line tools throughout this book; each tool's installation instructions and requirements are detailed in the relevant chapters. Note that while instructions for Windows systems are provided, many of the tools we will be using were originally designed to run primarily on Linux/Unix-based systems such as Ubuntu 17.04 and macOS High Sierra, and the book will favor these systems. While every effort has been made at the time of writing to verify that the tools work on Windows-based systems, some of them are experimental builds, and we cannot guarantee that they will continue to work on updated systems. Because of this, I would recommend using either a Linux- or Unix-based system.
Hardware:
Windows 10 Professional and Ubuntu 17.04 system requirements:
Systems using processors (CPUs) launched in 2011 or later with a 1.3 GHz or faster core speed, except Intel Atom processors or AMD processors based on the Llano and Bobcat micro-architectures
4 GB RAM minimum, with 8 GB RAM or more recommended
Apple Mac system requirements:
iMac: Late 2009 or newer
MacBook/MacBook (Retina): Late 2009 or newer
MacBook Pro: Mid-2010 or newer
MacBook Air: Late 2010 or newer
Mac mini: Mid-2010 or newer
Mac Pro: Mid-2010 or newer
Access to at least one of the following public cloud services:
AWS: https://aws.amazon.com/
Google Cloud: https://cloud.google.com/
Microsoft Azure: https://azure.microsoft.com/
DigitalOcean: https://www.digitalocean.com/
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Log in or register at www.packtpub.com.
Select the SUPPORT tab.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Kubernetes-for-Serverless-Applications. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/KubernetesforServerlessApplications_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "This contains a single file called index.html."
A block of code is set as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cli-hello-world
  labels:
    app: nginx
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cli-hello-world
  labels:
    app: nginx
Any command-line input or output is written as follows:
$ brew cask install minikube
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "At the bottom of the page, you will have a button that allows you to create an Access Token and Access Token Secret for your account."
Feedback from our readers is always welcome.
General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
Welcome to the first chapter of Kubernetes for Serverless Applications. In this chapter, we are going to be looking at and discussing the following:
What do we mean by serverless and Functions as a Service?
What services are out there?
An example of Lambda by Amazon Web Services
An example of Azure Functions
Using the serverless toolkit
What problems can we solve using serverless and Functions as a Service?
I think it is important we start by addressing the elephant in the room, and that is the term serverless.
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers.
This can be quite a valid conclusion if you are using one of the public cloud services we will be discussing later in this chapter. However, when it comes to running in your own environment, you can't avoid having to run on a server of some sort.
Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot as this is quite an easy way to explain the differences in modern cloud infrastructures versus a more traditional approach.
I first came across the pets versus cattle analogy back in 2012, in a slide deck published by Randy Bias. The deck was used during a talk Randy gave at the cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which he attributes to Bill Baker, who at the time was an engineer at Microsoft.
The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.
Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:
We name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com.
When your pets are ill, you take them to the vet. This is much like a system administrator rebooting a server, checking the logs, and replacing faulty components to ensure that it is running healthily.
You pay close attention to your pets for years, much as you would a server: you monitor for issues, patch them, back them up, and ensure they are fully documented.
There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them—this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred servers.
Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.
You have so many cattle in your herd that you don't name them; instead, they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
When a member of your herd gets sick, you shoot it, and if your herd requires it you replace it. In much the same way, if an instance in your cluster starts to have issues it is automatically terminated and replaced with a replica.
You do not expect the cattle in your herd to live as long as a pet typically would, likewise you do not expect your instances to have an uptime measured in years.
Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of the cluster. If your cluster requires additional resources, you launch more instances, and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.
In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:
Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster as you can launch multiple containers per instance.
Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances, they take seconds to launch, and can be configured to consume less CPU and RAM.
Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.
Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace that introduced insects, a term used to describe serverless and Functions as a Service.
Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service as these have a lifespan of seconds.
Later in this chapter, we will be looking at public cloud services from AWS and Microsoft Azure which are billed in milliseconds, rather than hours or minutes.
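To make that billing model concrete, the following short calculation sketches how a duration-based charge might be worked out. The per-GB-second rate used here is purely illustrative, not a quoted price from AWS, Azure, or any other provider:

```python
# Illustrative cost model for duration-based billing; the rate below is a
# made-up example, not a real price list.
def invocation_cost(memory_mb, duration_ms, rate_per_gb_second=0.0000166667):
    """Cost of one invocation, billed by memory allocated and time used."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * rate_per_gb_second

# One million 200 ms runs of a function allocated 128 MB of RAM:
total = 1_000_000 * invocation_cost(128, 200)
print(f"${total:.2f}")  # well under a dollar
```

The point of the sketch is the shape of the maths: because you pay only while a function is actually executing, a workload of a million short-lived invocations can cost less than keeping a single small server running for the same period.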
Around the time Randy Bias gave his talk that mentioned pets versus cattle, Martin Fowler wrote a blog post titled Snowflake Server. The post described every system administrator's worst nightmare:
Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago.
Snowflakes are delicate. Again, just like that one server—you dread it when you have to log in to it to diagnose a problem and you would never dream of rebooting it as it may never come back up.
Once I have explained pets, cattle, chickens, insects, and snowflakes, I sum up by saying:
Then finally I say this:
In this book, we will be discussing insects, and I will assume that you know a little about the services and concepts that cover cattle and chickens.
As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model.
When executing this model you, as the end user, do not need to worry about which server your code is executed on as all of the decisions on placement, server management, and capacity are abstracted away from you—it does not mean that you literally do not need any servers.
There are some public cloud offerings that abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services; the cloud provider manages the compute resources needed to execute your code.
Typically these services, which we will look at in the next section, are billed for the resources used to execute your code in per second increments.
So how does that explanation fit in with the insect analogy?
Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site.
In the pets and cattle world, this would be handled by a server which is powered on 24/7 waiting for users to upload images. Now this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, then this will cause load issues on the server where this function is being executed.
We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well; they will be watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.
Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
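As a rough sketch of that insect-style flow, the following Python function shows the shape of an event-triggered handler. The event format, the handler signature, and the thumbnail sizes here are all hypothetical, and the actual image processing is left out; a real platform such as AWS Lambda defines its own event structure and would call into an imaging library:

```python
# A minimal sketch of an event-triggered image function; the event shape and
# names are hypothetical, and the actual cropping/resizing is omitted.
THUMBNAIL_SIZES = [(100, 100), (320, 240), (640, 480)]

def derived_keys(key, sizes=THUMBNAIL_SIZES):
    """Work out the storage keys for each resized copy of an upload."""
    stem, _, ext = key.rpartition(".")
    return [f"{stem}_{w}x{h}.{ext}" for w, h in sizes]

def handler(event, context=None):
    # Triggered once per upload; it runs, writes its output, and terminates.
    key = event["object_key"]
    # Real code would fetch the image, resize it with an imaging library,
    # and save each copy; here we just return the keys it would write.
    return derived_keys(key)

print(handler({"object_key": "uploads/cat.jpg"}))
```

Nothing sits idle between uploads: the function exists only for the seconds it takes to process one event, which is exactly what makes the insect analogy fit.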
Before we delve into the core subject of this book and start working with Kubernetes, we should have a look at the alternatives; after all, the services we are going to be covering in upcoming chapters are nearly all loosely based on these services.
The three main public cloud providers all provide a serverless service:
AWS Lambda from AWS (https://aws.amazon.com/lambda/)
Azure Functions by Microsoft (https://azure.microsoft.com/en-gb/services/functions/)
Cloud Functions from Google (https://cloud.google.com/functions/)
Each of these services has the support of several different code frameworks. For the purposes of this book, we will not be looking at the code frameworks in too much detail, as using them is a design decision that has to be based on your code.
We are going to be looking at two of these services, Lambda from AWS and Functions by Microsoft Azure.
The first service we are going to look at is Lambda from AWS. The tagline for the service is quite a simple one:
Now those of you who have used AWS before might be thinking the tagline makes it sound a lot like the AWS Elastic Beanstalk service. This service inspects your code base and then deploys it in a highly scalable and redundant configuration. Typically, this is the first step for most people in moving from pets to cattle as it abstracts away the configuration of the AWS services which provide the scalability and high availability.
Before we work through launching a hello world example, which we will be doing for all of the services, we will need an AWS account and its command-line tools installed.
First of all, you need an AWS account. If you don't have an account, you can sign up for an account at https://aws.amazon.com/:
While clicking on the Create a Free Account button and then following the onscreen instructions will give you 12 months' free access to several services, you will still need to provide credit or debit card details, and it is possible that you could incur costs.
Once you have your AWS account, you should create a user using the AWS Identity and Access Management (IAM) service. This user can have administrator privileges and you should use that user to access both the AWS Console and the API.