Edge computing is a way of processing information near the source of the data instead of processing it in data centers in the cloud. By processing data closer to where it is generated, edge computing reduces latency and improves the user experience of applications that rely on real-time data visualization. Using K3s, a lightweight Kubernetes distribution, and k3OS, a K3s-based Linux distribution, along with other open source cloud native technologies, you can build reliable edge computing systems without spending a lot of money.
In this book, you will learn how to design edge computing systems with containers and edge devices using sensors, GPS modules, Wi-Fi, LoRa communication, and more. You will also get to grips with common edge computing use cases, such as updating your applications using GitOps and reading data from sensors and storing it in SQL and NoSQL databases. Later chapters show you how to connect hardware to your edge clusters, make predictions using machine learning, and analyze images with computer vision. All the examples and use cases in this book are designed to run on devices with 64-bit ARM processors, using Raspberry Pi devices as an example.
By the end of this book, you will be able to use these chapters as building blocks to create your own edge computing system.
A use case guide for building edge systems using K3s, k3OS, and open source cloud native technologies
Sergio Méndez
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Rahul Nair
Publishing Product Manager: Preet Ahuja
Content Development Editor: Nihar Kapadia
Technical Editor: Shruthi Shetty
Copy Editor: Safis Editing
Project Coordinator: Ashwin Dinesh Kharwa
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Production Designer: Prashant Ghare
Senior Marketing Coordinator: Nimisha Dua
First published: October 2022
Production reference: 1280922
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80056-859-4
www.packt.com
To my mother, Chusita, and my father, Arnaldo, my friends in the cloud native ecosystem, my colleagues at Yalo, and my students at USAC University, who have motivated and supported me throughout the process of writing this book.
Also, I would like to thank the Packt editors, who worked with me to ensure high-quality content.
Sergio Méndez is a systems engineer and professor of operating systems in Guatemala at USAC University. His work at the university is related to teaching and researching cloud native technologies. He has experience working on DevOps and MLOps, using open source technologies at work. He is involved with several open source communities, including CNCF communities, promoting students in the CNCF ecosystem, and he hosts a cloud native meetup in Guatemala. He has been a speaker at several conferences, such as OSCON, KubeCon, WTF is Cloud Native?, and Kubernetes Community Days. He is also a Linkerd Ambassador.
I’d like to thank the team at Packt for giving me the opportunity to write my first book about something that has been wholly enjoyable. Most thanks, however, go to my reviewers and friends, Tiffany Jachja and Santiago Torres, for supporting me by reviewing this book during my busy professional life.
Santiago Torres-Arias is an assistant professor at Purdue University’s School of Electrical and Computer Engineering. His interests include binary analysis, cryptography, distributed systems, and security-oriented software engineering. His current research focuses on securing the software development life cycle, cloud security, and update systems. Santiago is a member of the Arch Linux security team and has contributed patches to F/OSS projects of varying scale, including Git, the Linux kernel, Reproducible Builds, NeoMutt, and the Briar project. Santiago is also a maintainer of the Cloud Native Computing Foundation’s project The Update Framework (TUF), as well as the lead of the in-toto project.
I’d like to thank the broader CNCF community for encouraging engagement from various perspectives and walks of life. In particular, I’d like to thank the leads of TAG-Security, as well as the Supply Chain Security Workgroup for all their input and feedback throughout the years. Outside of CNCF, I’d like to thank my colleagues and students at Purdue University for fostering a welcoming and truth-seeking environment.
Tiffany Jachja is an accomplished writer, speaker, and technologist, helping teams and other technologists deliver their best work. She brings her experiences in DevOps and cloud native application development to the data science field as an engineering leader. In her tenure within technology, she’s led the successful delivery of technologies across various spaces and industries, including academia, government, finance, enterprise, start-ups, and media. She now helps people worldwide deliver their best work to create the success, recognition, and wealth they desire.
Edge computing consists of processing data near the source where it is generated. In order to build an edge computing system, you must understand the different layers and components that an edge system uses to process information. Using K3s, a lightweight Kubernetes distribution, you can take advantage of containers to design distributed systems and automate how your applications are updated. This book gives you all the necessary tools to create your own edge system by teaching the basics of edge computing and walking through different use cases. By the end of this book, you will understand how to implement your own edge computing system that uses containers with K3s for your Kubernetes clusters, along with cloud native open source software.
This book is for operations or DevOps engineers looking to move their data processing tasks to the edge, or for engineers who want to implement an edge computing system but don’t have the technology background to do so. It can also be used by enthusiasts and entrepreneurs looking to implement or experiment with edge computing for different potential use case scenarios.
Chapter 1, Edge Computing with Kubernetes, explains the basic concepts of edge computing, including its components, layers, and example architectures for building these kinds of systems, and shows how to use cross-compiling techniques for Go, Rust, Python, and Java to run software at the edge on devices with ARM processors.
Chapter 2, K3s Installation and Configuration, describes what K3s is and its components, and how to install K3s in different configurations, such as single-node and multi-node clusters. It then explains advanced configurations for K3s clusters: using external storage to replace etcd, exposing applications outside the cluster by installing and using an ingress controller, uninstalling the cluster, and some useful commands for troubleshooting cluster installations.
Chapter 3, K3s Advanced Configurations and Management, introduces advanced configurations for your K3s cluster, including installing MetalLB, a bare-metal load balancer; installing Longhorn for storage at the edge; upgrading the cluster; and, finally, backing up and restoring K3s cluster configurations.
Chapter 4, k3OS Installation and Configurations, focuses on how to use k3OS, a Kubernetes distribution packaged as an ISO image that can be installed on edge devices. It also covers how to perform overlay installations on ARM devices and how to use config files to set up single-node or multi-node K3s clusters.
Chapter 5, K3s Homelab for Edge Computing Experiments, describes how to configure your own homelab, using the configurations described in the previous chapters, to produce a basic production-ready environment for running your edge computing applications. It starts with cluster configuration, including the ingress controller and persistence for applications, and shows how to deploy a Kubernetes dashboard for your cluster at the edge.
Chapter 6, Exposing Your Applications Using Ingress Controllers and Certificates, introduces how to configure and use the NGINX, Traefik, and Contour ingress controllers together with cert-manager to expose applications running on bare metal using TLS certificates.
Chapter 7, GitOps with Flux for Edge Applications, explores how to automate edge application updates when source code changes are detected, using a GitOps strategy together with Flux and GitHub Actions.
Chapter 8, Observability and Traffic Splitting Using Linkerd, describes how to use a service mesh to implement simple monitoring, observability, traffic splitting, and the injection of faulty traffic to improve service availability, using Linkerd running at the edge.
Chapter 9, Edge Serverless and Event-Driven Architectures with Knative and Cloud Events, introduces how to implement your own serverless functions using Knative Serving. It also shows how to implement simple event-driven architectures using Knative Eventing together with CloudEvents to define and run events in your edge systems.
Chapter 10, SQL and NoSQL Databases at the Edge, explores different types of databases that can be used to record data at the edge. Specifically, this chapter covers the configuration and use of MySQL, Redis, MongoDB, PostgreSQL, and Neo4j to address different use cases for SQL and NoSQL databases running at the edge.
Chapter 11, Monitoring the Edge with Prometheus and Grafana, focuses on monitoring edge environments and devices using the Prometheus time series database and Grafana. Specifically, this chapter focuses on creating custom real-time graphs for data coming from edge sensors that capture temperature and humidity.
Chapter 12, Communicating with Edge Devices across Long Distances Using LoRa, describes how edge devices can communicate across long distances using the LoRa wireless protocol and how to visualize the captured sensor data using MySQL and Grafana.
Chapter 13, Geolocalization Applications Using GPS, NoSQL, and K3s Clusters, describes how to implement a simple geolocalization or geo-tracking system using GPS modules and ARM devices, showing vehicles moving in real time and generating reports of their tracking logs within a date range.
Chapter 14, Computer Vision with Python and K3s Clusters, describes how to create a smart traffic system that detects potential obstacles for drivers in the city and gives intelligent alerts and reports on the live state of traffic during rush hour. It also describes, step by step, how to implement this system using Redis, OpenCV, TensorFlow Lite, scikit-learn, and GPS modules running at the edge.
Chapter 15, Designing Your Own Edge Computing System, describes a basic methodology for creating your own edge computing system, how you can use cloud provider managed services and complementary hardware and software, and some useful recommendations for implementing your system. It finishes with other edge computing use cases to explore.
To get the most out of this book, you need some previous experience using the Linux command line and some basic programming knowledge. When reading a chapter, make sure you download the source code, as that will simplify working through all the examples in this book.
This book mainly uses macOS to perform local configurations. For the Raspberry Pi implementations, Linux is used. Finally, there is a chapter that uses Windows to update the ESP32 firmware.
All the requirements needed to run the examples in this book are described in the Technical requirements section of each chapter.
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Edge-Computing-Systems-with-Kubernetes. If there’s an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/gZ68B.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, functions, Service names, Deployment names, variables, pathnames, and URLs. Here is an example: “WIFISetUp(void): here we configure the Wi-Fi connection; you have to replace NET_NAME with your network name and PASSWORD with the password used to access your connection.”
A block of code is set as follows:
@app.route('/')
def hello_world():
    return 'It works'

Any command-line input or output is written as follows:
$ mkdir code
$ kubectl apply -f example.yaml
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Now create another file by clicking File | New.”
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you’ve read Edge Computing Systems with Kubernetes, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
In this part of the book, you will learn about the basic concepts, architectures, use cases, and current solutions for edge computing systems, as well as how to install a cluster using K3s/k3OS and Raspberry Pi devices.
This part of the book comprises the following chapters:
Chapter 1, Edge Computing with Kubernetes
Chapter 2, K3s Installation and Configuration
Chapter 3, K3s Advanced Configurations and Management
Chapter 4, k3OS Installation and Configurations
Chapter 5, K3s Homelab for Edge Computing Experiments

This chapter offers a quick deep dive into K3s. We will start by understanding what K3s is and its architecture, and then we will learn how to prepare your ARM device for K3s. Following this, you will learn how to perform a basic installation of K3s, from a single-node cluster to a multi-node cluster, followed by a backend configuration using MySQL. Additionally, this chapter covers how to install an Ingress controller, using Helm and Helm charts, to expose your Services through the load balancer created by NGINX. Finally, we will look at how to uninstall K3s and troubleshoot your cluster. At the end of the chapter, you will find additional resources to implement further customizations for K3s.
In this chapter, we're going to cover the following main topics:
Introducing K3s and its architecture
Preparing your edge environment to run K3s
Creating K3s single and multi-node clusters
Using external MySQL storage for K3s
Installing Helm to install software packages in Kubernetes
Changing the default Ingress controller
Uninstalling K3s from the master node or an agent node
Troubleshooting a K3s cluster

For this chapter, you will need one of the following options:
Raspberry Pi 4 Model B with 4 GB of RAM (suggested minimum)
An AWS account to create a Graviton2 instance
Any x86_64 VM instance with Linux installed
An internet connection and DHCP support for local K3s clusters

With these requirements, we are going to install K3s and start experimenting with this Kubernetes distribution. So, let's get started.
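As a quick preview of where we are heading, the following is a minimal sketch of bootstrapping a single-node K3s server with the official installation script; the exact options and variations for your environment are covered in the installation sections later in this chapter:

$ # Download and run the K3s installation script (installs and starts the server)
$ curl -sfL https://get.k3s.io | sh -
$ # Verify that the node is registered and ready
$ sudo k3s kubectl get nodes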
K3s is a lightweight Kubernetes distribution created by Rancher Labs. It includes all the necessary components to run a cluster inside a small binary file. Rancher removed the components that are unnecessary for this Kubernetes distribution and added other useful features for running K3s at the edge, such as MySQL support as a replacement for etcd, an optimized Ingress controller, storage for single-node clusters, and more. Let's examine Figure 2.1 to understand how K3s is designed and packaged:
Figure 2.1 – The K3s cluster components
In the preceding diagram, you can see that K3s has two components: the server and the agent. Each of these components must be installed on a node. A node is a bare-metal machine or a VM that works as a master or agent node. The master node manages and provisions Kubernetes objects such as Deployments, Services, and Ingress controllers inside the agent nodes. An agent node oversees processing information using these objects. Each node uses the different components shown in Figure 2.1.
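To make the server/agent split concrete, here is a minimal sketch of how an agent node typically joins an existing K3s server; the <server-ip>, <node-token>, and MySQL credentials are placeholders, and the exact steps for your cluster are covered in the installation sections of this chapter:

$ # On the server node, read the join token generated during installation
$ sudo cat /var/lib/rancher/k3s/server/node-token
$ # On the agent node, run the install script pointing it at the server
$ curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
$ # Optionally, a server can use an external MySQL datastore instead of the embedded one
$ curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint="mysql://user:password@tcp(<mysql-host>:3306)/k3s"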
