Bootstrapping Service Mesh Implementations with Istio

Anand Rai
Description

Istio is a game-changer in managing connectivity and operational efficiency of microservices, but implementing and using it in applications can be challenging. This book will help you overcome these challenges and gain insights into Istio's features and functionality layer by layer with the help of easy-to-follow examples. It will let you focus on implementing and deploying Istio on the cloud and in production environments instead of dealing with the complexity of demo apps. 
You'll learn the installation, architecture, and components of Istio Service Mesh, perform multi-cluster installation, and integrate legacy workloads deployed on virtual machines. As you advance, you'll understand how to secure microservices from threats, perform multi-cluster deployments on Kubernetes, use load balancing, monitor application traffic, implement service discovery and management, and much more. You’ll also explore other Service Mesh technologies such as Linkerd, Consul, Kuma, and Gloo Mesh. In addition to observing and operating Istio using Kiali, Prometheus, Grafana and Jaeger, you'll perform zero-trust security and reliable communication between distributed applications.
After reading this book, you'll be equipped with the practical knowledge and skills needed to use and operate Istio effectively.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 440

Year of publication: 2023




Bootstrapping Service Mesh Implementations with Istio

Build reliable, scalable, and secure microservices on Kubernetes with Service Mesh

Anand Rai

BIRMINGHAM—MUMBAI

Bootstrapping Service Mesh Implementations with Istio

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Mohd Riyan Khan

Publishing Product Manager: Surbhi Suman

Senior Editor: Tanya D’cruz

Technical Editor: Nithik Cheruvakodan

Copy Editor: Safis Editing

Project Coordinator: Ashwin Kharwa

Proofreader: Safis Editing

Indexer: Hemangini Bari

Production Designer: Ponraj Dhandapani

Marketing Coordinator: Agnes D'souza

First published: April 2023

Production reference: 1230323

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80324-681-9

www.packtpub.com

I am grateful to my kids, Yashasvi and Agastya, who sacrificed their playtime with me so I could write this book, and to Pooja, my loving wife, who supported me and kept me committed to this venture. This book, like everything else in my life, would not be possible without the blessings, love, and care of my beloved father, Mr. Jitendra Rai, and my loving mother, Mrs. Prem Lata Rai. This book is the result of the inspiration I get from my uncles, Mr. Pradeep Kumar Rai and Mr. Awadhesh Rai, who have been my pillars of support, my mentors and coaches. It was their motivation and guidance that led me to pursue computer science as a hobby and a career.

– Anand Rai

Contributors

About the author

Anand Rai has over 18 years’ experience of working in information technology for various organizations including technology providers and consumers. He has held a variety of executive and senior roles in those organizations but has always taken a hands-on approach to technology. This experience has given him perspectives on how development in information technology drives productivity and improvements in our daily lives. His areas of specialization are application integration, API management, microservices architectures, the cloud, DevOps, and Kubernetes. He loves solving problems, visualizing new solutions, and helping organizations to use technology as an enabler for achieving business outcomes.

About the reviewers

Andres Sacco is a technical leader at TravelX and has experience with many different languages, including Java, PHP, and Node.js. In his previous job, Andres helped find alternative ways to optimize data transfer between microservices, which reduced the infrastructure cost by 55%. Before introducing these optimizations, he investigated different testing methods to achieve better coverage of the microservices than the unit tests provided. He has also delivered internal courses on new technologies and has written articles on Medium. Andres is a co-author of the book Beginning Scala 3, published by Apress.

Ajay Reddy Yeruva currently works as a senior software engineer in the IP-DevOps team at Ritchie Bros. Auctioneers, along with volunteering as vice president at AAITP. He has an IT career spanning around 10 years. Before moving to Ritchie Bros. Auctioneers, he worked at Cisco Systems and Infosys Limited as a systems/software engineering consultant, helping to set up and maintain production applications, infrastructure, CI/CD pipelines, and operations. He is an active member of the DevSecOps, AIOps, GitOps, and DataOps communities on different forums. When it comes to monitoring and observability talent, Ajay tops the global list!

Table of Contents

Preface

Part 1: The Fundamentals

1

Introducing Service Meshes

Revisiting cloud computing

Advantages of cloud computing

Understanding microservices architecture

Understanding Kubernetes

Getting to know Service Mesh

Retry mechanism, circuit breaking, timeouts, and deadlines

Blue/green and canary deployments

Summary

2

Getting Started with Istio

Why is Istio the most popular Service Mesh?

Exploring alternatives to Istio

Kuma

Linkerd

Consul

AWS App Mesh

OpenShift Service Mesh

F5 NGINX Service Mesh

Preparing your workstation for Istio installation

System specifications

Installing minikube and the Kubernetes command-line tool

Installing Istio

Enabling Istio for a sample application

Sidecar injection

Istio gateways

Observability tools

Kiali

Jaeger

Prometheus

Grafana

Istio architecture

Summary

3

Understanding Istio Control and Data Planes

Exploring the components of a control plane

istiod

The Istio operator and istioctl

Istio agent

Deployment models for the Istio control plane

Single cluster with a local control plane

Primary and remote cluster with a single control plane

Single cluster with an external control plane

Exploring Envoy, the Istio data plane

What is Envoy?

Dynamic configuration via xDS APIs

Extensibility

Summary

Part 2: Istio in Practice

4

Managing Application Traffic

Technical requirements

Setting up the environment

Creating an EKS cluster

Setting up kubeconfig and kubectl

Deploying the Sockshop application

Managing Ingress traffic using the Kubernetes Ingress resource

Managing Ingress using the Istio Gateway

Creating the gateway

Creating virtual services

Traffic routing and canary release

Traffic mirroring

Routing traffic to services outside of the cluster

Exposing Ingress over HTTPS

Enabling HTTP redirection to HTTPS

Enabling HTTPS for multiple hosts

Enabling HTTPS for CNAME and wildcard records

Managing Egress traffic using Istio

Summary

5

Managing Application Resiliency

Application resiliency using fault injection

What is HTTP delay?

What is HTTP abort?

Application resiliency using timeouts and retries

Timeouts

Retries

Building application resiliency using load balancing

Round-robins

RANDOM

LEAST_REQUEST

Defining multiple load balancing rules

Rate limiting

Circuit breakers and outlier detection

Summary

6

Securing Microservices Communication

Understanding Istio security architecture

Authentication using mutual TLS

Service-to-service authentication

Authentication with clients outside the mesh

Configuring RequestAuthentication

Configuring RequestAuthorization

Summary

7

Service Mesh Observability

Understanding observability

Metric scraping using Prometheus

Installing Prometheus

Deploying a sample application

Customizing Istio metrics

Adding dimensions to the Istio metric

Creating a new Istio metric

Visualizing telemetry using Grafana

Implementing distributed tracing

Enabling distributed tracing with Jaeger

Summary

Part 3: Scaling, Extending, and Optimizing

8

Scaling Istio to Multi-Cluster Deployments Across Kubernetes

Technical requirements

Setting up Kubernetes clusters

Setting up OpenSSL

Additional Google Cloud steps

Establishing mutual trust in multi-cluster deployments

Primary-remote on multi-network

Establishing trust between the two clusters

Deploying the Envoy dummy application

Primary-remote on the same network

Multi-primary on different networks

Deploying and testing via Envoy dummy services

Multi-primary on the same network

Summary

9

Extending Istio Data Plane

Technical requirements

Why extensibility

Customizing the data plane using Envoy Filter

Understanding the fundamentals of Wasm

Extending the Istio data plane using Wasm

Introducing Proxy-Wasm

Wasm with Istio

Summary

10

Deploying Istio Service Mesh for Non-Kubernetes Workloads

Technical requirements

Examining hybrid architecture

Setting up a Service Mesh for hybrid architecture

Overview of the setup

Setting up a demo app on a VM

Setting up Istio in the cluster

Configuring the Kubernetes cluster

Setting up Istio on a VM

Integrating the VM workload with the mesh

Summary

11

Troubleshooting and Operating Istio

Understanding interactions between Istio components

Exploring Istiod ports

Exploring Envoy ports

Inspecting and analyzing the Istio configuration

Troubleshooting errors using access logs

Troubleshooting errors using debug logs

Changing debug logs for the Istio data plane

Changing log levels for the Istio control plane

Debugging the Istio agent

Understanding Istio’s best practices

Examining attack vectors for the control plane

Examining attack vectors for the data plane

Securing the Service Mesh

Automating best practices using OPA Gatekeeper

Summary

12

Summarizing What We Have Learned and the Next Steps

Technical requirements

Enforcing workload deployment best practices using OPA Gatekeeper

Applying our learnings to a sample application

Enabling Service Mesh for the sample application

Configuring Istio to manage application traffic

Configuring Istio to manage application resiliency

Configuring Istio to manage application security

Certification and learning resources for Istio

Understanding eBPF

Summary

Appendix – Other Service Mesh Technologies

Consul Connect

Deploying an example application

Zero-trust networking

Traffic management and routing

Gloo Mesh

Kuma

Deploying envoydemo and curl in Kuma mesh

Traffic management and routing

Linkerd

Deploying envoydemo and curl in Linkerd

Zero-trust networking

Index

Other Books You May Enjoy

Preface

Istio is one of the most widely adopted Service Mesh technologies. It is used to manage application networking to provide security and operational efficiency to microservices. This book explores Istio layer by layer to explain how it is used to manage application networking, resiliency, observability, and security. Using various hands-on examples, you’ll learn about Istio Service Mesh installation, its architecture, and its various components. You will perform a multi-cluster installation of Istio along with integrating legacy workloads deployed on virtual machines. You’ll learn how to extend the Istio data plane using WebAssembly (WASM), as well as covering Envoy and why it is used as the data plane for Istio. You’ll see how OPA Gatekeeper can be used to automate best practices for Istio. You’ll learn how to observe and operate Istio using Kiali, Prometheus, Grafana, and Jaeger. You’ll also explore other Service Mesh technologies such as Linkerd, Consul, Kuma, and Gloo Mesh. The easy-to-follow hands-on examples built using lightweight applications throughout the book will help you to focus on implementing and deploying Istio to cloud and production environments instead of having to deal with complex demo applications.

After reading this book, you’ll be able to perform reliable and zero-trust communication between applications, solve application networking challenges, and build resilience in distributed applications using Istio.

Who this book is for

Software developers, architects, and DevOps engineers with experience in using microservices in Kubernetes-based environments and who want to solve application networking challenges that arise in microservice communications will benefit from this book. To get the most out of this book, you will need to have some experience in working with the cloud, microservices, and Kubernetes.

What this book covers

Chapter 1, Introducing Service Meshes, covers the fundamentals of cloud computing, microservices architecture, and Kubernetes. It then outlines the context as to why a Service Mesh is required and what value it delivers. If you don’t have hands-on experience in dealing with large-scale deployment architecture using Kubernetes, the cloud, and microservices architecture, then this chapter will familiarize you with these concepts and give you a good foundation for understanding the more complex subjects in the subsequent chapters.

Chapter 2, Getting Started with Istio, describes why Istio has experienced viral popularity among the Service Mesh technologies available. The chapter then provides instructions to install and run Istio and walks you through Istio’s architecture and its various components. Once installed, you will then enable Istio sidecar injection in an example application packaged with the Istio installation. The chapter provides a step-by-step look at the pre- and post-enablement of Istio in the example application to give you an idea of how Istio works.

Chapter 3, Understanding Istio Control and Data Planes, dives deeper into Istio’s control plane and data plane. This chapter will help you understand the Istio control plane so you can plan the installation of control planes in a production environment. After reading this chapter, you should be able to identify the various components of the Istio control plane including istiod, along with the functionality they each deliver in the overall working of Istio. The chapter will also familiarize you with Envoy, its architecture, and how to use Envoy as a standalone proxy.

Chapter 4, Managing Application Traffic, provides details on how to manage application traffic using Istio. The chapter is full of hands-on examples, exploring the management of Ingress traffic using the Kubernetes Ingress resource and then showing how to do this using Istio Gateway, along with securely exposing Ingress over HTTPS. The chapter provides examples of canary releases, traffic mirroring, and routing traffic to services outside the mesh. Finally, we’ll see how to manage traffic egressing from the mesh.

Chapter 5, Managing Application Resiliency, provides details on how to make use of Istio to increase the application resiliency of microservices. The chapter discusses various aspects of application resiliency including fault injection, timeout and retries, load balancing, rate limiting, circuit breakers, and outlier detection, and how each of these is addressed by Istio.

Chapter 6, Securing Microservices Communication, dives deeper into advanced topics on security. The chapter starts by explaining Istio’s security architecture, followed by implementing mutual TLS for service communication, both with other services in the mesh and with downstream clients outside the mesh. The chapter will walk you through various hands-on exercises to create custom security policies for authentication and authorization.

Chapter 7, Service Mesh Observability, provides insight into why observability is important, how to collect telemetry information from Istio, the different types of metrics available and how to fetch them via APIs, and how to enable distributed tracing for applications deployed in the mesh.

Chapter 8, Scaling Istio to Multi-Cluster Deployments Across Kubernetes, walks you through how Istio can be used to provide seamless connectivity between applications deployed across multiple Kubernetes clusters. The chapter also covers multiple installation options for Istio to achieve high availability and continuity with the Service Mesh. The chapter covers advanced topics of Istio installation and familiarizes you with how to set up Istio in a primary-remote configuration on multiple networks, primary-remote configuration on a single network, multi-primary configuration on different networks, and multi-primary configuration on a single network.

Chapter 9, Extending Istio Data Plane, provides various options to extend the Istio data plane. The chapter discusses EnvoyFilter and WebAssembly in great detail and examines how they can be used to extend the functionality of the Istio data plane beyond what is offered out of the box.

Chapter 10, Deploying the Istio Service Mesh for Non-Kubernetes Workloads, provides a background as to why organizations have a significant number of workloads still deployed on virtual machines. The chapter then introduces the concept of hybrid architecture, a combination of modern and legacy architecture, followed by showing how Istio helps to marry these two worlds of legacy and modern technologies and how you can extend Istio beyond Kubernetes to virtual machines.

Chapter 11, Troubleshooting and Operating Istio, provides details of common problems you will encounter when operating Istio and how to distinguish and isolate them from other issues. The chapter then covers various techniques to analyze and troubleshoot the day-2 problems often faced by operations and reliability engineering teams. The chapter provides various best practices for deploying and operating Istio and shows how to automate the enforcement of best practices using OPA Gatekeeper.

Chapter 12, Summarizing What We Have Learned and the Next Steps, helps you revise what you’ve learned from this book by putting it to use to deploy and configure an open source application, helping you gain confidence in employing your learning in real-world applications. The chapter also provides various resources you can explore to advance your learning and expertise in Istio. Finally, the chapter introduces eBPF, an advanced technology poised to make a positive impact on service meshes.

Appendix – Other Service Mesh Technologies, introduces other Service Mesh technologies including Linkerd, Gloo Mesh, and Consul Connect, which are gaining popularity, recognition, and adoption by organizations. The information provided in this appendix is not exhaustive, but rather aims to make you familiar with the alternatives to Istio and help you form an opinion on how these technologies fare in comparison to Istio.

To get the most out of this book

Readers will need hands-on experience of using and deploying microservices on Kubernetes-based environments. Readers need to be familiar with using YAML and JSON and performing basic operations of Kubernetes. As the book makes heavy usage of various cloud provider services, it is helpful to have some experience of using various cloud platforms.

Software/hardware covered in the book | Operating system requirements
A workstation with a quad-core processor and 16 GB RAM at a minimum | macOS or Linux
Access to AWS, Azure, and Google Cloud subscriptions | N/A
Visual Studio Code or similar | N/A
minikube, Terraform | N/A

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Bootstrap-Service-Mesh-Implementations-with-Istio. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/DW41O.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The configuration patch is applied to HTTP_FILTER and in particular to the HTTP router filter of the http_connection_manager network filter.”

A block of code is set as follows:

"filterChainMatch": {                     "destinationPort": 80,                     "transportProtocol": "raw_buffer"                 },

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

"filterChainMatch": {                     "destinationPort": 80,                     "transportProtocol": "raw_buffer"                 },

Any command-line input or output is written as follows:

% curl -H "Host:httpbin.org" http://a816bb2638a5e4a8c990ce790b47d429-1565783620.us-east-1.elb.amazonaws.com/get

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: "Cloud computing is utility-style computing with a business model similar to what is provided by businesses selling utilities such as LPG and electricity to our homes."

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Bootstrapping Service Mesh Implementations with Istio, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there – you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781803246819

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly

Part 1: The Fundamentals

In this part of the book, we will cover the fundamentals of the Service Mesh, why it is needed, and what type of applications need it. You will understand the difference between Istio and other Service Mesh implementations. This part will also walk you through the steps to configure and set up your environment and install Istio, and while doing so, it will unravel the Istio control plane and data plane, how they operate, and their roles in the Service Mesh.

This part contains the following chapters:

Chapter 1, Introducing Service Meshes

Chapter 2, Getting Started with Istio

Chapter 3, Understanding Istio Control and Data Planes

1

Introducing Service Meshes

Service Meshes are an advanced and complex topic. If you have experience of using the cloud and Kubernetes, and of building applications using microservices architecture, then certain benefits of a Service Mesh will be obvious to you. In this chapter, we will familiarize ourselves with and refresh some key concepts without going into too much detail. We will look at the problems you experience when you are deploying and operating applications built using microservices architecture and deployed on containers in the cloud, or even in traditional data centers. Subsequent chapters will focus on Istio, so it is good to take some time to read through this chapter to prepare yourself for the learning ahead.

In this chapter, we’re going to cover the following main topics:

Cloud computing and its advantages

Microservices architecture

Kubernetes and how it influences design thinking

An introduction to Service Mesh

The concepts in this chapter will help you build an understanding of Service Meshes and why they are needed. The chapter will also provide you with guidance on identifying some of the signals and symptoms in your IT environment that indicate you need to implement a Service Mesh. If you don’t have hands-on experience in dealing with large-scale deployment architecture using Kubernetes, the cloud, and microservices architecture, then this chapter will familiarize you with these concepts and give you a good start toward understanding the more complex subjects in subsequent chapters. Even if you are already familiar with these concepts, it is still a good idea to read this chapter to refresh your memory.

Revisiting cloud computing

In this section, we will look at what cloud computing is in simple terms, what benefits it provides, and how it influences design thinking as well as software development processes.

Cloud computing is utility-style computing with a business model similar to what is provided by businesses selling utilities such as LPG and electricity to our homes. You don’t need to manage the production, distribution, or operation of electricity. Instead, you focus on consuming it effectively and efficiently by just plugging your device into the socket on the wall, using the device, and paying for what you consume. Although this example is very simple, it is still very relevant as an analogy. Cloud computing providers provide access to compute, storage, databases, and a plethora of other services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), over the internet.

Figure 1.1 – Cloud computing options

Figure 1.1 illustrates the cloud computing options most commonly used:

IaaS provides infrastructure such as networking to connect your application with other systems in your organization, as well as everything else you would like to connect to. IaaS gives you access to computational infrastructure to run your application, equivalent to Virtual Machines (VMs) or bare-metal servers in traditional data centers. It also provides storage to host data for your applications to run and operate. Some of the most popular IaaS providers are Amazon EC2, Azure virtual machines, Google Compute Engine, Alibaba E-HPC (which is very popular in China and the Greater China region), and VMware vCloud Air.

PaaS is another kind of offering that provides you with the flexibility to focus on building applications rather than worrying about how your application will be deployed, monitored, and so on. PaaS includes all that you get from IaaS, but also middleware to deploy your applications, development tools to help you build applications, databases to store data, and so on. PaaS is especially beneficial for companies adopting microservices architecture, because when you adopt it you also need to build an underlying infrastructure to support the microservices. The ecosystem required to support microservices architecture is expensive and complex to build, so making use of PaaS to deploy microservices makes microservices architecture adoption much faster and easier. There are many examples of popular PaaS services from cloud providers; however, we will be using Amazon Elastic Kubernetes Service (EKS) as a PaaS to deploy the sample application we will explore hands-on with Istio.

SaaS is another kind of offering that provides a complete software solution that you can use as a service. It is easy to get confused between PaaS and SaaS services, so to make things simple, you can think of SaaS as services that you can consume without needing to write or deploy any code.
For example, it’s highly likely that you are using an email service as SaaS with the likes of Gmail. Moreover, many organizations use productivity software that is SaaS, and popular examples are services such as Microsoft Office 365. Other examples include CRM systems such as Salesforce and enterprise resource planning (ERP) systems. Salesforce also provides a PaaS offering where Salesforce apps can be built and deployed. Salesforce Essentials for small businesses, Sales Cloud, Marketing Cloud, and Service Cloud are SaaS offerings, whereas Salesforce Platform, which is a low-code service for users to build Salesforce applications, is a PaaS offering. Other popular examples of SaaS are Google Maps, Google Analytics, Zoom, and Twilio.
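To make the PaaS discussion concrete: the hands-on examples in this book run on Kubernetes, either on Amazon EKS or on a local minikube cluster. The following is a minimal sketch of the kind of local setup Chapter 2 walks through; the cluster sizing and the choice of the demo profile here are illustrative assumptions, not the book's exact commands:

```shell
# Start a local Kubernetes cluster (CPU/memory values are illustrative)
minikube start --cpus 4 --memory 8192

# Install Istio with its demo configuration profile
istioctl install --set profile=demo -y

# Label the default namespace so Istio automatically injects the
# Envoy sidecar proxy into every new pod
kubectl label namespace default istio-injection=enabled
```

With the namespace labeled, any workload deployed afterward receives an Envoy sidecar alongside its application container, which is the mechanism the rest of the book builds on.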

Cloud services providers also provide different kinds of cloud offerings, with varying business models, access methods, and target audiences. Out of many such offerings, the most common are a public cloud, a private cloud, a hybrid cloud, and a community cloud:

A public cloud is the one you are most probably familiar with. This offering is available over the internet and is accessible to anyone with the ability to subscribe, using a credit card or similar payment mechanism.

A private cloud is a cloud offering that can be accessed over the internet or a restricted private network by a restricted set of users. A private cloud can be an organization providing IaaS or PaaS to its IT users; there are also service providers who provide a private cloud to organizations. A private cloud delivers a high level of security and is widely used by organizations that handle highly sensitive data.

A hybrid cloud refers to an environment where public and private clouds are used together. The term is also commonly used when more than one cloud offering is in use – for example, an organization using both AWS and Azure, with applications deployed and data flowing across the two. A hybrid cloud is a good option when some data and applications are required to be hosted in a private cloud for security reasons, while other applications don’t need to reside in the private cloud and can benefit from the scalability and elasticity of a public cloud. Rather than restricting yourself to a public or private cloud, or to one cloud provider or another, you should reap the benefits of the strengths of various cloud providers and create an IT landscape that is secure, resilient, elastic, and cost-effective.

A community cloud is a cloud offering available to a defined set of organizations and users. A good example is AWS GovCloud in the US, a community cloud for the US government. This kind of cloud restricts who can use it – for example, AWS GovCloud can only be used by US government departments and agencies.

Now that you understand the true crux of cloud computing, let’s look at some of its key advantages in the following section.

Advantages of cloud computing

Cloud computing enables organizations to easily access all kinds of technologies without high upfront investments in expensive hardware and software. By utilizing cloud computing, organizations achieve agility: they can innovate faster by having access to high-end compute power and infrastructure (such as load balancers, compute instances, and so on) as well as software services (such as machine learning, analytics, messaging infrastructure, AI, databases, and so on) that can be integrated as building blocks, in a plug-and-play style, to build software applications.

For example, if you’re building a software application, then most probably it will need the following:

Load balancers
Databases
Compute servers to run and host the application
Storage to host the application binaries, logs, and so on
A messaging system for asynchronous communication

You will need to procure, set up, and configure this infrastructure in an on-premises data center. This activity, though important for launching and operationalizing your applications in production, does not produce any business differentiator between you and your competition. High availability and resiliency of your application infrastructure are simply requirements for sustaining and surviving in the digital world. To beat your competition, you need to focus on customer experience and on constantly delivering benefits to your consumers.

When deploying on-premises, you need to factor in all upfront costs of procuring infrastructure, which include the following:

Network devices and bandwidth
Load balancers
A firewall
Servers and storage
Rack space
Any new software required to run the application

All the preceding costs will incur Capital Expenditures (CapEx) for the project. You will also need to factor in the setup cost, which includes the following:

Network, compute servers, and cabling
Virtualization, operating systems, and base configuration
Setup of middleware such as application servers and web servers (if using containerization, then the setup of container platforms, databases, and messaging)
Logging, auditing, alarming, and monitoring components

All the preceding will incur CapEx for the project but may fall under the organization’s Operating Expenses (OpEx).

On top of the aforementioned costs, the most important factor to consider is the time and human resources required to procure, set up, and make the infrastructure ready for use. This significantly impacts your ability to launch features and services on the market – that is, your agility and time to market.

When using the cloud, these resources can be procured on a pay-as-you-go model. Where you need compute and storage, they can be procured in the form of IaaS, and where you need middleware, it can be procured in the form of PaaS. You may also find that some of the functionality you would otherwise build is already available as SaaS. This expedites your software delivery and time to market. On the cost front, some of the costs will still incur CapEx for your project, but your organization can claim them as OpEx, which has certain benefits from a tax point of view. Whereas it previously took months of preparation to set up everything needed to deploy your application, it can now be done in days or weeks.

Cloud computing also changes the way you design, develop, and operate IT systems. In Chapter 4, we will look at cloud-native architecture and how it differs from traditional architecture.

Cloud computing makes it easier to build and ship software applications with low upfront investments. The following section describes microservices architecture and how it is used to build and deliver highly scalable and resilient applications.

Understanding microservices architecture

Before we discuss microservices architecture, let's first discuss monolithic architecture. It's highly likely that you have encountered one, or have even participated in building one. To understand it better, let's take a scenario and see how it has traditionally been solved using monolithic architecture.

Let’s imagine a book publisher who wants to start an online bookstore. The online bookstore needs to provide the following functionalities to its readers:

Readers should be able to browse all the books available for purchase.
Readers should be able to select the books they want to order and save them to a shopping cart. They should also be able to manage their shopping cart.
Readers should be able to then authorize payment for the book order using a credit card.
Readers should have the books delivered to their shipping address once payment is complete.
Readers should be able to sign up, store details including their shipping address, and bookmark favorite books.
Readers should be able to sign in, check what books they have purchased, download any purchased electronic copies, and update shipping details and any other account information.

There will be many more requirements for an online bookstore, but for the purpose of understanding monolithic architecture, let’s try to keep it simple by limiting the scope to these requirements.

It is worth mentioning Conway's law here: the design of monolithic systems often reflects the communication structure of the organization that built them:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

– Melvin E. Conway

There are various ways to design this system; we can follow traditional design patterns such as model-view-controller (MVC), but to do a fair comparison with microservices architecture, let’s make use of hexagonal architecture. We will also be using hexagonal architecture in microservices architecture.

With a logical view of hexagonal architecture, business logic sits in the center. Then, there are adaptors to handle requests coming from outside as well as to send requests outside, which are called inbound and outbound adaptors respectively. The business logic has one or more ports, which are basically a defined set of operations that define how adaptors can interact with business logic as well as how business logic can invoke external systems. The ports through which external systems interact with business logic are called inbound ports, whereas the ports through which business logic interacts with external systems are called outbound ports.

We can summarize the execution flow in a hexagonal architecture in the following two points:

User interface and REST API adaptors for web and mobile invoke the business logic via inbound ports
The business logic invokes external systems, such as databases, via outbound ports and their adaptors

One last but very important point to make about hexagonal architecture is that business logic is made up of modules that are a collection of domain objects. To know more about domain-driven design definitions and patterns, you can read the reference guide written by Eric Evans at https://domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf.
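To make the ports-and-adaptors idea concrete, here is a minimal, hypothetical sketch in Python. All class and method names (`OrderingPort`, `PaymentPort`, `FakeCardGateway`, and so on) are illustrative inventions for this example, not part of any framework; the point is only that the business logic in the center depends on port interfaces, never on concrete adaptors:

```python
from abc import ABC, abstractmethod

# Inbound port: the operations the business logic exposes to adaptors.
class OrderingPort(ABC):
    @abstractmethod
    def place_order(self, book_id: str, quantity: int) -> str: ...

# Outbound port: how the business logic talks to external systems.
class PaymentPort(ABC):
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

# Business logic in the center depends only on ports, never on adaptors.
class OrderService(OrderingPort):
    def __init__(self, payment: PaymentPort):
        self.payment = payment

    def place_order(self, book_id: str, quantity: int) -> str:
        if self.payment.charge(quantity * 9.99):
            return f"order for {quantity} x {book_id} confirmed"
        return "payment declined"

# Outbound adaptor: one concrete implementation of the payment port.
class FakeCardGateway(PaymentPort):
    def charge(self, amount: float) -> bool:
        return amount > 0  # a stand-in for a real gateway call

# An inbound adaptor (e.g., a REST controller) would invoke the inbound port:
service = OrderService(payment=FakeCardGateway())
print(service.place_order("istio-book", 2))
```

Because `OrderService` only knows the `PaymentPort` interface, the real card gateway can be swapped for a test double or a different provider without touching the business logic.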

Returning to our online bookstore application, the following will be the core modules:

Order management: Managing customer orders, shopping carts, and updates on order progress
Customer management: Managing customer accounts, including sign-up, sign-in, and subscriptions
Payment management: Managing payments
Product catalog: Managing all the products available
Shipping: Managing the delivery of orders
Inventory: Managing up-to-date information on inventory levels

With these in mind, let’s draw the hexagonal architecture for this system.

Figure 1.2 – The online book store application monolith

Though the architecture follows hexagonal architecture and some principles of domain-driven design, it is still packaged as one deployable or executable unit, depending on the underlying programming language you are using to write it. For example, if you are using Java, the deployable artifact will be a WAR file, which will then be deployed on an application server.

A monolithic application looks awesome while it is greenfield but becomes nightmarish once it is brownfield – that is, once it needs to be updated or extended to incorporate new features and changes.

Monolithic architectures are difficult to understand, evolve, and enhance because the code base is big and, with time, gets humongous in size and complexity. This means it takes a long time to make code changes and to ship the code to production. Code changes are expensive and require thorough regression testing. The application is difficult and expensive to scale, and there is no option to allocate dedicated computing resources to individual components of the application. All resources are allocated holistically to the application and are consumed by all parts of it, irrespective of their importance in its execution.

The other issue is lock-in to one technology for the whole code base. What this basically means is that you need to constrain yourself to one or a few technologies to support the whole code base. Technology lock-in is detrimental to outcomes such as performance and reliability, and it increases the effort required to achieve them. You should be using the technologies that best fit each problem. For example, you could use TypeScript for the UI, Node.js for the API, and Golang for modules needing concurrency or perhaps for the core modules. With a monolithic architecture, you are stuck with the technologies you used in the past, which might not be the right fit for the current problem.

So, how does microservices architecture solve this problem? Microservices is an overloaded term, and there are many definitions of it; in other words, there is no single definition of microservices. A few well-known personalities have contributed their own definitions of microservices architecture:

The term Microservices architecture has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

– Martin Fowler and James Lewis

The definition was published at https://martinfowler.com/articles/microservices.html and is dated March 25, 2014, so you can ignore "sprung up over the last few years" in the description, as microservices architecture has since become mainstream and pervasive.

Another definition of microservices comes from Adrian Cockcroft: "Loosely coupled service-oriented architecture with bounded contexts."

In microservices architecture, the term micro is a topic of intense debate, and the questions often asked are, "How micro should microservices be?" and "How should I decompose my application?" There is no easy answer to this; you can apply various decomposition strategies, following domain-driven design and splitting applications into services based on business capability, functionality, the responsibility or concern of each service or module, scalability, bounded context, and blast radius. There are numerous articles and books written on microservices and decomposition strategies, so I am sure you can find enough to read about strategies for sizing your microservices.

Let’s get back to the online bookstore application and redesign it using a microservices architecture. The following diagram represents the online bookstore applications built using microservices architecture principles. The individual services are still following hexagonal architecture, and for brevity, we have not represented the inbound and outbound ports and adaptors. You can assume that ports, adaptors, and containers are within the hexagon itself.

Figure 1.3 – The online bookstore microservices architecture

Microservices architecture provides several benefits over monolithic architecture. Having independent modules segregated based on functionality and decoupled from each other unlocks the monolithic shackles that drag down the software development process. Microservices can be built faster at a comparatively lower cost than a monolith, are well suited to continuous deployment processes, and thus have a faster time to production. With microservices architecture, developers can release code to production as frequently as they want. The smaller code base of a microservice is easy to understand, and developers only need to understand their microservice and not the whole application. Also, multiple developers can work on different microservices within the application without any risk of code being overwritten or of impacting each other's work. Your application, now made up of microservices, can leverage polyglot programming to deliver performance efficiency, less effort for more outcomes, and best-of-breed technologies for each problem.

Microservices as self-contained independent deployable units provide you with fault isolation and a reduced blast radius – for example, assume that one of the microservices starts experiencing exceptions, performance degradation, memory leakage, and so on. In this case, because the service is deployed as a self-contained unit with its own resource allocation, this problem will not affect other microservices. Other microservices will not get impacted by overconsumption of memory, CPU, storage, network, and I/O.

Microservices are also easier to deploy because you can use varying deployment options, depending on each microservice's requirements and what is available to you – for example, you can have one set of microservices deployed on a serverless platform, another set on a container platform, and yet another on virtual machines. Unlike monolithic applications, you are not bound to one deployment option.

While microservices provide numerous benefits, they also come with added complexity, because you now have many more units to deploy and manage. Not following correct decomposition strategies can also create micro-monoliths that are nightmarish to manage and operate. Another important aspect is communication between microservices. As there will be lots of microservices that need to talk to each other, it is very important that communication between them is swift, performant, reliable, resilient, and secure. In the Getting to know Service Mesh section, we will dig deeper into what we mean by these terms.

For now, with a good understanding of microservices architecture, it’s time to look at Kubernetes, which is also the de facto platform for deploying microservices.

Understanding Kubernetes

When designing and deploying microservices, it is easy to manage a small number of microservices. As the number of microservices grows, so does the complexity of managing them. The following list showcases some of the complexities caused by the adoption of microservices architecture:

Microservices will have specific deployment requirements in terms of the kind of base operating systems, middleware, databases, and compute/memory/storage. Also, the number of microservices will be large, which, in turn, means that you will need to provide resources to every microservice. Moreover, to keep costs down, you will need to be efficient with the allocation of resources and their utilization.
Every microservice will have a different deployment frequency. For example, updates to payment microservices might be monthly, whereas updates to frontend UI microservices might be weekly or daily.
Microservices need to communicate with each other, for which they need to know about each other's existence, and they should have application networking in place to communicate efficiently.
Developers who are building microservices need consistent environments for all stages of the development life cycle so that there are no unknowns, or near-unknowns, about the behavior of microservices when deployed in a production environment.
There should be a continuous deployment process in place to build and deploy microservices. If you don't have an automated continuous deployment process, then you will need an army of people to support microservices deployments.
With so many microservices deployed, it is inevitable that there will be failures, but you cannot burden the microservices developer with solving those problems. Cross-cutting concerns such as resiliency, deployment orchestration, and application networking should be easy to implement and should not distract the focus of microservice developers. These cross-cutting concerns should be facilitated by the underlying platform and should not be incorporated into the microservices code.

Kubernetes, also abbreviated as K8s, is an open source system that originated at Google. Kubernetes provides automated deployment, scaling, and management of containerized applications. It provides scalability without you needing to hire an army of DevOps engineers, and it works at all levels of complexity – on a small scale as well as at an enterprise scale. Google, as well as many other organizations, runs a huge number of containers on the Kubernetes platform.

Important note

A container is a self-contained deployment unit that contains all code and associated dependencies, including operating system, system, and application libraries packaged together. Containers are instantiated from images, which are lightweight executable packages. A Pod is a deployable unit in Kubernetes and is comprised of one or more containers, with each one in the Pod sharing the resources, such as storage and network. A Pod’s contents are always co-located and co-scheduled and run in a shared context.

The following are some of the benefits of the Kubernetes platform:

Kubernetes provides automated and reliable deployments by taking care of rollouts and rollbacks. During deployments, Kubernetes progressively rolls out changes while monitoring microservices' health to ensure that there is no disruption to request processing. If there is a risk to the overall health of microservices, then Kubernetes will roll back the changes to bring the microservices back to a healthy state.
If you are using the cloud, then different cloud providers have different storage types, and when running in data centers, you will be using various network storage types. When using Kubernetes, you don't need to worry about the underlying storage; it abstracts the complexity of the underlying storage types and provides an API-driven mechanism for developers to allocate storage to containers.
Kubernetes takes care of DNS and IP allocation for Pods; it also provides a mechanism for microservices to discover each other using simple DNS conventions. When more than one copy of a service is running, Kubernetes also takes care of load balancing between them.
Kubernetes automatically takes care of the scalability requirements of Pods. Depending on resource utilization, Pods are automatically scaled up (the number of running Pods is increased) or scaled down (the number of running Pods is reduced). Developers don't have to worry about how to implement scalability; they just need to specify target average utilization of CPU, memory, and various other custom metrics, along with scalability limits.
In a distributed system, failures are bound to happen. Similarly, in microservices deployments, Pods and containers will become unhealthy and unresponsive. Kubernetes handles such scenarios by restarting failed containers, rescheduling containers to other worker nodes if the underlying nodes are having issues, and replacing containers that have become unhealthy.
As discussed earlier, microservices architecture being resource-hungry is one of its challenges, and resources should be allocated efficiently and effectively. Kubernetes takes care of that responsibility by maximizing the allocation of resources without impairing availability or sacrificing the performance of containers.

Figure 1.4 – The online bookstore microservice deployed on Kubernetes

The preceding diagram is a visualization of the online bookstore application built using microservices architecture and deployed on Kubernetes.

Getting to know Service Mesh

In the previous section, we read about monolithic architecture, its advantages, and disadvantages. We also read about how microservices solve the problem of scalability and provide flexibility to rapidly deploy and push software changes to production. The cloud makes it easier for an organization to focus on innovation without worrying about expensive and lengthy hardware procurement and expensive CapEx cost. The cloud also facilitates microservices architecture not only by facilitating on-demand infrastructure but also by providing various ready-to-use platforms and building blocks, such as PaaS and SaaS. When organizations are building applications, they don’t need to reinvent the wheel every time; instead, they can leverage ready-to-use databases, various platforms including Kubernetes, and Middleware as a Service (MWaaS).

In addition to the cloud, microservice developers also leverage containers, which makes microservices development much easier by providing a consistent environment and compartmentalization to help achieve modular and self-contained architecture of microservices. On top of containers, the developer should also use a container orchestration platform such as Kubernetes, which simplifies the management of containers and takes care of concerns such as networking, resource allocation, scalability, reliability, and resilience. Kubernetes also helps to optimize the infrastructure cost by providing better utilization of underlying hardware. When you combine the cloud, Kubernetes, and microservices architecture, you have all the ingredients you need to deliver potent software applications that not only do the job you want them to do but also do it cost-effectively.

So, the question on your mind must be, "Why do I need a Service Mesh?" or "Why do I need a Service Mesh if I am using the cloud, Kubernetes, and microservices?" It is a great question to ask and think about, and the answer becomes evident once you are confidently deploying microservices on Kubernetes and then reach a tipping point where networking between microservices becomes too complex to address using Kubernetes' native features.

Fallacies of distributed computing

The fallacies of distributed computing are a set of eight false assumptions, catalogued by L. Peter Deutsch and others at Sun Microsystems, that software developers often make when designing distributed applications: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; the topology doesn't change; there is one administrator; transport cost is zero; and the network is homogeneous.

At the beginning of the Understanding Kubernetes section, we looked at the challenges developers face when implementing microservices architecture. Kubernetes provides various features for the deployment of containerized microservices as well as container/Pod life cycle management through declarative configuration, but it falls short of solving communication challenges between microservices. When talking about the challenges of microservices, we used terms such as application networking to describe communication challenges. So, let’s try to first understand what application networking is and why it is so important for the successful operations of microservices.

Application networking is also a loosely used term; there are various interpretations of it depending on the context it is being used in. In the context of microservices, we refer to application networking as the enabler of distributed communication between microservices. The microservice can be deployed in one Kubernetes cluster or multiple clusters over any kind of underlying infrastructure. A microservice can also be deployed in a non-Kubernetes environment in the cloud, on-premises, or both. For now, let’s keep our focus on Kubernetes and application networking within Kubernetes.

Irrespective of where microservices are deployed, you need a robust application network in place for microservices to talk to each other. The underlying platform should not just facilitate communication but also resilient communication. By resilient communication, we mean the kind of communication where it has a large probability of being successful even when the ecosystem around it is in adverse conditions.
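One common building block of resilient communication is retrying failed calls with exponential backoff and jitter – the pattern a Service Mesh can apply on your behalf, outside the application code. The following is a minimal, hypothetical Python sketch of the idea (the `flaky_inventory_lookup` service and all names are invented for illustration; a mesh would implement this in the proxy, not in application code):

```python
import random
import time

def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on failure with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Backoff doubles each attempt (0.1s, 0.2s, ...) plus random
            # jitter, so many retrying clients don't hammer the service
            # in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

# A flaky downstream service that fails twice, then succeeds.
calls = {"n": 0}
def flaky_inventory_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("inventory service unavailable")
    return {"istio-book": 12}

print(call_with_retries(flaky_inventory_lookup))  # succeeds on the third try
```

The design choice worth noting is the jitter: without it, synchronized retries from many callers can themselves overload a recovering service (a "retry storm").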

Apart from the application network, you also need visibility of the communication happening between microservices; this is also called observability. Observability is important in microservices communication in knowing how the microservices are interacting with each other. It is also important that microservices communicate securely with each other. The communication should be encrypted and defended against man-in-the-middle attacks. Every microservice should have an identity and be able to prove that they are authorized to communicate with other microservices.
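To make "observability" less abstract, the following hypothetical Python sketch records the kind of per-call telemetry – source, destination, latency, and outcome – that a Service Mesh sidecar collects transparently for every request, without any such code in the application. All names (`CallRecord`, `observed_call`, the service names) are illustrative, not any real API:

```python
import time
from dataclasses import dataclass

@dataclass
class CallRecord:
    source: str       # calling service
    destination: str  # called service
    duration_ms: float
    success: bool

telemetry: list[CallRecord] = []

def observed_call(source, destination, fn):
    """Run a service-to-service call and record its latency and outcome."""
    start = time.perf_counter()
    ok = True
    try:
        return fn()
    except Exception:
        ok = False
        raise
    finally:
        telemetry.append(CallRecord(
            source, destination,
            (time.perf_counter() - start) * 1000, ok))

# Example: the order service calling the payment service.
observed_call("orders", "payments", lambda: "payment authorized")
record = telemetry[0]
print(f"{record.source} -> {record.destination}: "
      f"{record.duration_ms:.2f} ms, success={record.success}")
```

Aggregating records like these across all services is what lets tools such as Kiali, Prometheus, and Jaeger show who talks to whom, how fast, and how often calls fail.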

So, why Service Meshes? Why can't these requirements be addressed by Kubernetes? The answer lies in Kubernetes' architecture and what it was designed to do. As mentioned before, Kubernetes is application life cycle management software. It provides application networking, observability, and security, but at a very basic level that is not sufficient to meet the requirements of modern and dynamic microservices architecture. This doesn't mean that Kubernetes is not modern software. Indeed, it is a very sophisticated and cutting-edge technology, but its focus is container orchestration.

Traffic management in Kubernetes is handled by the Kubernetes network proxy, also called kube-proxy, which runs on each node in the Kubernetes cluster. kube-proxy communicates with the Kubernetes API server and gets information about Kubernetes services. Kubernetes services are another level of abstraction to expose a set of Pods as a network service. kube-proxy implements a form of virtual IP for services by setting iptables rules, which define how any traffic for that service is routed to the endpoints – essentially the underlying Pods hosting the application.

To understand it better, let’s look at the following example. To run this example, you will need minikube and kubectl on your computing device. If you don’t have this software installed, then I suggest you hold off from installing it, as we will be going through the installation steps in Chapter 2.

We will create a Kubernetes deployment and service by following the example in https://minikube.sigs.k8s.io/docs/start/:

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-minikube created

We just created a deployment object named hello-minikube. Let’s execute the kubectl describe command:

$ kubectl describe deployment/hello-minikube
Name:                   hello-minikube
…….
Selector:               app=hello-minikube
…….
Pod Template:
  Labels:  app=hello-minikube
  Containers:
   echoserver:
    Image:        k8s.gcr.io/echoserver:1.4
    ..

From the preceding code block, you can see that a Pod has been created, containing a container instantiated from the k8s.gcr.io/echoserver:1.4 image. Let’s now check the Pods:

$ kubectl get po
hello-minikube-6ddfcc9757-lq66b   1/1     Running   0          7m45s

The preceding output confirms that a Pod has been created. Now, let's create a service and expose it so that it is accessible on each node's IP at a static port, also called a NodePort:

$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed

Let’s describe the service:

$ kubectl describe services/hello-minikube
Name:                     hello-minikube
Namespace:                default
Labels:                   app=hello-minikube
Annotations:              <none>
Selector:                 app=hello-minikube
Type:                     NodePort
IP:                       10.97.95.146
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31286/TCP
Endpoints:                172.17.0.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster

From the preceding output, you can see that a Kubernetes service named hello-minikube has been created and is accessible on port 31286, also called the NodePort. We can also see that there is an Endpoints object with the value 172.17.0.5:8080. Soon, we will see the connection between NodePort and Endpoints.

Let’s dig deeper and look at what is happening to iptables. If you would like to see what the preceding service returns, then you can simply type