Mastering Service Mesh

Anjali Khatri

Understand how to use service mesh architecture to efficiently manage and safeguard microservices-based applications with the help of examples




Key Features



  • Manage your cloud-native applications easily using service mesh architecture


  • Learn about Istio, Linkerd, and Consul – the three primary open source service mesh providers


  • Explore tips, techniques, and best practices for building secure, high-performance microservices



Book Description



Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment.






You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability.






By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.




What you will learn



  • Compare the functionalities of Istio, Linkerd, and Consul


  • Become well-versed with service mesh control and data plane concepts


  • Understand service mesh architecture with the help of hands-on examples


  • Work through hands-on exercises in traffic management, security, policy, and observability


  • Set up secure communication for microservices using a service mesh


  • Explore service mesh features such as traffic management, service discovery, and resiliency



Who this book is for



This book is for solution architects and network administrators, as well as DevOps and site reliability engineers who are new to the cloud-native framework. You will also find this book useful if you're looking to build a career in DevOps, particularly in operations. Working knowledge of Kubernetes and building microservices that are cloud-native is necessary to get the most out of this book.




Mastering Service Mesh

Enhance, secure, and observe cloud-native applications with Istio, Linkerd, and Consul
Anjali Khatri
Vikram Khatri

BIRMINGHAM - MUMBAI

Mastering Service Mesh

Copyright © 2020 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Vijin Boricha
Acquisition Editor: Meeta Rajani
Content Development Editor: Carlton Borges
Senior Editor: Rahul Dsouza
Technical Editor: Dinesh Pawar
Copy Editor: Safis Editing
Project Coordinator: Neil Dmello
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Production Designer: Nilesh Mohite

First published: March 2020

Production reference: 1270320

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-78961-579-1

www.packt.com

 

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Foreword

This book provides an understanding of modern service mesh providers for building applications without needing to build traffic management, telemetry, and security solutions. Advanced cloud-native polyglot application developers need to focus only on the business logic. The service mesh takes care of the operations side of DevOps using automation that does not require any changes to the applications. Thanks to Anjali and Vikram for providing hands-on examples that make these new technologies easy to understand.

Dinesh Nirmal

Vice President, Data and AI Development

IBM Cloud and Cognitive Software

Silicon Valley Lab, San Jose, CA, USA

The embracing of microservices by the world of business is critical, as microservices enable significantly faster deployment of new services and quick adaptation of existing services with continuous availability. Microservices platforms are going through rapid change, and engineers must keep up to avoid skill obsolescence.

Capabilities such as observability and canary deployments are key to churning out applications rapidly while keeping a large microservice mesh continually available. The mesh of microservices spans businesses and their partners, which collectively provide services to their customers, and often spans multiple clouds. Common business services, such as security and single identity management, have become global requirements, which has fundamentally changed the design and operation of platforms. The mesh assumes far more control as it quickly replaces troubled microservice nodes with alternatives to provide continual availability.

Keeping up with such rapid technology change at a hands-on level is a must. This book manages to cover the high-level concepts and then maps them to actual tasks that engineers need to perform to design, deploy, and operate these systems.

Hamid Pirahesh

IBM Fellow, ACM Fellow

The concepts around cloud-native development continue to mature, and real use cases grow in number across a variety of industries. However, cloud-native approaches are only beginning to have a significant widespread impact on mission-critical systems, or what some might call systems of record. This is the next big step forward for cloud-native applications.

Mission-critical applications demand high levels of availability, resiliency, security, and visibility that in turn place strong demands on the underlying supporting platform. While there are many solid advantages to the cloud-native approach, the fact is that there are more, and newer, things to be managed, and many new situations will be encountered.

A service mesh becomes a consistent and simplified way of dealing with many of those things that accompany the notion of a cloud-native mission-critical system. While there are other approaches, those that are consistent with Kubernetes and based on open source will have the most significant impact and be the most easily adopted.

Mastering Service Mesh is a good book to read for an in-depth understanding of the concept of service meshes, as well as to gain detailed insights into the various service mesh offerings available today. Concrete examples throughout the book and accompanying samples help bring these topics into focus and demonstrate the concepts in action. This book is a necessary addition to the library of all those who are involved in creating, evolving, and operating cloud-native production environments that support cloud-native applications.

Eric Herness

IBM Fellow

CTO, Cloud Engagement Hub

Contributors

About the authors

Anjali Khatri is an enterprise cloud architect at DivvyCloud, advancing the cloud-native growth for the company by helping customers maintain security and compliance for resources running on AWS, Google, Azure, and other cloud providers. She is a technical leader in the adoption, scaling, and maturity of DivvyCloud's capabilities. In collaboration with product and engineering, she works with customer success around feature request architecture, case studies, account planning, and continuous solution delivery.

Prior to DivvyCloud, Anjali worked at IBM and Merlin. She has 9+ years of professional experience in program management for software development, open source analytics sales, and application performance consulting.

 

 

 

 

Vikram Khatri is the chief architect of Cloud Pak for Data System at IBM. Vikram has 20 years of experience leading and mentoring high-performing, cross-functional teams to deliver high-impact, best-in-class technology solutions. Vikram is a visionary thought leader when it comes to architecting large-scale transformational solutions from monolithic to cloud-native applications that include data and AI. He is an industry-leading technical expert with a track record of leveraging deep technical expertise to develop solutions, resulting in revenues exceeding $1 billion over 14 years, and is also a technology subject matter expert in cloud-native technologies who frequently speaks at industry conferences and trade shows.

This book is written by a daughter-father team.

About the reviewers

Debasish Banerjee, Ph.D., is an executive architect who is a seasoned thought leader, hands-on architect, and practitioner of cutting-edge technologies with a proven track record of advising and working with Fortune 500 customers in the USA, Europe, and Asia with various IBM products and strategies. He is presently leading the collaborative development effort with IBM Research for Mono2Micro, an AI-based utility for transforming monoliths to microservices. Application modernization, microservice generation, and deployment are his current areas of interest. Debasish obtained his Ph.D. in combinator-based functional programming languages.

I fondly remember many discussions, both technical and otherwise, with Eric Herness, IBM Fellow; Danny Mace, VP; Dr. Ruchir Puri, IBM Fellow; Garth Tschetter, Director; Lorraine Johnson, Director; Mark Borowski, Director; and many others. The late Manilal Banerjee, my father, would have been very proud to see my contribution. Cheenar Banerjee and Neehar Banerjee, my daughters, as well as being my pride and joy, are sources of inspiration for me.

 

 

 

 

Cole Calistra is an accomplished hands-on technology leader with over 20 years of diverse industry experience that includes leading fast-growing SaaS start-ups, senior architecture roles within Fortune 500 giants, and acting as a technical adviser to a mix of start-ups and established businesses. He is currently CTO at LEON Health Science. Prior to this, he served as a founding team member and CTO of the SaaS-based facial recognition and emotion analysis API provider, Kairos.

His credentials include multiple professional level certifications at both AWS and GCP, and he is currently pursuing an MS in computer science at the Georgia Institute of Technology. Cole is the proud father of two daughters, Abigail and Jill.

 

 

 

 

 

Jimmy Song reviewed the section on Linkerd. He is a developer advocate for cloud-native technologies and a co-founder of the ServiceMesher community. Jimmy currently works for Ant Financial.

 

 

 

 

 

Huabing Zhao reviewed the section on Consul. He has been involved in the information technology industry for almost 20 years, most of it at ZTE, where he works on telecommunication management systems and network function virtualization. Currently, he is a software expert at ZTE, a member of the Istio project, and a Project Technical Lead (PTL) of ONAP.

 

 

 

 

 

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Mastering Service Mesh

About Packt

Why subscribe?

Foreword

Contributors

About the authors

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

Useful terms

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Errata

Get in touch

Reviews

Section 1: Cloud-Native Application Management

Monolithic Versus Microservices

Early computer machines

Hardware virtualization

Software virtualization

Container orchestration

Monolithic applications

Brief history of SOA and ESB

API Gateway

Drawbacks of monolithic applications

Microservices applications

Early pioneers

What is a microservice?

Evolution of microservices

Microservices architecture

Benefits and drawbacks of microservices

Future of microservices

Summary

Questions

Further reading

Cloud-Native Applications

An introduction to CNAs

Container runtime

Container orchestration platforms

Cloud-native infrastructure

Summary

Questions

Further reading

Section 2: Architecture

Service Mesh Architecture

Service mesh overview

Who owns the service mesh?

Basic and advanced service mesh capabilities

Emerging trends

Shifting Dev responsibilities to Ops

Service mesh rules

Observability

Routing

Automatic scaling

Separation of duties

Trust

Automatic service registration and discovery 

Resiliency

Service mesh architecture

Summary

Questions

Further reading

Service Mesh Providers

Introducing service mesh providers

Istio

Linkerd

Consul

Other providers

A quick comparison

Support services

Summary

Questions

Further reading

Service Mesh Interface and SPIFFE

SMI

SMI specifications

SPIFFE

Summary

Questions

Further reading

Section 3: Building a Kubernetes Environment

Building Your Own Kubernetes Environment

Technical requirements

Downloading your base VM 

Building an environment for Windows

Downloading our virtualization software

Setting the network address 

Performing finalization checks

Building an environment for macOS

Downloading our virtualization software

Setting the network address

Performing finalization checks

Performing prerequisite tasks

Building Kubernetes using one VM

Installing Kubernetes

Running kubeadm

Configuring kubectl

Installing the Calico network for pods

Creating an admin account

Installing kubectl on client machines

Performing finalization checks

Installing Helm and Tiller

Installing without security

Installing with Transport Layer Security (TLS)

Installing the Kubernetes dashboard

Running the Kubernetes dashboard

Get an authentication token

Exploring the Kubernetes dashboard

Additional steps

Installing the Metrics Server 

Installing VMware Octant 

Installing Prometheus and Grafana 

Uninstalling Kubernetes and Docker

Powering the VM up and down

Summary

Questions

Further reading

Section 4: Learning about Istio through Examples

Understanding the Istio Service Mesh

Technical requirements 

Introducing the Istio service mesh

Istio's architecture

Control plane

Galley

Pilot

Service discovery

Traffic management

Gateway

Virtual service

Routing rules

Fault injection

Abort rules

Service entry

Destination rule

Load balancing

Circuit breaker

Blue/green deployment

Canary deployment

Namespace isolation

Mixer

Configuration of Mixer

Attributes

Handlers

Rules

Citadel

Certificate and key rotation

Authentication  

Strong identity

RBAC for a strong identity

Authorization

Enabling  mTLS to secure service communication

Secure N-to-N mapping of services

Policies

Implementing authentication

Implementing authorization

Data plane

Sidecar proxy

Istio's Envoy sidecar proxy

What is Envoy?

Envoy architecture

Deployment

Observability

Summary

Questions

Further reading

Installing a Demo Application

Technical requirements

Exploring Istio's BookInfo application

BookInfo application architecture

Deploying the Bookinfo application in Kubernetes

Enabling a DNS search for Kubernetes services in a VM

Understanding the BookInfo application

Exploring the BookInfo application in a Kubernetes environment

Summary

Questions

Further reading

Installing Istio

Technical requirements

Getting ready

Performing pre-installation tasks

Downloading the source code

Validating the environment before installation

Choosing an installation profile

Installing Istio

Installing Istio using the helm template

Installing Istio using Helm and Tiller

Installing Istio using a demo profile

Verifying our installation

Installing a load balancer

Enabling Istio

Enabling Istio for an existing application

Enabling Istio for new applications

Setting up horizontal pod scaling

Summary

Questions

Further reading

Exploring Istio Traffic Management Capabilities

Technical requirements

Traffic management

Creating an Istio gateway

Finding the Ingress gateway IP address

Creating a virtual service

Running using pod's transient IP address

Running using a service IP address

Running using Node Port

Creating a destination rule

Traffic shifting

Identity-based traffic routing

Canary deployments

Fault injection

Injecting HTTP delay faults

Injecting HTTP abort faults

Request timeouts

Circuit breaker

Managing traffic

Managing Ingress traffic patterns

Managing Egress traffic patterns

Blocking access to external services

Allowing access to external services

Routing rules for external services

Traffic mirroring

Cleaning up

Summary

Questions

Further reading

Exploring Istio Security Features

Technical requirements

Overview of Istio's security

Authentication

Testing the httpbin service

Generating keys and certificates

Installing the step CLI

Generating private key, server, and root certificates

Mapping IP addresses to hostname

Configuring an Ingress gateway using SDS

Creating secrets using key and certificate

Enabling httpbin for simple TLS

Enabling bookinfo for simple TLS

Rotating virtual service keys and certificates

Enabling an Ingress gateway for httpbin using mutual TLS

Verifying the TLS configuration

Node agent to rotate certificates and keys for services

Enabling mutual TLS within the mesh

Converting into strict mutual TLS

Redefining destination rules

Enabling mTLS at the namespace level

Verifying the TLS configuration

Authorization

Namespace-level authorization

Service-level authorization at the individual level

Service-level authorization for databases

Advanced capabilities

Summary

Questions

Further reading

Enabling Istio Policy Controls

Technical requirements

Introduction to policy controls

Enabling rate limits

Defining quota and assigning to services

Defining rate limits

Defining quota rules

Controlling access to a service

Denying access

Creating attribute-based white/blacklists

Creating an IP-based white/blacklist

Summary

Questions

Further reading

Exploring Istio Telemetry Features

Technical requirements

Telemetry and observability

Configuring UI access

Collecting built-in metrics

Collecting new metrics

Database metrics

Distributed tracing

Trace sampling

Tracing backends

Adapters for the backend

Exploring Prometheus

Sidecar proxy metrics

Prometheus query

Prometheus target collection health

Prometheus configuration

Visualizing metrics through Grafana

Service mesh observability through Kiali

Tracing with Jaeger

Cleaning up

Summary

Questions

Further reading

Section 5: Learning about Linkerd through Examples

Understanding the Linkerd Service Mesh

Technical requirements

Introducing the Linkerd Service Mesh

Linkerd architecture

Control plane

Using the command-line interface (CLI)

Data plane

Linkerd proxy

Architecture

Configuring a service

Ingress controller

Observability

Grafana and Prometheus

Distributed tracing

Exporting metrics

Injecting the debugging sidecar

Reliability

Traffic split

Fault injection

Service profiles

Retries and timeouts

Load balancing

Protocols and the TCP proxy

Security

Automatic mTLS

Summary

Questions

Further reading

Installing Linkerd

Technical requirements

Installing the Linkerd CLI

Installing Linkerd

Validating the prerequisites

Installing the Linkerd control plane

Separating roles and responsibilities

Cluster administrator

Application administrator

Ingress gateway

Accessing the Linkerd dashboard

Deploying the Linkerd demo emoji app

Installing a demo application

Deploying the booksapp application

Summary

Questions

Further reading

Exploring the Reliability Features of Linkerd

Technical requirements

Overview of the reliability of Linkerd 

Configuring load balancing

Setting up a service profile

Retrying failed transactions

Retry budgets

Implementing timeouts

Troubleshooting error code

Summary

Questions

Further reading

Exploring the Security Features of Linkerd

Technical requirements

Setting up mTLS on Linkerd

Validating mTLS on Linkerd

Using trusted certificates for the control plane

Installing step certificates

Creating step root and intermediate certificates

Redeploying control plane using certificates

Regenerating and rotating identity certificates for microservices

Securing the ingress gateway

TLS termination

Testing the application in the browser

Testing the application through curl

Summary

Questions

Further reading

Exploring the Observability Features of Linkerd

Technical requirements

Gaining insight into the service mesh

Insights using CLI

Insight using Prometheus

Insights using Grafana

External Prometheus integration

Cleaning up

Summary

Questions

Further reading

Section 6: Learning about Consul through Examples

Understanding the Consul Service Mesh

Technical requirements

Introducing the Consul service mesh

The Consul architecture

Data center 

Client/server

Protocols

RAFT

Consensus protocol

Gossip protocol

Consul's control and data planes

Configuring agents

Service discovery and definitions

Consul integration

Monitoring and visualization

Telegraf

Grafana

Traffic management

Service defaults

Traffic routing

Traffic split

Mesh gateway

Summary

Questions

Further reading

Installing Consul

Technical requirements

Installing Consul in a VM

Installing Consul in Kubernetes

Creating persistent volumes 

Downloading the Consul Helm chart

Installing Consul

Connecting Consul DNS to Kubernetes

Consul server in a VM

Summary

Questions

Further reading

Exploring the Service Discovery Features of Consul

Technical requirements

Installing a Consul demo application

Defining Ingress for the Consul dashboard

Service discovery

Using the Consul web console

Implementing mutual TLS

Exploring intentions

Exploring the Consul key-value store

Securing Consul services with ACL

Monitoring and metrics

Registering an external service

Summary

Questions

Further reading

Exploring Traffic Management in Consul

Technical requirements

Overview of traffic management in Consul 

Implementing L7 configuration

Deploying a demo application

Traffic management in Consul

Directing traffic to a default subset

Canary deployment

Round-robin traffic

Shifting traffic permanently

Path-based traffic routing

Checking Consul services

Mesh gateway

Summary

Questions

Further reading

Assessment

Chapter 1: Monolithic versus Microservices

Chapter 2: Cloud-Native Applications

Chapter 3: Service Mesh Architecture

Chapter 4: Service Mesh Providers

Chapter 5: Service Mesh Interface and SPIFFE

Chapter 6: Building Your Own Kubernetes Environment

Chapter 7: Understanding the Istio Service Mesh

Chapter 8: Installing a Demo Application

Chapter 9: Installing Istio

Chapter 10: Exploring Istio Traffic Management Capabilities

Chapter 11: Exploring Istio Security Features

Chapter 12: Enabling Istio Policy Controls

Chapter 13: Exploring Istio Telemetry Features

Chapter 14: Understanding the Linkerd Service Mesh

Chapter 15: Installing Linkerd

Chapter 16: Exploring the Reliability Features of Linkerd

Chapter 17: Exploring the Security Features of Linkerd

Chapter 18: Exploring the Observability Features of Linkerd

Chapter 19: Understanding the Consul Service Mesh

Chapter 20: Installing Consul

Chapter 21: Exploring the Service Discovery Features of Consul

Chapter 22: Exploring Traffic Management in Consul

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

This book is about mastering service mesh. It assumes that you have prior knowledge of Docker and Kubernetes. As a developer, knowing Service-Oriented Architecture (SOA) and Enterprise Service Bus (ESB) patterns will be beneficial, but not mandatory.

Service mesh is the new buzzword and a relatively new concept that started in 2017, and so it does not have much history behind it. Service mesh is the evolution of already existing technologies with further improvements.

The first service mesh implementation emerged as Istio 0.1 in May 2017. Istio is a combination of different technologies from IBM, Google, and Lyft; hence, Istio and service mesh were initially used interchangeably to mean the same thing.

Envoy (which originated at Lyft and is now open source) is a graduated project of the Cloud Native Computing Foundation (CNCF) and is a core part of Istio. Envoy, as a reverse proxy next to a microservice, forms the core of a service mesh.

William Morgan, the creator of Linkerd, which is an incubating project at CNCF, coined the term service mesh. The term was boosted when it was used prominently at the KubeCon + CloudNativeCon 2018 conference in Copenhagen by Jason McGee, an IBM Fellow.

A service mesh is a framework on top of a cloud-native microservices application. Istio, Linkerd, and Consul are all service mesh implementations.

Linkerd is an open source network proxy that is referred to as a service mesh.

Consul is another open source project, backed by HashiCorp, that is referred to as a service mesh, but it uses a different architecture.

Who this book is for

This book covers the operations part of DevOps, and so is most suited to operations professionals who are responsible for managing microservices-based applications.

Anyone interested in starting out on a career as an operations professional (the second part of DevOps) will benefit from reading this book. This book is about managing microservices applications in the production environment from the operations perspective.

Even if you do not have experience in developing microservices applications, you can take the role of an operations professional or become a Site Reliability Engineer (SRE). A knowledge of Kubernetes and Docker is a prerequisite, but it is not necessary to know SOA and ESB in depth.

What this book covers

In this book, we are focusing on Istio, Linkerd, and Consul from the implementation perspective.

A service mesh implementation, such as Istio, takes away some of the responsibilities of developers and puts them in a dedicated layer so that they are consumable without writing any code. In other words, it frees up developers so that they can focus on business logic and places more responsibility in the hands of operational professionals.

This book is not about developing microservices, and so does not cover the persona of a developer. 

Chapter 1, Monolithic Versus Microservices, provides a high-level overview of monolithic versus microservices-based applications. The evolution of service-oriented architecture to microservices-based architecture became possible as a result of distributed computing through Kubernetes.

Chapter 2, Cloud-Native Applications, provides an overview of building cloud-native applications using container-based environments to develop applications built with services that can scale independently. This chapter explains how containerization eases Development (Dev) for polyglot applications, and how the decoupling of services shifts further responsibilities to Operations (Ops).

Chapter 3, Service Mesh Architecture, covers the evolution of the term service mesh and its origin. It provides an overview of the service mesh as a decoupling agent between Dev (provider) and Ops (consumer) and explains basic and advanced service communication through smart endpoints and trust between microservices.

Chapter 4, Service Mesh Providers, provides an overview of the three open source service mesh providers – Istio, Linkerd, and Consul.

Chapter 5, Service Mesh Interface and SPIFFE, provides an introduction to the evolving service mesh interface specification. The SPIFFE specification offers secure naming for the services running in a Kubernetes environment.

Chapter 6, Building Your Own Kubernetes Environment, explains how, in order to learn about service meshes with any of the three providers throughout this book, having a development environment is essential. There are choices when it comes to spinning a Kubernetes cluster in a public cloud, and that requires an upfront cost. This chapter provides a straightforward way to build your single-node Kubernetes environment so that you can practice the examples using your laptop or MacBook.

Chapter 7, Understanding the Istio Service Mesh, shows the architecture of the Istio control plane and its features and functions. 

Chapter 8, Installing a Demo Application, shows how to install the demo application for Istio.

Chapter 9, Installing Istio, shows the different ways of installing Istio using separate profiles to suit the end goal of a service mesh.

Chapter 10, Exploring Istio Traffic Management Capabilities, shows Istio's features of traffic routing from the perspectives of canary testing, A/B testing, traffic splitting, shaping, and conditional routing.

Chapter 11, Exploring Istio Security Features, explores how to secure service-to-service communication using mTLS, securing gateways, and using Istio Citadel as a certificate authority.

Chapter 12, Enabling Istio Policy Controls, explores enabling network controls, rate limits, and the enforcement of quotas without having to change the application.

Chapter 13, Exploring Istio Telemetry Features, looks at using observability features in Prometheus, Grafana, and Kiali to display collected metrics and service-to-service communication.

Chapter 14, Understanding the Linkerd Service Mesh, shows the architecture of Linkerd from the control plane perspective to demonstrate its features and functions. 

Chapter 15, Installing Linkerd, shows how to install Linkerd in Kubernetes, how to set up a Linkerd demo emoji application, and how to inject a sidecar proxy.

Chapter 16, Exploring the Reliability Features of Linkerd, goes through Linkerd's traffic reliability features and covers load balancing, retries, traffic splitting, timeouts, circuit breaking, and dynamic request routing.

Chapter 17, Exploring the Security Features of Linkerd, explains how Linkerd sets up mTLS by default without any configuration, and walks through the certificate creation process for a gradual installation.

Chapter 18, Exploring the Observability Features of Linkerd, details the Linkerd dashboard and CLI, which provide insights into the service mesh, including live traffic, success rates, routes, and latencies.

Chapter 19, Understanding the Consul Service Mesh, shows the architecture of Consul from the control plane perspective to demonstrate its features and functions. 

Chapter 20, Installing Consul, shows how to install Consul in Kubernetes and VMs/bare-metal machines.

Chapter 21, Exploring the Service Discovery Features of Consul, shows a demo application explaining Consul service discovery, key/value stores, ACLs, intentions, and monitoring/metrics collection. We explain the integration process of external services running in a non-Kubernetes environment.

Chapter 22, Exploring Traffic Management in Consul, shows the integration of Consul using the open source project Ambassador. It shows traffic management capabilities such as rate limits, self-service routing, testing, and enabling end-to-end TLS through the use of an Envoy sidecar proxy.

Useful terms

This book contains a number of specific terms that you might not have come across before, and here is a brief glossary to help you while reading this book:

Ingress gateway: In Kubernetes, an ingress is an object that allows external access to internal microservices. An ingress is a collection of rules to route external traffic to services inside the Kubernetes cluster. In Istio, the ingress gateway sits at the edge of the cluster and allows the creation of multiple ingress gateways to configure access to the cluster.
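As a minimal sketch (assuming the networking.istio.io/v1alpha3 API used by the Istio releases covered in this book; the hostname is illustrative), an Istio ingress gateway definition might look like this:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bookinfo.example.com"   # illustrative hostname

A virtual service bound to this gateway then routes the accepted traffic to services inside the mesh.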

Egress gateway: The egress gateway is a feature of Istio that allows microservices running inside a Kubernetes cluster to access external services. This gateway also sits at the edge of the service mesh.
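As a related sketch, Istio uses a ServiceEntry to register an external host with the mesh so that egress traffic can be routed to it (optionally through an egress gateway); api.example.com is an illustrative host:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com          # illustrative external host
  location: MESH_EXTERNAL    # the service lives outside the mesh
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS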

Polyglot programming: This is the practice of writing code in multiple languages for services. For example, we can write different microservices in different languages, such as Go, Java, Ruby, and Python, and yet they can still communicate with one another.

A/B testing: This is testing between two versions (A and B) of a microservice while both are in production.

Canary release: A canary release makes a new version of a microservice available to a small subset of users in a production environment alongside the old version, allowing cloud-native applications to move faster. Once the new version can be used with confidence, the old version can be taken out of service without any ensuing disruption.
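As a minimal sketch using Istio (assuming subsets v1 and v2 have been defined in a destination rule for the reviews service of the BookInfo demo application used in this book), a weighted route sends 10% of traffic to the canary:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1       # old version keeps 90% of the traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2       # canary version receives 10%
      weight: 10

Gradually shifting the weights to 0/100 completes the rollout.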

Circuit breaker: Communication between microservices may fail due to latency or faults. The circuit breaker breaks the connection between microservices following the detection of latency or faults. The incoming traffic then reroutes to other microservices to avoid partial or cascading failures. The circuit breaker helps to attain load balancing and to prevent the continual overloading of a particular system.
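In Istio, for example, circuit breaking is configured through a destination rule. The following is a hedged sketch (field names match the Istio 1.3-era API used in this book; the thresholds are illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100        # cap concurrent connections to the service
      http:
        http1MaxPendingRequests: 10
    outlierDetection:
      consecutiveErrors: 5         # eject an instance after 5 consecutive errors
      interval: 30s                # how often instances are scanned
      baseEjectionTime: 120s       # how long an ejected instance stays out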

To get the most out of this book

You will get the most out of this book by building an environment yourself and practicing with it using the examples provided herein.

If you have not used Kubernetes before, it is best to follow the example of building your Kubernetes environment on your Windows laptop or MacBook. This book is not about Kubernetes, but having a Kubernetes environment is a must. We explain how to build your Kubernetes environment in Chapter 6, Building Your Own Kubernetes Environment.

If you are comfortable with any other Kubernetes provider, you can take and test the examples in a Kubernetes environment of your choosing.

Since technology is evolving rapidly, we have a GitHub repository, which you can refer to for the latest changes.

You can practice the examples given in this book on either a Windows or macOS platform. The hardware/software requirements are as follows. Refer to Chapter 6, Building Your Own Kubernetes Environment, for further details.

Software/Hardware covered in the book | OS Requirements
Workstation/laptop or MacBook with a minimum of 16 GB RAM, an Intel Core i7 or higher, and a minimum of 512 GB SSD | Windows 10 or macOS (2015 or later)
VMware Player 15.x or VMware Fusion 11.x | Windows or macOS
7-Zip for Windows or Free 7z Unarchiver for macOS | Windows or macOS

If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to copy/pasting of code.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.
2. Select the Support tab.
3. Click on Code Downloads.
4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once you download the file, please make sure that you unzip or extract the folder using the latest version of:

7-Zip for Windows

Free 7z Unarchiver for macOS

The code bundle for the book is on GitHub at https://github.com/PacktPublishing/Mastering-Service-Mesh.

Note: For the implementation chapters throughout this book, we recommend that readers pull all the necessary source code files from https://github.com/servicemeshbook/ for Istio, Linkerd, and Consul. We provide chapter-specific repository links, with clear instructions regarding all GitHub repository exports. Both the Mastering-Service-Mesh and servicemeshbook GitHub pages will continue to stay active and up to date.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781789615791_ColorImages.pdf.

Conventions used

There are several text conventions used throughout this book.

CodeInText: Indicates code words in a text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Optionally, you can configure a separate disk to mount /var/lib/docker and restart Docker."

A block of code is as follows:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: SVC-A-mTLS-disable
  namespace: ns1
spec:
  targets:
  - name: Service-A
  peers:
  - mtls:
      mode: DISABLE

When we wish to draw your attention to a particular part of a code block, the relevant lines are shown in bold:

peers:
- mtls:
    mode: DISABLE

Any command-line input or output is written as follows:

$ kubectl get pods

$ istioctl proxy

Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "On the left-hand menu under Workloads, click Pods."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Errata

The technology landscape is evolving rapidly. When we started writing this book, the current Istio release was 1.0.3; as of this book's publication, it is 1.3.5. It is a similar case with Linkerd and Consul. Time to market is of the essence, and these three open source projects show a true CI/CD (short for Continuous Integration and Continuous Delivery) approach using agile DevOps tools.

In order to run commands and scripts from this book, stick to the version used herein. However, we will update our GitHub repository for this book at https://github.com/servicemeshbook with newer versions that will be released in the future. You can switch to the newer branch in each repository for updated scripts and commands.

We were conscientious and tested all three service meshes hands-on during development, but it is likely that some issues remain. We suggest that you open an issue for any problems you encounter while going through the book. Use these links to report errata and bugs:

Istio: https://github.com/servicemeshbook/istio/issues

Linkerd: https://github.com/servicemeshbook/linkerd/issues

Consul: https://github.com/servicemeshbook/consul/issues

Your feedback is important to us and you may open an issue for suggestions and any further proposed improvements in relation to the above-mentioned service meshes.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find an error in this book, we appreciate it if you report this to us. Please visit https://www.packtpub.com/support/errata, select the book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the internet, please report to us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, please leave a review on the site you purchased it from. Your comments help us to improve future revisions. If you like the book, leave a positive review to help other potential readers make an informed decision. Reviews also let us at Packt understand what you think about our products, and allow our authors to see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Section 1: Cloud-Native Application Management

In this section, you will look at the high-level artifacts of cloud-native applications in order to understand the service mesh architecture.

This section contains the following chapters:

Chapter 1, Monolithic Versus Microservices

Chapter 2, Cloud-Native Applications

Monolithic Versus Microservices

The purpose of this book is to walk you through the service mesh architecture. We will cover the three main open source service mesh providers: Istio, Linkerd, and Consul. First of all, we will talk about how the evolution of technology led to the service mesh. In this chapter, we will cover the application development journey from monolithic to microservices.

The technology landscape that fueled the growth of the monolithic framework is based on the technology stack that became available 20+ years ago. As hardware and software virtualization improved significantly, a new wave of innovation started with the adoption of microservices in 2011 by Netflix, Amazon, and other companies. This trend started with the redesign of monolithic applications into small, independent microservices.

Before we get started on monolithic versus microservices, let's take a step back and review what led to where we are today before the inception of microservices. This chapter will go through the brief evolution of early computer machines, hardware virtualization, software virtualization, and transitioning from monolithic to microservices-based applications. We will try to summarize the journey from the early days to where we are today.

In this chapter, we will cover the following topics:

Early computer machines

Monolithic applications

Microservices applications

Early computer machines

IBM launched its first commercial computer (https://ibm.biz/Bd294n), the IBM 701, in 1953, which was the most powerful high-speed electronic calculator of that time. Further progression of the technology produced mainframes, and that revolution was started in the mid-1950s (https://ibm.biz/Bd294p).

Even before co-founding Intel in 1968 with Robert Noyce, Gordon Moore espoused his theory of Moore's Law (https://intel.ly/2IY5qLU) in 1965, which states that the number of transistors incorporated in a chip will approximately double every 24 months. This exponential growth continues to this day, though the trend may not last much longer.

IBM created its first official VM product called VM/370 in 1972 (http://www.vm.ibm.com/history), followed by hardware virtualization on the Intel/AMD platform in 2005 and 2006. Monolithic applications were the only choice on early computing machines.

Early machines ran only one operating system. As time passed and machines grew in size, a need to run multiple operating systems by slicing the machines into smaller virtual machines led to the virtualization of hardware.

Hardware virtualization

Hardware virtualization led to the proliferation of virtual machines in data centers. Greg Kalinsky, EVP and CIO of Geico, mentioned the use of 70,000 virtual machines in his keynote address to the IBM Think 2019 conference. The management of virtual machines required a different set of tools. In this area, VMware was very successful in the Intel market, whereas IBM's Hardware Management Console (HMC) was widely used on POWER systems for creating Logical Partitions (LPARs) with PowerVM. Hardware virtualization had its own overheads, but it has been very popular for running multiple operating systems on the same physical machine.

Different monolithic applications have different OS and language runtime requirements, and hardware virtualization made it possible to run them on the same hardware using multiple virtual machines. During this period of hardware virtualization, work on enterprise applications using the Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB) started to evolve, which led to large monolithic applications.

Software virtualization

The next wave of innovation started with software virtualization through the use of containerization technology. Though not new, software virtualization started to get serious traction when tools made it easier to adopt. Docker was an early pioneer in this space, making software virtualization available to general IT professionals.

Solomon Hykes started dotCloud in 2010 and renamed it Docker in 2013. Software virtualization became possible due to advances in technology that provide namespace, filesystem, and process isolation while still using the same kernel, running in a bare-metal environment or in a virtual machine.

Software virtualization using containers provides better resource utilization compared to running multiple virtual machines, leading to 30% to 40% gains in effective resource utilization. A virtual machine usually takes seconds to minutes to initialize, whereas a container shares the same kernel space, so its startup time is much quicker.

As a matter of fact, Google had used software virtualization at a very large scale, relying on containerization for close to 10 years, before revealing the existence of its internal project, known as Borg. When Google published a research paper at the EuroSys 2015 conference (https://goo.gl/Ez99hu) about its approach to managing data centers using containerization technology, it piqued the interest of many technologists. At the very same time, Docker exploded in popularity during 2014 and 2015, which made software virtualization simple enough to use.

One of the main benefits of software virtualization (also known as containerization) is that it eliminates the dependency problem for a particular piece of software. For example, the Linux glibc is the main building-block library, and there are hundreds of libraries that depend on a particular version of glibc. We could build a Docker container that has a particular version of glibc, and it could run on a machine that has a later version of glibc. Normally, maintaining two software stacks built using different versions of glibc is very complex, but containers make this very simple. Docker is credited with creating a simple user interface that made software packaging easy and accessible to developers.

Software virtualization made it possible to run different monolithic applications on the same hardware (bare metal) or within the same virtual machine. This also led to the birth of smaller services (each a complete business function) packaged as independent software units. This is when the era of microservices started.

Container orchestration

It is easy to manage a few containers and their deployment. When the number of containers increases, a container orchestration platform makes deployment and management simpler and easier through declarative prescriptions. As containerization proliferated in 2015, the orchestration platform for containerization also evolved. Docker came with its own open source container orchestration platform known as Docker Swarm, which was a clustering and scheduling tool for Docker containers. 

Apache Mesos, though not exactly similar to Docker Swarm, was built using the same principles as the Linux kernel. It is an abstraction layer between applications and the Linux kernel, meant for distributed computing, and acts as a cluster manager with an API for resource management and scheduling.

Kubernetes was the open source evolution of Google's Borg project, and its first version was released in 2015 through the Cloud Native Computing Foundation (https://cncf.io) as its first incubator project. 

Major companies such as Google, Red Hat, Huawei, ZTE, VMware, Cisco, Docker, AWS, IBM, and Microsoft are contributing to the Kubernetes open source platform, and it has become a modern cluster manager and container orchestration platform. It is no surprise that Kubernetes has become the de facto platform and is now used by all major cloud providers, with 125 companies working on it and more than 2,800 contributors adding to it (https://www.stackalytics.com/cncf?module=kubernetes).

As container orchestration began to simplify cluster management, it became easy to run microservices in a distributed environment, which made microservices-based applications loosely coupled systems with horizontal scale-out possibilities. 

Horizontal scale-out distributed computing is not new; IBM's shared-nothing architecture for the Db2 database (a monolithic application) has been in use since 1998. What is new is the loosely coupled microservices that can run and scale out easily using a modern cluster manager.

A three-tier architecture, such as Model-View-Controller (MVC), or SOA was a common architectural pattern for monolithic applications on bare-metal or virtualized machines. This type of pattern was well suited to static data center environments, where machines could be identified through IP addresses and changes were managed through DNS. This started to change with the use of distributed applications that could run on any machine (which meant the IP address could change) in the case of failures. This shift moved slowly from a static data center approach to a dynamic data center approach, where identification is now done through the name of the microservice rather than the IP address of the machine or container pod where the workload runs.
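Kubernetes embodies this name-based identification: a Service gives a stable DNS name to a set of pods whose IP addresses come and go. A minimal sketch (the reviews name and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: reviews            # clients address http://reviews:9080, never a pod IP
spec:
  selector:
    app: reviews           # matches any pod carrying this label
  ports:
  - port: 9080
    targetPort: 9080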

This fundamental shift from static to dynamic infrastructure is the basis for the evolution from a monolithic to a microservices architecture. Monolithic applications are tightly coupled and have a single code base that is released in one instance for the entire application stack. Changing a single component without affecting others is a very difficult process, but the monolith provides simplicity. On the other hand, microservices applications are loosely coupled, and multiple code bases can be released independently of each other. Changing a single component is easy, but the overall system loses the simplicity of a monolith.

We will cover a brief history of monolithic and microservices applications in the next section in order to develop a context. This will help us transition to the specific goals of this book.

Monolithic applications

The application evolution journey from monolithic to microservices can be seen in the following diagram:

Monolithic applications were created from small applications and then built up to create a tiered architecture that separated the frontend from the backend, and the backend from the data sources. In this architecture, the frontend manages user interaction, the middle tier manages the business logic, and the backend manages data access. This can be seen in the following diagram:

In the preceding diagram, the middle tier, also known as the business logic, is tightly bound to the frontend and the backend. This is a one-dimensional monolithic experience where all the tiers are in one straight line.

The three-tier modular client-server architecture, consisting of a frontend tier, an application tier, and a database tier, is more than 20 years old now. It served its purpose of allowing people to build complex enterprise applications, with known limitations regarding complexity, software upgrades, and zero downtime.

A large development team commits its code to a source code repository such as GitHub. The deployment process from code commit to production used to be manual before the CI/CD pipeline came into existence. Releases needed to be tested manually, although there were some automated test cases. Organizations used to declare a code freeze while moving the code into production. The application became overly large, complex, and very difficult to maintain in the long term. When the original developers were no longer available, it became very difficult and time-consuming to add enhancements.

To overcome the aforementioned limitations, the concept of SOA started to evolve from around 2002 onward, and the Enterprise Service Bus (ESB) evolved to establish a communication link between different applications in SOA.

Brief history of SOA and ESB

The one-dimensional model of the three-tier architecture was split into a multi-dimensional SOA, where inter-service communication was enabled through ESB using the Simple Object Access Protocol (SOAP) and other web services standards.

SOA, along with ESB, could be used to break down a large three-tier application into services, where applications were built using these reusable services. The services could be dynamically discovered using service metadata through a metadata repository. With SOA, each functionality is built as a coarse-grained service that's often deployed inside an application server. 

Multiple services need to be integrated to create composite services that are exposed through the ESB layer, which becomes a centralized bus for communication. This can be seen in the following diagram:

The preceding diagram shows the consumer and provider model connected through the ESB. The ESB also contains significant business logic, making it a monolithic entity where the same runtime is shared by developers in order to develop or deploy their service integrations.

In the next section, we'll talk about API gateways. The concept of the API gateway evolved around 2008 with the advent of smartphones, which provide rich client applications that need easy and secure connectivity to the backend services.

API Gateway

The SOA/web services were not ideal for exposing business functionality as APIs. This was due to the complex nature of web service-related technologies in which SOAP is used as a message format for service-to-service communication. SOAP was also used for securing web services and service-to-service communication, as well as for defining service discovery metadata. SOAP lacked a self-service model, which hindered the development of an ecosystem around it.

We use the term application programming interface (API) to describe a service that's exposed over REST (HTTP/JSON) or as a web service (SOAP/HTTP). An API gateway was typically built on top of existing SOA/ESB implementations so that business functionality could be exposed securely as a managed service. This can be seen in the following diagram:

In the preceding diagram, the API gateway is used to expose the three-tier and SOA/ESB-based services, in which the business logic contained in the ESB still hinders the development of independent services. 
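To make the gateway's role concrete, here is a minimal sketch in Go rather than any particular gateway product; the backend hostnames, the ports, and the X-API-Key check are hypothetical stand-ins for a gateway's routing and security policies:

// A minimal API gateway sketch. It routes requests by path prefix to two
// hypothetical backend services and rejects requests without an API key,
// illustrating how a gateway exposes existing services as a managed API.
package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// proxyTo builds a reverse proxy for a single backend service.
func proxyTo(rawURL string) *httputil.ReverseProxy {
    target, err := url.Parse(rawURL)
    if err != nil {
        log.Fatal(err)
    }
    return httputil.NewSingleHostReverseProxy(target)
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/orders/", proxyTo("http://orders.internal:8081"))
    mux.Handle("/catalog/", proxyTo("http://catalog.internal:8082"))

    // A naive check standing in for the gateway's security policy layer.
    gateway := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("X-API-Key") == "" {
            http.Error(w, "missing API key", http.StatusUnauthorized)
            return
        }
        mux.ServeHTTP(w, r)
    })

    log.Fatal(http.ListenAndServe(":8080", gateway))
}

A production gateway would add authentication, rate limiting, and metering, but the principle is the same: a single managed entry point sits in front of services that may still be backed by a three-tier or SOA/ESB implementation.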

With the availability of containerization, the new paradigm of microservices started to evolve from the SOA/ESB architecture in 2012 and took off in earnest in 2015.

Drawbacks of monolithic applications

Monolithic applications are simple to develop, deploy, and scale as long as they remain small.

As the size and complexity of monoliths grow, various disadvantages arise, such as the following:

Development is slow.

Large monolithic code bases intimidate new developers.

The application is difficult to understand and modify.

Software releases are painful and occur infrequently.

The IDE and web container become overloaded by the sheer size of the code base.

Continuous deployment is difficult, requiring a code freeze period to deploy.

Scaling the application can be difficult due to an increase in data volume.

Scaling development can be difficult.

Requires long-term commitment to a technology stack.

Lack of reliability due to difficulty in testing the application thoroughly.

Enterprise application development is coordinated among many smaller teams that can work independently of each other. As an application grows in size, the aforementioned complexities lead these teams to look for better approaches, resulting in the adoption of microservices.

Microservices applications

A small number of developers recognized the need for new thinking early on and started working on the evolution of a new architecture, called microservices, which took shape in early 2014.

Early pioneers

A few individuals took a leap forward in moving their respective companies away from monoliths toward small, manageable services. One of the most notable is Jeff Bezos, Amazon's CEO, who famously issued a mandate at Amazon (https://bit.ly/2Hb3NI5) in 2002 stating that all teams had to adopt a service interface methodology in which all communication happens over the network. This daring initiative replaced the monolith with a collection of loosely coupled services. One nugget of wisdom from Jeff Bezos was the two-pizza team: an individual team shouldn't be larger than what two pizzas can feed. This colloquial wisdom is at the heart of shorter development cycles, increased deployment frequency, and faster time to market.

Netflix adopted microservices early on. It's important to mention Netflix's open source software (OSS) contributions through https://netflix.github.io. Netflix also created a suite of automated open source tools, the Simian Army (https://github.com/Netflix/SimianArmy), to stress-test its massive cloud infrastructure. The rate at which Netflix has adopted and implemented new technologies is phenomenal.

Lyft adopted microservices and created Envoy (https://www.envoyproxy.io/), an open source distributed proxy for services and applications. Envoy would later become a core component of popular service mesh implementations such as Istio and Consul. 

Though this book is not about developing microservices applications, we will briefly discuss the microservices architecture so that it is relevant from the perspective of a service mesh. 

In the early 2000s, when machines were still bare metal, three-tier monolithic applications ran on more than one machine, leading to a form of distributed computing that was very tightly coupled. Bare metal evolved into VMs, and monolithic applications evolved into SOA/ESB with an API gateway. This trend continued until 2015, when the advent of containers disrupted the SOA/ESB way of thinking in favor of self-contained, independently managed services. It was in this context that the term microservice took hold.

The term microservice was first used at a workshop of software architects in 2011 (https://bit.ly/1KljYiZ) to describe a common architectural style as fine-grained SOA.

Chris Richardson created https://microservices.io in January 2014 to document architecture and design patterns.

James Lewis and Martin Fowler published their blog post about microservices (https://martinfowler.com/articles/microservices.html) in March 2014, which popularized the term. 

The microservices boom started with easy containerization, made possible by Docker, and with Kubernetes, the de facto container orchestration platform created for distributed computing.

What is a microservice?

The natural transition from SOA/ESB is toward microservices, in which services are decoupled from a monolithic ESB. Let's go over the core points of microservices:

Each service is autonomous; it is developed and deployed independently.

Each microservice can be scaled independently if it receives more traffic, without having to scale the other microservices. 

Each microservice is designed around the business capabilities at hand so that each service serves a specific business goal, following the simple principle that it does only one thing, and does it well.

Since services do not share the same execution runtime, each microservice can be developed in a different language, or in a polyglot fashion, giving developers the agility to pick the best programming language for their own service.

The microservices architecture eliminated the need for a centralized ESB. The business logic, including inter-service communication, is achieved through smart endpoints and dumb pipes. This means that the centralized business logic of the ESB is now distributed among the microservices through smart endpoints, while a primitive messaging system, or dumb pipe, is used for service-to-service communication over a lightweight protocol such as REST or gRPC. 
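As a brief illustration, the following minimal Go sketch shows a smart endpoint and a dumb pipe; the service names, ports, URL, and pricing rule are hypothetical:

// A "smart endpoint, dumb pipe" sketch: the pipe is plain HTTP/JSON and
// carries bytes, nothing more; the business logic lives in the endpoint.
package main

import (
    "encoding/json"
    "log"
    "net/http"
)

type product struct {
    ID    string  `json:"id"`
    Price float64 `json:"price"`
}

func main() {
    http.HandleFunc("/quote", func(w http.ResponseWriter, r *http.Request) {
        // Dumb pipe: a plain HTTP GET to a hypothetical catalog service.
        resp, err := http.Get("http://catalog:8082/products/featured")
        if err != nil {
            http.Error(w, "catalog unavailable", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()

        var p product
        if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
            http.Error(w, "bad catalog response", http.StatusBadGateway)
            return
        }

        // Smart endpoint: the pricing rule lives here, not in a central bus.
        p.Price = p.Price * 0.9 // apply a 10% promotion
        json.NewEncoder(w).Encode(p)
    })
    log.Fatal(http.ListenAndServe(":8081", nil))
}

There is no ESB mediating this call; if the message format or the promotion rule changes, only the two services involved need to change.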

The evolution of SOA/ESB to the microservices pattern was mainly influenced by the need to adapt to smaller teams that are independent of each other and to provide a self-service model for consuming the services those teams create. At the time of writing, microservices is a winning pattern that's being adopted by many enterprises to modernize their existing monolithic application stacks.

Evolution of microservices

The following diagram shows the evolution of the application architecture from a three-tier architecture to SOA/ESB and then to microservices, in terms of increasing scalability and decoupling:

Microservices evolved from the tiered and SOA/ESB architectures and are becoming the accepted pattern for building modern applications. This is due to the following reasons:

Extreme scalability

Extreme decoupling

Extreme agility

These are the key points in the design of a distributed, scalable application, where developers can pick the programming language of their choice to develop each service.

A major difference between monoliths and microservices is that, with microservices, the services are loosely coupled and communicate over dumb pipes using lightweight protocols such as REST or gRPC. One way to achieve loose coupling is to use a separate data store for each service. This helps services isolate themselves from each other, since a particular service is never blocked by another service holding a data lock. Separate data stores also allow the microservices to scale up and down, along with their data stores, independently of all the other services.
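The following minimal Go sketch illustrates the data-ownership side of loose coupling; the inventory service, its port, and its SKUs are hypothetical, and an in-memory map stands in for the service's own database:

// A database-per-service sketch: the stock map is private to this service,
// so other services can only reach the data through the HTTP API.
package main

import (
    "encoding/json"
    "log"
    "net/http"
    "sync"
)

type inventoryService struct {
    mu    sync.RWMutex
    stock map[string]int // owned exclusively by this service
}

func (s *inventoryService) handleStock(w http.ResponseWriter, r *http.Request) {
    sku := r.URL.Query().Get("sku")
    s.mu.RLock()
    qty, ok := s.stock[sku]
    s.mu.RUnlock()
    if !ok {
        http.NotFound(w, r)
        return
    }
    json.NewEncoder(w).Encode(map[string]int{"qty": qty})
}

func main() {
    svc := &inventoryService{stock: map[string]int{"A100": 7}}
    http.HandleFunc("/stock", svc.handleStock)
    log.Fatal(http.ListenAndServe(":8083", nil))
}

Because no other service can take a lock on this data or couple itself to its schema, the inventory service and its store can be scaled, or even re-implemented, without touching its consumers.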

Next, let's look at the microservices architecture in more detail.

Microservices architecture

The aim of a microservice architecture is to completely decouple app components from one another so that they can be maintained, scaled, and deployed independently. It's an evolution of the tiered app architecture, SOA, and published APIs:

SOA: Focuses on reuse, technical integration issues, and technical APIs

Microservices: Focus on functional decomposition, business capabilities, and business APIs

It has been argued that the microservice architecture would have been better named the micro-component architecture because it is really about breaking apps up into smaller pieces (micro-components). For more information, see Microservices, by Martin Fowler, at https://martinfowler.com/articles/microservices.html, as well as Kim Clark's IBM blog post at https://developer.ibm.com/integration/blog/2017/02/09/microservices-vs-soa, where he argues that microservices are really micro-components. 

The following diagram shows the microservice architecture in which different clients consume the same services. Each service can use the same/different language and can be deployed/scaled independently of each other:

Each microservice runs in its own process. Services are optimized for a single function, and each must have one, and only one, reason to change. Communication between services happens through REST APIs and message brokers. CI/CD is defined per service. Services evolve at different paces, and the scaling policy for each service can be different.

Benefits and drawbacks of microservices

The explosion of microservices is no accident; it is mainly due to rapid development and scalability:

Rapid development: Develop and deploy a single service independently. Focus only on the interface and the functionality of the service, not the functionality of the entire system.

Scalability: Scale a service independently without affecting others. This is simple and easy to do in a Kubernetes environment.

The other benefits of microservices are as follows:

Each service can use a different language (better polyglot adaptability).

Services are developed on their own timetables so that the new versions are delivered independently of other services.

The development of microservices is suited for cross-functional teams.

Improved fault isolation.

Eliminates any long-term commitment to a technology stack.

However, microservices are not a panacea, and they come with drawbacks:

The complexity of a distributed system.

Increased resource consumption.

Inter-service communication across the network adds latency and new failure modes.

Testing dependencies in a microservices-based application without the right tooling can be very cumbersome.

When a service fails, it becomes very difficult to identify the cause of the failure.