Microservices with Spring Boot and Spring Cloud - Magnus Larsson - E-Book


Description

Want to build and deploy microservices, but don’t know where to start? Welcome to Microservices with Spring Boot and Spring Cloud.

This edition features the most recent versions of Spring, Java, Kubernetes, and Istio, demonstrating faster and simpler handling of Spring Boot, local Kubernetes clusters, and Istio installation. The expanded scope includes native compilation of Spring-based microservices, support for Mac and Windows with WSL2, and an introduction to Helm 3 for packaging and deployment. A revamped security chapter now follows the OAuth 2.1 specification and makes use of the newly launched Spring Authorization Server from the Spring team.

You’ll start with a set of simple cooperating microservices, then add persistence and resilience, make your microservices reactive, and document their APIs using OpenAPI.

Next, you’ll learn how fundamental design patterns are applied to add important functionality, such as service discovery with Netflix Eureka and edge servers with Spring Cloud Gateway. You’ll deploy your microservices using Kubernetes and adopt Istio, then explore centralized log management using the Elasticsearch, Fluentd, and Kibana (EFK) stack, and then monitor microservices using Prometheus and Grafana.

By the end of this book, you'll be building scalable and robust microservices using Spring Boot and Spring Cloud.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 836

Year of publication: 2021




Microservices with Spring Boot and Spring Cloud

Second Edition

Build resilient and scalable microservices using Spring Cloud, Istio, and Kubernetes

Magnus Larsson

BIRMINGHAM—MUMBAI

Microservices with Spring Boot and Spring Cloud

Second Edition

Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Producer: Caitlin Meadows

Acquisition Editor – Peer Reviews: Saby Dsilva

Project Editor: Rianna Rodrigues

Content Development Editor: Lucy Wan

Copy Editor: Safis Editing

Technical Editor: Aniket Shetty

Proofreader: Safis Editing

Indexer: Pratik Shirodkar

Presentation Designer: Ganesh Bhadwalkar

First published: September 2019

Second edition: July 2021

Production reference: 1270721

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80107-297-7

www.packt.com

Contributors

About the author

Magnus Larsson has been in the IT industry for 35 years, working as a consultant for large companies in Sweden such as Volvo, Ericsson, and AstraZeneca. In the past, he struggled with the challenges associated with distributed systems. Today, these challenges can be handled with open source tools such as Spring Cloud, Kubernetes, and Istio. Over the last few years, Magnus has been helping customers use these tools and has also given several presentations and written blog posts on the subject.

I would like to thank the following people:

Caitlin Meadows, Lucy Wan, Rianna Rodrigues, and Aniket Shetty from Packt Publishing for their support.

To my wife, Maria, thank you for all of your support and understanding throughout the process of writing this book.

About the reviewer

Kirill Merkushev is an engineer with a wide background in server-side development, infrastructure, and test automation. Starting off as an intern in Personal Services at Yandex, he quickly became a team lead, helping others to automate any kind of development process. He worked on a number of internal projects with amazing people who really love their job! In that kind of environment, it was incredibly easy for him to learn new approaches, frameworks, and languages. Given the size of Yandex and its services, it was a great chance to try out things at scale, for example, early reactive libraries in Java, the freshly released Spring Boot, the rock-solid Apache Camel, and golang.

During that time he became an open source expert, maintaining several projects including the Jenkins GitHub plugin, Aerokube Selenoid, and dozens of small libraries. After 7 years at Yandex, an opportunity to work in Germany in a small but quite promising health-tech startup called Vivy brought him to Berlin, where new challenges emerged, like how to build an event-sourced system, use encryption for good, and operate an internal Apache Pulsar cluster.

Now he is a happy power user of Testcontainers, father of two kids, Brompton rider, and a reviewer of this book!

I'd like to thank Sergei Egorov, who has shared tons of knowledge with me; Andrei Andryashin, who helped me with my first server-side development issues; Artem Eroshenko, who taught me how to give my first public talks; obviously, my wife, who makes it possible for me to code and review books in a comfortable environment any time of day; and my sons, who can already understand that daddy is actually working when he sits all day long in front of a PC!

Contents

Preface

Who this book is for

What this book covers

To get the most out of this book

Get in touch

Part I: Getting Started with Microservice Development Using Spring Boot

Introduction to Microservices

Technical requirements

My way into microservices

Benefits of autonomous software components

Challenges with autonomous software components

Enter microservices

A sample microservice landscape

Defining a microservice

Challenges with microservices

Design patterns for microservices

Service discovery

Problem

Solution

Solution requirements

Edge server

Problem

Solution

Solution requirements

Reactive microservices

Problem

Solution

Solution requirements

Central configuration

Problem

Solution

Solution requirements

Centralized log analysis

Problem

Solution

Solution requirements

Distributed tracing

Problem

Solution

Solution requirements

Circuit breaker

Problem

Solution

Solution requirements

Control loop

Problem

Solution

Solution requirements

Centralized monitoring and alarms

Problem

Solution

Solution requirements

Software enablers

Other important considerations

Summary

Introduction to Spring Boot

Technical requirements

Spring Boot

Convention over configuration and fat JAR files

Code examples for setting up a Spring Boot application

The magic @SpringBootApplication annotation

Component scanning

Java-based configuration

Spring WebFlux

Code examples of setting up a REST service

Starter dependencies

Property files

Sample RestController

springdoc-openapi

Spring Data

Entity

Repositories

Spring Cloud Stream

Code examples for sending and receiving messages

Docker

Summary

Questions

Creating a Set of Cooperating Microservices

Technical requirements

Introducing the microservice landscape

Information handled by the microservices

Product service

Review service

Recommendation service

Product composite service

Infrastructure-related information

Temporarily replacing service discovery

Generating skeleton microservices

Using Spring Initializr to generate skeleton code

Setting up multi-project builds in Gradle

Adding RESTful APIs

Adding an API and a util project

The API project

The util project

Implementing our API

Adding a composite microservice

API classes

Properties

Integration component

Composite API implementation

Adding error handling

The global REST controller exception handler

Error handling in API implementations

Error handling in the API client

Testing APIs manually

Adding automated microservice tests in isolation

Adding semi-automated tests of a microservice landscape

Trying out the test script

Summary

Questions

Deploying Our Microservices Using Docker

Technical requirements

Introduction to Docker

Running our first Docker commands

Challenges with running Java in Docker

Limiting available CPUs

Limiting available memory

Using Docker with one microservice

Changes in source code

Building a Docker image

Starting up the service

Running the container detached

Managing a landscape of microservices using Docker Compose

Changes in the source code

Starting up the microservice landscape

Automating tests of cooperating microservices

Troubleshooting a test run

Summary

Questions

Adding an API Description Using OpenAPI

Technical requirements

Introduction to using springdoc-openapi

Adding springdoc-openapi to the source code

Adding dependencies to the Gradle build files

Adding OpenAPI configuration and general API documentation to the ProductCompositeService

Adding API-specific documentation to the ProductCompositeService interface

Building and starting the microservice landscape

Trying out the OpenAPI documentation

Summary

Questions

Adding Persistence

Technical requirements

Chapter objectives

Adding a persistence layer to the core microservices

Adding dependencies

Storing data with entity classes

Defining repositories in Spring Data

Writing automated tests that focus on persistence

Using Testcontainers

Writing persistence tests

Using the persistence layer in the service layer

Logging the database connection URL

Adding new APIs

Calling the persistence layer from the service layer

Declaring a Java bean mapper

Updating the service tests

Extending the composite service API

Adding new operations in the composite service API

Adding methods in the integration layer

Implementing the new composite API operations

Updating the composite service tests

Adding databases to the Docker Compose landscape

The Docker Compose configuration

Database connection configuration

The MongoDB and MySQL CLI tools

Manual tests of the new APIs and the persistence layer

Updating the automated tests of the microservice landscape

Summary

Questions

Developing Reactive Microservices

Technical requirements

Choosing between non-blocking synchronous APIs and event-driven asynchronous services

Developing non-blocking synchronous REST APIs

An introduction to Project Reactor

Non-blocking persistence using Spring Data for MongoDB

Changes in the test code

Non-blocking REST APIs in the core services

Changes in the APIs

Changes in the service implementations

Changes in the test code

Dealing with blocking code

Non-blocking REST APIs in the composite services

Changes in the API

Changes in the service implementation

Changes in the integration layer

Changes in the test code

Developing event-driven asynchronous services

Handling challenges with messaging

Consumer groups

Retries and dead-letter queues

Guaranteed order and partitions

Defining topics and events

Changes in the Gradle build files

Consuming events in the core services

Declaring message processors

Changes in the service implementations

Adding configuration for consuming events

Changes in the test code

Publishing events in the composite service

Publishing events in the integration layer

Adding configuration for publishing events

Changes in the test code

Running manual tests of the reactive microservice landscape

Saving events

Adding a health API

Using RabbitMQ without using partitions

Using RabbitMQ with partitions

Using Kafka with two partitions per topic

Running automated tests of the reactive microservice landscape

Summary

Questions

Part II: Leveraging Spring Cloud to Manage Microservices

Introduction to Spring Cloud

Technical requirements

The evolution of Spring Cloud

Using Netflix Eureka for service discovery

Using Spring Cloud Gateway as an edge server

Using Spring Cloud Config for centralized configuration

Using Resilience4j for improved resilience

Sample usage of the circuit breaker in Resilience4j

Using Spring Cloud Sleuth and Zipkin for distributed tracing

Summary

Questions

Adding Service Discovery Using Netflix Eureka

Technical requirements

Introducing service discovery

The problem with DNS-based service discovery

Challenges with service discovery

Service discovery with Netflix Eureka in Spring Cloud

Setting up a Netflix Eureka server

Connecting microservices to a Netflix Eureka server

Setting up the configuration for development use

Eureka configuration parameters

Configuring the Eureka server

Configuring clients to the Eureka server

Trying out the discovery service

Scaling up

Scaling down

Disruptive tests with the Eureka server

Stopping the Eureka server

Stopping a review instance

Starting up an extra instance of the product service

Starting up the Eureka server again

Summary

Questions

Using Spring Cloud Gateway to Hide Microservices behind an Edge Server

Technical requirements

Adding an edge server to our system landscape

Setting up Spring Cloud Gateway

Adding a composite health check

Configuring a Spring Cloud Gateway

Routing rules

Trying out the edge server

Examining what is exposed outside the Docker engine

Trying out the routing rules

Calling the product composite API through the edge server

Calling the Swagger UI through the edge server

Calling Eureka through the edge server

Routing based on the host header

Summary

Questions

Securing Access to APIs

Technical requirements

Introduction to OAuth 2.0 and OpenID Connect

Introducing OAuth 2.0

Introducing OpenID Connect

Securing the system landscape

Protecting external communication with HTTPS

Replacing a self-signed certificate at runtime

Securing access to the discovery server

Changes in the Eureka server

Changes in Eureka clients

Adding a local authorization server

Protecting APIs using OAuth 2.0 and OpenID Connect

Changes in both the edge server and the product-composite service

Changes in the product-composite service only

Changes to allow Swagger UI to acquire access tokens

Changes in the test script

Testing with the local authorization server

Building and running the automated tests

Testing the protected discovery server

Acquiring access tokens

Acquiring access tokens using the client credentials grant flow

Acquiring access tokens using the authorization code grant flow

Calling protected APIs using access tokens

Testing Swagger UI with OAuth 2.0

Testing with an external OpenID Connect provider

Setting up and configuring an account in Auth0

Applying the required changes to use Auth0 as an OpenID provider

Changing the configuration in the OAuth resource servers

Changing the test script so it acquires access tokens from Auth0

Running the test script with Auth0 as the OpenID Connect provider

Acquiring access tokens using the client credentials grant flow

Acquiring access tokens using the authorization code grant flow

Calling protected APIs using the Auth0 access tokens

Getting extra information about the user

Summary

Questions

Centralized Configuration

Technical requirements

Introduction to the Spring Cloud Configuration server

Selecting the storage type of the configuration repository

Deciding on the initial client connection

Securing the configuration

Securing the configuration in transit

Securing the configuration at rest

Introducing the config server API

Setting up a config server

Setting up a routing rule in the edge server

Configuring the config server for use with Docker

Configuring clients of a config server

Configuring connection information

Structuring the configuration repository

Trying out the Spring Cloud Configuration server

Building and running automated tests

Getting the configuration using the config server API

Encrypting and decrypting sensitive information

Summary

Questions

Improving Resilience Using Resilience4j

Technical requirements

Introducing the Resilience4j resilience mechanisms

Introducing the circuit breaker

Introducing the time limiter

Introducing the retry mechanism

Adding the resilience mechanisms to the source code

Adding programmable delays and random errors

Changes in the API definitions

Changes in the product-composite microservice

Changes in the product microservice

Adding a circuit breaker and a time limiter

Adding dependencies to the build file

Adding annotations in the source code

Adding fail-fast fallback logic

Adding configuration

Adding a retry mechanism

Adding the retry annotation

Adding configuration

Adding automated tests

Trying out the circuit breaker and retry mechanism

Building and running the automated tests

Verifying that the circuit is closed under normal operations

Forcing the circuit breaker to open when things go wrong

Closing the circuit breaker again

Trying out retries caused by random errors

Summary

Questions

Understanding Distributed Tracing

Technical requirements

Introducing distributed tracing with Spring Cloud Sleuth and Zipkin

Adding distributed tracing to the source code

Adding dependencies to build files

Adding configuration for Spring Cloud Sleuth and Zipkin

Adding Zipkin to the Docker Compose files

Trying out distributed tracing

Starting up the system landscape with RabbitMQ as the queue manager

Sending a successful API request

Sending an unsuccessful API request

Sending an API request that triggers asynchronous processing

Monitoring trace information passed to Zipkin in RabbitMQ

Using Kafka as a message broker

Summary

Questions

Part III: Developing Lightweight Microservices Using Kubernetes

Introduction to Kubernetes

Technical requirements

Introducing Kubernetes concepts

Introducing Kubernetes API objects

Introducing Kubernetes runtime components

Creating a Kubernetes cluster using Minikube

Working with Minikube profiles

Working with the Kubernetes CLI, kubectl

Working with kubectl contexts

Creating a Kubernetes cluster

Trying out a sample deployment

Managing a local Kubernetes cluster

Hibernating and resuming a Kubernetes cluster

Terminating a Kubernetes cluster

Summary

Questions

Deploying Our Microservices to Kubernetes

Technical requirements

Replacing Netflix Eureka with Kubernetes Services

Introducing how Kubernetes will be used

Using Spring Boot's support for graceful shutdown and probes for liveness and readiness

Introducing Helm

Running Helm commands

Looking into a Helm chart

Helm templates and values

The common library chart

The ConfigMap template

The Secrets template

The Service template

The Deployment template

The components charts

The environment charts

Deploying to Kubernetes for development and test

Building Docker images

Resolving Helm chart dependencies

Deploying to Kubernetes

Changes in the test script for use with Kubernetes

Testing the deployment

Testing Spring Boot's support for graceful shutdown and probes for liveness and readiness

Deploying to Kubernetes for staging and production

Changes in the source code

Deploying to Kubernetes

Cleaning up

Summary

Questions

Implementing Kubernetes Features to Simplify the System Landscape

Technical requirements

Replacing the Spring Cloud Config Server

Changes required to replace the Spring Cloud Config Server

Replacing the Spring Cloud Gateway

Changes required to replace the Spring Cloud Gateway

Automating certificate provisioning

Testing with Kubernetes ConfigMaps, Secrets, Ingress, and the cert-manager

Rotating certificates

Deploying to Kubernetes for staging and production

Verifying that the microservices work without Kubernetes

Changes in the Docker Compose files

Testing with Docker Compose

Summary

Questions

Using a Service Mesh to Improve Observability and Management

Technical requirements

Introducing service meshes using Istio

Introducing Istio

Injecting Istio proxies into microservices

Introducing Istio API objects

Simplifying the microservice landscape

Replacing Kubernetes Ingress controller with Istio ingress gateway

Replacing the Zipkin server with Istio's Jaeger component

Deploying Istio in a Kubernetes cluster

Setting up access to Istio services

Creating the service mesh

Source code changes

Content in the _istio_base.yaml template

Content in the _istio_dr_mutual_tls.yaml template

Running commands to create the service mesh

Observing the service mesh

Securing a service mesh

Protecting external endpoints with HTTPS and certificates

Authenticating external requests using OAuth 2.0/OIDC access tokens

Protecting internal communication using mutual authentication (mTLS)

Ensuring that a service mesh is resilient

Testing resilience by injecting faults

Testing resilience by injecting delays

Performing zero-downtime updates

Source code changes

Virtual services and destination rules

Deployments and services

Tying things together in the prod-env Helm chart

Deploying v1 and v2 versions of the microservices with routing to the v1 version

Verifying that all traffic initially goes to the v1 version of the microservices

Running canary tests

Running blue/green deployment

A short introduction to the kubectl patch command

Performing the blue/green deployment

Running tests with Docker Compose

Summary

Questions

Centralized Logging with the EFK Stack

Technical requirements

Introducing Fluentd

Overview of Fluentd

Configuring Fluentd

Deploying the EFK stack on Kubernetes

Building and deploying our microservices

Deploying Elasticsearch and Kibana

A walkthrough of the manifest files

Running the deploy commands

Deploying Fluentd

A walkthrough of the manifest files

Running the deploy commands

Trying out the EFK stack

Initializing Kibana

Analyzing the log records

Discovering the log records from microservices

Performing root cause analyses

Summary

Questions

Monitoring Microservices

Technical requirements

Introduction to performance monitoring using Prometheus and Grafana

Changes in source code for collecting application metrics

Building and deploying the microservices

Monitoring microservices using Grafana dashboards

Installing a local mail server for tests

Starting up the load test

Using Kiali's built-in dashboards

Importing existing Grafana dashboards

Developing your own Grafana dashboards

Examining Prometheus metrics

Creating the dashboard

Trying out the new dashboard

Exporting and importing Grafana dashboards

Setting up alarms in Grafana

Setting up a mail-based notification channel

Setting up an alarm on the circuit breaker

Trying out the circuit breaker alarm

Summary

Questions

Installation Instructions for macOS

Technical requirements

Installing tools

Installing Homebrew

Using Homebrew to install tools

Install tools without Homebrew

Post-installation actions

Verifying the installations

Accessing the source code

Using an IDE

The structure of the code

Installation Instructions for Microsoft Windows with WSL 2 and Ubuntu

Technical requirements

Installing tools

Installing tools on Windows

Installing WSL 2 – Windows Subsystem for Linux v2

Installing Ubuntu 20.04 on WSL 2

Installing Windows Terminal

Installing Docker Desktop for Windows

Installing Visual Studio Code and its extension for Remote WSL

Installing tools on the Linux server in WSL 2

Installing tools using apt install

Installing the Spring Boot CLI using sdk install

Installing the remaining tools using curl and install

Verifying the installations

Accessing the source code

The structure of the code

Native Compiled Java Microservices

Technical requirements

When to native compile Java source code

Introducing the GraalVM project

Introducing the Spring Native project

Compiling source code to a native image

Changes in the source code

Updates to the Gradle build files

Providing native hints as annotations

When underlying frameworks and libraries don't support native compilation

Installing the tracing agent

Installing the tracing agent on macOS

Installing the tracing agent on Ubuntu under WSL 2 in Microsoft Windows

Running the tracing agent

Creating the native images

Testing with Docker Compose

Testing with Kubernetes

Summary

Questions

Other Books You May Enjoy

Index


Share your thoughts

Once you've read Microservices with Spring Boot and Spring Cloud, Second Edition, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.

PART I

Getting Started with Microservice Development Using Spring Boot

In this part, you will learn how to use some of the most important features of Spring Boot to develop microservices.

This part includes the following chapters:

Chapter 1, Introduction to Microservices
Chapter 2, Introduction to Spring Boot
Chapter 3, Creating a Set of Cooperating Microservices
Chapter 4, Deploying Our Microservices Using Docker
Chapter 5, Adding an API Description Using OpenAPI
Chapter 6, Adding Persistence
Chapter 7, Developing Reactive Microservices

1

Introduction to Microservices

This book does not blindly praise microservices. Instead, it's about how we can use their benefits while being able to handle the challenges of building scalable, resilient, and manageable microservices.

As an introduction to this book, the following topics will be covered in this chapter:

How I learned about microservices and what experience I have of their benefits and challenges
What is a microservice-based architecture?
Challenges with microservices
Design patterns for handling challenges
Software enablers that can help us handle these challenges
Other important considerations that aren't covered in this book

Technical requirements

No installations are required for this chapter. However, you may be interested in taking a look at the C4 model conventions, https://c4model.com, since the illustrations in this chapter are inspired by the C4 model.

This chapter does not contain any source code.

My way into microservices

When I first learned about the concept of microservices back in 2014, I realized that I had been developing microservices (well, kind of) for a number of years without knowing it was microservices I was dealing with. I was involved in a project that started in 2009 where we developed a platform based on a set of separated features. The platform was delivered to a number of customers that deployed it on-premises. To make it easy for customers to pick and choose what features they wanted to use from the platform, each feature was developed as an autonomous software component; that is, it had its own persistent data and only communicated with other components using well-defined APIs.

Since I can't discuss specific features in this project's platform, I have generalized the names of the components, which are labeled from Component A to Component F. The composition of the platform into a set of components is illustrated as follows:

Figure 1.1: The composition of the platform

From the illustration, we can also see that each component has its own storage for persistent data, and is not sharing databases with other components.

Each component is developed using Java and the Spring Framework, packaged as a WAR file and deployed as a web app in a Java EE web container, for example, Apache Tomcat. Depending on the customer's specific requirements, the platform can be deployed on single or multiple servers. A two-node deployment may look as follows:

Figure 1.2: A two-node deployment scenario

Benefits of autonomous software components

From this project, I learned that decomposing the platform's functionality into a set of autonomous software components provides a number of benefits:

A customer can deploy parts of the platform in its own system landscape, integrating it with its existing systems using its well-defined APIs.

The following is an example where one customer decided to deploy Component A, Component B, Component D, and Component E from the platform and integrate them with two existing systems in the customer's system landscape, System A and System B:

Figure 1.3: Partial deployment of the platform

Another customer can choose to replace parts of the platform's functionality with implementations that already exist in the customer's system landscape, potentially requiring some adaptation of the existing functionality to the platform's APIs. The following is an example where a customer has replaced Component C and Component F in the platform with their own implementation:

Figure 1.4: Replacing parts of the platform

Each component in the platform can be delivered and upgraded separately. Thanks to the use of well-defined APIs, one component can be upgraded to a new version without being dependent on the life cycle of the other components.

The following is an example where Component A has been upgraded from version v1.1 to v1.2. Component B, which calls Component A, does not need to be upgraded since it uses a well-defined API; that is, it's still the same after the upgrade (or it's at least backward-compatible):

Figure 1.5: Upgrading a specific component

Thanks to the use of well-defined APIs, each component in the platform can also be scaled out to multiple servers independently of the other components. Scaling can be done either to meet high availability requirements or to handle higher volumes of requests. In this specific project, it was achieved by manually setting up load balancers in front of a number of servers, each running a Java EE web container. An example where Component A has been scaled out to three instances looks as follows:

Figure 1.6: Scaling out the platform

Challenges with autonomous software components

My team also learned that decomposing the platform introduced a number of new challenges that we were not exposed to (at least not to the same degree) when developing more traditional, monolithic applications:

Adding new instances to a component required manually configuring load balancers and manually setting up new nodes. This work was both time-consuming and error-prone.
The platform was initially prone to errors caused by the other systems it was communicating with. If a system stopped responding to requests that were sent from the platform in a timely fashion, the platform quickly ran out of crucial resources, for example, OS threads, specifically when exposed to a large number of concurrent requests. This caused components in the platform to hang or even crash. Since most of the communication in the platform is based on synchronous communication, one component crashing can lead to cascading failures; that is, clients of the crashing components could also crash after a while. This is known as a chain of failures.
Keeping the configuration in all the instances of the components consistent and up to date quickly became a problem, causing a lot of manual and repetitive work. This led to quality problems from time to time.
Monitoring the state of the platform in terms of latency issues and hardware usage (for example, usage of CPU, memory, disks, and the network) was more complicated compared to monitoring a single instance of a monolithic application.
Collecting log files from a number of distributed components and correlating related log events from the components was also difficult, but feasible since the number of components was fixed and known in advance.

Over time, we addressed most of the challenges that were mentioned in the preceding list with a mix of in-house-developed tools and well-documented instructions for handling these challenges manually. The scale of the operation was, in general, at a level where manual procedures for releasing new versions of the components and handling runtime issues were acceptable, even though they were not desirable.

Enter microservices

Learning about microservice-based architectures in 2014 made me realize that other projects had also been struggling with similar challenges (partly for other reasons than the ones I described earlier, for example, the large cloud service providers meeting web-scale requirements). Many microservice pioneers had published details of lessons they'd learned. It was very interesting to learn from these lessons.

Many of the pioneers initially developed monolithic applications that made them very successful from a business perspective. But over time, these monolithic applications became more and more difficult to maintain and evolve. They also became challenging to scale beyond the capabilities of the largest machines available (also known as vertical scaling). Eventually, the pioneers started to find ways to split monolithic applications into smaller components that could be released and scaled independently of each other. Scaling small components can be done using horizontal scaling, that is, deploying a component on a number of smaller servers and placing a load balancer in front of it. If done in the cloud, the scaling capability is potentially endless – it is just a matter of how many virtual servers you bring in (given that your component can scale out on a huge number of instances, but more on that later on).

In 2014, I also learned about a number of new open source projects that delivered tools and frameworks that simplified the development of microservices and could be used to handle the challenges that come with a microservice-based architecture. Some of these are as follows:

Pivotal released Spring Cloud, which wraps parts of the Netflix OSS in order to provide capabilities such as dynamic service discovery, configuration management, distributed tracing, circuit breaking, and more.
I also learned about Docker and the container revolution, which is great for minimizing the gap between development and production. Being able to package a component not only as a deployable runtime artifact (for example, a Java war or jar file) but as a complete image, ready to be launched as a container on a server running Docker, was a great step forward for development and testing.

For now, think of a container as an isolated process. We will learn more about containers in Chapter 4, Deploying Our Microservices Using Docker.

A container engine, such as Docker, is not enough to be able to use containers in a production environment. Something is needed that can ensure that all the containers are up and running and that can scale out containers on a number of servers, thereby providing high availability and increased compute resources. These types of products became known as container orchestrators. A number of products have evolved over the last few years, such as Apache Mesos, Docker in Swarm mode, Amazon ECS, HashiCorp Nomad, and Kubernetes. Kubernetes was initially developed by Google. When Google released v1.0 in 2015, they also donated Kubernetes to CNCF (https://www.cncf.io/). During 2018, Kubernetes became kind of a de facto standard, available both pre-packaged for on-premises use and as a service from most of the major cloud providers.

As explained in https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/, Kubernetes is actually an open source-based rewrite of an internal container orchestrator, named Borg, used by Google for more than a decade before the Kubernetes project was founded.

In 2018, I started to learn about the concept of a service mesh and how a service mesh can complement a container orchestrator to further offload microservices from responsibilities to make them manageable and resilient.

A sample microservice landscape

Since this book can't cover all aspects of the technologies I just mentioned, I will focus on the parts that have proven to be useful in customer projects I have been involved in since 2014. I will describe how they can be used together to create cooperating microservices that are manageable, scalable, and resilient.

Each chapter in this book will address a specific concern. To demonstrate how things fit together, I will use a small set of cooperating microservices that we will evolve throughout this book. The microservice landscape will be described in Chapter 3, Creating a Set of Cooperating Microservices; for now, it is sufficient to know that it looks like this:

Figure 1.7: The microservice-based system landscape used in the book

Note that this is a very small system landscape of cooperating microservices. The surrounding support services that we will add in the coming chapters might look overwhelmingly complex for these few microservices. But keep in mind that the solutions presented in this book aim to support a much larger system landscape.

Now that we have been introduced to the potential benefits and challenges of microservices, let's start to look into how a microservice can be defined.

Defining a microservice

A microservice architecture is about splitting up monolithic applications into smaller components, which achieves two major goals:

Faster development, enabling continuous deployments
Easier to scale, manually or automatically

A microservice is essentially an autonomous software component that is independently upgradeable, replaceable, and scalable. To be able to act as an autonomous component, it must fulfill certain criteria, as follows:

It must conform to a shared-nothing architecture; that is, microservices don't share data in databases with each other!
It must only communicate through well-defined interfaces, either using APIs and synchronous services or preferably by sending messages asynchronously. The APIs and message formats used must be stable, well-documented, and evolve by following a defined versioning strategy.
It must be deployed as separate runtime processes. Each instance of a microservice runs in a separate runtime process, for example, a Docker container.
Microservice instances are stateless so that incoming requests to a microservice can be handled by any of its instances.

Using a set of cooperating microservices, we can deploy to a number of smaller servers instead of being forced to deploy to a single big server, like we have to do when deploying a monolithic application.

Given that the preceding criteria have been fulfilled, it is easier to scale up a single microservice into more instances (for example, using more virtual servers) compared to scaling up a big monolithic application.

Utilizing autoscaling capabilities that are available in the cloud is also a possibility, but is not typically feasible for a big monolithic application. It's also easier to upgrade or even replace a single microservice compared to upgrading a big monolithic application.

This is illustrated by the following diagram, where a monolithic application has been divided into six microservices, all of which have been deployed into separate servers. Some of the microservices have also been scaled up independently of the others:

Figure 1.8: Dividing a monolith into microservices

A very frequent question I receive from customers is:

How big should a microservice be?

I try to use the following rules of thumb:

Small enough to fit in the head of a developer
Big enough to not jeopardize performance (that is, latency) and/or data consistency (SQL foreign keys between data that's stored in different microservices are no longer something you can take for granted)

So, to summarize, a microservice architecture is, in essence, an architectural style where we decompose a monolithic application into a group of cooperating autonomous software components. The motivation is to enable faster development and to make it easier to scale the application.

With a better understanding of how to define a microservice, we can move on and detail the challenges that come with a system landscape of microservices.

Challenges with microservices

In the Challenges with autonomous software components section, we have already seen some of the challenges that autonomous software components can bring (and they all apply to microservices as well) as follows:

Many small components that use synchronous communication can cause a chain of failure problem, especially under high load
Keeping the configuration up to date for many small components can be challenging
It's hard to track a request that's being processed and involves many components, for example, when performing root cause analysis, where each component stores log records locally
Analyzing the usage of hardware resources on a component level can be challenging as well
Manual configuration and management of many small components can become costly and error-prone

Another downside (but not always obvious initially) of decomposing an application into a group of autonomous components is that they form a distributed system. Distributed systems are known to be, by their nature, very hard to deal with. This has been known for many years (but in many cases neglected until proven differently). My favorite quote to establish this fact is from Peter Deutsch who, back in 1994, stated the following:

The 8 fallacies of distributed computing: Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences:

1. The network is reliable

2. Latency is zero

3. Bandwidth is infinite

4. The network is secure

5. Topology doesn't change

6. There is one administrator

7. Transport cost is zero

8. The network is homogeneous

– Peter Deutsch, 1994

In general, building microservices based on these false assumptions leads to solutions that are prone to both temporary network glitches and problems that occur in other microservice instances. When the number of microservices in a system landscape increases, the likelihood of problems also goes up. A good rule of thumb is to design your microservice architecture based on the assumption that there is always something going wrong in the system landscape. The microservice architecture needs to be designed to handle this, in terms of detecting problems and restarting failed components. Also, on the client side, ensure that requests are not sent to failed microservice instances. When problems are corrected, requests to the previously failing microservice should be resumed; that is, microservice clients need to be resilient. All of this needs, of course, to be fully automated. With a large number of microservices, it is not feasible for operators to handle this manually!
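To give a first taste of what such client-side resilience can look like in code, here is a minimal sketch using Resilience4j, the resilience library applied later in this book. It is illustrative only: the circuit breaker name, the port, and the fallback are hypothetical, and the thresholds that control when the circuit opens are assumed to be supplied in configuration.

```java
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@Service
class ProductIntegration {

  // Hypothetical address of the product microservice.
  private final WebClient webClient = WebClient.create("http://localhost:7001");

  // If calls to the product service keep failing, the circuit opens and the
  // fallback is returned immediately, so callers are not kept waiting on a
  // failing instance. Once the service recovers, the circuit closes again
  // and normal requests are resumed.
  @CircuitBreaker(name = "product", fallbackMethod = "getProductFallback")
  Mono<String> getProduct(int productId) {
    return webClient.get()
        .uri("/product/{id}", productId)
        .retrieve()
        .bodyToMono(String.class);
  }

  Mono<String> getProductFallback(int productId, Throwable ex) {
    return Mono.just("Fallback product for id " + productId);
  }
}
```

The circuit breaker design pattern is introduced later in this chapter, and Chapter 13, Improving Resilience Using Resilience4j, shows how it is applied to the microservices in this book.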

The scope of this is large, but we will limit ourselves for now and move on to learn about design patterns for microservices.

Design patterns for microservices

This topic will cover the use of design patterns to mitigate challenges with microservices, as described in the preceding section. Later in this book, we will see how we can implement these design patterns using Spring Boot, Spring Cloud, Kubernetes, and Istio.

The concept of design patterns is actually quite old; it was invented by Christopher Alexander back in 1977. In essence, a design pattern is about describing a reusable solution to a problem when given a specific context. Using a tried and tested solution from a design pattern can save a lot of time and increase the quality of the implementation compared to spending time on inventing the solution ourselves.

The design patterns we will cover are as follows:

Service discovery
Edge server
Reactive microservices
Central configuration
Centralized log analysis
Distributed tracing
Circuit breaker
Control loop
Centralized monitoring and alarms

This list is not intended to be comprehensive; instead, it's a minimal list of design patterns that are required to handle the challenges we described previously.

We will use a lightweight approach to describing design patterns, and focus on the following:

The problem
A solution
Requirements for the solution

Throughout this book, we will delve more deeply into how to apply these design patterns. The context for these design patterns is a system landscape of cooperating microservices where the microservices communicate with each other using either synchronous requests (for example, using HTTP) or by sending asynchronous messages (for example, using a message broker).

Service discovery

The service discovery pattern has the following problem, solution, and solution requirements.

Problem

How can clients find microservices and their instances?

Microservice instances are typically assigned dynamically allocated IP addresses when they start up, for example, when running in containers. This makes it difficult for a client to make a request to a microservice that, for example, exposes a REST API over HTTP. Consider the following diagram:

Figure 1.9: The service discovery issue

Solution

Add a new component – a service discovery service – to the system landscape, which keeps track of currently available microservices and the IP addresses of their instances.

Solution requirements

Some solution requirements are as follows:

Automatically register/unregister microservices and their instances as they come and go.
The client must be able to make a request to a logical endpoint for the microservice. The request will be routed to one of the available microservice instances.
Requests to a microservice must be load-balanced over the available instances.
We must be able to detect instances that currently are unhealthy, so that requests will not be routed to them.

Implementation notes: As we will see in Chapter 9, Adding Service Discovery Using Netflix Eureka, Chapter 15, Introduction to Kubernetes, and Chapter 16, Deploying Our Microservices to Kubernetes, this design pattern can be implemented using two different strategies:

Client-side routing: The client uses a library that communicates with the service discovery service to find out the proper instances to send the requests to.
Server-side routing: The infrastructure of the service discovery service also exposes a reverse proxy that all requests are sent to. The reverse proxy forwards the requests to a proper microservice instance on behalf of the client.
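As a taste of the client-side routing strategy, the following is a minimal sketch, assuming Spring Cloud and a discovery service such as Netflix Eureka. The logical service name product and the endpoint path are hypothetical; the point is that the client addresses the microservice by name rather than by the IP address of a specific instance.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@SpringBootApplication
public class ProductCompositeApplication {

  // A WebClient.Builder marked as @LoadBalanced resolves logical service
  // names against the discovery service and load-balances requests over
  // the currently available instances.
  @Bean
  @LoadBalanced
  WebClient.Builder loadBalancedWebClientBuilder() {
    return WebClient.builder();
  }

  public static void main(String[] args) {
    SpringApplication.run(ProductCompositeApplication.class, args);
  }
}

class ProductClient {

  private final WebClient webClient;

  ProductClient(WebClient.Builder loadBalancedBuilder) {
    this.webClient = loadBalancedBuilder.build();
  }

  // "product" is the name the microservice registered itself with in the
  // discovery service, not a fixed hostname or IP address.
  Mono<String> getProduct(int productId) {
    return webClient.get()
        .uri("http://product/product/{id}", productId)
        .retrieve()
        .bodyToMono(String.class);
  }
}
```

Chapter 9, Adding Service Discovery Using Netflix Eureka, shows how this is set up in practice.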

Edge server

The edge server pattern has the following problem, solution, and solution requirements.

Problem

In a system landscape of microservices, it is in many cases desirable to expose some of the microservices to the outside of the system landscape and hide the remaining microservices from external access. The exposed microservices must be protected against requests from malicious clients.

Solution

Add a new component, an edge server, to the system landscape that all incoming requests will go through:

Figure 1.10: The edge server design pattern

Implementation notes: An edge server typically behaves like a reverse proxy and can be integrated with a discovery service to provide dynamic load-balancing capabilities.

Solution requirements

Some solution requirements are as follows:

Hide internal services that should not be exposed outside their context; that is, only route requests to microservices that are configured to allow external requests
Expose external services and protect them from malicious requests; that is, use standard protocols and best practices such as OAuth, OIDC, JWT tokens, and API keys to ensure that the clients are trustworthy
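To illustrate, the following is a minimal sketch of a routing rule expressed with the Java DSL of Spring Cloud Gateway, the edge server used later in this book. The route id, path, and service name are hypothetical, and the lb:// scheme assumes integration with a discovery service.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class GatewayApplication {

  // Only requests matching /product-composite/** are routed into the system
  // landscape; all other microservices stay hidden behind the edge server.
  // The lb:// scheme makes the gateway look up the service in the discovery
  // service and load-balance over its instances.
  @Bean
  RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route("product-composite", r -> r
            .path("/product-composite/**")
            .uri("lb://product-composite"))
        .build();
  }

  public static void main(String[] args) {
    SpringApplication.run(GatewayApplication.class, args);
  }
}
```

Chapter 10, Using Spring Cloud Gateway to Hide Microservices behind an Edge Server, covers the routing rules used for the microservices in this book.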

Reactive microservices

The reactive microservice pattern has the following problem, solution, and solution requirements.

Problem

Traditionally, as Java developers, we are used to implementing synchronous communication using blocking I/O, for example, a RESTful JSON API over HTTP. Using blocking I/O means that a thread is allocated from the operating system for the length of the request. If the number of concurrent requests goes up, a server might run out of available threads in the operating system, causing problems ranging from longer response times to crashing servers. Using a microservice architecture typically makes this problem even worse, since a chain of cooperating microservices is usually involved in serving a request. The more microservices involved in serving a request, the faster the available threads will be drained.

Solution

Use non-blocking I/O to ensure that no threads are allocated while waiting for processing to occur in another service, that is, a database or another microservice.

Solution requirements

Some solution requirements are as follows:

Whenever feasible, use an asynchronous programming model, sending messages without waiting for the receiver to process them.
If a synchronous programming model is preferred, use reactive frameworks that can execute synchronous requests using non-blocking I/O, without allocating a thread while waiting for a response; see the sketch after this list. This will make the microservices easier to scale in order to handle an increased workload.
Microservices must also be designed to be resilient and self-healing. Resilient means being capable of producing a response even if one of the services it depends on fails; self-healing means that once the failing service is operational again, the microservice must be able to resume using it.
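As a small illustration of the second point, the following is a minimal sketch of a non-blocking REST endpoint built with Spring WebFlux and a reactive Spring Data repository. It is illustrative only and assumes that a reactive data store (for example, MongoDB) is configured; the Product entity and ProductRepository are hypothetical names.

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

// Hypothetical entity and reactive repository, shown only for illustration.
class Product {
  @Id Integer id;
  String name;
}

interface ProductRepository extends ReactiveCrudRepository<Product, Integer> {}

@RestController
class ProductController {

  private final ProductRepository repository;

  ProductController(ProductRepository repository) {
    this.repository = repository;
  }

  // The handler returns a Mono instead of the value itself, so no thread is
  // held while the database lookup is in progress; the response is produced
  // once the reactive repository emits its result.
  @GetMapping("/product/{productId}")
  Mono<Product> getProduct(@PathVariable int productId) {
    return repository.findById(productId);
  }
}
```

The same non-blocking style is applied to the microservices in this book in Chapter 7, Developing Reactive Microservices.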

In 2013, key principles for designing reactive systems were established in The Reactive Manifesto (https://www.reactivemanifesto.org/).

According to the manifesto, the foundation for reactive systems is that they are message-driven; they use asynchronous communication. This allows them to be elastic, that is, scalable, and resilient, that is, tolerant to failures. Elasticity and resilience together enable a reactive system to always respond in a timely fashion.

Central configuration

The central configuration pattern has the following problem, solution, and solution requirements.

Problem

An application is, traditionally, deployed together with its configuration, for example, a set of environment variables and/or files containing configuration information. Given a system landscape based on a microservice architecture, that is, with a large number of deployed microservice instances, some questions arise:

How do I get a complete picture of the configuration that is in place for all the running microservice instances?
How do I update the configuration and make sure that all the affected microservice instances are updated correctly?

Solution

Add a new component, a configuration server, to the system landscape to store the configuration of all the microservices, as illustrated by the following diagram:

Figure 1.11: The central configuration design pattern

Solution requirements

Make it possible to store configuration information for a group of microservices in one place, with different settings for different environments (for example, dev, test, qa, and prod).
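As a small example of what the client side of this pattern can look like with Spring Cloud Config, the bean below reads a property served by a central configuration server. The sketch is illustrative only: the property name app.greeting and the config server address are hypothetical assumptions.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// Assumes a Spring Cloud Config client is on the classpath and the client is
// pointed at the config server, for example with
// "spring.config.import: configserver:http://localhost:8888" in application.yml.
// @RefreshScope lets the bean pick up a changed value from the config server
// without restarting the microservice instance.
@RefreshScope
@Component
class GreetingProperties {

  // "app.greeting" is a hypothetical property, with "Hello" as a local default.
  @Value("${app.greeting:Hello}")
  private String greeting;

  String getGreeting() {
    return greeting;
  }
}
```

How the configuration server itself is set up, secured, and used is the topic of Chapter 12, Centralized Configuration.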

Centralized log analysis

Centralized log analysis has the following problem, solution, and solution requirements.

Problem

Traditionally, an application writes log events to log files that are stored in the local filesystem of the server that the application runs on. Given a system landscape based on a microservice architecture, that is, with a large number of deployed microservice instances on a large number of smaller servers, we can ask the following questions:

How do I get an overview of what is going on in the system landscape when each microservice instance writes to its own local log file?
How do I find out if any of the microservice instances get into trouble and start writing error messages to their log files?