Description

Reliable automation is crucial for any code change going into production. A release pipeline enables you to deliver features for your users efficiently and promptly. AWS CodePipeline, with its powerful integration and automation capabilities of building, testing, and deployment, offers a unique solution to common software delivery issues such as outages during deployment, a lack of standard delivery mechanisms, and challenges faced in creating sustainable pipelines. You’ll begin by developing a Java microservice and using AWS services such as CodeCommit, CodeArtifact, and CodeGuru to manage and review the source code. You’ll then learn to use the AWS CodeBuild service to build code and deploy it to AWS infrastructure and container services using the CodeDeploy service. As you advance, you’ll find out how to provision cloud infrastructure using CloudFormation templates and Terraform. The concluding chapters will show you how to combine all these AWS services to create a reliable and automated CodePipeline for delivering microservices from source code check-in to deployment without any downtime. Finally, you’ll discover how to integrate AWS CodePipeline with third-party services such as Bitbucket, Blazemeter, Snyk, and Jenkins. By the end of this microservices book, you’ll have gained the hands-on skills to build release pipelines for your applications.




Building and Delivering Microservices on AWS

Master software architecture patterns to develop and deliver microservices to AWS Cloud

Amar Deep Singh

BIRMINGHAM—MUMBAI

Building and Delivering Microservices on AWS

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Preet Ahuja

Publishing Product Manager: Niranjan Naikwadi and Suwarna Rajput

Senior Editor: Arun Nadar

Content Development Editor: Sujata Tripathi

Technical Editor: Nithik Cheruvakodan

Copy Editor: Safis Editing

Project Coordinator: Aryaa Joshi

Proofreader: Safis Editing

Indexer: Rekha Nair

Production Designer: Vijay Kamble

Marketing Coordinator: Agnes D'souza

First published: May 2023

Production reference: 1150523

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80323-820-3

www.packtpub.com

Writing this book while managing work and my personal life has been challenging. I would like to thank my loving wife, Nidhi, and my kids, Aarav, Anaya, and Navya, for their dedicated support and encouragement throughout this two-year journey.

I would also like to thank my parents, Mr. Balveer Singh (late), Mrs. Rajbala Devi, and my brother, Dr. Ghanendra Singh, for their blessings and inspiration to do more and keep moving.

I would also like to thank my colleagues, friends, my walking buddy Kartikey Sharma, and my cricket team for their unconditional love and support in making this happen.

And lastly, thanks to Jeff Carpenter for being an inspiration and providing necessary support.

– Amar Deep Singh

Foreword

Microservices have become so ubiquitous in software systems, especially in cloud computing, that it’s hard to believe the trend has only been around for a decade. The microservice boom was in itself a response to the challenges of a previous paradigm: service-oriented architecture (SOA).

The initial promise of SOA for producing flexible, composable systems had been gradually drowned out as vendors lured developers toward complex products such as application servers and enterprise service buses, configured by a never-ending flow of XML that proved difficult to understand and maintain. The microservice boom of the 2010s was a welcome reaction against the complexity and bloat of the SOA world, focusing on a simpler set of conventions centered around representational state transfer (REST) APIs and more human-readable formats such as JSON.

It was in this season of rapid innovation that Amar Deep and I first encountered the powerful combination of microservices and public cloud infrastructure, with AWS as the early leader. We eagerly followed the maturation of DevOps principles espoused by Netflix and other early adopters as they made various frameworks open source and popularized methodologies such as chaos engineering. These visionaries showed us how to produce microservice-based systems that demonstrated high availability and performance at scale.

However, as might have been expected, microservices have reached that inevitable stage of the hype curve where cautionary tales and articles suggesting “you must be this tall to use microservices” are rampant. These practitioners advise new projects to begin with a monolithic architecture to minimize development risk and time-to-market, and to migrate to microservices only when the monolith begins to strain under operational needs. So, how can you know whether microservices are right for your project and whether you’re executing them effectively?

This book emerges at an opportune moment to help answer that question, combining the essential patterns and principles to approach microservice development properly from day one with the practical guidance needed to maintain your microservices for the long haul.

Specifically, you’ll learn multiple architecture patterns for microservices, including how to identify them, how they interact with each other, and how they make use of supporting infrastructure for data storage and movement. Then you’ll learn how to use tools such as Git, AWS CodeCommit, AWS CodeGuru, AWS CodeArtifact, and AWS CodeBuild to write and test your microservices and orchestrate all these tools with AWS CodePipeline to deploy to environments, including AWS Elastic Compute Cloud (EC2), AWS Elastic Container Service (ECS), AWS Elastic Kubernetes Service (EKS), and AWS Lambda. You’ll also learn how to extend your pipelines beyond AWS using Jenkins and deploy to on-premises servers.

With tons of practical, relatable domain examples throughout, Amar Deep has provided an outstanding end-to-end resource to help you deliver microservices successfully with AWS. With this guide, you’ll design, develop, test, deploy, monitor, and secure your microservices with confidence now and in the future.

Jeff Carpenter

Software engineer at DataStax and the author of Cassandra: The Definitive Guide (O’Reilly) and Managing Cloud Native Data on Kubernetes (O’Reilly).

Contributors

About the author

Amar Deep Singh is an author, architect, and technology leader with 16 years of experience in developing and designing enterprise solutions. He currently works for US Bank as an engineering director and helps digital teams with cloud adoption and migration. He has worked in the banking, hospitality, and healthcare domains and transformed dozens of legacy enterprise applications into cloud-native applications. He specializes in modernizing legacy applications and has expertise in developing highly available, scalable, and reliable distributed systems. He holds several professional certifications, including AWS Certified Solutions Architect at the Professional level, AWS Certified Security and Machine Learning Specialist, TOGAF Certified Enterprise Architect, Certified Jenkins Engineer, Microsoft Certified Professional, and Terraform Associate certification.

I want to thank my wife, Nidhi, my parents, Mr. Balveer Singh and Mrs. Rajbala Devi, and my kids, Aarav, Anaya, and Navya, for their unconditional love and support during this journey. Nidhi has been a great support and always encouraged me whenever I felt low or discouraged.

About the reviewers

Sourabh Narendra Bhavsar is a senior full stack developer and an agile and cloud practitioner with over eight years of experience in the software industry. He has completed a postgraduate program in artificial intelligence and machine learning at the University of Texas at Austin, a master’s in business administration (marketing), and a bachelor’s in engineering (IT) at the University of Pune, India. He currently works at Rabobank in the Netherlands as a lead technical member, where he is responsible for designing and developing microservice-based solutions and implementing various types of workflow and orchestration engines. Sourabh believes in continuous learning and enjoys exploring emerging technologies. When not coding, he likes to play the tabla and read about astrology.

Kathirvel Muniraj has worked on analytical banking applications in the cloud solutions industry for more than eight years and specializes in DevOps and DevSecOps. His experience includes Kubernetes, MLOps, and DevSecOps.

He has completed many global certifications, such as Certified DevSecOps Professional, Certified Kubernetes Security Specialist, Microsoft Certified: DevOps Engineer Expert, Certified Kubernetes Administrator, and AWS Certified SysOps Administrator.

I'd like to thank my family and friends, who understood the time and commitment this work required. I'd like to thank my mentor, Mohammad Samiullah Mulla, for his guidance, encouragement, support, and motivation over the past four years. I am also thankful to my whole family for supporting me and tolerating my busy schedule while still standing by my side. I owe my accomplishments and triumphs in life to the unwavering guidance and assistance of a person who prefers to remain anonymous. I have endearingly nicknamed this individual 'Bujji'.

Table of Contents

Preface

Part 1: Pre-Plan the Pipeline

1

Software Architecture Patterns

What is software architecture?

Architecture patterns overview

A layered architecture pattern

A microkernel/plugin architecture pattern

A pipeline architecture pattern

A space-based architecture pattern

An event-driven architecture pattern

A serverless architecture pattern

A service-oriented architecture pattern

Enterprise services

A microservices architecture pattern

A service-based architecture pattern

Summary

2

Microservices Fundamentals and Design Patterns

A monolithic application architecture

Understanding a microservices architecture

Microservices design patterns

Decomposition patterns

Database patterns

Command Query Responsibility Segregation (CQRS)

SAGA pattern

Integration patterns

The aggregator pattern

The branch pattern

The client-side UI composition pattern

Observability patterns

Circuit breaker pattern

Blue-green deployment pattern

Summary

3

CI/CD Principles and Microservice Development

Understanding continuous integration and continuous delivery

CI

CD

Continuous deployment

CI/CD pipeline

Microservice development

Tools and technologies used

Setting up the development environment

Running the application

Summary

4

Infrastructure as Code

What is IaC?

The benefits of IaC

CloudFormation

Template

Stacks

Change sets

CloudFormation process

Terraform

Terraform concepts

Backend

Input variables

Outputs

Terraform commands

Setting up the environment

Summary

Part 2: Build the Pipeline

5

Creating Repositories with AWS CodeCommit

What is a VCS?

Traceability

History

Branching and merging

Increased efficiency

Easy merge conflict resolution

Transparent code reviews

Reduced duplication and errors

Increased team productivity

Compliance

Introduction to Git

Git commands

What is CodeCommit?

Creating repositories

CodeCommit credentials

Pushing code to a CodeCommit repository

Beyond the basics of CodeCommit

Creating branches

Adding files to the repository

Pull requests and code merges

Commits

Git tags

Repository settings

Deleting a repository

Approval rule templates

Repository migration to CodeCommit

Summary

6

Automating Code Reviews Using CodeGuru

What is AWS CodeGuru?

CodeGuru Reviewer

Security detection

Secret detection

Code quality

The benefits of CodeGuru Reviewer

The limitations of CodeGuru Reviewer

CodeGuru Reviewer in action

CodeGuru Profiler

The benefits of CodeGuru Profiler

The limitations of CodeGuru Profiler

Setting up CodeGuru Profiler

Summary

7

Managing Artifacts Using CodeArtifact

What is an artifact?

Artifact repository

AWS CodeArtifact

The benefits of CodeArtifact

The limitations of CodeArtifact

CodeArtifact domains

CodeArtifact repositories

Connecting to the repository

Summary

8

Building and Testing Using AWS CodeBuild

What is AWS CodeBuild?

The benefits of using CodeBuild

The limitations of CodeBuild

Creating an AWS CodeBuild project

Testing using CodeBuild

Creating a report group

Understanding buildspec files

The buildspec file syntax

Creating a buildspec file

Starting a build

Build history

Report groups and history

Account metrics

Build notifications

Build triggers

Local build support

Summary

Part 3: Deploying the Pipeline

9

Deploying to an EC2 Instance Using CodeDeploy

What is CodeDeploy?

The benefits of CodeDeploy

The limitations of CodeDeploy

What is an application?

Deployment strategies

Deployment group

Deployment configuration

The CodeDeploy agent

What is an AppSpec file?

version

os

files

permissions

hooks

Creating the appspec file

The deployment infrastructure

App deployment to EC2 instances

Deleting a CodeDeploy application

Summary

10

Deploying to ECS Clusters Using CodeDeploy

What are containers?

An overview of Docker

Docker architecture

What is ECS?

Task definitions

Tasks

Services

ECS clusters

ECS architecture

What is Amazon Elastic Container Registry?

Manually deploying an application to ECS

Creating an ECS cluster

Creating task definitions

Running a task

Configuring CodeDeploy to install apps to ECS

ECS cluster and environment setup

Application version update

CodeDeploy setup

Summary

11

Setting Up CodePipeline

What is AWS CodePipeline?

The benefits of using CodePipeline

The limitations of CodePipeline

CodePipeline action types

The source action type

The build action type

The test action type

The deploy action type

The approval action type

The invoke action type

Creating a pipeline

The source stage

The build stage

The Docker build stage

Executing the pipeline

Summary

12

Setting Up an Automated Serverless Deployment

What is a serverless ecosystem?

What is AWS Lambda?

The benefits of using Lambda functions

The limitations of using Lambda functions

AWS Lambda development

RequestHandler

RequestStreamHandler

Sample Lambda development

Creating a Lambda function

AWS Lambda pipeline setup

Summary

13

Automated Deployment to an EKS Cluster

Kubernetes – an overview

Kubernetes architecture

Pod

Worker nodes

Control plane

kubectl

Kubernetes objects

ReplicaSet

Deployment

StatefulSet

Service

Ingress

Secrets

What is EKS?

Deploying an application on an EKS cluster

Creating an EKS cluster

Adding worker nodes

EKS permission mapping

Code changes

Setting up CodePipeline

Summary

14

Extending CodePipeline Beyond AWS

What is GitHub?

Creating a GitHub repository

Connecting GitHub with AWS developer tools

What is Bitbucket?

Extending CodePipeline

Creating a Bitbucket repository

Creating a Jenkins build server

Creating a Jenkins build job

Creating a Jenkins test job

Creating the private server

Installing AWS CLI

Registering an on-prem server

Installing and configuring the CodeDeploy agent

Creating a CodePipeline

Summary

Appendix

Creating an IAM Console user

Creating a user for Terraform authentication

AWS CLI installation

Creating an SNS topic

Git installation

Docker Desktop installation

Index

Other Books You May Enjoy

Preface

This book provides a step-by-step guide to developing a Java Spring Boot microservice and guides you through the process of automated deployment using AWS CodePipeline. It starts with an introduction to software architecture and different architecture patterns, then dives into microservices architecture and related patterns. This book will also help you to write the source code and commit it to CodeCommit repositories, review the code using CodeGuru, build artifacts, provision infrastructure using Terraform and CloudFormation, and deploy using AWS CodeDeploy to Elastic Compute Cloud (EC2) instances, on-prem instances, ECS services, and Kubernetes clusters.

Who this book is for

This book is for software architects, DevOps engineers, site reliability engineers (SREs), and cloud engineers who want to learn more about automating their release pipelines to modify features and release updates. Some knowledge of AWS cloud, Java, Maven, and Git will help you to get the most out of this book.

What this book covers

Chapter 1, Software Architecture Patterns, teaches you about software architecture and about different software architecture patterns.

Chapter 2, Microservices Fundamentals and Design Patterns, describes microservices and different patterns related to microservices. In addition, this chapter explains different strategies and design patterns to break a monolithic application into a microservice.

Chapter 3, CI/CD Principles and Microservice Development, covers different CI/CD principles and explains how to create a sample Java Spring Boot application to be deployed as a microservice and expose a REpresentational State Transfer (REST) endpoint to ensure that our users can access this endpoint.

Chapter 4, Infrastructure as Code, explains what Infrastructure as Code (IaC) means and what tools and technologies you can use to provision different resources. We will explain how you can run a CloudFormation template and how you can create infrastructure using Terraform.

Chapter 5, Creating Repositories with AWS CodeCommit, explains what a version control system is and covers the basics of Git-based version control systems. This chapter explains the AWS CodeCommit service and its benefits and then guides users on committing application source code to the CodeCommit repository.

Chapter 6, Automating Code Reviews Using CodeGuru, walks through what the AWS CodeGuru artificial intelligence (AI) service is and how it can be used to review code automatically and scan for vulnerabilities.

Chapter 7, Managing Artifacts Using CodeArtifact, explains the AWS CodeArtifact service, its usage, and its benefits. This chapter walks through the different generated artifacts and how they can be securely stored with CodeArtifact.

Chapter 8, Building and Testing Using AWS CodeBuild, focuses on the AWS CodeBuild service and explains how you can use this service to customize the build and code testing process.

Chapter 9, Deploying to an EC2 Instance Using CodeDeploy, explains the AWS CodeDeploy service and how it can be used to deploy applications to EC2 instances and on-premises servers. This chapter takes a deep dive into different deployment strategies and configurations available to deploy applications.

Chapter 10, Deploying to ECS Clusters Using CodeDeploy, focuses on explaining what a container is and how you can deploy Docker containers to an AWS ECS service. In this chapter, we configure CodeDeploy to automatically deploy sample applications to ECS containers.

Chapter 11, Setting Up CodePipeline, explains what CodePipeline is and how it can help us to orchestrate other AWS services to set up continuous development and delivery of the software.

Chapter 12, Setting Up an Automated Serverless Deployment, introduces you to serverless ecosystems and how AWS provides scalable solutions through Lambda, and how you can set up automated serverless Lambda deployment.

Chapter 13, Automated Deployment to an EKS Cluster, focuses on understanding Kubernetes and learning about the Elastic Kubernetes Service (EKS) provided by AWS and automated application deployment to an EKS cluster using CodePipeline.

Chapter 14, Extending CodePipeline Beyond AWS, focuses on extending AWS CodePipeline beyond AWS-related infrastructure and services. In this chapter, you will learn to integrate CodePipeline with Bitbucket and Jenkins and deploy to instances hosted outside AWS.

The Appendix focuses on creating Identity and Access Management (IAM) users and installing the tools needed for application development, such as Docker Desktop, Git, and Maven, which are important but not part of the core chapters.

To get the most out of this book

You need to have some basic understanding of AWS cloud, Java, Maven, and Git to get started. Having some knowledge about Docker, Kubernetes, and Terraform will help you, although we will be covering the basics.

Software/hardware covered in the book: Java, Terraform, and an AWS account

Operating system requirements: Windows, macOS, or Linux

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Building-and-Delivering-Microservices-on-AWS. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/B4nWn.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The following buildspec.yml file describes how CodeBuild will run the maven package command to get the artifacts and include the Java JAR file, Dockerfile, appspec.yml, and other files in the output.”

A block of code is set as follows:

version: 0.0
os: os-name
files:
  source-destination-files-mappings
permissions:
  permissions-specifications
hooks:
  deployment-lifecycle-event-mappings

Any command-line input or output is written as follows:

terraform destroy -auto-approve

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Now, click on the Create Deployment group button.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Delivering Microservices with AWS, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781803238203

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly.

Part 1: Pre-Plan the Pipeline

You will learn about software architecture and microservices development and the challenges that microservices bring, then about continuous integration/continuous delivery (CI/CD) and how it can help to deliver microservices. We will create a sample Spring Boot Java microservice to deploy in the AWS environment.

This part contains the following chapters:

Chapter 1, Software Architecture Patterns

Chapter 2, Microservices Fundamentals and Design Patterns

Chapter 3, CI/CD Principles and Microservice Development

Chapter 4, Infrastructure as Code

1

Software Architecture Patterns

In this chapter, you will learn about software architecture and what it consists of. You will learn about software architecture patterns and how to use these different patterns to develop software. After reading this chapter, you will have a fair understanding of layered architecture, microkernel architecture, pipeline architecture, service-oriented architecture, event-driven architecture, microservices architecture, and a few other major architectural patterns. This chapter will discuss real-world examples of each of these patterns, as follows:

What is software architecture?

Architecture patterns overview

Layered architecture pattern

Major architecture patterns

What is software architecture?

Before we start learning about microservices architecture and the different patterns related to it, we need to learn what software architecture is.

If we have to construct a building, a structural architect needs to lay out the design of the building and think about building capacity, the weight the foundation needs to hold, the number of floors the building will have, staircases, elevators for easy access, and the number of entry and exit gates.

Similar to a construction architect, a software architect is responsible for designing software and defining how software components will interact with each other.

Software architecture defines how your source code should be organized and how the different elements in your code interact with each other, using proven design patterns to achieve a business outcome.

The following diagram shows the interaction between source code components and their organization with the help of design patterns, which is what software architecture is built on:

Figure 1.1 – Software architecture structure

Now that we have defined what software architecture is, let’s compare it with the building example we discussed earlier. If you look closely, you will find that the construction project is very similar to an application and that each area in a building resembles a different aspect of software architecture. Each floor in a building can be thought of as a software layer in an application.

Building capacity is very similar to the load, or number of requests, your application can handle. The building’s foundation can be compared to the software infrastructure or hardware on which the application is deployed, while load capacity is directly related to the memory and space needed by the application. Staircases and elevators can be thought of as the means by which your users access your application, and entry and exit gates can be treated as the endpoints your application exposes to outside systems. Design patterns can be thought of as the method you use to mix the concrete, or the number of iron rods you need, to lay a solid foundation.

Software architecture is built by organizing the source code components and their interaction with each other, constructed with the help of design patterns.

Important note

A design pattern defines a way to solve a common problem faced by the software industry. Design patterns don’t provide any implementation code for the problem but provide general guidelines on how a particular problem in a given context should be solved. These solutions are the best practices to use in similar situations. An architect has the flexibility to fine-tune solutions or mix different patterns or design new patterns to solve their specific problems or adjust solutions to achieve certain results.
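To make the note concrete, here is a minimal Java sketch of one well-known design pattern, the Strategy pattern, which solves the common problem of selecting an algorithm at runtime without changing the calling code. All class and method names here are ours, purely for illustration; they do not come from the book.

```java
// Strategy pattern (illustrative names): callers depend on an interface,
// so the algorithm can be swapped without touching the calling code.
interface DiscountStrategy {
    double apply(double amount);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount; }
}

class PercentageDiscount implements DiscountStrategy {
    private final double percent;
    PercentageDiscount(double percent) { this.percent = percent; }
    public double apply(double amount) { return amount * (1 - percent / 100.0); }
}

class Checkout {
    // The chosen strategy is injected; Checkout never knows which one it got.
    private final DiscountStrategy strategy;
    Checkout(DiscountStrategy strategy) { this.strategy = strategy; }
    double total(double amount) { return strategy.apply(amount); }
}
```

A caller such as `new Checkout(new PercentageDiscount(10)).total(100.0)` never changes when a new discount algorithm is introduced; only a new `DiscountStrategy` implementation is added. This is the sense in which a pattern provides a reusable solution shape rather than implementation code.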

Fine-tuning a solution or design pattern is known as an architectural trade-off, where you balance your parameters to achieve a certain result. For example, let’s say you need to sink your building’s foundation a few meters underground to make a skyscraper, which increases your construction cost; however, if you want to make a building with only a few floors, then you don’t need to make the foundation so deep. Similarly, you can make adjustments to your software architecture or design patterns to achieve certain results. To mitigate risk to a project, the Architecture Tradeoff Analysis Method (ATAM) is used in the early phase of an architectural design.

How you build your design pattern or interaction is based on your use case and the architectural trade-offs you have made to achieve certain software goals; there is no right or wrong architecture – it is something that keeps evolving.

The ATAM process

The ATAM process collects quality attributes such as business goals, functional requirements, and non-functional requirements by bringing the stakeholders together. These quality attributes are used to create different scenarios; architectural approaches and decisions are then run through these scenarios to produce an analysis of risks, sensitivity points, and trade-offs. There can be multiple iterations of this analysis, and each iteration fine-tunes the architecture. The solution proceeds from generic to problem-specific, and risk is mitigated.

Architecture patterns overview

Now that you have a fair understanding of software architecture and design patterns, let’s talk about architecture patterns and their types.

An architectural pattern defines a repeatable architectural solution to a common problem in a specific use case. In other words, an architecture pattern describes how you arrange your functional blocks and their interaction to get a certain outcome.

An architectural pattern is similar to a software design pattern, but the scope of an architectural pattern is broader, while a design pattern is focused on a very specific part of the source code and solves a smaller portion of the problem.

There are different types of architectural patterns to address different types of problems, so let’s learn about some of those that are widely used. We will provide a high-level overview of these architectural patterns as diving too deep into them is outside the scope of this book.

A layered architecture pattern

A layered architecture is made up of different logical layers, where each layer is abstracted from another. In a layered architecture, you break down a solution into several layers and each layer is responsible for solving a specific piece of that problem. This architecture pattern is also known as an N-tier architecture pattern.

In this architectural pattern, your components are divided into layers; each layer is independent of the others but connects with its immediate neighbor through an interface to exchange information. Layered architecture focuses on a clean separation of concerns, and each layer is usually closed to every layer except the one immediately above it. In a layered architecture, a closed layer always connects to its immediate layer, while an open layer architecture allows you to skip a layer in the chain. The following diagram shows an example of a closed-layered architecture:

Figure 1.2 – Layered architecture pattern

In the preceding figure, we have divided an application component into four layers. The presentation layer contains any application code needed for the user interface. The business layer is responsible for implementing any business logic required by the application. The persistence layer is used for manipulating data and interacting with vendor-specific database systems. Finally, the database layer stores the application data in a database system.

All of these layers in a layered architecture only interact with the layer immediately below; this semantic is known as a closed layer. This approach gives you an advantage if you have to replace or change a layer; it doesn’t affect other layers in an application as there is no direct interaction between one layer and another non-immediate layer.
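The closed-layer interaction described above can be sketched in a few lines of Java. This is a hypothetical, minimal example (all class and method names are invented for illustration): the presentation layer depends only on the business layer, which depends only on the persistence interface, so any layer can be swapped without touching the non-adjacent ones.

```java
// Persistence layer: hides the data store behind an interface.
interface CustomerRepository {
    String findNameById(int id);
}

class InMemoryCustomerRepository implements CustomerRepository {
    public String findNameById(int id) {
        return id == 42 ? "Jane Doe" : null; // stand-in for a real database query
    }
}

// Business layer: depends only on the persistence interface, never on its implementation.
class CustomerService {
    private final CustomerRepository repository;
    CustomerService(CustomerRepository repository) { this.repository = repository; }
    String customerGreeting(int id) {
        String name = repository.findNameById(id);
        return name == null ? "Unknown customer" : "Hello, " + name;
    }
}

// Presentation layer: depends only on the business layer.
class CustomerController {
    private final CustomerService service;
    CustomerController(CustomerService service) { this.service = service; }
    String render(int id) { return "<h1>" + service.customerGreeting(id) + "</h1>"; }
}

public class LayeredDemo {
    public static String handleRequest(int id) {
        // Wiring: presentation -> business -> persistence, never skipping a layer.
        return new CustomerController(
                new CustomerService(new InMemoryCustomerRepository())).render(id);
    }
    public static void main(String[] args) {
        System.out.println(handleRequest(42));
    }
}
```

Because each layer only knows the interface below it, replacing `InMemoryCustomerRepository` with a real database-backed implementation would not touch the business or presentation code.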

This style brings some complexity as well, because for certain scenarios some layers are just pass-through layers. With closed layers, even when a layer adds nothing for a particular operation, the request still has to travel through it, which can be a performance issue in certain scenarios. For example, looking up a customer record from the user interface requires no business logic, but the request still has to pass through the business layer to reach the customer record.

To handle this kind of scenario, you can keep your layers open by allowing a direct call from the business layer to the persistence layer or a direct call from the presentation layer to the service layer, but that will bring more complexity to your architecture. Looking at the example shown in the following diagram, we have a five-layer architecture with an open style. Here, calls from one layer go directly to another layer by skipping some layers:

Figure 1.3 – Layered architecture pattern with open layers

In the preceding layered architecture pattern, if you need to change your persistence layer, then it is going to affect your service layer and, as a ripple effect, this will also affect your business layer. So, whether you want to keep your layers closed or open is an architectural trade-off you need to make.

A layered architecture pattern is simple and easy to understand, and it can be easily tested as all the layers are part of the same monolithic application. You don't even need all of the layers to be available at the same time: you can stub or mock a layer that isn't ready and test the others independently. For example, in the preceding diagram, if the presentation layer is not ready, you can still test the flow through the other layers; any missing layer can be stubbed out with its expected behavior and later replaced by the actual implementation.

This architecture does not scale particularly well, and it is not very agile: changing one layer has a direct effect on its immediate neighbors and sometimes a ripple effect on other layers as well. However, it is a great starting point for any application.

A microkernel/plugin architecture pattern

Microkernel architecture is also known as plugin architecture because of its plugin-based nature. It is well suited to product-based development, in which you have a core system or Minimum Viable Product (MVP) and keep adding functionality as needed; plugins customize the core system and can be added or removed based on your needs. In this architecture, the core system needs a registry that knows about each plugin you add and dispatches requests to the newly added module. Whenever you remove a plugin from the architecture, its reference is removed from the registry.

Eclipse or Visual Studio Code IDEs are very good examples of microkernel architectures. As a core system, both of these are text editors and allow you to write code. Both of these IDEs can be extended by adding plugins to the core system; take a look at the following diagram, which explains the architecture of the Eclipse IDE:

Figure 1.4 – Eclipse IDE plugin-based architecture

In the preceding diagram, you can see that Eclipse is made up of hundreds of plugins; its core system provides some basic, extendable functionality and allows any new plugin to be registered with the core system when you install it. Once a plugin has been installed, its functionality is available as a feature of the core platform; if you remove that plugin, only the functionality provided by that plugin is removed.

Let’s take another example of a microkernel architecture pattern to understand it better. In a fictitious financial institution, a core system is designed to open an account: it processes the application, creates the account and a user profile, generates account statements, and communicates with the customers. A customer can see their application/account details on the account dashboard:

Figure 1.5 – Microkernel architecture for a financial institute

In this financial system example, the architecture core module can do normal account-opening activities, but this architecture is expandable, and we have added other functionalities in the form of plugins. This core module is being extended via plugin architecture to support credit card, home loan, auto loan, consumer banking, and business banking account types, and these plugins handle their specific type of functionality. By using a plugin architecture, each product type can apply its own rules.

The account communication module utilizes a microkernel architecture pattern and is being extended by adding support for email and SMS alerts. If an institution doesn’t want these features, they can simply not register/install these modules.

Similarly, validating a user device for login and verifying their identity and eligibility for a particular banking product has also been added in a microkernel fashion to the core system. Similarly, some banking products might have funding requirements, so you can add a funding plugin to supply that functionality to the core system.

To minimize the impact on the core system, plugins utilize a standard interface provided by the core system. This core system exposes an interface; all plugins must follow the same integration interface to be part of the core system.
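The registry-and-standard-interface idea above can be sketched as follows. This is a hypothetical illustration (the `Plugin` interface, `CoreSystem` class, and product names are invented): plugins implement one interface exposed by the core, the core keeps a registry, and removing a plugin only removes its functionality.

```java
import java.util.HashMap;
import java.util.Map;

// Standard interface the core system exposes; every plugin must implement it.
interface Plugin {
    String handle(String request);
}

public class CoreSystem {
    // Registry of installed plugins, keyed by the product type they support.
    private final Map<String, Plugin> registry = new HashMap<>();

    public void register(String type, Plugin plugin) { registry.put(type, plugin); }
    public void unregister(String type) { registry.remove(type); }

    public String process(String type, String request) {
        Plugin plugin = registry.get(type);
        // The core falls back to its basic behavior when no plugin is installed.
        return plugin == null ? "core: unsupported product " + type
                              : plugin.handle(request);
    }

    public static void main(String[] args) {
        CoreSystem core = new CoreSystem();
        core.register("creditCard", req -> "credit card rules applied to " + req);
        System.out.println(core.process("creditCard", "application-1"));
        core.unregister("creditCard");
        System.out.println(core.process("creditCard", "application-2"));
    }
}
```

A third-party plugin with a different interface would be wrapped in an adapter class that implements `Plugin` and translates calls, as described above.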

If you want to utilize a third-party plugin that doesn’t implement this interface, you have to develop an adapter that connects it to the core system and performs any translation needed.

This architecture pattern also falls into the monolithic architecture category as your entire package is part of the same application runtime.

This architecture pattern is pretty simple, agile, and easy to test, as each module’s changes can be isolated and modules can be added or removed as needed. Microkernel architecture is highly performant because all components are part of the same installation, so communication is fast. However, scalability remains a challenge: since all the plugins are part of the same solution, you have to scale the entire system together; this architecture doesn’t give you the flexibility to scale any individual plugin. Another challenge is stability: if you have an issue with the core system, it impacts the functionality of the entire system.

A pipeline architecture pattern

Pipeline architecture patterns decompose a complex task into a series of separate elements, which can be reused. The pipeline pattern is also known as the pipes and filters pattern, which is a monolithic pattern. In this architecture pattern, a bigger task is broken down into smaller series of tasks, and those tasks are performed in sequential order; the output of one pipe becomes the input of another one.

For example, an application does a variety of tasks, which can be different in nature and complexity, so rather than performing all of those tasks in a single monolithic component, they can be broken down into a modular design and each task can be done by a separate component (filter). These components connect by sharing the same data format, with the output of one step becoming the input of another one. This pattern resembles a pipeline where data is flowing from one component to another like water flowing through a pipe, and due to this nature, it is called pipeline architecture. Pipes in this architectural style are unidirectional and pass data in only one direction:

Figure 1.6 – A deployment pipeline example

The preceding diagram shows a typical deployment pipeline for a Java application, which is a great example of pipeline architecture.

A development pipeline starts with a developer committing code into a source code repository and performs the following tasks:

1. The pipeline checks out the source code from the source code repository to the build server and then starts building it in the next stage.
2. The source code is built and, if it is a compiler-based language, the code is compiled in this step.
3. Once the source code has been built, unit test cases are executed and, on successful execution, the build progresses to the next step.
4. The static scanning step performs static (non-running) source code analysis to find any vulnerabilities or code issues. SonarQube and PMD are two well-known static code analyzer tools.
5. In the next step, the pipeline performs security scanning using tools such as Black Duck, Fortify, and Twistlock to find any runtime vulnerabilities.
6. Once these quality gates have passed, the pipeline builds the final package for deployment and passes it to the deployment phase.
7. In the deployment phase, the pipeline deploys the packaged application into the dev environment and passes control to the integration test phase.
8. The integration phase runs the test suite, which is designed to validate the application, and makes sure that the end-to-end application runs with no issues.
9. Once the integration phase has passed and the application has been validated, it is promoted to the higher environment for deployment. In this case, it is the production environment.
10. Once the production deployment has been successful, the application is validated and verified for a successful production rollout and the pipeline is marked as completed.

We will talk more about the deployment pipeline in upcoming chapters as this book focuses on automating the deployment using AWS CodePipeline.

Now that we’ve had a refresher on pipeline architecture patterns, let’s talk a little bit about the different types of filters used in a pipeline architecture:

Producer: Producers are the starting point of this architecture and have an outbound pipe only. In the preceding example, the Source Code Checkout stage is a producer.
Transformer: Transformer filters take the input, process it, and send the output to another filter. In the preceding example, the Build stage is a transformer, as it takes the source files and generates a compiled version of the source code.
Tester: Tester filters are usually pass-through filters that take the input and call the other filters in the pipeline; sometimes, they can also discard the output. In our example, Static Scanning is a tester, as it performs scanning and, if anything fails, fails the phase without going through the remaining stages.
Consumer: Consumer filters are the ending point of a pipeline architecture and can be recognized by the fact that they do not feed their output into another stage of the pipeline. In our example, the Production Validation stage is a consumer.
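The four filter roles above compose naturally as function composition. Here is a hypothetical sketch (the stage names and payload format are invented): the producer's output feeds the transformer, a tester filter can reject the payload and stop the pipeline, and the consumer terminates the chain.

```java
import java.util.function.Function;

public class PipelineDemo {
    static String checkout(String commit) { return "source(" + commit + ")"; }  // producer
    static String build(String source)    { return "jar[" + source + "]"; }     // transformer
    static String scan(String artifact) {                                       // tester
        // A tester filter passes the payload through, or rejects it and halts the pipeline.
        if (artifact.contains("vulnerable")) throw new IllegalStateException("scan failed");
        return artifact;
    }
    static String deploy(String artifact) { return "deployed " + artifact; }    // consumer

    public static String run(String commit) {
        // Pipes are unidirectional: each stage's output is the next stage's input.
        Function<String, String> pipeline =
                ((Function<String, String>) PipelineDemo::checkout)
                        .andThen(PipelineDemo::build)
                        .andThen(PipelineDemo::scan)
                        .andThen(PipelineDemo::deploy);
        return pipeline.apply(commit);
    }

    public static void main(String[] args) {
        System.out.println(run("abc123"));
    }
}
```

Note how the sequential composition also illustrates the pattern's weakness: a slow or failing stage blocks everything downstream of it.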

The pipeline or filter and pipe architecture is very simple and flexible and is easy to test, but scalability and performance are a problem since each stage of the pipeline must be completed sequentially. Small blockers in one of the pipeline steps can cause a ripple effect in subsequent steps and cause a complete bottleneck in the system.

A space-based architecture pattern

Scaling an application is a challenging task: to handle more load, you might need to increase the number of web servers, application servers, and database servers. This makes your architecture complex when you need high performance and scalability to serve thousands of concurrent users.

Horizontally scaling a database layer requires some form of sharding, which makes it more complex and difficult to manage. In a Space-Based Architecture (SBA), you scale your application by removing the database and managing data in in-memory data grids instead. Rather than scaling a particular tier of your application, you scale all the tiers together as a unit, known as a processing unit.

Important note

In vertical scaling, you add more resources such as memory and compute power to a single machine to meet the load demand, while in horizontal scaling, you join two or more machines together to handle the load.

SBAs are widely used in distributed computing to increase the scalability and performance of a solution. This architecture is based on the concept of tuple space.

Tuple space

Tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. A processing unit generates data and posts it to distributed memory as tuples; other processing units then read the tuples through pattern matching.
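The write/take-by-pattern interaction can be sketched as a tiny in-process tuple space. This is a hypothetical, single-JVM illustration (real tuple spaces are distributed); here a `null` field in the template acts as a wildcard.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TupleSpace {
    private final List<Object[]> space = new ArrayList<>();

    // A producer posts a tuple into the shared space.
    public synchronized void write(Object... tuple) { space.add(tuple); }

    // A consumer removes and returns the first tuple matching the template, or null.
    public synchronized Object[] take(Object... template) {
        for (Object[] tuple : space) {
            if (matches(tuple, template)) { space.remove(tuple); return tuple; }
        }
        return null;
    }

    private static boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < tuple.length; i++) {
            // null in the template is a wildcard; anything else must match exactly.
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        TupleSpace ts = new TupleSpace();
        ts.write("order", 1001, "pending");
        // Match any tuple whose first field is "order", regardless of the rest.
        System.out.println(Arrays.toString(ts.take("order", null, null)));
    }
}
```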

The following diagram shows the different components of an SBA:

Figure 1.7 – SBA pattern

SBA comprises several components:

Processing Unit: A processing unit is your application deployed on a machine and backed by an in-memory data grid to support any database transactions. This in-memory data grid is replicated to other processing units by the replication engine.
Messaging Grid: This is a component of the virtualized middleware and is responsible for handling client requests and session management. Any request coming to the virtualized middleware is handled by the messaging grid, which redirects it to one of the available processing units.
Data Grid: The data grid is responsible for data replication between different processing units. In an SBA pattern, the data grid is a distributed cache. The cache will typically use a database for the initial seeding of data into the grid, and to maintain persistence in case a processing unit fails.
Processing Grid: This is an optional component of a space-based architecture. The processing grid is used for coordinating and combining requests to multiple processing units; if a request needs to be handled by multiple processing units, the processing grid is responsible for managing that. For example, if one processing unit handles inventory management while another handles order management, the processing grid orchestrates those requests.
Deployment Manager: The deployment manager is responsible for managing processing units; it can add or remove processing units based on load or other factors, such as cost.
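The processing-unit-plus-replication idea can be sketched as follows. This is a hypothetical, in-process illustration (real SBA products replicate over the network): each unit serves reads from its local in-memory grid, and every write is copied to all peer units.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProcessingUnit {
    private final Map<String, String> grid = new HashMap<>();   // local in-memory data grid
    private final List<ProcessingUnit> peers = new ArrayList<>();

    // Joining links two units so they replicate to each other.
    public void join(ProcessingUnit peer) { peers.add(peer); peer.peers.add(this); }

    public void put(String key, String value) {
        grid.put(key, value);
        for (ProcessingUnit peer : peers) peer.grid.put(key, value); // replication engine
    }

    // Reads never touch a database; they are served from local memory.
    public String get(String key) { return grid.get(key); }

    public static void main(String[] args) {
        ProcessingUnit a = new ProcessingUnit();
        ProcessingUnit b = new ProcessingUnit();
        a.join(b);
        a.put("session-1", "user-42");
        System.out.println(b.get("session-1")); // the replica answers without a database
    }
}
```

A deployment manager in a real SBA would create and destroy such units based on load; here they are constructed by hand for clarity.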

In an SBA, all processing units are self-sufficient in processing client requests, but they are combined to make it more powerful for performance and scalability and use virtualized middleware to manage all that.

This architecture pattern is used for highly performant applications where you want to scale the architecture as traffic increases without compromising the performance. This architecture provides horizontal scaling by adding new processing units. Processing units are independent of each other and new units can be added or removed by the deployment manager at any time. Due to the complex nature of this architecture, it is not easy to test. The operational cost of creating this architecture is a little high as you need to have some products in place to create in-memory data grids and replicate those to other processing units.

An event-driven architecture pattern

Event-driven architecture (EDA) is one of the most popular architecture patterns. In an EDA, your application is a distributed system divided into loosely coupled components that integrate using asynchronous event messages. These messages are temporarily stored in messaging queues or topics. Each message has two parts: a header and the actual message payload.

Let’s look at a simple example of an EDA for a hotel chain called “Cool Hotel,” which has chosen to deploy its reservation system using the EDA. A hotel Property Management System updates its inventory for available rooms, which is updated in Inventory System using an event message:

Figure 1.8 – Cool Hotel EDA example

Once Inventory System has been updated, it communicates that change to its partners through an event called Partner Channel, where different partners consume the message and update their listing.

Whenever a reservation is made either directly through the Cool Hotel website or through a partner listing, a reservation message is pushed to the Reservation Processing system, which then has the responsibility for multiple tasks. It updates the availability of the room by sending an event to Inventory Store and also generates an event, which is sent to the Client Alert system. As we have shown, all of our system components are very loosely coupled and are not aware of each other; they only communicate with each other through events.

There are several components in an EDA:

Event generator: This is the first logical layer in the architecture. A component that produces an event for other components to consume is known as an event generator or producer. An event producer can be an email client, a sensor, or an e-commerce system.
Event channel: The event channel is the second logical layer in the EDA. An event channel propagates the information from the event generator to the event processing engine. This component temporarily stores the information in a queue and hands it over for processing asynchronously whenever the processing engine is available.

An event channel can hold the data for a certain time and remove it either when a retention period is reached or when the processing engine collects the event. This depends entirely on the underlying implementation of the event channel; ActiveMQ, AWS Kinesis, AWS SQS, and Kafka are a few examples of popular event channel implementations.

Event processing engine: The event processing engine is the third logical layer in the EDA. This layer is responsible for selecting the appropriate event and then filtering and processing the event message received from the event channel. This layer is responsible for executing the action triggered by an event. In the preceding Cool Hotel example, the email notification service and inventory service are examples of event processing systems.
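The three logical layers can be sketched with an in-memory queue standing in for the event channel. This is a hypothetical illustration (the event header name and the alert text are invented; a real system would use ActiveMQ, SQS, Kafka, or similar): the generator posts and returns immediately, and the processing engine drains the channel later.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class EventDemo {
    // Each message has a header and a payload, as described above.
    record Event(String header, String payload) {}

    static final Queue<Event> channel = new ArrayDeque<>();   // event channel
    static final List<String> alertsSent = new ArrayList<>();

    // Event generator: posts the event and returns without waiting for processing.
    static void onReservation(String room) {
        channel.add(new Event("reservation.created", room));
    }

    // Event processing engine: selects events by header and executes the action.
    static void processEvents() {
        Event event;
        while ((event = channel.poll()) != null) {
            if (event.header().equals("reservation.created")) {
                alertsSent.add("email alert for room " + event.payload());
            }
        }
    }

    public static void main(String[] args) {
        onReservation("101");   // the generator is done here...
        processEvents();        // ...and the engine consumes the event asynchronously
        System.out.println(alertsSent);
    }
}
```

The window between `onReservation` returning and `processEvents` running is exactly the eventual-consistency window discussed below.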

EDA is based on eventual consistency and it is near real-time due to the asynchronous nature of its events. This architecture is not a good choice if you need changes in real time. This architecture increases your availability but there is a trade-off you need to make in terms of consistency as your data can be inconsistent for a small amount of time until the asynchronous message has been processed.

This architecture brings agility to your product and it is highly scalable and improves performance, but it is hard to test these systems end to end. EDA is not simple to implement and you have to rely on a messaging system to find a reliable event channel.

EDA brings more complexities into the architecture as you have to deal with message acknowledgment, message re-delivery in case of failure, and dead letter queue analysis and processing for any corrupted events.

Dead letter queue

In a messaging system, a dead letter queue is a specialized queue that handles the messages that have been rejected or have overflowed from the main queue due to a message format issue, non-existing queue, message size limits, message rate limit, or software issues because of which a message is not processed successfully. Usually, dead letter queues work as a backup for main queues/topics and are used for offline processing and analyzing messages to identify any potential issues in the consumers or producers due to which messages are rejected.
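The consumer-side handling described above can be sketched as follows. This is a hypothetical illustration (the retry limit, queue contents, and failure rule are invented; real brokers such as SQS move messages to the DLQ for you): a message that keeps failing is parked in the dead letter queue instead of blocking the main queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class DlqDemo {
    static final int MAX_ATTEMPTS = 3;
    final Queue<String> mainQueue = new ArrayDeque<>();
    final Queue<String> deadLetterQueue = new ArrayDeque<>();

    void consume() {
        String message;
        while ((message = mainQueue.poll()) != null) {
            boolean processed = false;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS && !processed; attempt++) {
                processed = tryProcess(message);   // re-delivery on failure
            }
            // After the retry limit, park the message for offline analysis.
            if (!processed) deadLetterQueue.add(message);
        }
    }

    // Stand-in for real processing: treat malformed payloads as failures.
    boolean tryProcess(String message) { return !message.startsWith("corrupt:"); }

    public static void main(String[] args) {
        DlqDemo demo = new DlqDemo();
        demo.mainQueue.add("order-1");
        demo.mainQueue.add("corrupt:order-2");
        demo.consume();
        System.out.println(demo.deadLetterQueue);
    }
}
```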

A serverless architecture pattern

This is one of the more modern architectural patterns and has gained more and more traction recently. The name would make you believe that there is no server involved in this architecture, but that is not true. In this architecture pattern, rather than being responsible for all aspects of server infrastructure and management, you are utilizing the infrastructure owned by a third party and paying for the service you utilized. Serverless architecture is different from Infrastructure-as-a-Service (IaaS) offerings; in this architecture pattern, you are not responsible for managing servers, operating system updates, security patches, scaling up or down to meet demand, and so on.

Serverless architecture falls into two categories. Let’s look at what they are.

Backend as a Service (BaaS)

In this architecture style, instead of creating a backend service, you are utilizing a third-party provider. For example, your application requires you to validate the identity of a user. You could develop an authentication scheme and spend time and money on hosting the service, or you could leverage a third-party identity system such as Amazon Cognito to validate users in your system.

Taking this approach, you save on development efforts and costs. You reduce your overhead by not owning and managing the servers used for authentication; you only need to pay for the usage of the service.

This architectural model is not very customizable, and you have to rely a lot on the features provided by the third party. This approach may not be suitable for all use cases since the degree of customization required by some workloads may not be available from third-party providers. Another disadvantage of using this approach is that if your provider makes an incompatible change to their API/software, then you also need to make that change.

Functions as a Service (FaaS)

In this architectural style, you don’t consume a prebuilt third-party service directly; instead, you write code that executes on third-party infrastructure. This style provides great flexibility to write features with no direct hardware ownership overhead. You only pay for the time when your code is executing and serving your clients. The FaaS pattern allows you to quickly create business services without focusing on server maintenance, security, and scalability: the service scales based on demand and provisions the required hardware as needed. As a consumer, you just upload your code and provide a runtime configuration, and your code is ready to use.

With the increasing popularity of cloud computing, serverless architectures are a hot topic; AWS Lambda, Azure Functions, and Google Cloud Functions are examples of the FaaS architecture style.

This serverless pattern provides a lot of scalability and flexibility for customizing the solution as needed, but it has some downsides, and it is not suitable for all use cases. For example, as of December 2022, AWS Lambda runtime execution is limited to 15 minutes. If you have a task that takes longer than 15 minutes, then a Lambda function isn’t a good choice.

Let’s look at an example where AWS Lambda is a good solution. In the hotel industry, inventory rollover and rate rollover are scenarios where property owners want to set their inventory and rates to be much the same as last year’s. Suppose you have a big event: for example, on July 4, Independence Day in the USA, your room rate will be high due to occupancy, and you want to maximize your profit rather than sell rooms at the lower rate you might have charged a week earlier. So, property owners want the same rate and room inventory as last year to be set automatically, while keeping the flexibility to make a manual change or override, if necessary, through their property management system. For this kind of problem, you don’t need a service running 24/7; a scheduled job run on a daily or weekly basis is enough.

AWS Lambda can be a very good solution to this problem because it doesn’t require a server to do this job. An AWS Lambda function can be triggered by a CloudWatch schedule or EventBridge schedule event and call rate/inventory service to perform the rollover after providing the authentication token retrieved from the Amazon identity service (Cognito). Once this has been completed, the server that ran your function can be terminated by AWS. By using the serverless approach, you didn’t need to run your server all the time, and when your function was done executing, AWS was able to return the underlying hardware to the available pool. As of December 2021, sustainability is an AWS Well-Architected Framework pillar, so by using the Lambda approach here, you are not just able to control your costs but you are also helping save the environment by not running your server and burning more fuel:

Figure 1.9 – Serverless architecture example for hotel inventory/rate rollover

In the preceding example, we use three AWS serverless resources. The first is an AWS Aurora database as a service, which helps write and read the data so that you don’t have to worry about how database servers are being managed or backed up.

The next service is AWS Lambda, which executes your code that performs the actual update to the database. You are not maintaining any server to run this code; you simply write a function in Python, Go, .NET, Java, or any of the other available runtimes, upload the code to AWS Lambda, and configure an AWS CloudWatch/EventBridge trigger (which can use either cron-style static repetition or can be set up to look for events in a log) to invoke the code. The final service is Amazon Cognito, which is used by AWS Lambda to retrieve an authorization token for the rate/inventory service; this is an example of the BaaS serverless architecture we discussed earlier.
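The core rollover logic that such a function would run can be sketched like this. This is a hypothetical illustration, not the book's actual implementation: it has no AWS dependencies, and the `rateStore` map, handler name, and sample rates are all invented; in a real deployment, this logic would live in an AWS Lambda handler triggered by an EventBridge schedule and would call the rate/inventory service with a Cognito token.

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class RolloverFunction {
    // Stand-in for the rate/inventory service the Lambda would call.
    static final Map<LocalDate, Integer> rateStore = new HashMap<>();

    // Copies last year's rate to the same date this year, unless a manual
    // override has already been set through the property management system.
    public static int handleRequest(LocalDate today) {
        LocalDate lastYear = today.minusYears(1);
        Integer lastYearRate = rateStore.get(lastYear);
        if (lastYearRate != null && !rateStore.containsKey(today)) {
            rateStore.put(today, lastYearRate);
        }
        return rateStore.getOrDefault(today, 0);
    }

    public static void main(String[] args) {
        LocalDate july4 = LocalDate.of(2023, 7, 4);
        rateStore.put(july4.minusYears(1), 399); // last year's Independence Day rate
        System.out.println(handleRequest(july4)); // the scheduled trigger fires today
    }
}
```

Because the function runs for seconds once a day, paying only for that execution time is far cheaper than keeping a server running 24/7.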

Serverless architecture is a good design choice when you need scalability, your tasks can be executed within 15 minutes, and you don’t want to run your own servers. It is also a very cost-effective architecture and brings agility to your solution, because you only pay for the memory you provision for the Lambda function and for the execution time. However, the initial request to a function can be slow (a cold start) because provisioning the underlying infrastructure takes time, so you might need to prewarm your functions before they can take requests.

A service-oriented architecture pattern

Service-oriented architecture (SOA) is one of the most widely used architecture patterns in enterprises and promotes the usage of services in an enterprise context. SOA is an enterprise integration pattern that connects different heterogeneous systems to carry out business functions.

In SOA, services communicate with each other in a protocol-agnostic way, but they may use different protocols such as REST, SOAP, AMQP, and so on. They share the information using a common platform known as the Enterprise Service Bus (ESB), which provides support for protocol-agnostic communication.

An ESB

An ESB is a centralized software component that is responsible for integration between different applications. Its primary responsibility is to perform data transformations, handle connectivity, messaging, and routing of requests, provide protocol-neutral communication, and potentially combine multiple requests if needed. An ESB helps in implementing complex business processes and implementing the data transformation, data validation, and security layers when integrating different services.
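The routing and transformation role of an ESB can be sketched in miniature. This is a hypothetical, in-process illustration (real ESBs such as Mule or WSO2 handle protocols, choreography, and much more; the service names and uppercase "transformation" here are invented): callers hand a message to the bus, and the bus transforms it and routes it to the registered destination.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class EsbDemo {
    private final Map<String, Function<String, String>> services = new HashMap<>();
    private final UnaryOperator<String> transform;   // data/format transformation step

    EsbDemo(UnaryOperator<String> transform) { this.transform = transform; }

    void register(String destination, Function<String, String> service) {
        services.put(destination, service);
    }

    String send(String destination, String message) {
        // The bus, not the caller, knows how to reach and format for the target
        // service, so the services themselves stay loosely coupled.
        return services.get(destination).apply(transform.apply(message));
    }

    public static void main(String[] args) {
        EsbDemo bus = new EsbDemo(msg -> msg.toUpperCase()); // e.g. a format change
        bus.register("creditCardService", msg -> "processed " + msg);
        System.out.println(bus.send("creditCardService", "apply-123"));
    }
}
```

The sketch also hints at the downside noted at the end of this section: every message flows through the one bus object, making it a single point of failure.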

SOA is different from microservices architecture, which I will be talking about later in this chapter.

SOA is enterprise-focused since services target the entire enterprise. There are five types of services we focus on in an SOA; let’s take a look.

Business services

Business services are designed around the functionalities an organization has to perform to run its business operations. These services are coarse-grained rather than fine-grained:

Figure 1.10 – Sample banking SOA example

You can identify a business service by filling in the blank in the following sentence: “we are in the business of ________.” For example, concerning a financial company, you can say the following:

We are in the business of home loansWe are in the business of credit cards

To support these two businesses, you need to create customers, so you might have a service in your architecture to create customers; however, you can’t say “we are in the business of creating customers,” so that is not a business service. Any service that identifies your business is considered a business service in an SOA. In the banking SOA architecture shown in the preceding figure, Auto Loan Service, Credit Card Service, Banking Service, and Home Loan Service all fall into the business services category.

Enterprise services

In an SOA, services that are used across an organization or enterprise are known as enterprise services. For example, authenticating a user is an organization-level issue, and you don’t want to have several applications implementing their logic to authenticate users if this can be done by a single service throughout the enterprise.

Another example of an enterprise service is managing customer information shared by all the business services. In a banking institution, you don’t want the home loan, auto loan, and core banking systems to each maintain customer information in its own way, with the data diverging between business units. Instead, you should have an enterprise (shared) service that maintains customer information in a central place and is used by all of the business services across the enterprise. Enterprise services are usually owned by a shared services group or by an enterprise architecture group.

In a banking SOA architecture, you would create capabilities that would be shared by multiple business applications. For example, many applications need to send emails to customers, so it’s sub-optimal for each one of them to “reinvent the wheel” of sending out emails; it’s better to have a dedicated email-sending task responsible for sending emails, handling undeliverable emails, and so on.

Application services

Application services are scoped to each application level and are fine-grained to a specific application. The scope of these services is specific to the individual application – for example, opening a business account, checking a customer’s credit card balance, or generating a credit card statement.

Application services have a very specific purpose within the application context and are owned by application teams within a line of business. In the banking SOA architecture example, you can see that adding a vehicle is limited to doing just one thing, so it is a very specific, application-level service.

Another example is adding home details for a home loan application. This is also very specific and is categorized as an application-level service.

Infrastructure services

Infrastructure services address concerns common to all applications; they don’t provide any features to business or application services directly but play a supporting role by providing platform-level capabilities. These services implement platform-level functions that are very generic and can be used by enterprise services as well as by application services.

For example, logging is an integral part of any application, so a logging service falls into the platform- or infrastructure-level category; any enterprise or application service can use it for its logging requirements. Similarly, auditing data is not the core function of an application service, but governmental oversight or internal compliance organizations may require it, so an auditing service becomes part of the infrastructure services and can be invoked by application services as needed.
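A small Java sketch can illustrate how an infrastructure-level audit service might be shared by application services. The class and method names here are hypothetical, chosen only to mirror the banking example above:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an infrastructure-level audit service that any
// application service can invoke instead of implementing auditing itself.
public class AuditServiceSketch {

    static class AuditService {
        private final List<String> records = new ArrayList<>();

        // Records which service performed which action, in a central place.
        void record(String callingService, String action) {
            records.add(callingService + ":" + action);
        }

        List<String> records() {
            return records;
        }
    }

    public static void main(String[] args) {
        AuditService audit = new AuditService();
        // Different application services delegate auditing to the same
        // infrastructure service rather than each keeping its own trail.
        audit.record("CreditCardService", "statement-generated");
        audit.record("HomeLoanService", "application-submitted");
        System.out.println(audit.records());
    }
}
```

Because auditing lives in one infrastructure service, compliance rules (retention, format, access) can be changed in one place without touching any application service.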

The ESB

In the middle of our sample banking architecture, we have an ESB, which is responsible for business process choreography, translating business data, changing message formats, and providing protocol-agnostic transmission. It is possible to build an SOA without an ESB, but then your services depend on each other directly and become tightly coupled, with no abstraction between them.

SOA is neither simple to implement nor easy to test, but the architecture is very scalable. The cost of implementing an SOA is relatively high, as you depend on third-party software for the ESB, even though you might not need all the features it provides. This architecture has another downside: the ESB can be a single point of failure.

A microservices architecture pattern

As the name suggests, the microservices architecture pattern promotes the use of smaller services in a bounded context. A microservices architecture is a distributed architecture and is very similar to SOA, with some exceptions. In a microservices architecture, services are fine-grained and serve a specific purpose. In other words, microservices are lightweight and serve a very specific purpose, whereas in SOA, services have more of an enterprise scope and cover a broader segment of functionality.

In the microservices architecture pattern, an application is divided into loosely coupled, smaller, self-contained components known as services. Each service runs in its own process and connects to other services in a protocol-aware synchronous or asynchronous fashion as needed. Each microservice is responsible for carrying out a certain business function within a bounded context, and the entire application is a collection of these loosely coupled services.

Unlike SOA, no message bus is involved in this architectural pattern. Microservices provide protocol-aware interoperability, so the caller of a service needs to know the contract and protocol of the service it is calling.
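The following Java sketch shows what protocol-aware interoperability means in practice: the caller embeds the endpoint, HTTP verb, and payload format in its own code, with no bus in between. The customer-service URL and the JSON payload shape are assumptions for illustration; the request is built but not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ProtocolAwareCallSketch {

    // Builds (but does not send) an HTTP request to a hypothetical customer
    // microservice. The caller must know the endpoint, the verb, and the
    // payload format up front -- there is no message bus to translate for it.
    static HttpRequest buildCreateCustomerRequest(String jsonBody) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://customer-service.internal/customers")) // assumed endpoint
                .header("Content-Type", "application/json") // assumed contract: JSON over HTTP
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildCreateCustomerRequest("{\"name\":\"Jane\"}");
        System.out.println(request.method() + " " + request.uri());
        // prints "POST http://customer-service.internal/customers"
    }
}
```

Contrast this with the ESB scenario earlier: there, message translation and routing lived in the bus, whereas here every caller carries that protocol knowledge itself.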