Building a scalable microservices platform that caters to business demands is critical to the success of that platform. In a microservices architecture, inter-service communication becomes a bottleneck when the platform scales. This book provides a reference architecture along with a practical example of how to implement it for building microservices-based platforms with NATS as the messaging backbone for inter-service communication.
In Designing Microservices Platforms with NATS, you’ll learn how to build a scalable and manageable microservices platform with NATS. The book starts by introducing concepts relating to microservices architecture, inter-service communication, messaging backbones, and the basics of NATS messaging. You’ll be introduced to a reference architecture that uses these concepts to build a scalable microservices platform and guided through its implementation. Later, the book touches on important aspects of platform securing and monitoring with the help of the reference implementation. Finally, the book concludes with a chapter on best practices to follow when integrating with existing platforms and the future direction of microservices architecture and NATS messaging as a whole.
By the end of this microservices book, you’ll have developed the skills to design and implement microservices platforms with NATS.
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 381
Year of publication: 2021
A modern approach to designing and implementing scalable microservices platforms with NATS messaging
Chanaka Fernando
BIRMINGHAM—MUMBAI
Copyright © 2021 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Aaron Lazar
Publishing Product Manager: Harshal Gundetty
Senior Editor: Ruvika Rao
Content Development Editor: Vaishali Ramkumar
Technical Editor: Maran Fernandes
Copy Editor: Safis Editing
Project Coordinator: Deeksha Thakkar
Proofreader: Safis Editing
Indexer: Sejal Dsilva
Production Designer: Roshan Kawale
First published: October 2021
Production reference: 1141021
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80107-221-2
www.packt.com
To my mother, Seelawathi De Silva, and my father, Harry Sebastian, for their sacrifices and for exemplifying the power of determination. To my wife, Subhashini, for being my loving partner throughout our joint life journey, and to my little daughter, Saneli, for bringing me happiness and joy.
– Chanaka Fernando
Chanaka Fernando is a solution architect with 12+ years of experience in designing, implementing, and supporting enterprise-scale software solutions for customers across various industries including finance, education, healthcare, and telecommunications. He has contributed to the open source community with his work (design, implementation, and support) as the product lead of the WSO2 ESB, one of the founding members of the "Ballerina: cloud-native programming language" project, and his own work on GitHub. He has spoken at several WSO2 conferences and his articles are published on Medium, DZone, and InfoQ.
Chanaka has a bachelor's degree in electronics and telecommunications engineering from the University of Moratuwa.
To my wife, Subhashini, my mother-in-law, Margrett, my father-in-law, Bernard, and my parents. They made sure I had enough time and energy to spend on writing this book. To Harshal, for believing in the book's topic and giving me the opportunity to write for Packt. To Vaishali, Ruvika, Deeksha, and the rest of the Packt team, I thank you all for your tireless efforts. To Justice Nefe and Isuru Udana, both great technical reviewers, I am grateful for your comments and feedback.
Justice Nefe is the CEO of Borderless HQ, Inc. and has 5+ years of experience as a software engineer with a focus on large-scale systems. Justice has built products and services with tools including Node.js, Golang, Vue.js, Docker, Kubernetes, gRPC, and GraphQL, and has designed systems ranging from monoliths to microservices, leveraging Apache Pulsar, nats.io, and NATS Streaming for event-driven microservices. Justice is currently experimenting with Rust and Flutter and is skilled in distributed systems development, enterprise software development, product development, and systems design, working with teams large and small across different geographical zones to create products and services that put a smile on the faces of customers. Justice has created open source products and internal tools for the different teams they've worked with, and collaborates with key stakeholders to conceptualize and drive new and existing initiatives to market.
Isuru Udana Loku Narangoda is a software architect and associate director at WSO2 with more than 10 years of experience in the enterprise integration space. Isuru is one of the product leads of the WSO2 Enterprise Integrator and API Manager products, and provides technical leadership to the project. Isuru is an open source enthusiast, a committer, and holds the vice-president position of the Apache Synapse open source ESB project. Also, Isuru has participated in the Google Summer of Code program as a student as well as a mentor for several years.
The microservices architecture has developed into a mainstream approach to building enterprise-grade applications within the past few years. Many organizations, from large to medium to small start-ups, have started utilizing the microservices architecture to build their applications. With more and more people adopting the microservices approach to build applications, some practical challenges of the architecture have been uncovered. Inter-service communication is one challenge that most microservices teams experience when scaling applications to a larger number of instances.
At first, point-to-point inter-service communication did not work well, and the concept of smart endpoints and dumb pipes was proposed as an alternative approach. Instead of connecting microservices in a point-to-point manner, introducing a messaging layer to decouple the microservices looked like a better solution.
NATS messaging technology was originally developed as the messaging technology to be used in the Cloud Foundry platform. It was built to act as the always-on dial tone for inter-service communication. Its performance and the simple interface it exposed to interact with clients made it popular within the developer community.
In this book, we discuss how NATS messaging can be used to implement inter-service communication within a microservices architecture. We start with a comprehensive introduction to microservices, messaging, and NATS technology. Then we go through the architectural aspects and provide a reference implementation of an application using the Go programming language. We cover the security and observability aspects of the proposed solution and how that can co-exist in an enterprise platform. At the end of the book, we discuss the latest developments in microservices and NATS messaging and explore how these developments can shape our proposed solution.
This microservices book is for enterprise software architects and developers who design, implement, and manage complex distributed systems with microservices architecture concepts. Intermediate-level experience with any programming language and software architecture is required to make the most of this book. If you are new to the field of microservices architecture and NATS messaging technology, you can use this book as a learning guide to get into those areas.
Chapter 1, Introduction to the Microservices Architecture, provides a comprehensive introduction to the microservices architecture.
Chapter 2, Why Is Messaging Important in a Microservices Architecture?, discusses different messaging technologies and why microservices architectures require messaging.
Chapter 3, What Is NATS Messaging?, explores the NATS messaging technology by covering the concepts with practical examples.
Chapter 4, How to Use NATS in a Microservices Architecture, discusses the possible ways to use NATS messaging in a microservices context.
Chapter 5, Designing a Microservices Architecture with NATS, provides a reference architecture using a real-world application to build a microservices-based application with NATS.
Chapter 6, A Practical Example of Microservices with NATS, provides a reference implementation of an application using the microservices architecture along with NATS.
Chapter 7, Securing a Microservices Architecture with NATS, discusses the security of the overall microservices architecture, including NATS, with examples on securing NATS servers.
Chapter 8, Observability with NATS in a Microservices Architecture, explores various monitoring and troubleshooting requirements and available technologies with an example implementation.
Chapter 9, How Microservices and NATS Co-exist with Integration Platforms, discusses the aspects related to the integration of microservices-based applications with other enterprise systems.
Chapter 10, Future of the Microservices Architecture and NATS, explores the new developments in the microservices and NATS domains.
This book is written in such a way that you will get the best learning experience by reading the chapters in order. The book includes commands, code examples, and step-by-step instructions as and when necessary. Following these instructions will help immensely in understanding the concepts. The book also provides several exercises so that you can improve your understanding and apply the knowledge to real-world applications. Try to complete the exercises while reading the book.
In addition to this software, you need the CFSSL tool to create certificates to try out the examples in Chapter 7, Securing a Microservices Architecture with NATS. This tool can be downloaded from here: https://github.com/cloudflare/cfssl.
All the examples in this book were tested using macOS. Most of the examples should work with both Windows and Linux operating systems.
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You may benefit from following the author on Twitter (https://twitter.com/chanakaudaya), Medium (https://chanakaudaya.medium.com), and GitHub (https://github.com/chanakaudaya), or by adding them as a connection on LinkedIn (https://linkedin.com/in/chanakaudaya).
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Designing-Microservices-Platforms-with-NATS/. If there's an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
The Code in Action videos for this book can be viewed at http://bit.ly/2OQfDum.
We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781801072212_ColorImages.pdf.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "The request message is published to the patient.profile subject and the subscribers are listening on the same subject."
A block of code is set as follows:
func main() {
    // Initialize Tracing
    initTracing()
}
Any command-line input or output is written as follows:
$ nats-server --config node.conf --log nats.log
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: "You could observe the +OK message coming from the server as a response to the PUB command."
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you've read Designing Microservices Platforms with NATS, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
This first section provides an understanding of what the microservices architecture is and the benefits of using it for application development. It also discusses different messaging technologies and how these technologies can be utilized to build microservices-based applications. Then, it introduces NATS messaging technology by covering the concepts with practical examples.
This section contains the following chapters:
Chapter 1, Introduction to the Microservices Architecture
Chapter 2, Why Is Messaging Important in a Microservices Architecture?
Chapter 3, What Is NATS Messaging?

The microservice architecture is an evolutionary approach to building effective, manageable, and scalable distributed systems. The overwhelming popularity of the internet and the smart digital devices that have outnumbered the world's population have made every human being a consumer of digital products and services. Business leaders had to re-evaluate their enterprise IT platforms to make sure that these platforms were ready for the consumer revolution driven by the growth of their business. The so-called digital-native companies such as Google, Amazon, Netflix, and Uber (to name a few) started building their enterprise platforms to support this revolution. The microservice architecture evolved as a result of the work that was done at these organizations to build scalable, manageable, and available enterprise platforms.
When microservice-based platforms grow to hundreds or thousands of microservices, communication between these services using the point-to-point model becomes too complicated to manage. As a solution to this problem, centralized message broker-based solutions provide a less complex and more manageable alternative. Organizations that adopted the microservice architecture are still evaluating the best possible approach to solving the problem of communication among services. The so-called model of smart endpoints and dumb pipes also suggests using a message broker-based approach for this.
NATS is a messaging framework that acts as the always-on dial tone for distributed systems communication. It supports the traditional pub-sub messaging model, which most message brokers support, as well as the request-response communication style, while sustaining high message rates. It can be used as the messaging framework for the microservice architecture.
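To make the two communication models concrete, here is a minimal in-process sketch using Go channels. It illustrates the pub-sub and request-response patterns only; it is not the NATS client API (which is covered in Chapter 3), and the type and function names are ours.

```go
package main

import "fmt"

// broker models a NATS-like subject with multiple subscribers (pub-sub).
type broker struct {
	subscribers []chan string
}

// subscribe registers a new subscriber on the subject and returns its channel.
func (b *broker) subscribe() <-chan string {
	ch := make(chan string, 1)
	b.subscribers = append(b.subscribers, ch)
	return ch
}

// publish delivers a copy of the message to every subscriber on the subject.
func (b *broker) publish(msg string) {
	for _, ch := range b.subscribers {
		ch <- msg
	}
}

// request models the request-response style: the requester attaches a reply channel.
type request struct {
	payload string
	reply   chan string
}

func main() {
	// Pub-sub: one publish, every subscriber receives the message.
	b := &broker{}
	s1, s2 := b.subscribe(), b.subscribe()
	b.publish("patient record updated")
	fmt.Println(<-s1, "/", <-s2)

	// Request-response: a responder goroutine answers on the reply channel.
	reqs := make(chan request, 1)
	go func() {
		r := <-reqs
		r.reply <- "pong: " + r.payload
	}()
	r := request{payload: "ping", reply: make(chan string, 1)}
	reqs <- r
	fmt.Println(<-r.reply)
}
```

The key difference between the two patterns is visible in the sketch: pub-sub fans a message out to all interested parties, while request-response carries a dedicated reply path back to a single requester.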
In this book, we will discuss the concepts surrounding the microservice architecture and how we can use the NATS messaging framework to build effective, manageable, and scalable distributed systems.
Distributed computing systems have evolved from the early days of mainframes and large servers, sitting in separate buildings, to serverless computing, where users do not even need to consider the fact that there is a server that is running their software component. It is a journey that continues even today and into the future. From the early scheduled jobs to simple programs written in Assembly to monolithic applications written in C, or from Java to ESB/SOA-based systems to microservices and serverless programs, the evolution continues.
IT professionals have been experimenting with different approaches to solving the complex problem of distributed computing so that it eventually produces the best experience for consumers. The microservice architecture brings several benefits to distributed computing system design and implementation that were not feasible before. It became mainstream at a time when most of the surrounding technological advancements, such as containers, cloud computing, and messaging technologies, were also becoming popular. This cohesion of technologies made the microservice architecture even more appealing for solving complex distributed systems-related challenges.
In this chapter, we're going to cover the following main topics:
The evolution of distributed systems
What is a microservice architecture?
Characteristics of the microservice architecture
Breaking down a monolith into microservices
Advantages of the microservice architecture

The quality of the human mind to ask for more has been the driving force behind many innovations. In the early days of computing, a single mainframe computer executed a set of batch jobs to solve a certain mathematical problem at an academic institution. Then, large business corporations wanted to own these mainframe computers to execute certain tasks that would take a long time to complete if done by humans. With the advancements in electrical and electronics engineering, computers became smaller, and instead of having one computer sequentially doing all the tasks, business owners wanted to execute multiple tasks in parallel by using multiple computers. The effects of improved technology on electronic circuits and their reduced size resulted in a reduction in costs, and more and more organizations started using computers.
Instead of getting things done through a single computer, people started using multiple computers to execute certain tasks, and these computers needed to connect to communicate and share the results of their executions to complete the overall task. This is where the term distributed systems came into use.
A distributed system is a collection of components (applications) located on different networked computers that communicate and coordinate their tasks by passing messages to one another via a network to achieve a common goal.
Distributing a workload (a task at hand) among several computers poses challenges that were not present before. Some of those challenges are as follows:
Failure handling
Concurrency
Security of data
Standardizing data
Scalability

Let's discuss these challenges in detail so that the distributed systems that we will be designing in this book can overcome these challenges well.
Communication between two computers flows through a network, which can be wired or wireless. In either case, failures can occur at any given time, regardless of the advancements in the telecommunications industry. As designers of distributed systems, we should be wary of failures and take the necessary measures to handle them. A properly designed distributed system must be capable of the following:
Detecting failures
Masking failures
Tolerating failures
Recovery from failures
Redundancy

We will discuss handling network and system failures using the preceding techniques in detail in the upcoming chapters.
When multiple computers are operating to complete a task, there can be situations where multiple computers are trying to access certain resources such as databases, file servers, and printers. But these resources may be limited in that they can only be accessed by one consumer (computer) at a given time. In such situations, distributed computer systems can fail and produce unexpected results. Hence, managing the concurrency in a distributed system is a key aspect of designing robust systems. We will be discussing techniques such as messaging (with NATS) that can be used to address this concurrency challenge in upcoming chapters.
Distributed systems move data from one computer to another via communication channels. These channels are sometimes vulnerable to various types of attacks by internal and external hackers. Hence, securing data transfers across the network is a key challenge in a distributed system. Technologies such as Secure Sockets Layer (SSL) help improve the security of wire-level communication, but transport-level security alone is not sufficient when systems expose business data to external parties (for example, customers or partners). In such scenarios, applications should have security mechanisms to prevent malicious users and systems from accessing valuable business data. Several techniques have evolved in the industry to protect application data.
Some of them are as follows:
Firewalls and proxies to filter traffic: Security through network policies and traffic rules.
Basic authentication with a username and password: Protect applications with credentials provided to users in the form of a username and password.
Delegated authentication with 2-legged and 3-legged OAuth flows (OAuth2, OIDC): Allow applications to access services on behalf of users using delegated authentication.
Two-Factor Authentication (2FA): Additional security with two factors such as a username/password and a one-time password (OTP).
Certificate-based authentication (system-to-system): Securing application-to-application communication without user interaction using certificates.

We will be exploring these topics in detail in the upcoming chapters.
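As a small, hedged illustration of the basic authentication item above, the following Go sketch checks a presented username/password pair against stored values. The function name is ours; the important detail is the constant-time comparison, which avoids leaking information through timing differences. In a real system, the stored password would be a salted hash (for example, bcrypt), never plaintext.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// checkCredentials compares presented credentials against stored values.
// Hashing both sides first gives fixed-length inputs, and
// subtle.ConstantTimeCompare ensures the comparison takes the same time
// regardless of where the inputs differ.
func checkCredentials(user, pass, wantUser, wantPass string) bool {
	u1, u2 := sha256.Sum256([]byte(user)), sha256.Sum256([]byte(wantUser))
	p1, p2 := sha256.Sum256([]byte(pass)), sha256.Sum256([]byte(wantPass))
	userOK := subtle.ConstantTimeCompare(u1[:], u2[:]) == 1
	passOK := subtle.ConstantTimeCompare(p1[:], p2[:]) == 1
	return userOK && passOK
}

func main() {
	fmt.Println(checkCredentials("alice", "s3cret", "alice", "s3cret")) // true
	fmt.Println(checkCredentials("alice", "guess", "alice", "s3cret"))  // false
}
```

The same principle carries over to NATS deployments: client credentials presented at connection time must be validated without revealing, even through timing, how close a guess came.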
The software components that are running on different computers may use different data formats and wire-level transport mechanisms to send/receive data to/from other systems. This will become a major challenge when more and more systems are introduced to the platform with different data and transport mechanisms. Hence, adhering to a common standard makes it easier to network different systems without much work. Distributed systems designers and engineers have come up with various standards in the past, such as XML, SOAP, and REST, and those standards have helped a lot in standardizing the interactions among systems. Yet there is a considerable number of essential software systems (such as ERP and CRM) that exchange data with proprietary standards and formats. On such occasions, the distributed system needs to accommodate those systems by using technologies such as adapters or an enterprise service bus that can translate the communication on behalf of such systems.
Most systems start with one or two computers running a similar number of applications, and networking them is not a difficult task. But eventually, these systems become larger and larger, sometimes growing to hundreds or thousands of computers running a similar or greater number of different applications.
Hence, it is essential to take the necessary action at the very early stages to address the challenge of scalability. There are various networking topologies available to design the overall communication architecture, as depicted in Figure 1.1 – Networking topologies. In most cases, architects and developers start with the simplest model of point-to-point and move into a mesh architecture or star (hub) architecture eventually.
The bus topology is another common pattern most of the distributed systems adhered to in the past, and even today, there are a significant number of systems using this architecture.
The software engineers and architects who worked on these initial distributed computing system's designs and implementations have realized that different use cases require different patterns of networking. Therefore, they came up with a set of topologies based on their experiences. These topologies helped the systems engineers to configure the networks efficiently based on the problem at hand. The following diagram depicts some of the most common topologies used in distributed systems:
Figure 1.1 – Networking topologies
These topologies helped engineers solve different types of real-world problems with distributed computing. In most cases, engineers and architects started with a couple of applications connected in a point-to-point manner. When the number of applications grew, this became a complicated network of point-to-point connections. These models were easy to begin with, yet they were harder to manage when the number of nodes grew beyond a certain limit. In traditional IT organizations, change is something people avoid unless it is critical or near a break-even point. This reserved mindset has made many enterprise IT systems fall into the category of either a mesh or a fully connected topology, both of which are hard to scale and manage. The following diagram shows a real-world example of how complicated an IT system can look with this sort of topology:
Figure 1.2 – Distributed system with a mesh topology
The preceding diagram depicts an architecture where multiple applications are connected in a mesh topology that eventually became an unmanageable system. There are many such examples in real IT systems where deployments become heavily complicated, with more and more applications being introduced as a part of the business's evolution.
The IT professionals who were designing and implementing these systems realized the challenge and tried to find alternative approaches to building complex distributed systems. By doing so, they identified that a bus topology with a clear separation of responsibilities and services can solve this problem. That is where the service-oriented architecture (SOA) became popular, along with the centralized enterprise service bus (ESB).
The SOA-based approach helped IT professionals build applications (services) with well-defined interfaces that abstract the internal implementation details so that the consumers of these applications would only need to integrate through the interface. This approach reduced the tight coupling of applications, which eventually ended up in a complex mesh topology with a lot of friction for change.
The SOA-based approach allowed application developers to change their internal implementations more freely, so long as they adhered to the interface definitions. The centralized service bus (ESB) was introduced to network various applications that were present in the enterprise due to various business requirements. The following diagram depicts the enterprise architecture with the bus topology, along with an ESB in the middle acting as the bus layer:
Figure 1.3 – Distributed system with the bus topology using ESB
As depicted in the preceding diagram, this architecture worked well in most use cases, and it allowed engineers and architects to reduce the complexity of the overall system while onboarding more and more systems that were required for business growth. One challenge with this approach was that more and more complex logic and load were handled by the centralized ESB component, which became a single point of failure unless it was deployed with high availability. This was inevitable with this architecture, and IT professionals were aware of this challenge.
With the introduction of agile development methodologies, container-based deployments, and the popularity of cloud platforms, this ESB-based architecture looked obsolete, and people were looking for better approaches to reap the benefits of these new developments. This is the time where IT professionals identified major challenges with this approach. Some of them are as follows:
Scaling the ESB requires scaling all the services implemented in the ESB at once.
Managing the deployment was difficult since changing one service could impact many other services.
The ESB approach could not work with agile development models and container-based platforms.

Most people realized that the ESB style of networking topology was not capable of gaining the benefits offered by technological advancements in the computing world. This challenge was not only related to the ESB, but also to many applications that were developed in a manner where more and more functionality was built into the same application. The term monolithic application was used to describe such applications.
This was the time when a set of companies called digital-native companies came from nowhere to rule the world of business and IT. Some popular examples are Google, Facebook, Amazon, Netflix, Twitter, and Uber. These companies became so large that they couldn't support their scale of IT demand with any of the existing models. They started innovating around infrastructure and application delivery to meet this demand. As a result, two technologies evolved:
Container-based deployments
The microservice architecture

These two innovations go hand in hand to solve the problems of increased demand for the aforementioned companies. Those innovations later helped organizations of all sizes due to the many advantages they brought to the table. We will explore these topics in more detail in the upcoming chapters.
Any application that runs on a distributed system requires computing power to execute its assigned tasks. Initially, all the applications ran on a physical computer (or server) that had an operating system with the relevant runtime components (for example, JDK) included. This approach worked well until people wanted to run different operating systems on the same computer (or server). That is when virtualization platforms came into the picture and users were able to run several different operating systems on the same computer, without mixing up the programs running on each operating system. This approach was called virtual machines, or VMs.
It allowed users to run different types of programs independently of each other on the same computer, similar to programs running on separate computers. Even though this approach provided a clear separation of programs and runtimes, it also consumed additional resources to run each guest operating system.
As a solution to this overuse of resources by the guest operating system and other complexities with VMs, container technology was introduced. A container is a standard unit of a software package that bundles all the required code and dependencies to run a particular application. Instead of running on top of a guest operating system, similar to VMs, containers run on the same host operating system of the computer (or server). This concept was popularized with the introduction of Docker Engine as an open source project in 2013. It leveraged the existing concepts in the Linux operating system, such as cgroups and namespaces. The major difference between container platforms such as Docker and VMs is the usage of the host operating system instead of the guest operating system. This concept is depicted in the following diagram:
Figure 1.4 – Containers versus virtual machines
The following table provides the key points of distinction between containers and VMs:
Table 1.1 – Containers versus virtual machines
So far, we've gone through the evolution of the design of distributed systems and their implementation and how that evolution paved the way to the main topic of this chapter, which is the microservice architecture. We'll try to define and understand the microservice architecture in detail in the next section.
When engineers decided to move away from large monolithic applications to SOA, they had several goals in mind to achieve the new model. Some of them are as follows:
- Loose coupling
- Independence (deployment, scaling, updating)
- Standard interfaces
- Discovery and reusability

Even though most of these goals were achieved with the technology available at the time, most SOA-based systems ended up as collections of large monolithic applications running on heavy servers or virtual machines. When modern technological advancements such as containers, domain-driven design, automation, and virtualized cloud infrastructure became popular, these SOA-based systems could not reap the benefits those advancements offered.
For this reason and a few others, such as scalability, manageability, and robustness, engineers explored an improved architecture that could fulfill these modern enterprise requirements. Instead of going for a brand-new solution with a lot of breaking changes, enterprise architects identified the microservice architecture as an evolution of the distributed system design. Even though there is no one particular definition that is universally accepted, the core concept of the microservice architecture can be characterized like so:
"The term microservice architecture refers to a distributed computing architecture that is built using a set of small, autonomous services (microservices) that act as a cohesive unit to solve a business problem or problems."
The preceding definition explores a software architecture that is used to build applications. Let's expand this definition into two main sections.
Instead of doing many things, microservices focus on doing one thing and one thing well. That does not necessarily mean that a microservice should be written in fewer than 100 lines of code or anything like that. The number of code lines depends on many factors, such as the programming language of choice, the usage of libraries, and the complexity of the task at hand. But one thing is clear in this definition: the scope of the microservice is limited to one particular task. Examples include patient registration in a healthcare system or account creation in a banking system. Instead of designing the entire system as a large monolith, such as a healthcare application or banking application, we could design these applications in a microservice architecture by dividing these separate functional tasks into independent microservices. We will explore how to break a monolithic application down into a microservice architecture later in this chapter.
This is the feature of the microservice architecture that addresses most of the challenges faced by service-oriented architecture. Instead of tightly coupled services, with microservices, you need fully autonomous services that can do the following independently of each other:

- Develop
- Deploy
- Scale
- Manage
- Monitor

This independence allows microservices to adapt to modern technological advancements such as agile development, container-based deployments, and automation, and to fulfill business requirements more frequently than ever before.
The second part of this feature is the cohesiveness of the overall platform, where each microservice interacts with other microservices and with external clients with a well-defined standardized interface, such as an application programming interface (API), that hides the internal implementation detail.
In this section, we will discuss the different characteristics of a typical microservice architecture. Given that the microservice architecture is still evolving, don't be surprised if the characteristics you see here differ slightly from what you have seen elsewhere. That is how evolving architectures work. However, the underlying concepts and reasons are the same in most cases:
- Componentization via services
- Each service has a scope identified based on business functions
- Decentralized governance
- Decentralized data management
- Smart endpoints and dumb pipes
- Infrastructure automation
- Container-based deployments
- Designing for failure
- Agile development approach
- Evolving architecture

Let's discuss these characteristics in detail.
Breaking down large monolithic applications into separate services was one of the successful features of SOA, and it allowed engineers to build modular software systems with flexibility. The same concept is carried forward by the microservice architecture with much more focus. Instead of stopping at the modularity of the application, it urges for the autonomy of these services by introducing concepts such as domain-driven design, decentralized governance, and data management, all of which we will discuss in the next section.
This allows the application to be more robust. Here, the failure of one component (service) won't necessarily shut down the entire application since these components are deployed and managed independently. At the same time, adding new features to one particular component is much easier since it does not require deploying the entire application and testing every bit of its functionality.
The modular architecture is not something that was introduced with microservices. Instead, it has been the way engineers build complex and distributed systems. The challenge is with scoping or sizing these components. There are no definitions or restrictions regarding the component's sizes in the architectures that came before microservices. But microservices specifically focus on the scope and the size of each service.
The amount of work that is done by one microservice should be small enough so that it can be built, deployed, and managed independently. This is an area where most people struggle while adopting microservices since they think it is something that they should do right the first time. But the reality is that the more you work on the project, the better you become at defining the scope for a given microservice.
Instead of having one team governing and defining the language, tools, and libraries to use, microservices allow individual teams to select the best tool that is suitable for their scope or use case. This is often called the polyglot model of programming, where different microservices teams use different programming languages, databases, and libraries for their respective service. It does not stop there, though – it even allows each team to have its own software development life cycles and release models so that they don't have to wait until someone outside the team gives them approval. This does not necessarily mean that these teams do not engage with the experienced architects and tech leads in the organization. They will become a part of the team during the relevant sprints and work with these teams as a team member rather than an external stakeholder.
Sometimes, people tend to think that the microservice style is only suitable for stateless applications and they avoid the question of data management. But in the real world, most applications need to store data in persistent storage, and managing this data is a key aspect of application design. In monolithic applications, everything is stored in a single database in most cases, and sharing data across components happens through in-memory function calls or by sharing the same database or tables. This approach is not suitable for the microservice architecture and it poses many challenges, such as the following:
- A failure in one component handling data can cause the entire application to fail.
- Identifying the root cause of the failure would be hard.

The microservice architecture suggests having a database specific to each microservice so that each service keeps its own state. In situations where microservices need to share data, create a separate microservice for common data access and use that service to access the common database. This approach solves the two issues mentioned previously.
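The common-data-access pattern described above can be sketched as an interface that other services program against, so none of them touches the shared database directly. `PatientStore` and `inMemoryStore` are hypothetical names invented for illustration; the in-memory map stands in for the data-access service's private database:

```go
package main

import "fmt"

// PatientStore is the contract a hypothetical common-data-access
// microservice would expose; other services call it instead of
// reaching into the shared database themselves.
type PatientStore interface {
	Get(id int) (string, error)
}

// inMemoryStore stands in for the service's own private database.
type inMemoryStore struct {
	rows map[int]string
}

func (s inMemoryStore) Get(id int) (string, error) {
	name, ok := s.rows[id]
	if !ok {
		return "", fmt.Errorf("patient %d not found", id)
	}
	return name, nil
}

func main() {
	var store PatientStore = inMemoryStore{rows: map[int]string{7: "Alice"}}
	name, _ := store.Get(7)
	fmt.Println(name)
	// prints: Alice
}
```

Because consumers depend only on the interface, the data-access service can change its storage technology without breaking any of the services that call it.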
One of the key differences between the monolithic architecture and the microservice architecture is the way each component (or service) communicates with the other. In a monolith, the communication happens through in-memory function calls and developers can implement any sort of interconnections between these components within the program, without worrying about failures and complexity. But in a microservice architecture, this communication happens over the network, and engineers do not have the same freedom as in monolithic design.
Given the nature of the microservice approach, the number of services can grow rapidly from tens to hundreds to thousands in no time. This means that going with a mesh topology for inter-service communication can make the overall architecture super complex. Hence, it suggests using the concept of smart endpoints and dumb pipes, where a centralized message broker is used to communicate across microservices. Each microservice would be smart enough to communicate with any other service related to it by only contacting the central message broker; it does not need to be aware of the existence of other services. This decouples the sender and the receiver and simplifies the architecture significantly. We will discuss this topic in greater detail later in this book.
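To illustrate the decoupling a central broker provides, here is a deliberately simplified in-process publish/subscribe sketch. It is not NATS (which runs as a separate server and adds queue groups, clustering, and much more); the `Broker` type and the subject name are invented for illustration only:

```go
package main

import "fmt"

// Broker is a deliberately "dumb pipe": it only routes messages by
// subject and knows nothing about the services on either end.
type Broker struct {
	subs map[string][]func(string)
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]func(string))}
}

// Subscribe registers a handler for a subject.
func (b *Broker) Subscribe(subject string, handler func(string)) {
	b.subs[subject] = append(b.subs[subject], handler)
}

// Publish delivers a message to every subscriber of the subject.
// The sender never learns who (if anyone) received it.
func (b *Broker) Publish(subject, msg string) {
	for _, h := range b.subs[subject] {
		h(msg)
	}
}

func main() {
	b := NewBroker()
	// A "smart endpoint": a billing service reacts to patient events
	// without the registration service knowing it exists.
	b.Subscribe("patient.registered", func(msg string) {
		fmt.Println("billing service saw:", msg)
	})
	b.Publish("patient.registered", "patient-42")
	// prints: billing service saw: patient-42
}
```

The publisher and subscriber share only the subject name, which is exactly the decoupling that avoids a mesh of point-to-point connections as the service count grows.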
The autonomy provided by the architecture becomes a reality by automating the infrastructure that hosts the microservices. This allows the teams to rapidly innovate and release products to production with a minimum impact on the application. With the increased popularity of Infrastructure as a Service (IaaS) providers, deploying services has become much easier than ever before. Code development, review testing, and deployment can be automated through the continuous integration/continuous deployment (CI/CD) pipelines with the tools available today.
The adoption of containers as a mechanism to package software as independently deployable units provided the impetus that was needed for microservices. The improved resource utilization provided by the containers against the virtual machines made the concept of decomposing a monolithic application into multiple services a reality. This allowed these services to run in the same infrastructure while providing the advantages offered by the microservices.
The microservice architecture created many small services that required a way to run without extra computing overhead. Virtual machines were not efficient enough for building microservice-based platforms at that scale. Containers provided the required level of process isolation and resource utilization for microservices. The microservice architecture would not have been so successful if containers had not existed.
Once the all-in-one monolithic application had been decomposed into separate microservices deployed in separate runtimes, the major new challenge was communication over the network and an inevitable property of distributed systems: components fail. With the levels of autonomy we see in microservices teams, there is an even higher chance of failure.
The microservice architecture does not try to avoid this. Instead, it accepts this inevitable fact and designs the architecture for failure. This allows the application to be more robust and ready for failure rather than crashing when something goes wrong. Each microservice should handle failures within itself and common failure handling concepts such as retry, suspension, and circuit breaking need to be implemented at each microservice level.
The microservice architecture demands changes in not only the software architecture but also the organizational culture. The traditional software development models (such as the waterfall method) do not go well with the microservice style of development. This is because the microservice architecture demands small teams and frequent releases of software rather than spending months on software delivery with many different layers and bureaucracy. Instead, the microservice architecture works with a more product-focused approach, where each team consists of people with multiple disciplines that are required for a given phase of the product release.