In the last few years, microservices have achieved rock-star status and are currently one of the most tangible solutions in enterprises for building quick, effective, and scalable applications. The rise of TypeScript and the long evolution from ES5 to ES6 have seen many big companies move to the ES6 stack. If you want to learn how to leverage the power of microservices to build robust architectures using reactive programming and TypeScript in Node.js, then this book is for you.
TypeScript Microservices is an end-to-end guide that shows you how to implement microservices from scratch, right from starting the project to hardening and securing your services. We begin with a brief introduction to microservices before learning how to break your monolithic applications into microservices. From there, you will learn reactive programming patterns and how to build APIs for microservices. The next set of topics takes you through the microservice architecture with TypeScript and communication between services. Further, you will learn to test and deploy your TypeScript microservices using the latest tools and implement continuous integration. Finally, you will learn to secure and harden your microservices.
By the end of the book, you will be able to build production-ready, scalable, and maintainable microservices using Node.js and TypeScript.
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Merint Mathew
Acquisition Editor: Denim Pinto
Content Development Editor: Priyanka Sawant
Technical Editor: Ruvika Rao
Copy Editor: Safis Editing
Project Coordinator: Vaidehi Sawant
Proofreader: Safis Editing
Indexer: Aishwarya Gangawane
Graphics: Jason Monteiro
Production Coordinator: Arvindkumar Gupta
First published: May 2018
Production reference: 1290518
Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.
ISBN 978-1-78883-075-1
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Parth Ghiya loves technologies and enjoys learning new things and facing the unknown. He has shown his expertise in multiple technologies, including mobile, web, and enterprise. He has been leading projects in all domains with regard to security, high-availability, and CI/CD. He has provided real-time data analysis, time series, and forecasting solutions too.
In his leisure time, he is an avid traveler, reader, and an immense foodie. He believes in technological independence and is a mentor, trainer, and contributor.
Dhaval Marthak has a wealth of experience in AngularJS, Node.js, other frontend technologies such as React, server-side languages such as C#, and hybrid applications with Backbase. Currently, he is associated with an organization based in Ahmedabad as a senior frontend developer, where he takes care of projects with regard to requirement analysis, architecture design, high availability, security design, deployment, and build processes to help customers.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
TypeScript Microservices
Packt Upsell
Why subscribe?
PacktPub.com
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Debunking Microservices
Debunking microservices
Rise of microservices
Wide selection of languages as per demand
Easy handling of ownership
Frequent deployments
Self-sustaining development units
What are microservices?
Principles and characteristics
No monolithic modules
Dumb communication pipes
Decentralization or self-governance
Service contracts and statelessness
Lightweight
Polyglot
Good parts of microservices
Self-dependent teams
Graceful degradation of services
Supports polyglot architecture and DevOps
Event-driven architecture
Bad and challenging parts of microservices
Organization and orchestration
Platform
Testing
Service discovery
Microservice example
Key considerations while adopting microservices
Service degradation
Proper change governance
Health checks, load balancing, and efficient gateway routing
Self-curing
Cache for failover
Retry until
Microservice FAQs
Twelve-factor application of microservices
Microservices in the current world
Netflix
Walmart
Spotify
Zalando
Microservice design aspects
Communication between microservices
Remote Procedure Invocation (RPI)
Messaging and message bus
Protobufs
Service discovery
Service registry for service-service communication
Server-side discovery
Client-side discovery
Registration patterns – self-registration
Data management
Database per service
Sharing concerns
Externalized configuration
Observability
Log aggregation
Distributed tracing
Microservice design patterns
Asynchronous messaging microservice design pattern
Backend for frontends
Gateway aggregation and offloading
Proxy routing and throttling
Ambassador and sidecar pattern
Anti-corruption microservice design pattern
Bulkhead design pattern
Circuit breaker
Strangler pattern
Summary
Gearing up for the Journey
Setting up primary environment
Visual Studio Code (VS Code)
PM2
NGINX
Docker
Primer to TypeScript
Understanding tsconfig.json
compilerOptions
include and exclude
extends
Understanding types
Installing types from DefinitelyTyped
Writing your own types
Using the dts-gen tool
Writing your own *.d.ts file
Debugging
Primer to Node.js
Event Loop
Understanding Event Loop
Node.js clusters and multithreading
Async/await
Retrying failed requests
Multiple requests in parallel
Streams
Writing your first Hello World microservice
Summary
Exploring Reactive Programming
Introduction to reactive programming
Why should I consider adopting reactive programming?
Reactive Manifesto
Responsive systems
Resilient to errors
Elastic scalable
Message-driven
Major building blocks and concerns
Observable streams
Subscription
emit and map
Operator
Backpressure strategy
Currying functions
When to react and when not to react (orchestrate)
Orchestration
Benefits
Disadvantages
Reactive approach
Benefits
Disadvantages
React outside, orchestrate inside
Reactive coordinator to drive the flow
Synopsis
When a pure reactive approach is a perfect fit
When pure orchestration is a perfect fit
When react outside, orchestrate inside is a perfect fit
When introducing a reactive coordinator is the perfect fit
Being reactive in Node.js
Rx.js
Bacon.js
Highland.js
Key takeaways
Summary
Beginning Your Microservice Journey
Overview of shopping cart microservices 
Business process overview 
Functional view 
Deployment view 
Architecture design of our system 
Different microservices 
Cache microservice
Service registry and discovery
Registrator
Logger 
Gateway
Design aspects involved
Microservice efficiency model 
Core functionalities 
Supporting efficiencies
Infrastructure role
Governance
Implementation plan for shopping cart microservices 
What to do when the scope is not clear
Schema design and database selection 
How to segregate data between microservices
Postulate 1 – data ownership should be regulated via business capabilities 
Postulate 2 – replicate the database for speed and robustness 
How to choose a data store for your microservice
Design of product microservices
Microservice predevelopment aspects
HTTP code 
1xx – informational 
2xx – success 
3xx – redirections 
4xx – client errors 
5xx – server errors 
Why HTTP code is vital in microservices?
Auditing via logs 
PM2 process manager
Tracing requests
Developing some microservices for a shopping cart 
Itinerary 
Development setup and prerequisite modules
Repository pattern
Configuring application properties 
Custom health module 
Dependency injection and inversion of control 
Inversify 
Typedi 
TypeORM 
Application directory configurations 
src/data-layer 
src/business-layer
src/service-layer
src/middleware
Configuration files
Processing data
Ready to serve (package.json and Docker) 
package.json 
Docker 
Synopsis
Microservice design best practices
Setting up proper microservice scope 
Self-governing functions 
Polyglot architecture 
Size of independent deployable component 
Distributing and scaling services whenever required
Being Agile 
Single business capability handler
Adapting to shifting needs
Handling dependencies and coupling
Deciding the number of endpoints in a microservice
Communication styles between microservices
Specifying and testing the microservices contract 
Number of microservices in a container 
Data sources and rule engine among microservices 
Summary
Understanding API Gateway
Debunking API Gateway
Concerns API Gateway handles
Security
Dumb gateways
Transformation and orchestration
Monitoring, alerting, and high availability
Caching and error handling
Service registry and discovery
Circuit breakers
Versioning and dependency resolution
API Gateway design patterns and aspects
Circuit breakers and its role
Need for gateway in our shopping cart microservices
Handle performance and scalability
Reactive programming to up the odds
Invoking services
Discovering services
Handling partial service failures
Design considerations
Available API Gateways options
HTTP proxy and Express Gateway
Zuul and Eureka
API Gateway versus reverse proxy NGINX
RabbitMQ
Designing our gateway for shopping cart microservices
What are we going to use?
Summary
Service Registry and Discovery
Introduction to the service registry
What, why, and how of service registry and discovery
The why of service registry and discovery
How service registry and discovery?
Service registration
Service resolution
The what of service registry and discovery
Maintaining service registry
Timely health checks
Service discovery patterns
Client-side discovery pattern
Server-side discovery pattern
Service registry patterns
Self-registration pattern
Third-party registration pattern
Service registry and discovery options
Eureka
 Setting up the Eureka server
Registering with Eureka server
Discovering with Eureka server
Key points for Eureka
Consul
Setting up the Consul server
Talking with Consul server
Registering a service instance
Sending heartbeats and doing a health check
Deregistering an application
Subscribing to updates
Key points for Consul
Registrator
Key points for Registrator
How to choose service registry and discovery
If you select Consul
If you select Eureka
Summary
Service State and Interservice Communication
Core concepts – state, communication, and dependencies 
Microservice state 
Interservice communication
Commands
Queries 
Events 
Exchanging data formats 
Text-based message formats
Binary message formats
Dependencies
Communication styles
NextGen communication styles 
HTTP/2 
gRPC with Apache Thrift 
Versioning microservices and failure handling
Versioning 101 
When a developer's nightmare comes true 
Client resiliency patterns
Bulkhead and retry pattern 
Client-side load balancing or queue-based load leveling pattern 
Circuit breaker pattern 
The fallback and compensating transaction pattern
Case Study – The Netflix Stack
Part A – Zuul and Polyglot Environment
Part B – Zuul, Load balancing and failure resiliency
Message queues and brokers
Introduction to pub/sub pattern
Sharing dependencies
The problem and solution 
Getting started with bit 
The problem of shared data
Cache
Blessing and curse of caching
Introduction to Redis
Setting up our distributed caching using redis
Summary
Testing, Debugging, and Documenting
Testing
What and how to test
The testing pyramid – what to test?
System tests
Service tests
Contract tests
Unit tests
Hands-on testing
Our libraries and test tool types
Chai
Mocha
Ava
Sinon
Istanbul
Contract tests using Pact.js
What is consumer-driven contract testing?
Introduction to Pact.js
Bonus (containerizing pact broker)
Revisiting testing key points
Debugging
Building a proxy in between to debug our microservices
Profiling process
Dumping heap
CPU profiling
Live Debugging/Remote Debugging
Key points for debugging
Documenting
Need of Documentation
Swagger 101
Swagger Editor and Descriptor
Key points for Swagger  and Descriptor
Swagger Editor
Swagger Codegen
Swagger UI
Swagger Inspector
Possible strategies to use Swagger
Top-down or design-first Approach
Bottom-up approach
Generating a project from a Swagger definition
Summary
Deployment, Logging, and Monitoring
Deployment
Deciding release plan
Deployment options
DevOps 101
Containers
Containers versus Virtual Machine (VMs)
Docker and the world of containers 
Docker components
Docker concepts
Docker command reference
Setting up Docker with NGINX, Node.js, and MongoDB
WebHooks in our build pipeline
Serverless architecture 
Logging 
Logging best practices 
Centralizing and externalizing log storage 
Structured data in logs 
Identifiers via correlational IDs 
Log levels and logging mechanisms 
Searchable logs 
Centralized custom logging solution implementation 
Setting up our environment 
Distributed tracing in Node.js
Monitoring 
Monitoring 101
Monitoring challenges
When to alert and when not to alert?
Monitoring tools 
PM2 and keymetrics 
Keymetrics to monitor application exceptions and runtime problems 
Adding custom metrics
Simple metrics
Counter metric 
Meter
Prometheus and Grafana
Production-ready microservice criteria
Summary
Hardening Your Application
Questions you should be asking while applying security
Core application/core microservice
Middleware
API Gateway
Team and operational activities
Security best practices for individual services/applications
Checking for known security vulnerabilities
Auditjs  
Snyk.io 
Preventing brute force attacks or flooding by adding rate limiters
Protecting against evil regular expressions
Blocking cross-site request forgeries
Tightening session cookies and effective session management
Adding helmet to configure security headers
Avoiding parameter pollution
Securing transmission
Preventing command injection/SQL injection
TSLint/static code analyzers
Security best practices for containers
Securing container builds and standardizing deployments
Securing containers at runtime
Security checklist
Service necessities
Service interactions
Development phase
Deployment
Operations
Scalability
AWS Load Balancer
Benefits of using a load balancer
Fault tolerance
High availability
Flexibility
Security
Health check parameters
Unhealthy threshold
Healthy threshold
Timeout
Health check protocol
Health check port
Interval
Configuring a load balancer
Autoscaling – practical hands on with AWS
Creating the launch configuration
Creating an autoscaling group and configuring it with autoscaling policies
Creating an application load balancer and adding a target group
Time to test
Scaling with Kubernetes
What problem does Kubernetes solve?
Kubernetes concepts
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
In the last few years, microservices have achieved rock-star status and are right now one of the most tangible solutions in enterprises for building quick, effective, and scalable applications. Microservices entail an architectural style and pattern in which a huge system is distributed into smaller services that are self-sufficient, autonomous, self-contained, and individually deployable.
The rise of TypeScript and the long evolution from ES5 to ES6 have seen many big companies move to the ES6 stack. TypeScript, due to its huge advantages such as class and module support, static type checking, and syntactic similarity to JavaScript, has become the de facto solution for many enterprises. Node.js, due to its asynchronous, non-blocking, lightweight nature, has been widely adopted by many companies. Node.js written in TypeScript opens the door to various opportunities.
However, microservices have their own difficulties to be dealt with, such as monitoring, scaling, distributing, and service discovery. The major challenge is deploying at scale, as we don't want to end up with system failures. Adopting microservices without actually knowing or addressing these issues would lead to big problems. The most important part of this book is its pragmatic, technology-independent approach to dealing with microservices so as to leverage the best of everything.
In three parts, this book explains how these services work and the process of building any application the microservices way. You will encounter a design-based approach to architecture and guidance for implementing various microservices elements. You will get a set of recipes and practices for meeting the practical, organizational, and cultural challenges to adoption. The goal of this book is to acquaint users with a practical, step-by-step approach to making reactive microservices at scale. This book takes readers into a deep dive with Node.js, TypeScript, Docker, Netflix OSS, and more. Readers of this book will understand how Node.js and TypeScript can be used to deploy services that run independently. Users will understand the evolving trend of serverless computing and the different Node.js capabilities, realizing the use of Docker for containerization. The user will learn how to autoscale the system using Kubernetes and AWS.
I am sure readers will enjoy each and every section of the book. Also, I believe this book adds value not just for Node.js developers but also for others who want to play around with microservices and successfully implement them in their businesses. Throughout this book, I have taken a practical approach by providing a number of examples, including a case study from the e-commerce domain. By the end of the book, you will have learned how to implement microservice architectures using Node.js, the TypeScript framework, and other utilities. These are battle-tested, robust tools for developing any microservice and are written to the latest specifications of Node.js.
This book is for JavaScript developers seeking to utilize their Node.js and TypeScript skills to build microservices and move away from the monolithic style of architecture. Prior knowledge of TypeScript and Node.js is assumed. This book will help answer some of the important questions concerning what problems a microservice solves and how an organization has to be structured to adopt microservices.
Chapter 1, Debunking Microservices, gives you an introduction to the world of microservices. It begins with the transition from monolithic to microservice architectures. This chapter gets you acquainted with the evolution of the microservices world, answers frequently asked questions about microservices, and familiarizes you with various microservice design aspects, the twelve-factor application for microservices, and various design patterns for microservice implementations along with their pros, cons, and when to and when not to use them.
Chapter 2, Gearing up for the Journey, introduces necessary concepts in Node.js and TypeScript. It begins with preparing our development environment. It then talks about basic TypeScript essentials such as types, tsconfig.json, writing your custom types for any node module, and the Definitely Typed repository. Then, we move to Node.js, where we write our application in TypeScript. We will learn about some essentials such as Node clusters, Event Loops, multithreading, and async/await. Then, we move to writing our first hello world TypeScript microservice.
Chapter 3, Exploring Reactive Programming, gets into the world of reactive programming. It explores the concepts of reactive programming along with its approaches and advantages. It explains how reactive programming is different from traditional approaches and then moves to practical examples with Highland.js, Bacon.js, and RxJS. It concludes by comparing all the three libraries along with their pros and cons.
Chapter 4, Beginning Your Microservices Journey, begins our microservices case study—the shopping cart microservices. It begins with the functional requirements of the system, followed by overall business capabilities. We start the chapter with architectural aspects, design aspects, and overall components in the ecosystem, before moving to the data layers of microservices, wherein we will have an in-depth discussion on the types of data and how the database layer should be. Then, we develop microservices with the approach of separation of concerns and learn about some microservices best practices.
Chapter 5, Understanding API Gateway, explores the designs involved in an API Gateway. It tells why an API Gateway is required and what its functions are. We will explore various API Gateway designs and the various options available in them. We will look at the circuit breaker and why it plays an important role as part of a client resiliency pattern.
Chapter 6, Service Registry and Discovery, talks about introducing a service registry in your microservices ecosystem. The number of services and/or location of services can remain in flux based on load and traffic. Having a static location disturbs the principles of microservices. This chapter deals with the solution and talks about service discovery and registry patterns in depth. The chapter further explains various options available and discusses Consul and Eureka in detail.
Chapter 7, Service State and Interservice Communication, focuses on interservice communication. Microservices need to collaborate with each other in order to achieve a business capability. The chapter explores various modes of communication. It then talks about next-gen communication styles, including RPC and the HTTP/2 protocol. We learn about service state and where the state can be persisted. We go through various kinds of database systems along with their use cases. The chapter explores the world of caching, sharing code among dependencies, and versioning strategies, and details how to apply client resiliency patterns to handle failures.
Chapter 8, Testing, Debugging, and Documenting, outlines life after development. We learn how to write test cases and plunge into the world of BDD and TDD through some famous toolsets. We see contract testing for microservices, a way to ensure that there are no breaking changes. We then look at debugging, how to profile our service, and all the options available to us. We move on to documenting, understanding the need for documentation, and all the toolsets involved around Swagger, the tool that we will use.
Chapter 9, Deployment, Logging, and Monitoring, covers deployment and the various options involved in it. We see a build pipeline, and we get acquainted with continuous integration and continuous delivery. We understand Docker in detail and dockerize our application by adding NGINX in front of our microservice. We move on to logging and understand the need for a customized, centralized logging solution. We finally move on to monitoring and understanding the various options available, such as Keymetrics, Prometheus, and Grafana.
Chapter 10, Hardening Your Application, looks at hardening the application, addressing security and scalability. It talks about security best practices that should be in place to avoid any mishaps. It gives a detailed security checklist that can be used at the time of deployment. We will then learn about scalability with AWS and Kubernetes. We will see how to solve scaling problems with AWS by automatically adding or removing EC2 instances. The chapter concludes with Kubernetes and an example of its use.
Appendix, What's New in Node.js 10.x and NPM 6.x?, covers the new updates in Node.js v10.x and npm v6.x. The appendix is not present in the book, but it is available for download at the following link: https://www.packtpub.com/sites/default/files/downloads/Whats_New_in_Node.js_10.x_and_NPM_6.x.pdf
My aim for this book was to keep its topics relevant and useful and, most important of all, focused on practical examples for career and business-based use cases. I hope you enjoy reading the book as much as I loved writing it, which surely means that you will have a blast!
This book requires a Unix machine with the prerequisites installed. A basic understanding of Node.js and TypeScript is very much needed before proceeding with the book. Most of the code will work on Windows too, but with different installation steps; hence, Linux is recommended (an Oracle VM VirtualBox with Ubuntu is also a perfect fit). To get the most out of this book, try to work through the examples and apply the concepts to examples of your own as soon as possible. Most of the programs or case studies in this book utilize open source software that can be installed or set up with ease. However, a few instances do require paid setups.
In Chapter 9, Deployment, Logging, and Monitoring, we need an account on logz.io so as to have a complete ELK setup ready rather than managing it individually. A 30-day trial version is available, and some plans can be extended. A Keymetrics account is needed to uncover its full potential.
In Chapter 10, Hardening Your Application, you need to procure an AWS account for scaling and deployment purposes. You also need to procure Google Cloud Platform for independently testing out Kubernetes rather than going through the manual setup process. The free tier is available for both accounts, but you need to sign up with a credit/debit card.
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/TypeScript-Microservices. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/TypeScriptMicroservices_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "An example of all the operators in the preceding table can be found at rx_combinations.ts, rx_error_handing.ts, and rx_filtering.ts."
A block of code is set as follows:
let asyncReq1 = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
console.log(asyncReq1.data);
let asyncReq2 = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
console.log(asyncReq2.data);
Any command-line input or output is written as follows:
sudo dpkg -i <file>.deb
sudo apt-get install -f # Install dependencies
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Open up your instance and then go to the Load Balancing | Load balancers tab."
Feedback from our readers is always welcome.
General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
Whether you are a tech lead, a developer, or a tech savant eager to adapt to modern web standards, the preceding line represents your current life situation in a nutshell. Today's mantras for a successful business (fail quickly, fix and rise soon, quicker delivery, frequent changes, adaptation to changing technologies, and fault-tolerant systems) are some of the general daily requirements. For this very reason, in recent times the technology world has seen a quick change in architectural designs, which has led industry leaders (such as Netflix, Twitter, Amazon, and so on) to move away from monolithic applications and adopt microservices. In this chapter, we will debunk microservices, study their anatomy, and learn their concepts, characteristics, and advantages. We will learn about microservice design aspects and see some microservice design patterns.
In this chapter, we will talk about the following topics:
Debunking microservices
Key considerations for microservices
Microservice FAQs
How microservices satisfy the twelve factors of an application
Microservices in the current world
Microservice design aspects
Microservice design patterns
The core idea behind microservice development is that if an application is broken down into smaller, independent units, with each unit performing its functionality well, then it becomes straightforward to build and maintain the application. The overall application then simply becomes the sum of its individual units. Let's begin by debunking microservices.
Today's world is evolving exponentially, and it demands an architecture that can address the following problems, the very problems that made us rethink traditional architectural patterns and gave rise to microservices.
There is a great need for technological independence. At any point in time, there is a shift in languages, and adoption rates change accordingly. Companies such as Walmart have left the Java stack and moved towards the MEAN stack. Today's modern applications are not limited to a web interface; they extend to mobile and smartwatch applications too. So, coding everything in one language is not a feasible option at all. We need an architecture or ecosystem where multiple languages can coexist and communicate with each other. For example, we may have REST APIs exposed in Go, Node.js, and Spring Boot, with a gateway as the single point of contact for the frontend.
Today's applications not only include a single web interface, but go beyond it into mobiles, smart watches, and virtual reality (VR). A separation of logic into individual modules helps to control everything, as each team owns a single unit. Also, multiple things should be able to run in parallel, thereby achieving faster delivery. Dependencies between teams should be reduced to zero. Being able to hunt down the right person to get an issue fixed and the system up and running quickly also demands a microservice architecture.
Applications need to constantly evolve to keep up with an evolving world. When Gmail started, it was just a simple mailing utility, and now it has evolved into much more than that. These frequent changes demand frequent deployments, done in such a way that the end user doesn't even know that a new version is being released. By dividing the application into smaller units, a team can handle frequent, tested deployments and get features into customers' hands quickly. There should also be graceful degradation, that is, fail fast and get it over with.
A tight dependency between different modules means a failure soon cascades to the entire application and brings it down. This calls for smaller, independent units built in such a way that if one unit is not operational, the entire application is not affected by it.
Now, let's understand microservices in depth: their characteristics, their advantages, and all the challenges involved in implementing a microservice architecture.
There is no universal definition of microservices. Simply stated, a microservice can be any operational block or unit that handles its single responsibility very efficiently.
Microservices are a modern style of building autonomous, self-sustaining, loosely coupled business capabilities that together sum up as an entire system. We will look into the principles and characteristics of microservices, the benefits that microservices provide, and the potential pitfalls to keep an eye out for.
There are a few principles and characteristics that define microservices. Any microservice pattern would be distinguished and explained further by these points.
A microservice is just another new project satisfying a single operational business requirement. A microservice is linked with business unit changes and thus it has to be loosely coupled. A microservice should be able to continuously serve changing business requirements irrespective of the other business units. For other services, it is just a matter of consumption; the mode of consumption should not change, even though implementations can change in the background.
Microservices promote basic, time-tested, asynchronous communication mechanisms among microservices. As per this principle, the business logic should stay inside the endpoint and not be amalgamated with the communication channel. The communication channel should be dumb and just carry messages in the agreed communication protocol. HTTP is a favorable communication protocol, but a more reactive approach, message queues, is prevalent these days. Apache Kafka and RabbitMQ are some of the prevalent providers of dumb communication pipes.
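As a minimal, hedged sketch of such a dumb pipe in TypeScript, the following publishes and consumes an order message through RabbitMQ using the amqplib client; the broker URL, queue name, and payload shape are illustrative assumptions rather than prescriptions from this book:

import * as amqp from 'amqplib';

// Assumed broker URL and queue name; adjust for your environment.
const BROKER_URL = 'amqp://localhost';
const QUEUE = 'orders';

async function publishOrder(order: { id: string; total: number }) {
  const connection = await amqp.connect(BROKER_URL);
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  // The pipe stays dumb: it only carries the serialized message.
  channel.sendToQueue(QUEUE, Buffer.from(JSON.stringify(order)), { persistent: true });
  await channel.close();
  await connection.close();
}

async function consumeOrders() {
  const connection = await amqp.connect(BROKER_URL);
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  // All business logic lives in the consuming endpoint, not in the channel.
  await channel.consume(QUEUE, (msg) => {
    if (msg) {
      const order = JSON.parse(msg.content.toString());
      console.log('Processing order', order.id);
      channel.ack(msg);
    }
  });
}

publishOrder({ id: 'ord-1', total: 42 }).then(consumeOrders).catch(console.error);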
While working with microservices, there is always a chance of failure, so each service needs a contingency plan that stops a failure from propagating to the entire system. Furthermore, each microservice may have its own data storage needs, and decentralization caters to exactly that. For example, in our shopping module we can store customer and transaction-related information in SQL databases, but since the product data is highly unstructured, we store it in NoSQL databases. Every service should be able to take a decision on what to do in failure scenarios.
Microservices should be well defined through service contracts. A service contract basically gives information about how to consume the service and what parameters need to be passed to it. Swagger and AutoRest are some of the widely adopted frameworks for creating service contracts. Another salient characteristic is that nothing is stored and no state is maintained by the microservice. If there is a need to persist something, then it will be persisted in a cache database or some other datastore.
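As a small, hypothetical sketch (the ProductRequest and ProductResponse names and the GET /products/:id route are assumptions made purely for illustration), a shared TypeScript interface can serve as a lightweight contract that both the provider and its consumers compile against:

// product-contract.ts, shared between the product service and its consumers.
export interface ProductRequest {
  productId: string;
}

export interface ProductResponse {
  productId: string;
  name: string;
  priceInCents: number;
  inStock: boolean;
}

// The provider promises: GET /products/:id returns a ProductResponse.
// Consumers code against these types rather than the provider's internals,
// so implementations can change as long as the contract is honored.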
Microservices, being lightweight, help to replicate a setup easily in any hosting environment. Containers are preferred over hypervisors. Lightweight application containers help us to maintain a lower footprint, thus binding a microservice to its context. Well-designed microservices should perform only one function and do that operation well. Containerized microservices are easily portable, thus enabling easy auto-scaling.
Everything is abstract and unknown behind the service API in microservice architecture. In the preceding example of shopping cart microservices, we can have our payment gateway entirely as a service deployed in the cloud (serverless architecture), while the rest of the services can be in Node.js. The internal implementations are completely hidden behind the microservices and the only concern to be taken care of is that the communication protocol should be the same throughout.
Now, let's see what advantages microservice architecture has to offer us.
Adopting microservices has several advantages and benefits. We will look at the benefits and higher business values we get while using microservices.
Microservice architecture enables us to scale any operation independently, have availability on demand, and introduce new services very quickly with zero to very few configuration changes. Technological dependence is also greatly reduced. For example, in our shopping microservice architecture, the inventory and shopping modules can be independently deployed and worked upon. The inventory service will just assume that the product exists and work accordingly. The inventory service can be coded in any language as long as the communication protocol between the inventory and product services is met.
Failure in any system is natural, and graceful degradation is a key advantage of microservices: failures are not cascaded to the entire system. Microservices are designed in such a way that they adhere to agreed service level agreements (SLAs); if the SLAs are not met, then the service is dropped. For example, coming back to our shopping microservice example, if our payment gateway is down, then further requests to that service are stopped until the service is up and running again.
Microservices make use of resources as per need and effectively enable a polyglot architecture. For example, in the shopping microservices, you can store product and customer data in a relational database, but any audit- or log-related data you can store in Elasticsearch or MongoDB. As each microservice operates in its bounded context, this can enable experimentation and innovation. The cost and impact of change will be very low. Microservices also enable DevOps to its full extent. Many DevOps tools and techniques are needed for a successful microservice architecture. Small microservices are easy to automate, easy to test, contain failures when needed, and are easy to scale. Docker is one of the major tools for containerizing microservices.
A well-architected microservice will support asynchronous, event-driven architecture. Event-driven architecture helps because any event can be traced: each action is the outcome of some event, and we can tap into any event to debug an issue. Microservices are designed around the publisher-subscriber pattern, meaning that adding another service that just subscribes to an event is a trivial task. For example, say you are using a shopping site and there is a service for adding to the cart. Now, we want to add new functionality so that whenever a product is added to a cart, the inventory is updated. An inventory service can then be prepared that simply subscribes to the add-to-cart event.
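A minimal, in-process sketch of this idea using Node's built-in EventEmitter follows; the event name cart:item-added and the handler are assumptions for illustration, and in a real deployment the event would travel over a message broker rather than in memory:

import { EventEmitter } from 'events';

// In a real system this bus would be a message broker (Kafka, RabbitMQ, and so on).
const eventBus = new EventEmitter();

// Inventory service: a new subscriber added without touching the cart service.
eventBus.on('cart:item-added', (event: { productId: string; quantity: number }) => {
  console.log(`Inventory: reserving ${event.quantity} unit(s) of ${event.productId}`);
});

// Cart service: publishes the event and moves on.
function addToCart(productId: string, quantity: number) {
  // ...persist the cart line item here...
  eventBus.emit('cart:item-added', { productId, quantity });
}

addToCart('sku-123', 2);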
Now, we will look into the complexities that microservice architecture introduces.
With great power come greater challenges. Let's look at the challenging parts of designing microservices.
This is one of the topmost challenges while adopting a microservice architecture. It is more of a non-functional challenge, wherein new organizational teams need to be formed and guided in adopting the microservice, agile, and scrum methodologies. They need to work in an environment where they can operate independently. Their developed outcome should be integrated into the system in such a way that it is loosely coupled and easily scaled.
Creating the perfect environment needs a proper team and a scalable, fail-safe infrastructure across all data centers. Choosing the right cloud provider (AWS, GCP, or Azure), adding automation, scalability, and high availability, and managing containers and instances of microservices are some of the key considerations. Furthermore, microservices demand other components such as an enterprise service bus, document databases, cache databases, and so on. Maintaining these components becomes an added task while dealing with microservices.
Since microservices are meant to be completely independent, testing services that have dependencies is extremely challenging. As a microservice gets introduced into the ecosystem, proper governance and testing are needed; otherwise, it will become a single point of failure for the system. Several levels of testing are needed for any microservice. It should start with whether the service is able to access cross-cutting concerns (cache, security, database, logs) or not. The functionality of the service should be tested, followed by testing of the protocol through which it is going to communicate. Next is collaborative testing of the microservice with other services. After that comes scalability testing, followed by fail-safe testing.
Locating services in a distributed environment can be a tedious task. Constant change and delivery are requirements of today's constantly evolving world. In such situations, service discovery can be challenging, as we want independent teams and minimal dependencies across teams. Service discovery should be such that a dynamic location can be provided for microservices. The location of a service may constantly change depending on deployments, auto-scaling, or failures. Service discovery should also keep a lookout for services that are down or performing poorly.
The following is a diagram of shopping microservices, which we are going to implement throughout this book. As we can see, each service is independently maintained and there are independent modules or smaller systems—Billing Module, Customer Module, Product Module, and Vendor Module. To coordinate with every module, we have API Gateway and Service Registry. Adding any additional service becomes very easy as the service registry will maintain all the dynamic entries and will be updated accordingly:
A microservice architecture introduces well-defined boundaries, which makes it possible to isolate failures within those boundaries. But, like any other distributed system, there is still a chance of failure at the application level. To minimize the impact, we need to design fault-tolerant microservices that react in a predefined way to certain types of failure. When adopting a microservice architecture, we add one more network layer to communicate over, rather than in-memory method calls, which introduces extra latency and one more layer to manage. Given next are a few considerations that, if handled with care while designing microservices for failure, will benefit the system in the long run.
Microservice architecture allows you to isolate failures and get graceful degradation, as failures are contained within the boundaries of a service and are not cascaded. For example, on a social networking site, the messaging service may go down, but that won't stop the end users from using the social network. They can still browse posts, share statuses, check in to locations, and so on. Services should be made such that they adhere to certain SLAs. If a microservice stops meeting its SLA, then that service should be restored back to a healthy state. Netflix's Hystrix is based on the same principle.
Introducing change without any governance can be a huge problem. In a distributed system, services depend on each other. So when you introduce a new change, the utmost consideration should be given to it; if any side effects or unwanted effects are introduced, their impact should be minimal. Various change management strategies and automatic rollout options should be available. Also, proper governance should be in place for code management. Development should be done via TDD or BDD, and only if the agreed coverage percentage is met should the change be rolled out. Releases should be done gradually. One useful strategy is the blue-green or red-black deployment strategy, wherein you run two production environments. You roll out the change in only one environment and point the load balancer to the newer version only after your change is verified. This is similar to maintaining a staging environment.
Depending on business requirements, a microservice instance can start, restart, stop on some failure, run low on memory, and auto-scale, which may make it temporarily or permanently unavailable. Therefore, the architecture and framework should be designed accordingly. For example, a Node.js server, being single-threaded, stops immediately in the case of failure, but using process managers such as PM2 or forever keeps it running. A gateway should be introduced that will be the only point of contact for the microservice consumer. The gateway can be a load balancer that skips unhealthy microservice instances. The load balancer should be able to collect health metrics and route traffic accordingly; it should smartly analyze the traffic on any particular microservice and trigger auto-scaling if needed.
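A minimal sketch of the health endpoint such a load balancer could poll is shown next, written with Express in TypeScript; the /health route, the port, and the reported fields are assumptions for illustration:

import express from 'express';

const app = express();

// Lightweight health check for the gateway or load balancer to poll.
app.get('/health', (_req, res) => {
  res.status(200).json({
    status: 'UP',
    uptimeSeconds: process.uptime(),
    timestamp: new Date().toISOString(),
  });
});

app.listen(3000, () => console.log('Service listening on port 3000'));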
A self-curing design can help the system to recover from disasters. A microservice implementation should be able to recover lost services and functionality automatically. Tools such as Docker restart services whenever they fail. Netflix provides a wide range of tools as an orchestration layer to achieve self-healing. The Eureka service registry and the Hystrix circuit breaker are commonly used. Circuit breakers make your service calls more resilient. They track each microservice endpoint's status. Whenever a timeout is encountered, Hystrix breaks the connection, triggers the need to cure that microservice, and reverts to some fail-safe strategy. Kubernetes is another option. If a pod or any container inside a pod goes down, Kubernetes brings the system back up and keeps the replica set intact.
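The following is a deliberately simplified circuit breaker sketch in TypeScript, not Hystrix itself; the failure threshold of 3, the 10-second reset window, and the inventory-service URL are assumed values chosen purely for illustration:

import axios from 'axios';

// A toy circuit breaker: after `threshold` consecutive failures it opens
// and fails fast, then allows another attempt once `resetMs` has elapsed.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 3, private resetMs = 10000) {}

  async call<T>(action: () => Promise<T>, fallback: () => T): Promise<T> {
    const isOpen = this.failures >= this.threshold &&
      Date.now() - this.openedAt < this.resetMs;
    if (isOpen) {
      return fallback(); // fail fast while the breaker is open
    }
    try {
      const result = await action();
      this.failures = 0; // a success closes the breaker again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) {
        this.openedAt = Date.now();
      }
      return fallback();
    }
  }
}

// Usage: wrap a flaky downstream call and serve a fallback while the breaker is open.
const breaker = new CircuitBreaker();
breaker.call(
  () => axios.get('http://inventory-service/stock/sku-123').then(r => r.data),
  () => ({ sku: 'sku-123', stock: 'unknown' })
).then(console.log);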
Failover caching helps to provide necessary data whenever there are temporary failures or glitches. The cache layer should be designed so that it can smartly decide how long the cache can be used in a normal situation or during a failover situation. Setting standard HTTP cache response headers can be used for this. The max-age directive specifies the amount of time a resource will be considered fresh. The stale-if-error directive determines how long the resource may be served from the cache when the origin errors. You can also use tools such as Memcached, Redis, and so on.
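A small, illustrative Express handler showing those directives follows; the one-minute freshness window and ten-minute stale-if-error window are assumed values, not recommendations from this book:

import express from 'express';

const app = express();

app.get('/products/:id', (req, res) => {
  // Fresh for 60 seconds; if the origin later errors, intermediaries may keep
  // serving this cached copy for up to another 600 seconds.
  res.set('Cache-Control', 'max-age=60, stale-if-error=600');
  res.json({ id: req.params.id, name: 'Sample product' });
});

app.listen(3000);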
Due to its self-healing capabilities, a microservice usually gets up and running in no time. The architecture should therefore have retry-until-success logic, as we can expect that the service will recover or that the load balancer will redirect the request to another healthy instance. However, frequent retries can also have a huge impact on the system, so a general idea is to increase the waiting time between retries after each failure. Microservices should also be able to handle idempotency issues; say you retry purchasing an order, then there shouldn't be a double purchase charged to the customer. A small retry helper along these lines is sketched next.
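This is a minimal sketch assuming exponential backoff that starts at 500 ms and caps out at five attempts; the numbers and the wrapped call are illustrative only:

// Retry an async operation with exponential backoff between attempts.
async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  initialDelayMs = 500
): Promise<T> {
  let delay = initialDelayMs;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2; // back off: wait longer after every failure
    }
  }
  throw new Error('unreachable');
}

// Usage: the wrapped operation must be idempotent so that retries are safe.
retryWithBackoff(() => Promise.resolve('order placed')).then(console.log);

Now, let's take a moment to revisit the microservice concept and understand the most common questions asked about microservice architecture.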
While understanding any new terms, we often come across several questions. The following are some of the most frequently asked questions that we come across while understanding microservices:
Aren't microservices just like service-oriented architecture (SOA)? Don't I already have them? When should I start?
If you have been in the software industry for a long time, then seeing microservices will probably remind you of SOA. Microservices do take the concepts of modularity and message-based communication from SOA, but there are many differences between them. While SOA focuses more on code reuse, microservices follow the play-in-your-own-bounded-context rule. Microservices are more of a subset of SOA. Microservices can be scaled on demand. Not all microservice implementations are the same. Using Netflix's implementation in your medical domain is probably a bad idea, as any mistake in a medical report could cost a human life. The simple answer for a working microservice is to have a clear goal for the operation the service is meant to perform, and for what it should do when that operation fails. There have been various answers as to when and how to begin with microservices. Martin Fowler, one of the pioneers of microservices, suggests starting with a monolith and then gradually moving to microservices. But the question here is: is there enough investment to go through the same phase again in this era of technological innovation? The short answer is that going early with microservices has huge benefits, as it will address all of these concerns from the very beginning.
How will we deal with all the parts? Who's in charge?
Microservices introduce localization and self-rule. Localization means that the huge work that was done earlier by a central team will no longer be done centrally. Embracing self-rule means trusting all teams and letting them make their own decisions. This way, software changes or even migrations become very easy and fast to manage. Having said that, it doesn't mean that there is no central body at all. With more microservices, the architecture becomes more complex. The central team should then handle all centralized controls, such as security, design patterns, frameworks, the enterprise service bus, and so on. Certain self-governance processes should be introduced, such as SLAs. Each microservice should adhere to these SLAs, and the system design should be smart enough that if the SLAs are not met, then the microservice is dropped.
How do I introduce change or how do I begin with microservice development?
Almost all successful microservice stories have begun with a monolith that got too big to be managed and was broken up. Changing some part of the architecture all of a sudden will have a huge impact; change should be introduced gradually, in a divide-and-rule manner. Consider asking yourself the following questions when deciding which part of the monolith to break out: How is my application built and packaged? How is my application code written? Can I have different data sources, and how will my application function when I introduce multiple data sources? Based on the answers to these questions, refactor that part and measure and observe the performance of the application. Make sure that the application stays in its bounded context. Another part that you can begin with is the part whose performance is worst in the current monolith. Finding these bottlenecks that hinder change is good for organizations. Introducing decentralized operations will eventually allow multiple things to run in parallel and benefit the greater good of the company.
What kind of tools and technologies are required?
While designing a microservice architecture, proper thought should be given to technology or framework selection at each stage. For example, an ideal environment for microservices features cloud infrastructure and containers. Containers give us heterogeneous systems that are easy to port or migrate, and using Docker brings resiliency and scalability on demand to microservices. Any part of the microservice system, such as the API Gateway or the service registry, should be API friendly, adaptable to dynamic changes, and not a single point of failure. Containers need to be shifted on and off servers and all application upgrades need to be tracked, for which a proper orchestration framework, either Swarm or Kubernetes, is needed to manage deployments. Lastly, some monitoring tools are needed to keep health checks on all microservices and take action when needed. Prometheus is one such famous tool.
How do I govern a microservices system?
With lots of parallel service development going on, there is a primitive need to have a centralized governing policy. Not only do we need to take care of certifications and server audits, but also centralized concerns such as security, logging, and scalability, and distributed concerns such as team ownership, sharing concerns between various services, code linters, service-specific concerns, and so on. In such a case, some standard guidelines can be set, such as requiring each team to provide a Docker configuration file that bundles the software, right from getting dependencies to building it, and produces a container that has the specifics of the service. The Docker image can then be run in any standard way, or using orchestration tools on platforms such as Amazon EC2, GCP, or Kubernetes.
Should all the microservices be coded in the same language?
The generic answer to this question is that it is not a prerequisite. Microservices interact with each other via predefined protocols such as HTTP, sockets, Thrift, RPC, and so on, which we will see in much more detail later on. This means different services can be written in completely different technological stacks. The internal language implementation of the microservice is not as important as the external outcome, that is, the endpoint and the API. As long as the communication protocols are maintained, the implementation language is not important. Not being restricted to a single language is an added advantage, but adding too many languages will also result in added complexity for developers, who have to maintain each language's environment needs. The entire ecosystem should not be a wild jungle where you grow anything.
Cloud-based systems now have a standard set of guidelines. We will look at the famous twelve-factor applications and how microservices adhere to those guidelines.
The twelve-factor application is a methodology for Software as a Service (SaaS), web applications, or software deployed in the cloud. It tells us about the characteristics of the output expected from such applications. It essentially just outlines the necessities for making well-structured and scalable cloud applications:
1. Codebase: We maintain a single code base for each microservice, with configuration specific to each environment, such as development, QA, and prod. Each microservice has its own repository in a version control system such as Git, Mercurial, and so on.
2. Dependencies: All microservices have their dependencies declared as part of the application bundle. In Node.js, there is package.json, which lists all the development dependencies and overall dependencies. We can even have a private repository from which dependencies are pulled.
3. Configs: All configurations should be externalized, based on the server environment. There should be a separation of config from code. You can set environment variables in Node.js or use Docker Compose to define other variables.
4. Backing services: Any service consumed over the network, such as a database, I/O operations, messaging queues, SMTP, or a cache, is exposed as a microservice using Docker Compose and is independent of the application.
5. Build, release, and run: We use automated tools such as Docker and Git in distributed systems. Using Docker, we can isolate all three phases using its push, pull, and run commands.
6. Processes: Microservices are designed to be stateless and share nothing, hence enabling fault tolerance and easy scaling. Volumes are used to persist data, thus avoiding data loss.
7. Port binding: Microservices should be autonomous and self-contained. Microservices should embed the service listener as part of the service itself; for example, in a Node.js application, the HTTP module exposes the service on a port for each process (a minimal sketch of factors 3 and 7 follows this list).
8. Concurrency: Microservices are scaled out via replication; they are scaled out rather than scaled up. Microservices can be scaled or shrunk based on the flow of workload diversity, and concurrency is maintained dynamically.
9. Disposability: Maximize the robustness of the application with fast startup and graceful shutdown. Various options include restart policies, orchestration using Docker Swarm, reverse proxies, and load balancing with service containers.
10. Dev/prod parity: Keep the development, staging, and production environments exactly alike. Using containerized microservices helps via the build once, run anywhere strategy; the same image is deployed across the various DevOps stages.
11. Logs: Create a separate microservice for logs to make them centralized, treating them as event streams and sending them to frameworks such as the Elastic Stack (ELK).
12. Admin processes: Admin or management tasks should be packaged as processes so that they can be easily executed, monitored, and managed. This includes tasks such as database migrations, one-time scripts, fixing bad data, and so on.
Now, let's look at the pioneering implementers of microservices in the current world, the advantages they have gained, and the roadmap ahead. The common objective behind why these companies adopted microservices was to get rid of monolithic hell. Microservices have even seen adoption at the frontend. Companies such as Zalando use microservice principles to have composition at the UI level too.
Netflix is one of the front-runners in microservice adoption. Netflix processes billions of viewing events per day. It needed a robust and scalable architecture to manage and process this data. Netflix used polyglot persistence to get the strength of each of the technological solutions it adopted. It used Cassandra for high-volume, lower-latency write operations and a hand-made model with tuned configurations for medium-volume write operations. It uses Redis for high-volume, lower-latency reads at the cache level. Several frameworks that Netflix tailor-made are now open source and available for use:
Netflix Zuul: An edge server, or the gatekeeper to the outside world. It doesn't allow unauthorized requests to pass through and is the only point of contact for the outside world.
Netflix Ribbon: A load balancer that is used by service consumers to find services at runtime. If more than one instance of a microservice is found, Ribbon uses load balancing to evenly distribute the load.
Netflix Hystrix: A circuit breaker that is used to keep the system up and running. Hystrix breaks the connection for services that are going to fail eventually and only rejoins the connection when those services are up again.
Netflix Eureka: Used for service discovery and registration. It allows services to register themselves at runtime.
Netflix Turbine: A monitoring tool to check the health of running microservices.
Just checking the stars on these repositories will give an idea of the rate of adoption of microservices using Netflix's tools.
Walmart is one of the most popular companies on Black Friday. During Black Friday, it gets more than 6 million page views per minute. Walmart adopted a microservice architecture to adapt to the world of 2020, aiming for 100% availability at reasonable costs. Migrating to microservices gave a huge uplift to the company. Conversion rates went up by 20%. They had zero downtime on Black Friday. They saved 40% of their computational power and got 20-50% cost savings overall.
Spotify has 75 million active users per month with an average session length of 23 minutes. They adopted a microservice architecture and polyglot environment. Spotify is a company of 90 teams, 600 developers, and five offices across two continents, all working on the same product. This was a major factor in reducing dependencies as much as possible.
Zalando
