Learn essential microservices concepts while developing scalable applications with Express, Docker, Kubernetes, and Docker Swarm using Node 10
Key Features

- Write clean and maintainable code with JavaScript for better microservices development
- Dive into the Node.js ecosystem and build scalable microservices with Seneca, Hydra, and Express.js
- Develop smart, efficient, and fast enterprise-grade microservices implementations

Book Description
Microservices enable us to develop software in small pieces that work together but can be developed separately; this is one reason why enterprises have started embracing them. For the past few years, Node.js has emerged as a strong candidate for developing microservices because of its ability to increase your productivity and the performance of your applications.
Hands-On Microservices with Node.js is an end-to-end guide on how to dismantle your monolithic application and embrace the microservice architecture, right from architecting your services and modeling them to integrating them into your application. We'll develop and deploy these microservices using Docker. Scalability is an important factor to consider when adding more functionality to your application, so we delve into various solutions, such as Docker Swarm and Kubernetes, to scale our microservices. Testing and deploying these services while scaling is a real challenge; we'll overcome it by setting up deployment pipelines that break the application build process into several stages. Later on, we'll take a look at serverless architecture for our microservices and its benefits over a traditional architecture. Finally, we share best practices and several design patterns for creating efficient microservices.
What you will learn

- Learn microservice concepts
- Explore different service architectures, such as Hydra and Seneca
- Understand how to use containers and the process of testing
- Use Docker and Swarm for continuous deployment and scaling
- Learn how to geographically spread your microservices
- Deploy a cloud-native microservice to an online provider
- Keep your microservice independent of online providers

Who this book is for
This book is for JavaScript developers seeking to utilize their skills to build microservices and move away from the monolithic architecture. Prior knowledge of Node.js is assumed.
Diogo Resende is a developer with more than 15 years of experience. He has worked with Node.js almost from the beginning. His computer education and experience in many industries and telecommunication projects have given him a wide background knowledge of other architecture components and approaches that influence the overall performance of an application.
Page count: 220
Year of publication: 2018
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Kunal Chaudhari
Acquisition Editor: Nigel Fernandes
Content Development Editor: Arun Nadar
Technical Editor: Leena Patil
Copy Editor: Safis Editing
Project Coordinator: Sheejal Shah
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Graphics: Jason Monteiro
Production Coordinator: Deepika Naik
First published: June 2018
Production reference: 1270618
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78862-021-5
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Bruno Joseph D'mello works at Accion labs as a senior software developer. He has 6 years of experience in web application development, in domains such as entertainment, social media, enterprise, and IT services. Bruno follows Kaizen and enjoys the freedom of architecting new things on the web. He has also contributed some of his knowledge by authoring books such as Web Development in Node.js and MongoDB - Second Edition, What You Need to Know about Node.js, and JavaScript and JSON Essentials.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Hands-On Microservices with Node.js
PacktPub.com
Why subscribe?
PacktPub.com
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
The Age of Microservices
Introducing microservices
Introducing Node.js
Modules
Arrow functions
Classes
Promises and async/await
Spread and rest syntax
Default function parameters
Destructuring
Template literals
Advantages of using Node.js
Node.js Package Manager
Asynchronous I/O
Community
From monolith to microservices
Patterns of microservices
Decomposable
Autonomous
Scalable
Communicable
Disadvantages of microservices
Summary
Modules and Toolkits
Express
Micro
Seneca
Hydra
Summary
Building a Microservice
Using Express
Uploading images
Checking an image exists in the folder
Downloading images
Using route parameters
Generating thumbnails
Playing around with colors
Refactor routes
Manipulating images
Using Hydra
Using Seneca
Plugins
Summary
State and Security
State
Storing state
MySQL
RethinkDB
Redis
Conclusion
Security
Summary
Testing
Types of testing methodologies
Using frameworks
Integrating tests
Using chai
Adding code coverage
Covering all code
Mocking our services
Summary
Deploying Microservices
Using virtual machines
Using containers
Deploying using Docker
Creating images
Defining a Dockerfile
Managing containers
Cleaning containers
Deploying MySQL
Using Docker Compose
Mastering Docker Compose
Summary
Scaling, Sharding, and Replicating
Scaling your network
Replicating our microservice
Deploying to swarm
Creating services
Running our service
Sharding approach
Replicating approach
Sharding and replicating
Moving to Kubernetes
Deploying with Kubernetes
Summary
Cloud-Native Microservices
Preparing for cloud-native
Going cloud-native
Creating a new project
Deploying a database service
Creating a Kubernetes cluster
Creating our microservice
Deploying our microservice
Summary
Design Patterns
Choosing patterns
Architectural patterns
Front Controller
Layered
Service Locator
Observer
Publish-Subscribe
Using patterns
Planning your microservice
Obstacles when developing
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
This book is an end-to-end guide on how to split your monolithic Node.js application into several microservices. We'll cover some of the toolkits available, such as Express, Hydra, and Seneca, and create a simple microservice. We'll introduce you to continuous integration, using Mocha to add a test suite, chai to test the HTTP interface, and nyc to measure test coverage.
We'll cover the concept of containers and use Docker to make our first deployment. We'll then use other tools, such as Docker Swarm, to help us scale our service. We'll see how to do the same using Kubernetes, both locally and on Google Cloud Platform, always using the same minimal microservice architecture and with minimal changes to the code.
The book is targeted at people who know the basics of Node.js and want to enter the world of microservices, get to know its advantages and techniques, and understand why it's so popular. It can also be useful for developers in other similar programming languages, such as Java or C#.
Chapter 1, Age of Microservices, covers the evolution of computing and how development has changed and shifted from paradigm to paradigm depending on processing capacity and user demand, ultimately resulting in the age of microservices.
Chapter 2, Modules and Toolkits, introduces you to some modules that help you create a microservice, detailing different approaches: from very raw and simple modules, such as Micro and Express, to full toolkits, such as Hydra and Seneca.
Chapter 3, Building a Microservice, covers the development of a simple microservice using the most common module, Express, with a very simple HTTP interface.
Chapter 4, State and Security, covers the development of our microservice: from using the server filesystem to moving to a more structured database service, such as MySQL.
Chapter 5, Testing, shows how to use Mocha and chai to add test coverage to our previous microservice.
Chapter 6, Deploying Microservices, introduces you to Docker and helps you create a container image to use to run our microservice.
Chapter 7, Scaling, Sharding, and Replicating, covers the concept of replication when using Docker Swarm and Kubernetes locally to scale our microservice.
Chapter 8, Cloud-Native Microservices, shows how to migrate our microservice from the local Kubernetes to Google Cloud Platform, as an example of a fully cloud-native microservice.
Chapter 9, Design Patterns, enumerates some of the most common architectural design patterns and reviews the continuous integration and deployment loop used throughout the book.
You should have basic Node.js skills and be somewhat comfortable with the language. We will cover Docker and Kubernetes, and it can be helpful to know the concepts of containers—but it's not mandatory.
You need to have Node.js (and npm) installed. We recommend using the current stable version, but you're free to use a previous version if it's an LTS one, with possible adaptations. If you want to deploy Kubernetes locally, you'll need to install it later on.
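To verify your local setup before starting, you can check the installed versions from a terminal; any current LTS release should work for following along:

```shell
# Print the installed Node.js and npm versions.
node --version
npm --version
```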
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Microservices-with-Node.js. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Feedback from our readers is always welcome.
General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report it to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
Decades ago, more specifically in 1974, Intel introduced the 8080 to the world, an 8-bit processor with a 2 MHz clock speed that could address 64 KB of memory. This processor was used in the Altair 8800 and began the revolution in personal computers.
It was sold pre-assembled or as a kit for hobbyists, and it was the first computer with enough power to actually be used for calculations. Even though it had some poor design choices and practically required an engineering background to use and program, it started the spread of personal computers to the general public.
The technology evolved rapidly, and the processor industry followed Moore's law, almost doubling speed every two years. Processors were still single core, with a low efficiency ratio (power consumption per clock cycle). Because of this, servers usually did one specific job, called a service, such as serving HTTP pages or managing a Lightweight Directory Access Protocol (LDAP) directory. These services were monoliths, with very few components, compiled as a whole to get the most out of the processor and memory.
In the 90s, the internet was still available only to the few. Hypertext, based on HTML and HTTP, was in its infancy. Documents were simple, and browser vendors developed the language and protocol as they pleased. Competition for market share was ferocious between Internet Explorer and Netscape. The latter introduced JavaScript, which Microsoft copied as JScript:
After the turn of the century, processor speed continued to increase, memory grew to generous sizes, and 32-bit addressing became insufficient. The all-new 64-bit architecture appeared, and personal computer processors hit the 100 W consumption mark. Servers gained muscle and were able to handle several services. Developers still avoided breaking a service into parts; interprocess communication was considered slow, so services were kept in threads inside a single process.
The internet was starting to become largely available. Telcos started offering triple play, which included the internet bundled with television and phone services. Cellphones became part of the revolution and the age of the smartphone began.
JSON appeared as a subset of the JavaScript language, although it's considered a language-independent data format. Some web services began to support the format.
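The relationship is easy to see in code: a JSON document maps directly onto JavaScript values, and the language ships with JSON.parse and JSON.stringify to convert between the two. A minimal illustration (the service names in the document are just made up for the example):

```javascript
// JSON is a language-independent text format, but it maps directly to
// JavaScript objects, arrays, strings, numbers, booleans, and null.
const text = '{"service":"images","port":3000,"tags":["thumbnail","resize"]}';

// Parse the JSON text into a JavaScript object...
const config = JSON.parse(text);
console.log(config.service); // → images

// ...and serialize the object back into JSON text.
const roundTrip = JSON.stringify(config);
console.log(roundTrip === text); // → true
```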
The following is an example of servers with a couple of services running, but still having only one processor.
Processor evolution then shifted. Instead of the increased speed that we were used to, processors started to appear with two cores, and then four cores. Eight cores followed, and it seemed the evolution of the computer would follow this path for some time.
This also meant a shift in development paradigms. Relying on the system alone to take advantage of all the cores is unwise. Services started to take advantage of this new layout, and it's now common to see a service run at least one process per core. Just look at any web server or proxy, such as Apache or Nginx.
The internet is now widely available. Mobile access to the internet and its information accounts for roughly half of all internet access.
In 2012, the Internet Engineering Task Force (IETF) began work on its first drafts for the second version of HTTP, or HTTP/2, and the World Wide Web Consortium (W3C) did the same for HTML5, as both standards were old and needed a remake. Thankfully, browsers agreed on merging new features and specifications, and developers no longer carry the burden of developing and testing their ideas against each browser's edge cases.
The following is an example of servers with more services running as we reach a point where each server has more than one processor:
Access to information in real time is a growing demand. The Internet of Things (IoT) multiplies the number of devices connected to the internet. People now have a couple of devices at home, and the number will just keep rising. Applications need to be able to handle this growth.
On the internet, HTTP is the standard protocol for communication. Routers usually do not block it, as it is considered a low-traffic protocol (in contrast with video streams). This is no longer actually true, but HTTP is now so widely used that changing this behavior would probably cause trouble.
Nowadays, it's so common for an HTTP developer API to serve JSON that most programming languages with a version released after 2015 probably support the data format natively.
As a consequence of processor evolution, and because of the data-demanding internet we now have, it's important to not only be able to scale a service or application to the several available cores, but also to scale outside a single hardware machine.
Many developers started using and following the Service-Oriented Architecture (SOA) principle, where the architecture is focused on services: each service presents itself to the rest of the application as a component and provides information to other components by passing messages over some standard communication protocol.
Microservices, which are a variation of SOA, have become more and more appealing. Many projects have embraced this architecture, and it's not difficult to understand why. With the constant increase in demand for information, applications become more complex, especially with more information being transferred from new data sources to new data visualization devices.
New communication technologies have emerged, social communities spring up like mushrooms, and people expect an application to be able to merge into today's cyber lifestyle.
Microservices come to the rescue by defining a simple strategy: break every complex service into small, simpler services, each aimed at a single piece of functionality. The idea is that services should be small and lightweight - so small that they can be easily maintained, developed, and tested, and so lightweight that they can be responsive and scale more easily:
The preceding diagram is an example of an application that has been split into small microservices (marked as green and blue), with one for the frontend interface, another one for the API, and one just for authentication.
The idea is to decompose the business logic into small, reusable parts that are easily understandable in separate chunks, enabling parallel development by different teams or groups. This way, people can develop parts without worrying about breaking each other's code. Each part should be considered a black box to the other parts.
It is only important that the communication is well described. It's common for microservices to communicate over HTTP and use JSON as the data format. Other formats are available, such as XML, but they have fallen out of use. It's also common to use AMQP for inter-service communication, though usually not for a public API.
To summarize, there are several advantages of using this architecture:
- Maintenance: Services, when separate, become easier to develop, test, and deploy because they should be simpler and small
- Design enforcement: A proper and good design is enforced on the application being developed
- Knowledge encapsulation: Services have specific objectives, such as delivering emails, which leads to service reuse and to knowledge about specific tasks being grouped together in services
- Replaceability: Services become easier to swap because their functionality and communication are well known
- Technology agnosticism: Each service can be developed using the best tools and languages to build it correctly
- Performance: Services are small and lightweight and, as mentioned previously, use the best tools available
- Upgradability: Services should be interchangeable and upgradable separately
- Productivity: When complexity starts to grow, productivity will be better than in a monolithic application
There are also costs associated with this architecture, namely:
- Dependencies: Because this architecture is technology agnostic, different dependencies for different services may arise
- Complexity: For small applications, the bootstrap complexity is bigger than for a monolith
- End-to-end testing: It becomes more complex to test the application from end to end, as the number of services to interconnect is definitely bigger than in a monolithic application
The graph is not to be taken too literally; it's just an approximation of the difference between the monolith and microservice architectures. In the beginning, when complexity is low, productivity with microservices is poorer, as the architecture bootstrap demands more work and thought.
As complexity increases, a monolithic application becomes more difficult to manage and productivity begins to decrease. With the microservice architecture, once services are separated and the bootstrap phase has passed, each service is easier to manage and productivity increases.
Some may argue that microservices productivity will not grow as complexity will eventually also hit every service, but that's not true if a team follows the number one rule: if the complexity of a service is too much, split the service into smaller ones.
This architecture design brings long-term advantages if used correctly and across several applications. Services can be reused, which can potentially lead to more intensive usage, which will eventually lead to a more resilient and better-tested service.
Also, future applications can bootstrap faster if a development team has already bootstrapped one before. Previous services can also be integrated, which might lead to gaining an initial application testbed faster.
Using a microservices approach also helps to eliminate any long-term commitment to a technology stack. In the near future, when a team feels the need to change the stack, they can start new services using the new stack, and upgrade the old services one by one if they want to, without compromising the entire application.
Node.js has become very popular, so to speak. It's not actually a language; it's a runtime built around a language: JavaScript, or ECMAScript. JavaScript was developed for the browser and is actually small by definition. Browsers then created a layer of access to page elements and events, called the DOM; that's one of the reasons why people dislike the language so much. Node.js takes only the base language and adds an API so that developers have access to I/O, namely the filesystem and the network.
Ryan Dahl started developing Node.js back in 2009. He felt the need for a performant and less blocking program than the ones that were available. Node.js used Google's V8 JavaScript engine from the beginning and was first introduced at the JSConf in Berlin in 2009.
Looking just at the language, it's actually a sound and small functional, object-oriented, prototype-based language. Everything is an object or inherits from one; even numbers and functions inherit from an object. The good parts are as follows:
Functions are first-class objects
Functions and block-scoped variables
Closures and anonymous functions
Loose typing (can be seen as a bad aspect)
Node.js extended JavaScript with a group of API modules that enable developers to access the filesystem, run and manage processes, and communicate over the network. Since it was first designed to replace the traditional web server, it also has HTTP and HTTPS modules that can perform the role of client or server. Some other modules, such as DNS or URL, could be built as separate modules outside the core, but live and are maintained inside it.
At first, Node.js was very unstable, and not only in its code: the API was unstable too. Methods could change dramatically between versions, and modules were deprecated and replaced by others rapidly (look up the sys module for more information). Only brave developers would use it in production.
By the time it hit version 0.8, it had become more reliable and the API had stabilized. Large companies started supporting it and the community grew. Although there was a fork in 2014 because of internal conflicts, the community survived and the two code trees merged back in 2015:
Because of its use of Google's V8, and because it has a small and stable API, Node.js is very fast and reliable. The API is small because one of the community's guidelines is to keep only core functionality in the core API; everything else should go into a module. This has also become a major advantage of Node.js: it has a huge community with hundreds of thousands of modules available.
If you stop and think about it, this is the microservices approach: having separate modules that do one job and one job only, and do it well. You can easily find good, stable, and mature modules for specific needs, used by thousands of developers. These mature modules are easy to deploy and have test suites to ensure they stay stable and functional.