Hands-On Microservices with Rust

Denis Kolodin
Description

A comprehensive guide to developing and deploying high-performance microservices with Rust




Key Features



  • Start your microservices journey and get a broader perspective on microservices development using Rust 2018


  • Build, deploy, and test microservices using AWS


  • Explore advanced techniques for developing microservices, such as the actor model, request routing, and threads





Book Description



Microservice architecture is sweeping the world as the de facto pattern for building web-based applications. Rust is a language particularly well-suited for building microservices. It is a new system programming language that offers a practical and safe alternative to C.







This book describes web development using the Rust programming language and will get you up and running with modern web frameworks and crates, with examples of building RESTful microservices. You will take a deep dive into reactive and asynchronous programming and split your web application into a set of concurrent actors. The book provides several HTTP-handling examples with manageable memory allocations. You will walk through stateless high-performance microservices, which are ideally suited to computation or caching tasks, and look at stateful microservices, which are filled with persistent data and database interactions. As we move along, you will learn how to use Rust macros to describe business or protocol entities of our application and compile them into native structs, which execute at full speed on the server's CPU.







Finally, you will be taken through examples of how to test and debug microservices and pack them into a tiny monolithic binary or put them into a container and deploy them to modern cloud platforms such as AWS.




What you will learn



  • Get acquainted with leveraging Rust for web programming


  • Get to grips with various Rust crates, such as hyper, Tokio, and Actix


  • Explore RESTful microservices with Rust


  • Understand how to pack Rust code into a container using Docker


  • Familiarize yourself with Reactive microservices


  • Deploy your microservices to modern cloud platforms such as AWS





Who this book is for



This book is for developers who have basic knowledge of Rust and want to learn how to build, test, scale, and manage Rust microservices. No prior experience of writing microservices in Rust is assumed.




Hands-On Microservices with Rust

Build, test, and deploy scalable and reactive microservices with Rust 2018

Denis Kolodin

BIRMINGHAM - MUMBAI

Hands-On Microservices with Rust

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Richa Tripathi
Acquisition Editor: Shririam Shekhar
Content Development Editor: Manjusha Mantri
Technical Editor: Mayank Dubey
Copy Editor: Safis Editing
Language Support Editors: Mary McGowan and Safis Editing
Project Coordinator: Prajakta Naik
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Graphics: Jisha Chirayil
Production Coordinator: Tom Scaria

First published: January 2019 

Production reference: 1310119

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.

ISBN 978-1-78934-275-8

www.packtpub.com

 
mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

Packt.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the author

Denis Kolodin has been developing high-load network applications for more than 12 years. He has mastered and used different kinds of programming languages, including C, Java, and Python, for developing a variety of apps, from high-frequency trading robots to video broadcasting servers. Nowadays, he enjoys creating peer-to-peer networking applications and is inspired by distributed systems such as cryptocurrencies. He has been using Rust since version 1.0 and is delighted with the features it provides, specifically WebAssembly support. He's the author of the Yew framework, which has collected more than 6,000 stars on GitHub. He writes Rust continuously as part of his job and believes that Rust will be used for everything in the future, including backend, frontend, operating systems, embedded software, games, and smart contracts.

About the reviewer

Daniel Durante is an avid coffee drinker/roaster, motorcyclist, archer, welder, and carpenter whenever he isn't programming. From the age of 12, he has been involved with web and embedded programming with PHP, Node.js, Golang, Rust, and C.

He has worked on text-based browser games that have reached over 1,000,000 active players, created bin-packing software for CNC machines, done embedded programming with Cortex-M and PIC circuits, built high-frequency trading applications, and helped contribute to one of the oldest ORMs for Node.js (SequelizeJS).

He has also reviewed other books – PostgreSQL Developer's Guide, PostgreSQL 9.6 High Performance, Rust Programming By Example, and Rust High Performance – for Packt.

I would like to thank my parents, my brother, my mentors, and my friends who've all put up with my insanity of sitting in front of a computer day in and day out. I would not be here today if it wasn't for their patience, guidance, and love.

Gaurav Aroraa completed his M.Phil. in computer science. He is an MVP, a lifetime member of the Computer Society of India (CSI), an advisory member of IndiaMentor, and is certified as a scrum trainer/coach, XEN for ITIL-F, and APMG for PRINCE-F and PRINCE-P. Gaurav is an open source developer and the founder of Ovatic Systems Private Limited. Recently, he was awarded the title "Icon of the year – excellence in mentoring technology startups" for the year 2018-19 by Radio City, A Jagran Initiative, for his extraordinary work during his 20-year career in the industry in the field of technology mentoring. You can reach Gaurav on Twitter at @g_arora.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Hands-On Microservices with Rust

About Packt

Why subscribe?

Packt.com

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Introduction to Microservices

Technical requirements

What are microservices?

Why we need microservices

How to deploy a microservice

Docker

Continuous Integration

How to split a traditional server into multiple microservices

Reasons to avoid monoliths

Impossible to scale vertically

Impossible to update and deploy only one feature

The failure of one server affects all features

Breaking a monolithic service into pieces

Definition of a REST API

User registration microservice

E-mail notifications microservice

Product catalog  microservice

Shopping cart microservice

Payment Integration microservice

Statistics collecting microservice

Transformation to microservices

Reusing existing microservices

Why Rust is a great tool for creating microservices

Explicit versus implicit

Minimal amount of runtime errors

Great performance

Minimal dependencies burden

Summary

Further reading

Developing a Microservice with the Hyper Crate

Technical requirements

Binding a Tiny Server

Adding necessary dependencies

The main function of the server

Address of the server

Server instances

Setting the requests handler

Adding the server instance to a runtime

Building and running

Rebuilding on changes

Handling incoming requests

Adding a service function

Implementing a service function

Index pages

Implementing the REST principles

Adding a shared state

Accessing a shared state from a service function

Parsing paths in a microservice

Implementing REST methods

Extracting the user's identifier

Getting access to the shared data

REST methods

POST – Creating data

GET – Reading data

PUT – Updating data

DELETE – Deleting data

Routing advanced requests

Defining paths with regular expressions

Adding the necessary dependencies

Writing regular expressions

Path for index page

Path for user management

 Path for the users list

Matching expressions

Summary

Logging and Configuring Microservice

Technical requirements

Adding logging to a microservice

Random-value-generating microservices

The log crate

Loggers

Log levels

Logging messages

Custom level of messages

Checking logging is enabled

Own target

Using logging

Configuring a logger with variables

RUST_LOG

RUST_LOG_STYLE

Changing the RUST_LOG variable to your own

Reading environment variables

Standard library

Using the .env file

Using the dotenv crate

Adding variables to the .env file

Parsing command-line arguments

Using the clap crate

Adding dependencies

Building a parser

Reading arguments

Usage

How to add subcommands

Reading the configuration from file

Adding the TOML config

Adding dependencies

Declaring a struct for configuration

Reading the configuration file

Joining all values by a priority

Creating and using the configuration file

Summary

Data Serialization and Deserialization with the Serde Crate

Technical requirements

Data formats for interaction with microservices

The serde crate

Serializing responses

Deserializing requests

Tweaking serialization

Changing the case of names

Removing a nesting

Using specific names for a tag and content

Any Value

Using hyper

Reading a body from a stream

Custom types

Custom serialization

Custom deserialization

Custom error types with the failure crate

Binary data

Compiling, running, and testing

Microservices with multiple formats

Different formats

Parsing a query

Checking different formats

Transcoding

XML support

Summary

Understanding Asynchronous Operations with Futures Crate

Technical requirements

Basic asynchronous types

Basic types of future crate

Using the Future trait

Using the Stream trait

Using Sink to send data back

The channel module

Channels for sending multiple messages

Single-Producer Single-Consumer 

Multi-Producer Single-Consumer

Multi-Producer Multi-Consumer 

Example of usage

One-shot

Using channels to use Sink in multiple places

Executors

Running futures and streams with blocking

Using an executor

The async/await syntax

Creating an image service

Uploading images

The tokio crate

Asynchronous input/output of files

Multipart form requests

Downloading images

sendfile for sending files

Testing the service

Summary

Reactive Microservices - Increasing Capacity and Performance

Technical requirements

What is a reactive microservice?

Loose coupling

Message-driven applications

Asynchronous

Should a reactive microservice be asynchronous?

Reactive microservices with futures and streams

Message brokers

Remote procedure calls

JSON-RPC

gRPC

Thrift

Other RPC frameworks

RPC and REST

Reactive manifesto

Understanding JSON-RPC

How JSON-RPC works

Creating a microservice

Dependencies

Client

Worker

Server

Compiling and running

Learning about gRPC

How gRPC works

Creating a microservice

Dependencies

Protocol

Generating interfaces

Shared client

Client

Server implementation

Service implementation

Handlers

The main function

Worker

Compiling and running

Summary

Reliable Integration with Databases

Technical requirements

PostgreSQL

Setting up a test database

Simple interaction with a database

Adding dependencies

Creating a connection

Wrapping with a tool

Compiling and running

Connection pools

Creating a pool

Parallel imports with the rayon crate

Rayon

MySQL

Database for tests

Connecting with the r2d2 adapter

Adding dependencies

Database interaction functions

Creating a connection pool

Redis

Bootstrap database for tests

Creating a connection pool

Dependencies

Adding commands and interaction functions

Data manipulation functions

Parsing arguments

Connecting to Redis

Executing subcommands

Testing our Redis example

MongoDB

Bootstrapping a database for testing

Connecting to a database using the r2d2 pool

Interaction functions

Parsing arguments

Creating a connections pool

Implementing subcommands

Testing

DynamoDB

Bootstrapping a database for testing

Connecting to DynamoDB

Adding dependencies

Interaction functions

Parsing command-line arguments

Testing

Summary

Interaction to Database with Object-Relational Mapping

Technical requirements

The diesel crate

Adding the necessary dependencies

diesel_cli

Creating migrations

Declaring the data structure

Models

Connecting to a database

Parsing arguments

Creating a connection

Implementing subcommands using a DSL

Adding a user subcommand implementation

Listing users subcommand implementation

Testing

Complex database structure

Business logic of the example application

API methods

Database structure and migrations

Diesel initial setup

up.sql

down.sql

Users table

up.sql

down.sql

Channels table

up.sql

down.sql

Memberships table

up.sql

down.sql

Messages table

up.sql

down.sql

Schema

Models

User

Channel

Membership

Message

Database interaction API crate

Api

Register user

Create channel

Publish channel

Add member

Add message

Delete message

Testing the crate

Summary

Simple REST Definition and Request Routing with Frameworks

Technical requirements

Rouille

Creating a microservice

Bootstrapping a server

Handling requests

Root handler

Sign-up handler

Sign-in handler

Default handler

Compiling and running

Nickel

Creating a microservice

Bootstrapping a server

Handling requests

Worker for sending emails

Compiling and running

Rocket

Creating a microservice

Bootstrapping a server

Handling requests

Database schema and models

Compiling and running

Gotham

Creating a microservice

Types of framework

The main function

Handler implementation

Running and testing

Summary

Background Tasks and Thread Pools in Microservices

Technical requirements

Interacting with threads

Synchronous or asynchronous?

IO-bound versus CPU-bound tasks

Synchronous tasks inside asynchronous contexts

The limitations of using IO operations on files

Spawning a thread for image processing

Interacting with a thread in an asynchronous way

Using thread pools

CpuPool

The blocking section

Actors

Basics of the actix framework

Implementing actors

The count actor

Types

Actor

Message

Handler

The log actor

Types

Actor

Message

Handler

The resize actor

Types

Actor

Message

Handler

Using the server with actors

Requests handler

Building and running

Summary

Involving Concurrency with Actors and the Actix Crate

Technical requirements

Actor concurrency in microservices

 Understanding actors

Actors in microservices

The Actix framework

Creating a microservice with actix-web

Bootstrapping an actix-web server

Starting a server

App creation

Scope and routes

Static files handler

HTTP client

GET requests

POST requests

Handlers

Signup

Signin

New comment

Comments

Counter

Middleware

Building and running

Using databases

The database interaction actor

Actors

Messages

Setting a value message

Get value message

Link to actor

Using database actors

WebSocket

Repeater actor

Actors

Messages

Updating the message

Control message

The notification actor

Actor

Handlers

Adding WebSocket support to a server

Summary

Scalable Microservices Architecture

Technical requirements

Scalable architecture

Basic ideas

Message brokers and queues

RabbitMQ

Kafka

Application bottlenecks

Scalable application with Rust and RabbitMQ

Bootstrap message broker for testing

Dependencies

Abstract queue interaction actor

Dependencies

Abstract messages handler

Actor

Handling an incoming stream

Sending a new message

Utility methods

Crate

Request and response

Worker

Dependencies

Handler

main function

Server

Dependencies

Shared state

Server handler

Requests handlers

Index handler

Tasks handler

Upload handler

main function

Testing

How to scale this application

Summary

Testing and Debugging Rust Microservices

Technical requirements

Testing microservices

Unit testing

Mocking

Starting a server for testing

Making requests

Implementing tests

Running our tests

Integration testing

Starting an application instance

Dependencies

Utils

The integration testing client

Types

Users

Content

Mailer

Router

Debugging microservices

curl

Postman

mitmproxy

LLDB

Visual Studio Code

Structural logging

Example

Building and testing

Distributed tracing

Starting Jaeger

Generating spans

Compile and run

Summary

Optimization of Microservices

Technical requirements

Performance-measuring tools

Welle

Drill

Measuring and optimizing performance

Basic example

Performance

Optimizations

State sharing without blocking

Reusing values by references

Caching

Compilation with optimizations

Optimization techniques 

Link-time optimizations

Abort instead of panicking

Reducing the size of binaries

Isolated benchmarks

Profiling

Summary

Packing Servers to Containers

Technical requirements

Building a Docker image with a microservice

Creating an image with the Rust compiler

Users microservice image

.dockerignore

Dockerfile

Building an image

Starting a container

Content microservice image

Email microservice image

Router microservice image

DBSync worker image

Dependencies

The main function

Hiding microservice source code 

Composing a microservice set

Application definition

Database container

A container with an email server

DBSync worker container

Mails microservice container

Users microservice container

Content microservice container

Router microservice container

Running the application

Adding persistent state to the application

Running the application in the background

Summary

DevOps of Rust Microservices - Continuous Integration and Delivery

Technical requirements

Continuous integration and continuous delivery

Continuous integration 

Continuous delivery 

Container orchestration

Rust tools

Rustfmt

Installation

Usage

Configuration

Clippy

Installation

Usage

Configuration

Recommended code attributes

Rustfix

Installation

Usage

Cargo test

CI and CD tools

TravisCI

AppVeyor

Jenkins

Demonstration of continuous integration

Docker Compose

The SCM server

The CI server

The CI agent

The image

Configuring Gogs

Configuring TeamCity

Authorizing agents

Creating a project

Building steps for Rust

Building with CI

Summary

Bounded Microservices with AWS Lambda

Technical requirements

Serverless architecture

AWS Lambda

Azure Functions

Cloudflare Workers

IBM Cloud Functions

Google Cloud Functions

Minimal Rust microservices for AWS Lambda

Dependencies

Developing a microservice

Building

Deployment

Serverless Framework

Preparation

Implementation

Dependencies

Handler

Request and response types

Functions

Configuration

Resources

Deployment

Permissions

Script

Running

Testing

Updating and removing

Summary

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

This book will introduce you to the development of microservices with Rust. I started using Rust not that long ago, back in 2015. It had only been a couple of months since the release of version 1.0 and, at that time, I didn't think that this tool would usher in a silent revolution that would disrupt the traditions associated with system programming, which, at that time, was tedious and in no way fashionable.

Maybe I'm exaggerating a little, but I have witnessed how companies stopped using the customary tools and began rewriting parts of their products or a number of services in Rust, and they were so happy with the outcome that they continue to do so time and again. Today, Rust is an important part of blockchain initiatives, the flagship for WebAssembly, and is an awesome tool for developing fast and reliable microservices that utilize all available server resources. Consequently, Rust has transformed itself from a hobby tool for curious developers into a strong foundation for modern products.

In this book, we will learn how to create microservices using Rust. We begin with a short introduction to microservices, and discuss why Rust is a good tool for writing them. Then, we will create our first microservice using the hyper crate, and learn how to configure microservices and log activities. After that, we will explore how to support different formats of requests and responses using the serde crate.

Who this book is for

This book is designed for two categories of reader—experienced Rust developers who are new to microservices, and advanced microservice developers who are new to Rust. I've tried to cover the ecosystem of useful tools and crates available for Rust developers today. This book describes the creation of microservices, from high-level frameworks to constructing low-level asynchronous combinators that produce responses with minimal resource blocking time. This book aims to allow you to find the solution to a specific task.

To be able to understand the topics covered in this book, you need a solid background in the Rust programming language (you should be able to write and compile applications using cargo, understand lifetimes and borrowing concepts, know how traits work, and understand how to use reference counters, mutexes, threads, and channels). If you are unfamiliar with Rust, take the time to understand these concepts before reading this book.

You also have to know how to write a minimal backend that works over HTTP. You have to understand what REST is and how to use it for applications. However, you don't have to understand how HTTP/2 works, because we will use crates that provide abstractions agnostic of the specific transport.

What this book covers

Chapter 1, Introduction to Microservices, introduces you to microservices and how they can be created with Rust. In this chapter, we also discuss the benefits of using Rust for creating microservices.

Chapter 2, Developing a Microservice with the Hyper Crate, describes how to create microservices with the hyper crate, thereby allowing us to create a compact asynchronous web server with precise control over incoming requests (method, path, query parameters, and so on).

Chapter 3, Logging and Configuring Microservices, includes information about configuring a microservice using command-line arguments, environment variables, and configuration files. You will also see how to add logging to your projects, since this is the most important feature for the maintenance of microservices in production.

Chapter 4, Data Serialization and Deserialization with Serde Crate, explains how, in addition to customary HTTP requests, your microservice has to support formal requests and responses in a specific format, such as JSON or CBOR, which is important for API implementation and for organizing the mutual interaction of microservices.

Chapter 5, Understanding Asynchronous Operations with Futures Crate, delves into the deeper asynchronous concepts of Rust and how to use asynchronous primitives for writing combinators to process a request and prepare a response for a client. Without a clear understanding of these concepts, you cannot write effective microservices to utilize all available resources of a server, and to avoid the blocking of threads that execute asynchronous activities and require special treatment with execution runtime.

Chapter 6, Reactive Microservices – Increasing Capacity and Performance, introduces you to reactive microservices, which won't respond immediately to incoming requests, but take time to process a request and respond when processing is done. You will become familiar with remote procedure calls in Rust and how to use the language so that microservices can call one another.

Chapter 7, Reliable Integration with Databases, shows you how to interact with databases using Rust. You will get to know crates that provide interaction with databases, including MySQL, PostgreSQL, Redis, MongoDB, and DynamoDB.

Chapter 8, Interaction to Database with Object-Relational Mapping, explains how, in order to interact with SQL databases effectively and map database records to native Rust structs, you can use object-relational mapping (ORM). This chapter demonstrates how to use the diesel crate, which requires the nightly compiler version and whose capabilities are used for generating bindings with tables.

Chapter 9, Simple REST Definition and Request Routing with Frameworks, explains how, in certain cases, you don't need to write stringent asynchronous code, and that it is sufficient to use frameworks that simplify microservice writing. In this chapter, you will become acquainted with four such frameworks—rouille, nickel, rocket, and gotham.

Chapter 10, Background Tasks and Thread Pools in Microservices, discusses multithreading in microservices and how to use thread pools to perform tasks in the background, given that not every task can be performed asynchronously; some require a heavy CPU load.

Chapter 11, Involving Concurrency with Actors and Actix Crate, introduces you to the Actix framework, which uses the actor model to provide you with abstractions that are easily compatible with Rust, balancing performance, code readability, and task separation.

Chapter 12, Scalable Microservices Architecture, delves into an explanation of how to design loose-coupling microservices that don't need to know about sibling microservices, and that use message queues and brokers to interact with one another. We will write an example of how to interact with other microservices using RabbitMQ.

Chapter 13, Testing and Debugging Rust Microservices, explains how testing and debugging are key components of preparing microservices for release. You will learn how to test microservices, from unit tests to covering a full application with integration tests. Afterward, we will discuss how to debug an application using debuggers and logging capabilities. We will also create an example that uses distributed tracing based on the OpenTracing API, a modern tool for tracking the activities of complex applications.

Chapter 14, Optimization of Microservices, describes how to optimize a microservice and extract the maximum performance possible.

Chapter 15, Packing Servers to Containers, explains how, when a microservice is ready for release, there should be a focus on packing microservices into containers, because at least some microservices require additional data and environments to work, and because containers give you the advantage of faster delivery than bare binaries.

Chapter 16, DevOps of Rust Microservices - Continuous Integration and Delivery, continues with the theme of learning how to build microservices and explains how to use continuous integration to automate building and delivery processes for your product.

Chapter 17, Bounded Microservices with AWS Lambda, introduces you to the serverless architecture, an alternative approach to writing services. You will become acquainted with AWS Lambda and learn how to use Rust to write fast functions that work as part of serverless applications. Also, we will use the Serverless Framework to build and deploy the example application to the AWS infrastructure in a fully automated manner.

To get the most out of this book

You will require at least version 1.31 of Rust. Install it using the rustup tool: https://rustup.rs/. To compile examples from some chapters, you will need to install a nightly version of the compiler. You will also need to install Docker with Docker Compose to run containers with databases and message brokers to simplify the testing of example microservices from this book.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

Log in or register at www.packt.com.

Select the SUPPORT tab.

Click on Code Downloads & Errata.

Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Microservices-with-Rust. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781789342758_ColorImages.pdf.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Introduction to Microservices

This chapter will introduce you to the basics of microservices, including what a microservice is and how to break a monolithic server down into microservices. It will be useful if you are not familiar with the concept of microservices or if you have never implemented them using the Rust programming language.

The following topics will be covered in this chapter:

What are microservices?

How to transform a traditional server architecture into microservices

The importance of Rust in microservices development

Technical requirements

This chapter doesn't have any special technical requirements, but now is a good time to install or update your Rust compiler. You can get it from Rust's official website: https://www.rust-lang.org/. I recommend that you use the rustup tool, which you can download from https://rustup.rs/.

If you have previously installed the compiler, you need to update it to the latest version using the following command:

rustup update

You can get the examples for this book from the GitHub page: https://github.com/PacktPublishing/Hands-On-Microservices-with-Rust-2018/.

What are microservices?

Modern users interact with microservices every day; not directly, but through web applications. Microservices are a flexible software development technique that helps to implement applications as a collection of independent services with loose relations.

In this section, we'll learn about why microservices are a good thing and why we need them. Microservices follow the REST architecture, which provides rules about using consistent HTTP methods. We will also look at how microservices can be deployed to the user, which is one of their main advantages.

Why we need microservices

Microservices are a modern software development approach that refers to the splitting of software into a suite of small services that are easier to develop, debug, deploy, and maintain. Microservices are tiny, independent servers that act as single business functions. For example, if you have an e-commerce suite that works as a monolith, you could split it into small servers that have limited responsibility and carry out the same tasks. One microservice could handle user authorization, the other could handle the users' shopping carts, and the remaining services could handle features such as search functionality, social-media integration, or recommendations.

Microservices can either interact with a database or be connected to other microservices. To interact with each other, microservices can use different protocols. These might include HTTP/REST, Thrift, ZMQ, and AMQP for the messaging communication style, WebSockets for streaming data, and even the old-fashioned Simple Object Access Protocol (SOAP) to integrate with existing infrastructure. We will use HTTP and REST in this book, because this is the most flexible way to provide and interact with a web API. We'll explain this choice later.

Microservices have the following advantages over monolithic servers:

You can use different programming languages

The code base of a single server is smaller

They have an independent DevOps process to build and deploy activities

They can be scaled depending on their implementation

If one microservice fails, the rest will continue to work

They work well within containers

Increased isolation between elements leads to better security

They are suitable for projects involving the Internet of Things

They are in line with the DevOps philosophy

They can be outsourced

They can be orchestrated after development

They are reusable

There are, however, a few drawbacks of microservices. These include the following:

Too many microservices overload the development process

You have to design interaction protocols

They can be expensive for small teams

A microservices architecture is a modern approach that can help you achieve the goal of having loosely coupled elements, where the servers are independent from one another, helping you to release and scale your application faster than with a monolithic approach, in which you put all your eggs in one basket.

How to deploy a microservice

Since a microservice is a small but complete web server, you have to deploy it as a complete server. But since it has a narrow scope of features, it's also simpler to configure. Containers can help you pack your binaries into an image of the operating system with the necessary dependencies to simplify deployment.

This differs from the case with monoliths, in which you have a system administrator who installs and configures the server. Microservices need a new role to carry out this function—DevOps. DevOps is not just a job role, but a whole software engineering culture in which developers become system administrators and vice versa. DevOps engineers are responsible for packing and delivering the software to the end user or market. Unlike system administrators, DevOps engineers work with clouds and clusters and often don't touch any hardware except their own laptop.

DevOps uses a lot of automation and carries the application through various stages of the delivery process: building, testing, packaging, releasing or deployment, and the monitoring of the working system. This helps to reduce the time it takes both to bring software to market and to release new versions of it. It's impossible to use a lot of automation for monolithic servers, because they are too complex and fragile. Even if you want to pack a monolith into a container, you have to deliver it as a large bundle and run the risk that any part of the application could fail. In this section, we'll have a brief look at containers and continuous integration. We will go into detail about these topics in Chapter 15, Packing Servers to Containers, and Chapter 16, DevOps of Rust Microservices – Continuous Integration and Delivery.

Docker

When we refer to containers, we almost always mean Docker containers (https://www.docker.com/). Docker is the most popular software tool for running programs in containers, which are isolated environments.

Containerization is a kind of virtualization where the scope of the application's resources is limited. This means the application works at its maximum performance level. This is different from full virtualization, where you have to run the full operating system with the corresponding overhead and run your application inside that isolated operating system.

Docker has become popular for a variety of reasons. One of these reasons is that it has a registry—the place where you can upload and download images of containers with applications. The public registry is Docker Hub (https://hub.docker.com/explore/), but you can have a private registry for a private or permissioned software.

Continuous Integration

Continuous Integration (CI) is the practice of keeping a master copy of the software and using tests and merging processes to expand the features of the application. The process of CI is integrated with the Source Code Management (SCM) process. When the source code is updated (for example, in Git), the CI tool checks it and starts the tests. If all tests pass, developers can merge the changes to the master branch.

CI doesn't guarantee that the application will work, because tests can be wrong, but it removes the need to run tests from developers on an automated system. This gives you the great benefit of being able to test all your upcoming changes together to detect conflicts between changes. Another advantage is that the CI system can pack your solution in a container, so the only thing that you have to do is deliver the container to a production cloud. The deployment of containers is also simple to automate. 

How to split a traditional server into multiple microservices

Around 10 years ago, developers used to use the Apache web server with a scripting language to create web applications, rendering the views on the server side. This meant that there was no need to split applications into pieces, and it was simpler to keep the code together. With the emergence of Single-Page Applications (SPAs), we only needed server-side rendering for special cases, and applications were divided into two parts: frontend and backend. Another tendency was that servers changed their processing method from synchronous (where every client interaction lives in a separate thread) to asynchronous (where one thread processes many clients simultaneously using non-blocking input/output operations). This trend improves the performance of single server units, meaning they can serve thousands of clients. This means that we don't need special hardware, proprietary software, or a special toolchain or compiler to write a tiny server with great performance.

The invasion of microservices happened when scripting languages became popular. By this, we are not only referring to languages for server-side scripting, but also to general-purpose high-level programming languages such as Python or Ruby. The adoption of JavaScript, which had always been asynchronous, for backend needs was particularly influential.

Since writing your own server was no longer hard, you could create separate servers for special cases and use them directly from the frontend application, without requiring rendering procedures on the server. This section has provided a short description of the evolution from monolithic servers to microservices. We are now going to examine how to break a monolithic server into small pieces.

Reasons to avoid monoliths

If you already have a single server that includes all backend features, you have a monolithic service, even if you start two or more instances of this service. A monolithic service has a few disadvantages—it is impossible to scale vertically, it is impossible to update and deploy one feature without interrupting all the running instances, and if the server fails, it affects all features. Let's discuss these disadvantages a little further. This might help you to convince your manager to break your service down into microservices.

Impossible to scale vertically

There are two common approaches to scaling an application:

Horizontally: Where you start a new instance of the application

Vertically: Where you improve an independent application layer that has a bottleneck

The simplest way to scale a backend is to start another instance of the server. This will solve the issue, but in many cases it is a waste of hardware resources. For example, imagine you have a bottleneck in an application that collects or logs statistics. This might only use 15% of your CPU, because logging might include multiple IO operations but no intensive CPU operations. However, to scale this auxiliary function, you will have to pay for the whole instance.

Impossible to update and deploy only one feature

If your backend works as a monolith, you can't update only a small part of it. Every time you add or change a feature, you have to stop, update, and start the service again, which causes interruptions.

When you have a microservice and you find a bug, you can stop and update only this microservice without affecting the others. As I mentioned before, it can also be useful to split a product into separate development teams.

The failure of one server affects all features

Another reason to avoid monoliths is that every server crash also crashes all of the features, which causes the application to stop working completely, even though not every feature is needed for it to work. If your application can't load new user interface themes, the error is not critical, as long as you don't work in the fashion or design industry, and your application should still be able to provide the vital functions to users. If you split your monolith into independent microservices, you will reduce the impact of crashes.

Breaking a monolithic service into pieces

Let's look at an example of an e-commerce monolith server that provides the following features:

User registration

Product catalog

Shopping cart

Payment integration

E-mail notifications

Statistics collecting

Old-fashioned servers developed years ago would include all of these features together. Even if you split it into separate application modules, they would still work on the same server. You can see an example structure of a monolithic service here:

In reality, a real server contains more modules than this, but we have separated them into logical groups based on the tasks they perform. This is a good starting point for breaking your monolith into multiple, loosely coupled microservices. In this example, we can break it further into the pieces represented in the following diagram:

As you can see, we use a balancer to route requests to microservices. You can actually connect to microservices directly from the frontend application.

Shown in the preceding diagram is the potential communication that occurs between services. For simple cases, you can use direct connections. If the interaction is more complex, you can use message queues. However, you should avoid using a shared state such as a central database and interacting through records, because this can cause a bottleneck for the whole application. We will discuss how to scale microservices in Chapter 12, Scalable Microservices Architecture. For now, we will explore REST API, which will be partially implemented in a few examples throughout this book. We will also discuss why Rust is a great choice for implementing microservices.

Definition of a REST API

Let's define the APIs that we will use in our microservice infrastructure using the REST methodology. In this example, our microservices will have minimal APIs for demonstration purposes; real microservices might not be quite so "micro". Let's explore the REST specifications of the microservices of our application. We will start by looking at a microservice for user registration and go through every part of the application.

User registration microservice

The first service is responsible for the registration of users. It has to contain methods to add, update, or delete users. We can cover all needs with the standard REST approach. We will use a combination of methods and paths to provide this user registration functionality:

A POST request to /user/ creates a new user and returns its id

A GET request to /user/id returns information related to a user with id

A PUT request to /user/id applies changes to a user with id

A DELETE request to /user/id removes a user with id

This service can use the E-mail notifications microservice and call its methods to notify the user about registration.
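To illustrate the mapping above, here is a minimal, standard-library-only Rust sketch of how such combinations of method and path could be turned into typed routes. The Method and UserRoute types and the route function are assumptions made for this example only; the book itself builds real routing on the hyper crate in later chapters.

#[derive(Debug, PartialEq)]
enum Method { Get, Post, Put, Delete }

#[derive(Debug, PartialEq)]
enum UserRoute {
    Create,      // POST /user/
    Read(u64),   // GET /user/id
    Update(u64), // PUT /user/id
    Delete(u64), // DELETE /user/id
}

// Map a method and a path onto a typed route of the user registration service.
fn route(method: Method, path: &str) -> Option<UserRoute> {
    if !path.starts_with("/user/") {
        return None;
    }
    let rest = &path["/user/".len()..];
    if rest.is_empty() {
        return if method == Method::Post { Some(UserRoute::Create) } else { None };
    }
    let id: u64 = rest.parse().ok()?;
    match method {
        Method::Get => Some(UserRoute::Read(id)),
        Method::Put => Some(UserRoute::Update(id)),
        Method::Delete => Some(UserRoute::Delete(id)),
        Method::Post => None,
    }
}

fn main() {
    assert_eq!(route(Method::Post, "/user/"), Some(UserRoute::Create));
    assert_eq!(route(Method::Get, "/user/42"), Some(UserRoute::Read(42)));
    assert_eq!(route(Method::Delete, "/user/7"), Some(UserRoute::Delete(7)));
    println!("routing sketch works");
}

Representing routes as an enum keeps the dispatch logic in a single match expression, which is close in spirit to the request handling we will write with hyper later.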

E-mail notifications microservice

The E-mail notifications microservice can be extremely simple and contains only a single method:

A POST request to /send_email/ sends an email to any address

This server can also count the sent emails to prevent spam or check that the email exists in the user's database by requesting it from the User registration microservice. This is done to prevent malicious use.

Product catalog  microservice

The Product catalog microservice tracks the available products and needs only weak relations with other microservices, except for the Shopping cart. This microservice can contain the following methods:

A POST request to /product/ creates a new product and returns its id

A GET request to /product/id returns information about the product with id

A PUT request to /product/id updates information about the product with id

A DELETE request to /product/id marks the product with id as deleted

A GET request to /products/ returns a list of all products (can be paginated by extra parameters)

Shopping cart microservice

The Shopping cart microservice is closely integrated with the User registration and Product catalog microservices. It holds pending purchases and prepares invoices. It contains the following methods:

A POST request to /user/uid/cart/ puts a product in the cart and returns the id of the item in the cart of the user with uid

A GET request to /user/uid/cart/id returns information about the item with id

A PUT request to /user/uid/cart/id updates information about the item with id (alters the quantity of items)

A GET request to /user/uid/cart/ returns a list of all the items in the cart

As you can see, we don't add an extra "s" to the /cart/ URL, and we use the same path for creating items and for getting a list, because the first handler reacts to the POST method, the second processes requests with the GET method, and so on. We also use the user's ID in the path. We can implement the nested REST functions in two ways:

Use session information to get the user's id. In this case, the paths contain a single object, such as /cart/id. We can keep the user's id in session cookies, but this is not reliable.

Add the id of the user to the path explicitly.
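As a rough illustration of the second option, the following standard-library-only sketch extracts both identifiers from a path shaped like /user/uid/cart/id. The parse_cart_path function and its error handling are illustrative assumptions, not code from the book.

// Extract (uid, item id) from a nested path such as "/user/12/cart/34".
fn parse_cart_path(path: &str) -> Option<(u64, u64)> {
    let mut parts = path.trim_matches('/').split('/');
    // Expect exactly: "user", uid, "cart", id.
    if parts.next()? != "user" {
        return None;
    }
    let uid: u64 = parts.next()?.parse().ok()?;
    if parts.next()? != "cart" {
        return None;
    }
    let id: u64 = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None;
    }
    Some((uid, id))
}

fn main() {
    assert_eq!(parse_cart_path("/user/12/cart/34"), Some((12, 34)));
    assert_eq!(parse_cart_path("/user/12/cart/"), None);
    println!("nested path sketch works");
}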

Payment Integration microservice

In our example, this microservice will be a third-party service, which contains the following methods:

A POST request to /invoices creates a new invoice and returns its id

A POST request to /invoices/id/pay pays for the invoice

Statistics collecting microservice

This service collects usage statistics and logs a user's actions to later improve the application. This service exports API calls to collect the data and contains some internal APIs to read the data:

A POST request to /log logs a user's actions (the id of the user is set in the body of the request)

A GET request to /log?from=?&to=? works only from the internal network and returns the collected data for the period specified
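A minimal sketch of how the from and to query parameters could be parsed with the standard library only; the parse_period function and the numeric timestamps are assumptions made for this example, not the book's code.

// Parse "from" and "to" from a query string such as "from=100&to=200".
fn parse_period(query: &str) -> Option<(u64, u64)> {
    let mut from = None;
    let mut to = None;
    for pair in query.split('&') {
        let mut kv = pair.splitn(2, '=');
        match (kv.next()?, kv.next()?) {
            ("from", value) => from = value.parse().ok(),
            ("to", value) => to = value.parse().ok(),
            _ => {}
        }
    }
    Some((from?, to?))
}

fn main() {
    assert_eq!(parse_period("from=100&to=200"), Some((100, 200)));
    assert_eq!(parse_period("from=100"), None);
    println!("query sketch works");
}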

This microservice doesn't conform strictly to the REST principles. A strict REST structure is useful for microservices that provide a full set of methods to add, modify, and remove data, but for other services, it is excessively restrictive. You don't have to follow a strict REST structure for all of your services, but it may be useful for some tools that expect it.

Transformation to microservices

If you already have a working application, you might transform it into a set of microservices, but you have to keep the application running at the highest rate and prevent any interruptions.

To do this, you can create microservices step by step, starting from the least important task. In our example, it's better to start from email activities and logging. This practice helps you to create a DevOps process from scratch and join it with the maintenance process of your app.

Reusing existing microservices

If your application is a monolith server, you don't need to turn all modules into microservices, because you can use existing third-party services and shrink the bulk of the code that needs rewriting. These services can help with many things, including storage, payments, logging, and transactional notifications that tell you whether an event has been delivered or not.

I recommend that you create and maintain services that determine your competitive advantage yourself and then use third-party services for other tasks. This can significantly shrink your expenses and the time to market.

In any case, remember the product that you are delivering and don't waste time on unnecessary units of your application. The microservices approach helps you to achieve this simply, unlike the tiresome coding of monoliths, which requires you to deal with numerous secondary tasks. Hopefully, you are now fully aware of the reasons why microservices can be useful. In the next section, we will look at why Rust is a promising tool for creating microservices.

Why Rust is a great tool for creating microservices

If you have chosen to read this book, you probably already know that Rust is an up-to-date, powerful, and reliable language. However, choosing it to implement microservices is not an obvious decision, because Rust is a system programming language that is often assigned to low-level software such as drivers or OS kernels. This is because you tend to have to write a lot of glue code or get into detailed algorithms with low-level concepts, such as pointers in system programming languages. This is not the case with Rust. As a Rust programmer, you've surely already seen how it can be used to create high-level abstractions with flexible language capabilities. In this section, we'll discuss the strengths of Rust: its strict and explicit nature, its high performance, and its great package system.

Explicit versus implicit

Up until recently, there hasn't been a well-established approach to using Rust for writing asynchronous network applications. Previously, developers tended to use two styles: either explicit control structures to handle asynchronous operations or implicit context switching. The explicit nature of Rust meant that the first approach outgrew the second. Implicit context switching is used in concurrent programming languages such as Go, but this model does not suit Rust for a variety of reasons. First of all, it has design limitations and it's hard or even impossible to share implicit contexts between threads. This is because the standard Rust library uses thread-local data for some functions and the program can't change the thread environment safely. Another reason is that an approach with context switching has overheads and therefore doesn't follow the zero-cost abstractions philosophy, because you would have a background runtime. Some modern libraries, such as actix, provide a high-level approach similar to automatic context switching, but actually use explicit control structures for handling asynchronous operations.

Network programming in Rust has evolved over time. When Rust was released, developers could only use the standard library. This method was particularly verbose and not suitable for writing high-performance servers. This was because the standard library didn't contain any good asynchronous abstractions. Also, even hyper, a good crate for creating HTTP servers and clients, processed requests in separate threads and could therefore only have a certain number of simultaneous connections.

The mio crate was introduced to provide a clear asynchronous approach to make high-performance servers. It contained functions to interact with asynchronous features of the operating system, such as epoll or kqueue, but it was still verbose, which made it hard to write modular applications.

The next abstraction layer over mio was the futures and tokio pair of crates. The futures crate contained abstractions for implementing delayed operations (like the Deferred concept in Twisted, if you're familiar with Python). It also contained types for assembling stream processors, which are reactive and work like a finite state machine.
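As a tiny, hedged sketch of this combinator style (assuming the futures 0.1 API targeted by this book; the values and closures are arbitrary), a delayed computation is assembled first and only runs once something drives it to completion:

// Cargo.toml (assumed): futures = "0.1"
use futures::future::{self, Future};

fn main() {
    // Nothing executes here yet; we only describe the computation.
    let task = future::ok::<u32, ()>(2)
        .map(|value| value * 2)
        .and_then(|value| future::ok(value + 1));

    // Drive the future to completion (blocking the current thread).
    let result = task.wait();
    assert_eq!(result, Ok(5));
    println!("computed: {:?}", result);
}

In a real microservice, an executor such as tokio drives such futures instead of the blocking wait call used here for brevity.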

Using the futures crate was a powerful way to implement high-performance and high-accuracy network software. However, it was a middleware crate, which made it hard to solve everyday tasks. It was a good base for rewriting crates such as hyper, because these can use explicit asynchronous abstractions with full control.

The highest level of abstraction today comes from crates that build on the futures, tokio, and hyper crates, such as rocket or actix-web. rocket includes high-level elements to construct a web server with a minimal number of lines. actix-web works as a set of actors, where your software is broken down into small entities that interact with one another. There are many other useful crates, but we will start with hyper as a basis for developing web servers from scratch. Using this crate, we will sit between low-level crates, such as futures, and high-level crates, such as rocket. This will allow us to understand both in detail.

Minimal amount of runtime errors

There are many languages suitable for creating microservices, but not every language has a reliable design to keep you from making mistakes. Most interpreted dynamic languages let you write flexible code that decides on the fly which field of the object to get and which function to call. You can often even override the rules of function calling by adding meta-information to objects. This is vital in meta-programming or in cases where your data drives the behavior of the runtime.

The dynamic approach, however, has significant drawbacks for software that requires reliability rather than flexibility. This is because any inaccuracy in the code can cause the application to crash. The first time you try to use Rust, you may feel that it lacks flexibility. This is not true, however; the difference is in the approach you use to achieve flexibility. With Rust, all your rules must be strict. If you create enough abstractions to cover all of the cases your application might face, you will get the flexibility you want.

Rust rookies who come from the JavaScript or the Python world might notice that they have to declare every case of serialization/deserialization of data, whereas with dynamic languages, you can simply unpack any input data to the free-form object and explore the content later. You actually have to check all cases of inconsistency during runtime and try and work out what consequences could be caused if you change one field and remove another. With Rust, the compiler checks everything, including the type, the existence, and the corresponding format. The most important thing here is the type, because you can't compile a program that uses incompatible types. With other languages, this sometimes leads to strange compilation errors such as a case where you have two types for the same crate but the types are incompatible because they were declared in different versions of the same crate. Only Rust protects you from shooting yourself in the foot in this way. In fact, different versions can have different rules of serialization/deserialization for a type, even if both declarations have the same data layout. 
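To make this explicit declaration style concrete, here is a minimal sketch, assuming the serde and serde_json crates with the derive feature enabled; the User struct and its fields are purely illustrative and not taken from the book.

// Cargo.toml (assumed):
// serde = { version = "1.0", features = ["derive"] }
// serde_json = "1.0"

use serde::{Deserialize, Serialize};

// Every field that crosses the wire is declared up front,
// so the compiler and serde check the format for us.
#[derive(Debug, Serialize, Deserialize)]
struct User {
    id: u64,
    email: String,
}

fn main() -> Result<(), serde_json::Error> {
    let user = User { id: 1, email: "user@example.com".into() };

    // Serialize to JSON.
    let body = serde_json::to_string(&user)?;
    println!("request body: {}", body);

    // Deserialize back; a missing or mistyped field is a hard error.
    let parsed: User = serde_json::from_str(&body)?;
    println!("parsed user id: {}", parsed.id);
    Ok(())
}

Any request body that is missing a field or uses the wrong type fails to deserialize with an error, instead of producing a half-formed object at runtime.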

Great performance

Rust is a system programming language. This means your code is compiled into native binary instructions for the processor and runs without unwanted overhead, unlike interpreted languages such as JavaScript or Python.

Rust also doesn't use a garbage collector and you can control all allocations of memory and the size of buffers to prevent overflow.

Another reason why Rust is so fast for microservices is that it has zero-cost abstractions, which means that most abstractions in the language weigh nothing. They turn into effective code during compilation without any runtime overhead. For network programming, this means that your code will be effective after compilation, that is, once you have added meaningful constructions in the source code.
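As a small illustrative sketch (not taken from the book), the iterator chain below expresses the same computation as the explicit loop, and the compiler is expected to compile both down to essentially the same machine code:

fn main() {
    let values = [1u64, 2, 3, 4, 5];

    // High-level abstraction: an iterator chain.
    let sum_of_squares: u64 = values.iter().map(|v| v * v).sum();

    // Low-level equivalent: an explicit loop.
    let mut manual = 0u64;
    for v in &values {
        manual += v * v;
    }

    assert_eq!(sum_of_squares, manual);
    println!("sum of squares: {}", sum_of_squares);
}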

Minimal dependencies burden

Rust programs are compiled into a single binary without unwanted dependencies. They need libc or another dynamic library only if you want to use OpenSSL or similar irreplaceable dependencies; all Rust crates are compiled statically into your code.

You may think that the compiled binaries are too large to be used as microservices. The word microservice, however, refers to the narrow logic scope, rather than the size. Even so, statically linked programs remain tiny by the standards of modern computers.

What benefits does this give you? You avoid having to worry about dependencies. Each Rust microservice uses its own set of dependencies compiled into a single binary. You can even keep microservices with obsolete features and dependencies alongside new microservices. In addition, Rust, in contrast with the Go programming language, has strict rules for dependencies. This means that the project resists breaking, even if someone forces an update of the repository containing a dependency you need.

How does Rust compare to Java? Java has microframeworks for building microservices, but you have to carry all dependencies with them. You can put these in a fat Java ARchive (JAR), which is a kind of compiled code distribution in Java, but you still need the Java Virtual Machine (JVM). Don't forget, too, that Java will load every dependency with a class loader. Also, Java bytecode is interpreted, and it takes quite a while for Just-In-Time (JIT) compilation to finish accelerating the code. With Rust, bootstrapping dependencies doesn't take a long time, because they are linked into the binary during compilation, and your code runs at full speed from the start since it has already been compiled into native code.

Summary

In this chapter, we have mastered the basics of microservices. Simply put, a microservice is a compact web server that handles specific tasks. For example, microservices can be responsible for user authentication or for email notifications. They make running units reusable. This means you don't need to recompile or restart units if they don't require any updates. This approach is simpler and more reliable in deployment and maintenance.

We have also discussed how to split a monolithic web server that contains all of its business logic in a single unit into smaller pieces and join them together through communication, in line with the ideology of loose coupling. To split a monolithic server, you should separate it into domains that are classified by what tasks the servers carry out.

In the last section of this chapter, we've looked at why Rust is a good choice for developing microservices. We touched on dependencies management, the performance of Rust, its explicit nature, and its toolchain. It's now time to dive deep into coding and write a minimal microservice with Rust.

In the next chapter, we will start writing microservices with Rust using the hyper crate, which provides all the necessary features to write a compact asynchronous HTTP server.

Further reading