Description

From the authors of the Software Architecture with C# and .NET series comes this practical and grounded showcase of microservices using the .NET stack.
Written for .NET developers entering the world of modern cloud and distributed applications, it shows you when microservices and serverless architectures are the right choice for building scalable enterprise solutions and when they’re not. You’ll gain a realistic understanding of their use cases and limitations. Rather than promoting microservices as a one-size-fits-all solution, it encourages thoughtful adoption based on real-world needs.
Following a brief introduction and important setup, the book helps you prepare for practical application through examples such as a ride-sharing website. You’ll work with Docker, Kubernetes, Azure Container Apps, and the new .NET Aspire with considerations for security, observability, and cost management. The book culminates in a complete event-driven application that brings together everything you've covered.
By the end of the book, you’ll have a well-rounded understanding of cloud and distributed .NET—viewed through the lens of two industry veterans.




Practical Serverless and Microservices with C#

Build resilient and secure microservices with the .NET stack and embrace serverless development in Azure

Gabriel Baptista

Francesco Abbruzzese

Practical Serverless and Microservices with C#

Copyright © 2025 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Portfolio Director: Ashwin Nair

Relationship Lead: Nitin Nainani

Project Manager: Ruvika Rao

Content Engineer: Kinnari Chohan

Technical Editor: Sweety Pagaria

Copy Editor: Safis Editing

Indexer: Pratik Shirodkar

Proofreader: Kinnari Chohan

Production Designer: Vijay Kamble

Growth Lead: Anamika Singh

First published: June 2025

Production reference: 1260525

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK.

ISBN 978-1-83664-201-5

www.packtpub.com

Contributors

About the authors

Gabriel Baptista is a seasoned technology professional with over two decades of experience in software development and team leadership. He currently leads a team focused on building application software for retail and industry. In parallel, he serves as a member of a technical advisory board, teaches computer engineering at the undergraduate level, and has co-founded technology start-ups in the fields of industrial automation and intelligent logistics. Throughout his career, he has contributed extensively to academia, teaching subjects related to software engineering and information technology at various educational institutions.

To my beloved family - Denise, Murilo, and Heitor - who are always by my side.

To my colleagues at Toledo do Brasil, especially Aecio Carvalho, whose support and example have been a source of inspiration over the years.

Francesco Abbruzzese is the author of the MVC Controls Toolkit and Blazor Controls Toolkit libraries. He has contributed to the diffusion and evangelization of the Microsoft web stack since the first version of ASP.NET MVC. His company, Mvcct Team, offers web applications, tools, and services for web technologies. He moved from AI systems, where he implemented one of the first decision support systems for financial institutions, to top-10 video game titles such as Puma Street Soccer.

To my beloved parents, to whom I owe everything. To all colleagues who shared various projects with me and who contributed to the success of my company’s products. Their examples and suggestions were fundamental for the development of this book. To all reviewers and to the entire Packt team, whose suggestions noticeably improved the book’s quality.

About the reviewer

Moien Tajik is a Principal Software Engineer with deep expertise in .NET, C#, and cloud-native architectures. With over 9 years of professional experience, he has led the development of scalable software systems for both enterprise and consumer-facing applications. He currently works at AIHR in the Netherlands and previously served as a Technical Fellow at Alibaba Travels, one of Iran’s largest tech companies. He frequently mentors other engineers and enjoys contributing to open source and personal projects. When he’s not coding, he explores new technologies, builds start-ups like MenuDish, and shares his learnings with the tech community on the @ProgrammingTip Telegram channel. You can connect with him on GitHub, LinkedIn, and Twitter: @MoienTajik.

Tomasz Pęczek is a seasoned staff+ engineer dedicated to crafting solutions that power companies across various sectors, including healthcare, banking, e-learning, and e-discovery. Throughout his career, Tomasz has transitioned between developer, architect, and consultant roles. Over the past few years, his primary focus has been on leveraging Azure to facilitate cloud adoption and building solutions tailored to meet the true needs of his clients. Tomasz participates in the community through speaking engagements at conferences and user groups. Additionally, he shares technical articles on his blog at tpeczek.com. His commitment to sharing his knowledge has earned him a Microsoft MVP title in the Azure and Developer Technologies categories.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/PSMCSharp

Preface

When we started writing this book, our main goal was to deliver hands-on experience with the main approach for developing cloud-native solutions: distributed applications. We decided to describe various options for building microservices architectures, spanning from serverless implementations to Kubernetes orchestration.

Since our main technical background is .NET and Azure, we decided to focus on these, bringing an opportunity for developers to understand how and when serverless and microservices are the best ways to rapidly and consistently create enterprise solutions, thus enabling .NET developers to perform a career jump by entering the world of modern cloud-native and distributed applications. With this book, you will do the following:

- Learn how to create serverless environments for developing and debugging in Azure
- Implement reliable microservices communication and computation
- Optimize microservices applications with the help of orchestrators such as Kubernetes
- Explore Azure Functions in depth along with triggers for IoT and background activities
- Use Azure Container Apps to simplify creating and managing containers
- Learn how to properly secure a microservices application
- Take costs and usage limits seriously and calculate them in the correct way

We believe that by reading this book, you will find great tips and practical examples that will help you write your own applications. We hope this focused material expands your knowledge of this important software development subject.

Who this book is for

This book is for engineers and senior software developers aspiring to move toward modern cloud development and distributed applications, evolving their knowledge about microservices and serverless to get the best out of these architectural models.

What this book covers

Chapter 1, Demystifying Serverless Applications, introduces serverless applications, discussing the advantages and disadvantages and the underlying theory.

Chapter 2, Demystifying Microservices Applications, introduces microservices applications, discussing their advantages and disadvantages, basic principles, definitions, and design techniques.

Chapter 3, Setup and Theory: Docker and Onion Architecture, describes prerequisite technologies, such as Docker and Onion architecture, to implement modern distributed applications.

Chapter 4, Azure Functions and Triggers Available, discusses the possible settings related to Azure Functions and the triggers available for creating serverless applications.

Chapter 5, Background Functions in Practice, implements Azure Functions triggers that enable background processing. Timer, Blob, and Queue triggers are detailed, along with their advantages, disadvantages, and when to use them.

Chapter 6, IoT Functions in Practice, discusses the importance of Azure Functions for IoT solutions.

Chapter 7, Microservices in Practice, describes the implementation of a microservice with .NET in detail.

Chapter 8, Practical Microservices Organization with Kubernetes, describes Kubernetes in detail and how to use it to orchestrate your microservices applications.

Chapter 9, Simplifying Containers and Kubernetes: Azure Container Apps and Other Tools, describes tools that simplify the usage of Kubernetes, and introduces Azure Container Apps as a simplified option for microservices orchestration, discussing its costs, advantages, and disadvantages.

Chapter 10, Security and Observability for Serverless and Microservices Applications, discusses security and observability for microservice scenarios, presenting the main options and techniques available for these two important aspects of modern software development.

Chapter 11, The Car Sharing App, presents the sample application of the book, using both serverless and microservices applications for understanding how an event-driven application works.

Chapter 12, Simplifying Microservices with .NET Aspire, describes .NET Aspire as a good option for testing microservices during their development.

To get the most out of this book

Prior experience with C#/.NET and the Microsoft stack (Entity Framework and ASP.NET Core) is required to get the most out of this book.

Download the example code files

The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Practical-Serverless-and-Microservices-with-Csharp. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781836642015.

Conventions used

There are several text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and X/Twitter handles. For example: “Execute the docker build command.”

A block of code is set as follows:

public class TownBasicInfoMessage
{
    public Guid Id { get; set; }
    public string? Name { get; set; }
    public GeoLocalizationMessage? Location { get; set; }
}

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

FROM eclipse-temurin:11
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac Hello.java
CMD ["java", "Hello"]

Any command-line input or output is written as follows:

docker run --name myfirstcontainer simpleexample

Bold: Indicates a new term, an important word, or words that you see on the screen. For instance, words on menus or dialog boxes appear in the text like this. For example: “Select System info from the Administration panel.”

Warnings or important notes appear like this.

Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book’s title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you reported this to us. Please visit http://www.packtpub.com/submit-errata, click Submit Errata, and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit http://authors.packtpub.com/.

Share your thoughts

Now that you’ve finished Practical Serverless and Microservices with C#, we’d love to hear your thoughts! If you purchased the book from Amazon, please click here to go straight to the Amazon review page for this book and share your feedback or leave a review on the site that you purchased it from.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there: you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below

https://packt.link/free-ebook/9781836642015

2. Submit your proof of purchase

3. That’s it! We’ll send your free PDF and other benefits to your email directly

1

Demystifying Serverless Applications

When it comes to software development, we are living in incredible times. With the evolution of cloud platforms and the rise of modern technologies, being a developer nowadays is both a wonderful way to live and a challenging profession to follow. There are so many ways to deliver an application and so many innovative technologies to explore that we may fall into a vicious circle where we focus more on the technologies than on the actual solution.

This chapter aims to present the serverless architecture and explore how you can use this approach to implement a microservices application. To achieve this, it covers the theory behind serverless and provides an understanding of how it can be a viable alternative for microservices implementation.

The chapter also explores how Microsoft implements Function as a Service (FaaS), using Azure Functions as one of the options for building microservices. Two alternative development platforms will be presented: Visual Studio Code and Visual Studio.

By the end of this chapter, you will understand the different triggers available in Azure Functions and be ready to create your first function.

Technical requirements

This chapter requires the free Visual Studio 2022 Community edition or Visual Studio Code. Details about how to debug Azure Functions in each development environment will be presented throughout the chapter. You will also need an Azure account to create the sample environment. You can find the sample code for this chapter at https://github.com/PacktPublishing/Practical-Serverless-and-Microservices-with-Csharp.

What is serverless?

When someone asks you to develop a solution, the last thing they usually care about is how the infrastructure will work. The truth is, even for developers, the most important thing about infrastructure is that it simply works well.

Considering this reality, the possibility of having a cloud provider that dynamically manages server allocation and provisioning, leaving the underlying infrastructure to the provider, might be the best scenario.

That is what serverless architecture promises: a model we can use to build and run applications and services without having to manage the underlying infrastructure ourselves! This approach abstracts server management entirely, allowing developers to focus on their code.

The first cloud solution provider that presented this concept was Amazon, with the launch of AWS Lambda in 2014. After that, Microsoft and Google also provided similar solutions with Microsoft Azure Functions and Google Cloud Functions. As we mentioned before, the focus of this book will be Azure Functions.

Serverless computing offers many advantages. The main one is that you do not have to worry about scaling. Additionally, the cloud solution provider maintains the reliability and security of the environment. Besides that, with this approach, you have the option to pay as you go, so you only pay for what you use, enabling a sustainable model of growth.

Serverless can also be considered a good approach for accelerating software development, since you only focus on the code needed to deliver the program. On the other hand, you may have difficulty overseeing a considerable number of functions, so their organization needs to be handled well to avoid problems when creating a solution with many functions.

Since the introduction of serverless, various kinds of functions have been created. Each function is started by a trigger, and as soon as the function is triggered, its execution can be done in different programming languages.

Now, let us check whether functions can be considered microservices or not.

Is serverless a way to deliver microservices?

If you look at the definition of microservices, you will find the concept of delivering an application as loosely coupled components that represent the implementation of a business capability. You can build something like that with a couple of functions, so yes, serverless is a way to deliver microservices.

Some specialists even consider serverless architecture an evolution of microservices, since the focus of serverless architecture is to deliver scalability in a safe environment, enabling a set of functions to be independently developed, tested, and deployed, which brings a lot of flexibility to the software architecture. That is exactly the main philosophy of microservices.

Let us imagine, as an example, a microservice responsible for authenticating users. You may create specific functions for registering, logging in, and resetting passwords. Considering that this set of functions can be created in a single serverless project, you have both the flexibility of creating separate functions and the possibility of defining the purpose of the microservice.
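A minimal sketch of how such a microservice might be organized, assuming the .NET isolated worker model with ASP.NET Core integration (the function names and response messages here are invented for illustration):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

// Hypothetical authentication microservice: three related HTTP-triggered
// functions grouped in a single serverless project.
public class AuthFunctions
{
    [Function("Register")]
    public IActionResult Register(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // User validation and persistence would go here.
        return new OkObjectResult("User registered");
    }

    [Function("Login")]
    public IActionResult Login(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // Credential checks and token issuance would go here.
        return new OkObjectResult("Token issued");
    }

    [Function("ResetPassword")]
    public IActionResult ResetPassword(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // Sending the reset email would go here.
        return new OkObjectResult("Reset email sent");
    }
}

Each function remains independently invocable, while the project as a whole delimits the purpose of the microservice.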

The serverless project will naturally support integration with databases, messaging queues, OpenAPI specifications, and other APIs, enabling the design patterns typically needed for a robust microservice architecture. It is also important to mention that keeping microservices isolated, small, and preferably reusable is a best practice worth following.

Now that you understand that you can write microservices using serverless approaches, let us understand how Microsoft Azure presents serverless in its platform.

How does Microsoft Azure present serverless?

In 2016, Microsoft introduced Azure Functions as a Platform-as-a-Service (PaaS) offering designed to deliver FaaS capabilities. This offering enables innovation at scale for business transformation. Today, Azure Functions gives us the opportunity to power up applications using multiple programming languages, including C#, JavaScript, F#, Java, and Python.

One of the standout features of Azure Functions is its seamless integration with other Azure services and third-party APIs. For instance, it can easily connect to different Azure databases (from Azure SQL Server to Azure Cosmos DB), Azure Event Grid for event-based architecture, and Azure Logic Apps for workflow automation. This connectivity simplifies the process of building complex, enterprise-grade applications that leverage multiple services.

Over the years, the possibilities with Azure Functions have evolved. Today, we can even manage stateful workflows and long-running operations using Azure Durable Functions. With these, you can orchestrate complex processes that span multiple function executions.
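As a rough illustration of the idea, here is a minimal sketch of a durable orchestration, assuming the isolated worker model with the Durable Functions worker extension (Microsoft.DurableTask); the orchestration and activity names are hypothetical:

using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class RideOrchestration
{
    // The orchestrator persists its state between the awaits, so the
    // overall process survives across multiple function executions.
    [Function(nameof(RunOrchestrator))]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var booking = await context.CallActivityAsync<string>(nameof(ReserveCar), "request-1");
        var receipt = await context.CallActivityAsync<string>(nameof(ChargeCustomer), booking);
        return receipt;
    }

    [Function(nameof(ReserveCar))]
    public static string ReserveCar([ActivityTrigger] string request)
        => $"booking for {request}";

    [Function(nameof(ChargeCustomer))]
    public static string ChargeCustomer([ActivityTrigger] string booking)
        => $"receipt for {booking}";
}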

But Microsoft has not only created an environment for coding functions. They have also created a complete pipeline for developers, following the DevSecOps process that’s now widely discussed and used in enterprise solutions. Developers can use tools such as Azure Pipelines, GitHub Actions, and other CI/CD services to automate the deployment process. You can also monitor and diagnose events in these functions using Azure Monitor and Application Insights, which facilitate troubleshooting and optimization.

The PaaS solution also enables different setups to adjust scalability and security aspects. Depending on the hosting plan you choose, you get different scaling options, as you can check here:

- Consumption plan: The basic and most cost-effective option to get started with Azure Functions. Ideal for event-driven workloads with automatic scaling.
- Flex Consumption plan: Offers rapid, elastic scaling combined with support for private networking (VNet integration).
- Dedicated plan (App Service plan): Suitable for long-running functions and scenarios requiring more predictable performance and resource allocation.
- Azure Container Apps plan: A solid choice for microservices-based architectures that use multiple technology stacks or require greater flexibility.
- Premium plan: Designed for high-performance scenarios with the ability to scale on demand, providing support for advanced features such as VNet, longer execution times, and pre-warmed instances.

In summary, Microsoft Azure delivers serverless FaaS through Azure Functions, offering a powerful, flexible, and scalable platform that enhances the development and deployment of serverless applications. By using Azure Functions, developers can build and maintain responsive, cost-effective solutions. Now, let us explore how to create an Azure function in the Azure portal.

Creating your first serverless app in Azure

There are not many steps for creating your first serverless app in Azure. You can do it in a straightforward process when using the Azure portal. Follow these steps to get started:

1. Log in to the Azure portal. To do so, open your web browser and navigate to the Azure portal at https://portal.azure.com/. Sign in with your Azure account credentials.
2. In the Azure portal, click on the Create a resource button located in the upper-left corner.

Figure 1.1: Creating a resource in the Azure portal

3. In the Search services and marketplace window, search for Function App and select it from the search results. This service will also be presented in the Popular Azure services section.
4. Click the Create button to start the creation process.

Figure 1.2: Selecting Function App for creation

As soon as you select Function App, you will be prompted to select the required hosting plan. Today, we have five options for hosting plans using Azure Functions. These plans vary according to the scaling behavior, cold start, the possibility of usage of a virtual network, and, obviously, pricing. The Consumption plan is exactly what serverless is all about, where you have no idea of where and how your code is running, and you only pay for the execution of the code. On the other hand, when you select the App Service or Container Apps environment plans, you will have more control over the hardware and consumption of resources, which means you get the flexibility of using Azure Functions in your solution, along with the management needed for larger applications.

The following screen will be presented to you as soon as you select to create an Azure function app. As we described previously, you will need to decide on the hosting plan according to your needs.

Figure 1.3: Function App hosting plans

For the purpose of this chapter, we will select the Consumption plan. Once you select this option, you will find a wizard to help you create the service. In this wizard, you will need to fill in the following information:

- Basics: Fill in the required fields such as Subscription, Resource Group, Function App name, Region, and Operating System. Ensure that the name you choose is unique. In Runtime stack, select the programming language of your functions. We will select .NET 8 Isolated worker model, but there are other options, as we presented before. It is worth mentioning that in-process models will be retired in 2026, so do not start projects using this approach.
- Storage: The function app needs an Azure storage account by default.
- Networking: This is where you will define whether the Azure function will be available for public access or not.
- Monitoring: Enable Application Insights to monitor your function app for better diagnostics and performance tracking. Don’t forget that Azure Monitor logs will cause a cost increase.
- Deployment: It is also possible to initiate the setup of the deployment desired for the function app. This is useful for enabling continuous deployment using GitHub Actions as the default.
- Tags: Tagging the function app is considered a good practice for facilitating FinOps activity in professional environments.

In Chapter 2, Demystifying Microservices Applications, we will discuss the best way to interface microservices with the external world. For security reasons, it is not recommended that you expose functions directly to the public. You may decide to deliver them through an application gateway, such as Azure Application Gateway, or you can use Azure API Management as the entry point for the APIs you develop using Azure Functions.

Once you click on Review and create, you will be able to check all the settings. Review your configuration and click the Create button again to deploy your function app:

Figure 1.4: Reviewing the function app setup

Once the deployment is complete, navigate to your new function app by clicking on the Go to resource button. You will find the function app running properly there:

Figure 1.5: Function app running

Now, it is time to understand the possibilities for development using Azure Functions and start coding.

Understanding the triggers available in Azure Functions

The basic idea of Azure Functions is that each function requires a trigger to start its execution. Once the trigger is fired, the execution of your code will start shortly afterward. However, the time it takes for execution to begin can vary depending on the selected hosting plan. For instance, in the Consumption plan, functions may experience cold starts – that is, a delay that occurs when the platform needs to initialize resources. It is also important to understand that a function can be triggered more than once at the same time, which enables parallel execution.

Azure Functions offers a variety of triggers that allow developers to execute code in response to different events. Here we have the most used triggers:

- HTTP Trigger: This trigger allows the function to be executed via an HTTP request. It is useful for creating APIs and webhooks, where the function can be called using standard HTTP methods.
- Timer Trigger: This trigger runs the function on a schedule based on the NCRONTAB model. It is ideal for tasks that need to be performed at regular intervals, such as cleanup operations, data processing, or sending out periodic reports. It is important to mention that the same timer trigger function does not run again until its first execution is done. This behavior helps prevent overlapping executions and potential conflicts.
- Blob Storage Trigger: This trigger runs the function when a new blob is created or updated in an Azure Blob Storage container. It is useful for processing or transforming files, such as images or logs, as they are uploaded.
- Queue Storage Trigger: This trigger runs the function in response to messages added to Azure Queue Storage. It is useful for building scalable and reliable background processing systems.
- Event Grid Trigger: This trigger runs the function in response to events published to Azure Event Grid. It is useful for reacting to events from various Azure services, such as resource creation, modification, or deletion.
- Service Bus Trigger: This trigger runs the function when messages are received in an Azure Service Bus queue or topic. It is ideal for handling inter-application messaging and building complex workflows.
- Cosmos DB Trigger: This trigger runs the function in response to creation and updates in Azure Cosmos DB. It is useful for processing data changes in real time, such as updating a search index or triggering additional data processing.

These triggers offer flexibility and scalability, allowing developers to build event-driven applications that can respond to distinct types of events seamlessly. It is important to say that there are other triggers available in Azure Functions, and we will discuss them in more detail in the next chapters.
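To make the programming model concrete, here is a small sketch of two of these triggers in an isolated worker project; the function names, the schedule, and the queue name are placeholders:

using Microsoft.Azure.Functions.Worker;

public class TriggerExamples
{
    // Timer trigger: an NCRONTAB expression, here firing every five minutes.
    [Function("PeriodicCleanup")]
    public void PeriodicCleanup([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Scheduled work goes here.
    }

    // Queue Storage trigger: fires for each message added to the "orders" queue.
    [Function("ProcessOrder")]
    public void ProcessOrder(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")] string message)
    {
        // Background processing of the dequeued message goes here.
    }
}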

Coding with Azure Functions

The focus of this topic is to rapidly present some ways to develop Azure functions. Throughout the book, we will present a use case related to car sharing. As you will see in detail in Chapter 2, Demystifying Microservices Applications, each microservice must have a health check endpoint. Let us develop a sample of this health check API.

Coding Azure functions using VS Code

Creating an HTTP trigger Azure function using VS Code involves several well-defined steps. Here is a detailed guide to help you through the process.

There are some prerequisites to enable the development of Azure functions using VS Code, as follows:

- Ensure you have VS Code installed on your machine. The use of VS Code will help you not only develop the Azure functions needed but also manage your Azure account using the Azure Tools extension.
- It is recommended that you sign in to your Azure account to create the new function. The C# Dev Kit may also be installed.
- The GitHub Copilot extension can also be installed to help you solve coding problems and, at the same time, guide you while writing code.
- Install the Azure Functions extension for VS Code. This VS Code extension will facilitate the development of functions, giving you wizards for each function trigger desired.
- Install the Azurite extension for VS Code. This VS Code extension is an open source Azure Storage API-compatible server for debugging Azure Functions locally.
- Make sure you have the Azure Functions Core Tools and the .NET SDK installed if you are using C#.

Once you have set up your environment, you will have something like the following figure:

Figure 1.6: VS Code ready to write Azure functions

Once all the prerequisites are set, in the Azure tab, go to WORKSPACE and select Create Function Project…. Next, perform the following steps:

1. Choose a location for your project and select your preferred programming language.
2. Follow the prompts to create a new HTTP trigger function. You can name it Health and call the namespace CarShare.Function.
3. You will need to decide on the access rights for this function. For this example, you can choose Anonymous. We will discuss each of the security options later.
4. Open the newly created function file. You will see template code for an HTTP trigger function.
5. Modify the function to meet your specific requirements, which, in this case, means responding that the function is working properly. Notice that the template is a GET and POST function. For the purpose we have defined, you can change the code to only be an HTTP GET function.
6. Save your changes.

For running and debugging locally, you just need to press F5 or navigate to Run > Start Debugging. VS Code will start the Azure Functions host, and you will see the function URL in the output window. Then, you can use tools such as Postman or your browser to send HTTP requests to your function endpoint.

It is worth mentioning that for running Azure Functions locally, you will need to allow PowerShell scripts to run without being digitally signed. This can be a problem depending on the security policies provided by your company.

Once the function is running, you can treat it the same as any other type of software project, and even debugging will work properly. The trigger will depend on the function you set. The following figure shows the code of the function program, where you can see the response to the caller with a status of 200 by using OkObjectResult with the message “Yes! The function is live!” and the UTC time.

Figure 1.7: Azure Functions running locally
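For reference, a minimal version of such a health function, assuming the .NET 8 isolated worker model, might look like the following (the code in the figure may differ slightly):

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

namespace CarShare.Function;

public class Health
{
    // GET-only HTTP trigger that returns 200 with a liveness message
    // and the current UTC time.
    [Function("Health")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
    {
        return new OkObjectResult($"Yes! The function is live! {DateTime.UtcNow}");
    }
}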

As you have created a function app connected to a GitHub repository with the deployment process handled by GitHub Actions, once you commit and push the code to GitHub, GitHub Actions will automatically build the function and deploy it as a function app.

Figure 1.8: Function app deployed using GitHub Actions

It is not the purpose of this book to discuss CI/CD strategies, but you will certainly need to think about them when it comes to professional development.

The result of this deployment can be checked in the Azure portal, where the function developed will be available in the list of functions. It is worth noting that a function app can handle more than one function at the same time.

Figure 1.9: Health function available in the function app

The function can be executed as soon as it is published to Azure. Since the sample function was developed as a GET HTTP trigger, we can check that it is working by accessing the API in a web browser.

Figure 1.10: Health function running properly

If you don’t have a live CI/CD pipeline, you can also publish your Azure function directly from the VS Code IDE. To do so, you may use the Azure Functions extension provided by VS Code.

There are a few steps to follow in this case. The first one is to select the action to deploy the function in the VS Code prompt:

Figure 1.11: Deploying to Azure using VS Code

After that, you will need to select the corresponding subscription and the name of the new function app you want to deploy, considering a new function:

Figure 1.12: Creating a new function app

The current process proposed by the extension is to deploy the Azure function in the Flex Consumption plan. This option is available only in some specific locations:

Figure 1.13: Defining the location for the new function app

The definition of the runtime stack is also important to get the most out of your Azure function. In the case of the Flex Consumption plan, you will also be asked for the memory size of each instance and the maximum number of instances available for parallel calls.

Figure 1.14: Defining the runtime stack for the new function app

Once these settings are defined, your Azure function will be deployed correctly. You can also redeploy functions later using the same technique, without needing to recreate the Azure function app every single time.

Figure 1.15: Function app properly deployed

Last, but not least, the Azure portal also gives you the ability to monitor and manage the deployed functions. Once this process is done, you can monitor your function’s performance and logs. In the Monitoring section of your function app, you can view execution details, track failures, and analyze performance metrics.

Coding Azure functions using Visual Studio

Visual Studio is one of the best options for developing Azure functions. To do so, you must install the Azure development workload, which enables Azure Functions development natively on the platform.

Once you have done this, the same project you created using VS Code will be available for you to use in Visual Studio. The difference between VS Code and Visual Studio in this case is that Visual Studio provides an easier debugging setup and many visual dialogs that can facilitate your decisions.

Figure 1.16: Creating a new Azure function for the function app

These dialogs simplify the development process, so if you have the opportunity to use Visual Studio, this will be the best option.

Figure 1.17: Defining the Azure function trigger type

Once again, when you create a Function Apps project, you can add more than one function to this project, which is extremely useful for microservices solutions. In the following example, we have added a second HTTP trigger function called Status to help you understand this possibility and to let you see how these functions work together in a single function app.

Figure 1.18: Function app with more than one function

It is important to mention that the same code developed initially using VS Code can continue to be maintained using Visual Studio, and vice versa. This is great because you can have different developers in the same team using the two environments and this will not cause a problem, at least not with Function Apps projects.

Visual Studio is an excellent option for developing Azure functions due to its comprehensive setup environment for debugging and integrated visual dialogs, which make development easier. Developers can switch between VS Code and Visual Studio without compatibility issues, facilitating team collaboration. Multiple functions, such as HTTP triggers, can be in a single Function Apps project, supporting microservices solutions.

Summary

This chapter explored the evolution of cloud platforms and the rise of modern technologies, emphasizing the importance of focusing on solutions rather than just technologies. The chapter highlighted the advantages of serverless computing, such as scalability, reliability, security, and cost-effectiveness, while also addressing potential challenges. It discussed how serverless architecture can deliver microservices and the benefits of using Microsoft Azure Functions for building and deploying serverless applications. The chapter also provided practical guidance on creating and managing Azure functions using tools such as VS Code and Visual Studio.

In the next chapter, we will discuss how microservices applications can be defined and designed in enterprise scenarios.

Questions

What are the main advantages of using serverless computing as mentioned in the chapter?

Serverless computing provides several advantages, including automatic scaling, cost-efficiency through a pay-as-you-go model, and reduced infrastructure management. Developers do not need to worry about provisioning or maintaining servers, which allows them to focus on delivering solutions faster and more efficiently.

It also promotes software development acceleration by letting developers focus solely on the code. Additionally, the environment’s reliability and security are managed by the cloud provider, enabling scalable and sustainable solutions without sacrificing performance or safety.

How can serverless architecture be used to deliver microservices?

Serverless architecture supports the microservices model by allowing developers to create independent, small, and reusable functions that represent distinct business capabilities. These functions can be deployed, tested, and scaled independently, following the core principles of microservices.

The chapter gave an example of a user authentication microservice, where separate functions such as registration, login, and password reset were implemented within a single serverless project. This flexibility enhances the modularity and maintainability of applications built using microservices principles.

What are the key triggers available in Azure Functions and their purposes?

Azure Functions can be triggered by a variety of events. The main triggers are HTTP trigger (for web requests), timer trigger (scheduled tasks), Blob Storage trigger (file uploads or changes), Queue Storage trigger (message processing), Event Grid trigger (event handling from Azure services), Service Bus trigger (messaging between applications), and Cosmos DB trigger (database change processing).

Each trigger allows developers to build event-driven applications with flexibility and scalability. For example, timer triggers are ideal for recurring tasks, while HTTP triggers are commonly used for APIs and webhooks. This variety of triggers supports the development of diverse and responsive solutions.

What steps are necessary to create a serverless application in the Azure portal?

To create a serverless application in Azure, the developer must log in to the Azure portal and create a new Function App resource. During the setup, they need to choose the hosting plan (e.g., Consumption plan), define project details such as region, runtime stack, storage account, and networking options, and enable monitoring via Application Insights.

After reviewing the configurations, the developer clicks Create to deploy the function app. Once deployed, they can navigate to the resource, start coding, and manage it directly from the portal or via development tools such as Visual Studio or VS Code.

How does Azure Functions integrate with other Azure services and third-party APIs?

Azure Functions integrates seamlessly with various Azure services such as Azure SQL, Cosmos DB, Event Grid, Service Bus, and Logic Apps. This enables developers to build complex workflows, automate tasks, and create highly responsive applications using existing Azure infrastructure.

Additionally, Azure Functions can connect to third-party APIs and services, supporting hybrid architectures. This integration capability allows developers to extend their applications across platforms, enhancing the flexibility and scalability of cloud-native solutions.

Further reading

- Azure Functions documentation: https://learn.microsoft.com/en-us/azure/azure-functions/
- Azure API Management documentation: https://learn.microsoft.com/en-us/azure/api-management/
- Azure Application Gateway documentation: https://learn.microsoft.com/en-us/azure/application-gateway/overview

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/PSMCSharp

2

Demystifying Microservices Applications

Over the last decade, microservices architecture has taken a central role in modern software development. In this chapter, we will define what microservices architecture is. You will learn the reasons behind the success of microservices, their pros and cons, and when it is worth adopting them. Starting with the problems that led to their conception, we will discuss typical scenarios of when to use them, the impact of their adoption on overall project costs, and the returns you might expect.

You will get insights into the organization of microservices, discovering how it differs from the usual monolithic application by resembling more of an assembly line than user-requests-driven processing. This newly conceived organization brings with it new challenges that require ad hoc techniques to enforce coherence, coordination, and reliability.

Moreover, new patterns and best practices have been created to tackle challenges with microservices and optimize their advantages. We will introduce and summarize some fundamental patterns here, while their practical implementation, together with more specific patterns, will be detailed throughout the remainder of the book.

More specifically, this chapter covers the following:

- The rise of Service-Oriented Architectures (SOAs) and microservices
- The definition and organization of microservices architecture
- When is it worth adopting microservices architectures?
- Microservices common patterns

The rise of Service-Oriented Architectures (SOAs) and microservices

Briefly defined, microservices are chunks of software deployed on computer networks that communicate through network protocols. However, this is not all; they must also obey a set of further constraints.

Before giving a more detailed definition of what a microservices architecture is, we must understand how the idea of microservices evolved and what kinds of problems it was called on to solve. We will describe the two main steps of this evolution across two separate subsections.

The rise of SOA

The first step in the direction of microservices was taken by the so-called service-oriented architectures, or SOAs, that is, architectures based on networks of communicating processes. Initially, SOAs were implemented as web services similar to the ones you might have already experienced in ASP.NET Core.

In an SOA, different macro-modules that implement different features or roles in software applications are exposed as separate processes that communicate with each other through standard protocols. The first SOA implementation was web services communicating through the XML-based SOAP protocol. Then, most web services architectures moved toward JSON-based web APIs, which you might already know about since REST web services are available as standard ASP.NET project templates. The Further reading section contains useful links that provide more details on REST web services.

SOAs were conceived during the boom in the creation of software for business applications as one of the ways to integrate the various preexisting applications used by different branches and divisions into a unique company information system. Since the preexisting applications were implemented with different technologies, and the software expertise available in the various branches and divisions was heterogeneous, SOA was the answer to the following compelling needs:

- Enabling software communication between modules implemented with different technologies and running on different platforms (Linux + Apache, Linux + NGINX, or Windows + IIS). In fact, software based on different technologies is not binary compatible, but it can still cooperate with others if each module is implemented as a web service that communicates with the others through a technology-independent standard protocol. Among these protocols, it is worth mentioning the text-based HTTP REST protocol and the binary gRPC protocol. It is also worth noting that HTTP REST is an actual standard, while at the moment gRPC is just a de facto standard proposed by Google. The Further reading section contains useful links for getting more details about these protocols.
- Enabling the version of each macro-module to evolve independently from the others. For instance, you might decide to move some web service toward the new .NET 9 version to take advantage of new .NET features or new, available libraries, while leaving other web services that don’t need modifications on a previous version, say, .NET 8.
- Promoting public web services that offer services to other applications. As an example, think of the various public services offered by Google, such as Google Maps, or the artificial intelligence services offered by Microsoft, such as language translation services.

Below is a diagram that summarizes classical SOA.

Figure 2.1: SOA

Over time, the company information system and other complex SOA applications conquered more markets and users, so new needs and constraints appeared. We will discuss them in the next subsection.

Toward microservices architectures

As application users and traffic increased by orders of magnitude, the optimization of performance and the optimal balancing of hardware resources among the various software modules became a must. This led to a new requirement:

Each software module must be scalable independently from the others so that we can allocate to each module the optimal quantity of resources it needs.

As the company information system gained a central role, its continuous operation, that is, almost zero downtime, became a must, leading to another important constraint:

Microservices architecture must be redundant. Each software module must have several replicas running on different hardware nodes to resist software crashes and hardware failures.

Moreover, to adapt each application to a rapidly evolving market, the requirements on the development times became more compelling. Accordingly, more developers were needed to develop and maintain each application with the given strict milestones.

Unfortunately, handling software projects involving more than around four people while keeping the required quality proved to be substantially impossible. So, a new constraint was added to SOAs:

The services composing an application must be completely independent of each other so that they can be implemented by loosely interacting separate teams.

However, the maintenance effort also needed to be optimized, yielding another important constraint:

Modifications to a service must not propagate to other services. Accordingly, each service must have a well-defined interface that doesn’t change with software maintenance (or that, at least, rarely changes). For the same reason, design choices adopted in the implementation of a service must not constrain any other application service.

The first and second requirements can be satisfied by implementing each software module as a separate service so that we might allocate more hardware resources to it by simply replicating it in N different instances as needed to optimize the overall performance and ensure redundancy.

We also need a new actor: something that decides how many copies of each service to run and on which hardware to place them. Such entities are called orchestrators. It is worth pointing out that we might also have several orchestrators, each taking care of a subset of the services, or no orchestrator at all!

Summing up, we moved from applications made of coarse-grained coupled web services to fine-grained and loosely coupled microservices, each implemented by a different developer team, as shown in the following figure.

Figure 2.2: Microservices architecture

The diagram shows fine-grained microservices assigned to different loosely coupled teams. It is worth pointing out that while loose coupling was also an initial target for the primordial web services architectures, it took time to improve to a good level, till reaching its peak with the advent of microservices techniques.

The preceding diagram and requirements do not define exactly what microservices are; they just explain the start of the microservices era. In the next section, we will give a more formal definition of microservices that reflects their current stage of evolution.

The definition and organization of microservices architectures

In this section, we will give a definition of microservices and detail their immediate consequences on an organization, distinguishing between the microservices definition, which is expected to change gradually over time, and microservices practical organization, which might evolve at a faster rate as new technologies appear.

In the first subsection, we will focus on the definition and its immediate consequences.

A definition of microservices architectures

Let’s first list all the microservices requirements. Then, we will discuss each of them in a separate subsection.

A microservices architecture is an architecture based on SOA that satisfies all the constraints below:

- Module boundaries are defined according to the domain of expertise they require. As we will discuss in the subsections below, this should ensure they are loosely coupled.
- Each module is implemented as a replicable service, called a microservice, where replicable means one can create several instances of each service to enforce scalability and redundancy.
- Each service can be implemented and maintained by a different team, where all teams are loosely coupled.
- Each service has a well-defined interface known to all teams involved in the development project.
- Communication protocols are decided at the project start and are known by all teams.
- Each service must depend just on the interface exposed by the others and on the communication protocols adopted. In particular, no design choice adopted for a service can impose constraints on the implementation of the others.

You are encouraged to compare each of the above constraints with the requirements that led to the conception of microservices architecture discussed in the previous section. In fact, each of these constraints is the immediate result of one or more of the previous requirements.

Let’s discuss each constraint in detail.

Domain of expertise and microservices

This constraint has the purpose of providing a practical rule for defining the boundary of each microservice so that microservices are kept loosely coupled and can be handled by loosely coupled teams. It is based on the theory of domain-driven design developed by Eric Evans (see Domain-Driven Design: https://www.amazon.com/exec/obidos/ASIN/0321125215/domainlanguag-20). Here, we will go over just a few essential concepts of this theory, but if you’re interested in reading more, refer to the Further reading section for more details.

Basically, each domain of expertise uses a typical language. Therefore, during the analysis, it is enough to detect changes in the language used by the experts you speak with to understand what is included in and excluded from each microservice.

The rationale behind this technique is that tightly interacting people always develop a specific language recognized by others who share the same domain of expertise, while the absence of such a common language is a signal of loose interaction.

This way, the application domain or an application subdomain is split into so-called bounded contexts, each characterized by the usage of a common language. It is worth pointing out that domain, subdomain, and bounded context are all core concepts of DDD. For more details on them and DDD, you may refer to the Further reading section, but our simple description should suffice for getting started with microservices.

Thus, we get the first division of the application into bounded contexts. Each is assigned to a team and a formal interface for each of them is defined. This interface becomes the specification of a microservice, and it is also everything the other teams must know about the microservice.

Then, each team that has been assigned a microservice can split it further into smaller microservices to scale each of them independently from the others, checking that each resulting microservice exchanges an acceptable quantity of messages with the others (loose coupling).

The first division is used to split the work among the teams, while the second division is designed to optimize performance in various ways, which we will detail in the Microservices organization subsection.

Replicable microservices

There should be a way to create several instances of the same microservice and place them on the available hardware to allocate more hardware resources to the most critical microservices. For some applications or single microservices, this can be done manually; but, more often, dedicated software tools called orchestrators are adopted. In this book, we will describe two orchestrators: Kubernetes, in Chapter 8, Practical Microservices Organization with Kubernetes, and Azure Container Apps, in Chapter 9, Simplifying Containers and Kubernetes: Azure Container Apps and other Tools.

Splitting microservices development among different teams

The way microservices are defined, so that they can be assigned to different loosely coupled teams, has already been explained in the Domain of expertise and microservices subsection. Here, it is worth pointing out that the microservices defined at this stage are called logical microservices, and then each team can decide to split each logical microservice into one or more physical microservices for various practical reasons.

Microservices, interfaces, and communication protocols

Once microservices are assigned to different teams, it is time to define their interfaces and the communication protocol used for each kind of message. This information is shared among all teams so that each team knows how to communicate with the microservices handled by the other teams.

Only the interfaces of all logical microservices and the associated communication protocols must be shared among all teams, while the division of each logical microservice into physical microservices is just shared within each team.
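As a sketch of what such a shared specification might look like in C#, consider a small contracts package that is the only artifact visible to the other teams. The message and interface names below are illustrative assumptions, not an actual contract from this book's case study:

using System;
using System.Threading.Tasks;

namespace RideSharing.Contracts
{
    // Messages exchanged with the logical route-planning microservice.
    public record RouteRequested(Guid RideId, string Origin, string Destination);
    public record RoutePlanned(Guid RideId, TimeSpan EstimatedDuration);

    // Formal interface of the logical microservice. How each team implements
    // it internally (its physical microservices) remains private to the team.
    public interface IRoutePlanner
    {
        Task<RoutePlanned> PlanAsync(RouteRequested request);
    }
}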

The coordination of the various teams, and the documentation and monitoring of all services, is achieved with various tools. Below are the main tools used:

Context maps are a graphical representation of the organizational relationships among the various teams working on the application's bounded contexts.

Service catalogs collect information about all microservice requirements, teams, costs, and other properties.

Tools like Datadog (https://docs.datadoghq.com/service_catalog/) and Backstage (https://backstage.io/docs/features/software-catalog/) perform various types of monitoring, while tools like Postman (https://www.postman.com/) and Swagger (https://swagger.io/) are mainly focused on formal requirements, such as testing and the automatic generation of clients for interacting with the services.

Just the interfaces of the logical microservices are public

The code of each microservice can’t make any assumptions about how the public interface of all other logical microservices is implemented. Nothing can be assumed about the technologies used (.NET, Python, Java, and so on) and their versions, and nothing can be assumed about the algorithms and data architectures used by other microservices.

Having analyzed the definition of microservices architecture and its immediate consequences, we can move on to the most common way of organizing microservices in practice.

Microservices organization

The first consequence of the independence of microservices design choices is that each microservice must have private storage, because a shared database would cause dependencies among the microservices sharing it. Suppose microservices A and B both access the same database table, T. Now, suppose we modify microservice A to meet new user requirements and, as part of this update, the solution for A requires us to replace table T with two new tables, T1 and T2.

In a similar situation, we would be forced to also change the code of B to adapt it to the replacement of T with T1 and T2. Clearly, the same limitation doesn’t apply to different instances of the same microservice, so they can both share the same database. To summarize, we can state the following:

Instances of different microservices can’t share a common database.

Unfortunately, moving away from a single, application-wide database inevitably leads to data duplication and coordination challenges. More specifically, the same chunk of data must be duplicated in several microservices, so when it changes, the change must be communicated to all microservices that use a copy of it.

Thus, we may state another organizational constraint:

Microservices must be designed in a way that minimizes the duplication of data, or stated differently, duplications should involve as few microservices as possible.

As has been said in the previous section, if we define microservices according to the domain of expertise, the last constraint should be ensured automatically because different domains of expertise usually share just a little data.

No other constraints descend immediately from the definition of microservices, but adding even a trivial performance constraint on response time is enough to force an organization of microservices that resembles an assembly line more closely than the usual user-request-driven software. Let's see why.

A user request coming to microservice A might cause, in turn, a long chain of requests issued to other microservices, as shown in the following figure:

Figure 2.3: Chain of synchronous request-responses

Messages 1-6 are triggered by a request to microservice A and are sent in sequence, so their processing times sum up to the response time. Moreover, microservice A, after having sent message 1, remains blocked, waiting for a response, until it receives the last message (6); that is, it remains blocked for the whole lifetime of the overall chained communication process.

Microservice B remains blocked twice, waiting for an answer to a request it issued: the first time during the 2-3 communication, and the second during the 4-5 communication. To sum up, a naive request/response approach to microservice communication implies high response times and a waste of microservice computation time.
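The following C# fragment sketches the problem; the service URL and class names are hypothetical. Every awaited call keeps microservice A blocked, so the processing times of the whole chain add up in its response time:

using System.Net.Http;
using System.Threading.Tasks;

public class MicroserviceAHandler
{
    private readonly HttpClient _http = new();

    public async Task<string> HandleUserRequestAsync()
    {
        // Message 1: A -> B. A cannot answer the user until the whole
        // chain (messages 1-6) completes behind this single call.
        var fromB = await _http.GetStringAsync("http://service-b/quote");

        // Inside B, calls 2-3 and 4-5 are awaited in the same way, so every
        // processing time in the tree is paid sequentially.
        return $"Response built from: {fromB}";
    }
}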

The only ways to overcome the above problems are either completely avoiding dependencies among microservices or caching, in the first microservice, A, all the information needed to satisfy any user request. Since total independence is basically impossible, the usual solution is caching in A whatever data it needs to answer requests without asking other microservices for further information.

To achieve this goal, microservices are proactive and adopt the so-called asynchronous data-sharing approach. Whenever they update data, they send the updated information to all other microservices that need it for their responses. Put simply, in the example above, tree nodes, instead of waiting for requests from their parent nodes, send pre-processed data to all their possible callers each time their private data changes, as shown in the figure below.

Figure 2.4: Data-driven communication

Both communications labeled 1 are triggered when the data of the C/D microservices changes, and they may occur in parallel. Moreover, once communication is sent, each microservice can return to its job without waiting for a response. Finally, when a request arrives at microservice A, it already has all the data it needs to build the response with no need to interact with other microservices. In general, microservices based on asynchronous data sharing pre-process data and send it to whichever other service might need it as soon as their data changes. This way, each microservice already contains precomputed data that it can use to respond immediately to user requests with no need for further request-specific communications.
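A minimal sketch of this approach in C# follows. The IMessageBus abstraction and the message type are assumptions made for illustration, not a specific library API:

using System.Collections.Generic;
using System.Threading.Tasks;

public interface IMessageBus
{
    Task PublishAsync<T>(T message);
}

public record PriceListUpdated(string ProductId, decimal NewPrice);

// Microservice C: whenever its private data changes, it proactively
// pushes the pre-processed data to whoever may need it.
public class MicroserviceC
{
    private readonly IMessageBus _bus;
    public MicroserviceC(IMessageBus bus) => _bus = bus;

    public async Task UpdatePriceAsync(string productId, decimal newPrice)
    {
        // 1. Update private storage (omitted here), then...
        // 2. ...notify all interested microservices. No response is awaited:
        //    C returns to its job as soon as the message is on its way.
        await _bus.PublishAsync(new PriceListUpdated(productId, newPrice));
    }
}

// Microservice A: caches the received data so that user requests can be
// answered immediately, with no further inter-service communication.
public class MicroserviceACache
{
    private readonly Dictionary<string, decimal> _prices = new();

    public Task OnPriceListUpdated(PriceListUpdated msg)
    {
        _prices[msg.ProductId] = msg.NewPrice;
        return Task.CompletedTask;
    }
}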

This time, we can’t speak of requests and responses but simply of messages exchanged. People working with classical web applications will be accustomed to request/response communications where a client issues a request and a server processes that request and sends back a response.

In general, in a request/response communication, one of the involved actors, say, A, sends a message containing a request to perform some specific processing to another actor, say, B. Then, B performs the required processing and returns a result (the response), which may also be an error notification.

However, we may also have communications that are not request/response-based; in this case, we simply speak of messages. Here, there are no responses but just acknowledgments that the messages have been correctly received, by either the final target or an intermediate actor. Unlike responses, acknowledgments are sent before the processing of the messages is complete.

Returning to asynchronous data sharing, as new data becomes available, each microservice does its job and then sends the results to all interested microservices, and then it continues performing its job without waiting for a response from its recipients.

Each sender just waits for an acknowledgment from its immediate recipient, so wait times do not add up like in the initial chained request/response example.

What about message acknowledgments? They also cause small delays. Is it possible to also remove this smaller inefficiency? Of course, with the help of asynchronous communication!

In synchronous communication, the sender waits for the message acknowledgment before continuing its processing. This way, if the acknowledgment times out or is replaced by an error notification, the sender can perform corrective actions, such as resending the message.

In asynchronous communication, the sender doesn’t wait for either an acknowledgment or an error notification but continues its processing, immediately after the message is sent, while acknowledgments or error notifications are sent to a callback.

Asynchronous communication is more effective in microservices because it completely avoids wait times. However, the need to perform corrective actions in case of possible errors complicates the overall message-sending action. More specifically, all sent messages must be added to a queue; each time an acknowledgment arrives, the corresponding message is marked as correctly sent and removed from this queue. Otherwise, if no acknowledgment arrives within a configurable timeout, or if an error is raised, the message is marked to be re-sent according to some retry policy.
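The C# sketch below illustrates this bookkeeping. The queue representation, the timeout, and the retry limit are illustrative assumptions, not prescriptions:

using System;
using System.Collections.Concurrent;

public record OutgoingMessage(Guid Id, string Payload, DateTime SentAt, int Attempts);

public class ReliableSender
{
    // Messages sent but not yet acknowledged.
    private readonly ConcurrentDictionary<Guid, OutgoingMessage> _pending = new();
    private static readonly TimeSpan AckTimeout = TimeSpan.FromSeconds(30);
    private const int MaxAttempts = 5;

    public void TrackSent(OutgoingMessage msg) => _pending[msg.Id] = msg;

    // An acknowledgment arrived: the message is confirmed and removed.
    public void OnAcknowledged(Guid messageId) => _pending.TryRemove(messageId, out _);

    // Called periodically: expired messages are re-sent, up to MaxAttempts.
    public void RetryExpired(Action<OutgoingMessage> resend)
    {
        foreach (var msg in _pending.Values)
        {
            if (DateTime.UtcNow - msg.SentAt > AckTimeout && msg.Attempts < MaxAttempts)
            {
                var retried = msg with { SentAt = DateTime.UtcNow, Attempts = msg.Attempts + 1 };
                _pending[msg.Id] = retried;
                resend(retried);
            }
        }
    }
}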

The microservices asynchronous data-sharing approach is often accompanied by the so-called Command Query Responsibility Segregation (CQRS) pattern. According to CQRS, microservices are split into updates microservices, which perform the usual CRUD operations, and query microservices, which are specialized in answering queries that aggregate data from several other microservices, as shown in the following figure:

Figure 2.5: Updates and query microservices

According to the asynchronous data-sharing approach, each update microservice sends all its modifications to the query services that need them, while query microservices precompute all queries to ensure short response times. It is worth pointing out that data-driven updates resemble a factory assembly line that builds all possible query results.
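A hedged C# sketch of this split follows; the ride-related names and the IMessageBus abstraction are illustrative assumptions, reusing the same idea as the earlier data-sharing sketch:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IMessageBus { Task PublishAsync<T>(T message); }

public record RideCompleted(Guid RideId, Guid DriverId, decimal Fare);

// Updates microservice: performs the write, then shares the change.
public class RidesUpdateService
{
    private readonly IMessageBus _bus;
    public RidesUpdateService(IMessageBus bus) => _bus = bus;

    public Task CompleteRideAsync(Guid rideId, Guid driverId, decimal fare)
        // Persist to private storage (omitted), then publish the update.
        => _bus.PublishAsync(new RideCompleted(rideId, driverId, fare));
}

// Query microservice: aggregates incoming updates into precomputed answers.
public class DriverEarningsQueryService
{
    private readonly Dictionary<Guid, decimal> _earningsByDriver = new();

    public void On(RideCompleted msg) =>
        _earningsByDriver[msg.DriverId] =
            _earningsByDriver.GetValueOrDefault(msg.DriverId) + msg.Fare;

    // User queries are answered from the precomputed read model,
    // with no further inter-service calls.
    public decimal GetEarnings(Guid driverId) =>
        _earningsByDriver.GetValueOrDefault(driverId);
}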

Both updates and query microservices are called frontend microservices because they are involved in the usual request-response pattern with the user. However, along the path of the data updates, there may also be microservices that do not interact with the user at all. They are called worker microservices. The following figure shows the relationship between worker and frontend microservices.

Figure 2.6: Frontend and worker microservices

While frontend microservices usually respond to several user requests in parallel by creating a thread for each request, worker microservices are involved only in data updates, so they don’t need to parallelize requests to ensure low response times to the user.
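As a closing sketch, a worker microservice in .NET might be implemented as a hosted service that sequentially consumes data-update messages. The IUpdateSource abstraction below is an assumption made for illustration; only BackgroundService comes from Microsoft.Extensions.Hosting:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public interface IUpdateSource
{
    Task<string?> ReadNextAsync(CancellationToken ct);  // next pending update, if any
}

public class StatisticsWorker : BackgroundService
{
    private readonly IUpdateSource _source;
    public StatisticsWorker(IUpdateSource source) => _source = source;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // A single sequential loop is enough: no user is waiting for an
        // answer, so no per-request parallelism is needed.
        while (!stoppingToken.IsCancellationRequested)
        {
            var update = await _source.ReadNextAsync(stoppingToken);
            if (update is not null)
            {
                // Pre-process the update and forward the results (omitted).
            }
        }
    }
}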