Harness the power of the Cloud, leveraging the speed and scale of Azure Serverless computing
This book is for .NET developers who would like to learn about serverless architecture. Basic C# programming knowledge is assumed.
Serverless architecture allows you to build and run applications and services without having to manage the infrastructure. Many companies have started adopting serverless architecture for their applications to save cost and improve scalability.
This book will be your companion in designing Serverless architecture for your applications using the .NET runtime, with Microsoft Azure as the cloud service provider. You will begin by understanding the concepts of Serverless architecture and its advantages and disadvantages. You will then set up the Azure environment and build a basic application using a sample text sentiment evaluation function. From here, you will be shown how to run services in a Serverless environment. We will cover integration with other Azure and third-party services such as Azure Service Bus, as well as configuring dependencies on NuGet libraries, among other topics. After this, you will learn about debugging and testing your Azure Functions, and then automating deployment from source control. Securing your application and monitoring its health will follow from there, and then in the final part of the book, you will learn how to design for high availability, disaster recovery, and scale, as well as how to take advantage of the cloud pay-as-you-go model to design cost-effective services. We will finish off by explaining how Azure Functions compare to other compute-on-demand services such as AWS Lambda, Azure WebJobs, and Azure Batch.
Whether you've been working with Azure for a while, or you're just getting started, by the end of the book you will have all the information you need to set up and deploy applications to the Azure Serverless Computing environment.
This step-by-step guide shows you the concepts and features of Serverless architecture in Azure with .NET.
BIRMINGHAM - MUMBAI
Copyright © 2017 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: August 2017
Production reference: 1160817
ISBN 978-1-78728-839-3
www.packtpub.com
Author
Sasha Rosenbaum
Copy Editor
Pranjali Chury
Reviewers
Donna Malayeri
Mikhail Veselov
Project Coordinator
Vaidehi Sawant
Commissioning Editor
Merint Mathew
Proofreader
Safis Editing
Acquisition Editor
Chaitanya Nair
Indexer
Francy Puthiry
Content Development Editor
Rohit Kumar Singh
Graphics
Abhinash Sahu
Technical Editor
Vibhuti Gawde
Production Coordinator
Nilesh Mohite
Push it down the stack.
Abstract it away to a third-party.
Focus on what's unique to my application.
This book is about the next important step in our never-ending journey to build more sophisticated and scalable applications with less effort and fewer lines of code. If there has been a unifying thread to my career, it's been the relentless pursuit of not worrying about as many aspects of my applications as possible.
To put serverless computing in perspective, consider how your own interaction with the stack has changed over the years. Has it moved in the upward direction?
Did you ever have to worry about an actual data center? That got abstracted into colocation services. What about the physical servers in those managed racks? They went the way of cloud-based VMs. How about configuring, patching, and networking those VMs? On Azure, that got pushed down into cloud services so we could just publish our code and scale at will. But then even the application frameworks got encapsulated into specialized services for mobile backends, websites, asynchronous job processing, and more.
So at this point in the journey, we have a nicely dialed-in platform where we have VMs, but they are so preconfigured and loaded up with useful frameworks that you don't need to think about them that much. However, you may still be creating fairly elaborate projects with more plumbing code than what feels quite right.
Serverless computing is a jump forward from that. The big idea is that you can just write a collection of useful functions (or methods or microservices) with no application container whatsoever. They become available in the cloud at an infinite scale (and low cost) to be invoked at will. The potential to cull and simplify mid-tier code is significant.
I happened to work with this model recently, and it was an eye-opener. My company has an elaborate application implemented on Azure Cloud Services. We had a need to integrate with an external authentication provider whose API was most conveniently accessed via a Node.js library. Having C#.NET skills in house, we were not looking forward to spinning up a new service application to do the Node work. Then we found Azure Functions.
We were able to write a series of 10-line routines that did what we needed, expose them via a RESTful endpoint, and call them from our existing application in a day. We did this without thinking about building a new service or deploying new VMs or considering how it would scale out.
This is just a tip-of-the-iceberg scenario, but it got me thinking about how to architect our services as functions going forward. This will certainly help us spend less time and effort building things, but the elastic properties of how this scales and is paid for are really interesting. Scalability is provided at the function level. There are no underlying VMs in your account that you need to ramp up or down. Your functions simply run when invoked and you pay for the execution thereof. It's a much more cost-effective model as there are literally no idle resources from your point of view.
Serverless is clearly an exciting new tool for forward thinking architects and developers alike.
All this backstory brings us to your author, Sasha Rosenbaum. I'd like to tell you why I think she's an important voice on the topic at hand. We met at a consultancy specializing in custom software development for the Azure platform. Shortly after she joined, it became clear that she could fearlessly take on any new technology, figure it out quickly, and apply it for customers with all the attention to detail and pride in the work that you could ever want. So I, somewhat selfishly, made sure that she worked on all my projects because I knew the chance of success was going to be 100%.
I personally witnessed her grok and apply tools as diverse as Azure Cloud Services, App Services, and SQL Databases; .NET MVC, Web API, and Entity Framework; hybrid native application development for iOS and Android, Python running on IoT devices, and interactive video. All these over a two-year period! Then a short year after I left the company, she had added DevOps expertise to the list and was booking speaking engagements on the topic.
I mention this because Azure Functions are nascent, with limited real-world examples to draw experience from. You want your guide to not only have really dug into the details, but also to have a breadth of experience from which to put the new technology into perspective. You need help figuring out where it might (or might not) add value to your situation.
I can assure you that Sasha is the right person for the job. She has just the right mix of inquisitive, theoretical, and pragmatic to bring actionable insight to the serverless computing conversation.
I hope you find this book as useful for your work as I have, and I hope that you appreciate the mountain of work and dedication that Sasha put into creating it.
Steve Harshbarger
President and CTO of Monj
Sasha Rosenbaum helps Microsoft clients migrate their infrastructure and applications to the cloud, working as a technology solutions professional on the global black belt team. She covers a broad range of products and services available on the Azure platform, and helps clients envision, design, and deploy cloud-based applications.
Sasha has been working with Azure since its early days, helping companies on their journey to the cloud as a consultant. She has a computer science degree from the Technion, Israel Institute of Technology, one of the top 20 CS departments in the world.
Sasha is passionate about the DevOps movement, helping companies adopt a culture of collaboration. Sasha is a co-organizer of the DevOps Days Chicago conference.
You can visit Sasha's personal blog and follow her on Twitter (@DivineOps).
I would like to express sincere gratitude to my technical reviewers, Donna Malayeri and Mikhail Veselov, for their feedback and insights. Additional thanks goes to Packt for this tremendous opportunity, as well as the entire Packt editorial team that worked with me, for their dedication and effort throughout the publishing process.
Finally, I would like to thank Lou for his unwavering support, which helped me see this project to completion.
I hope you will enjoy this book!
Donna Malayeri is a program manager on the Azure Functions team, where she is responsible for the developer experience and the Visual Studio tooling. She previously worked on products such as Azure Mobile Services, Reactive Extensions (Rx), Visual F#, and Scala. She holds a PhD in programming languages from Carnegie Mellon University. In her spare time, she enjoys amateur improv and beadwork. You can follow her on Twitter at @lindydonna.
Mikhail Veselov is a professional software developer with over 12 years' experience in .NET-related products. His work with cloud-based applications started in 2011, with various projects completed since that time. He holds two degrees, in math and computer science, from Saint Petersburg State University. He is also a big fan of the bass guitar and origami. You can ask him any question at [email protected].
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www.packtpub.com/mapt
Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.
Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser
Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1787288390.
If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
Understanding Serverless Architecture
What is serverless?
Azure serverless
Architecture
Inherent features
Asynchronous
Stateless
Idempotent and defensive
Execution restrictions
Limited execution time
Startup latency
Advantages of serverless computing
Scalability
Pay-As-You-Go
Reduced operational costs
Speed of deployment
Independent technology stack and updates
Integration with the cloud provider
Open source
Disadvantages of serverless computing
Distributed system complexity
Potential load on downstream components
Potential for repetitive code
Different operations
Security and monitoring
Testing
Vendor control
Vendor lock-in
Multitenancy
Vendor-specific limitations
Applications
Summary
Getting Started with the Azure Environment
Microsoft Azure Cloud
Azure account
Azure subscription
Subscription constraints
Creating a subscription
Azure Management APIs
API access
The Azure Resource Manager API
Resource groups
Azure Resource Manager templates
Azure Management Portal
Azure serverless computing
Hosting Plan
Function App
Azure Function
Deploying a function
Creating an Azure serverless environment
The App name parameter
The Subscription parameter
The Resource Group parameter
The Hosting Plan parameter
The Location parameter
The Storage parameter
The Automation options link
Deploying the application
Functions Portal
Deploying Azure Functions online
Function files
The function.json file
The run.csx file
HTTP endpoint
Exploring the Functions Portal
The Test pane
Function logs
Error output
Function App settings
Application Settings
Daily Usage Quota
The runtime version
The Function App edit mode
Host keys
Slots
Proxies
The Platform features pane
App Service Editor
Dev Console
Resource Explorer
Clean up Azure resources
Deleting a function
Deleting the Function App
Deleting the resource group
Exploring further options
Summary
Setting Up the Development Environment
Configuring the development environment
Downloading and installing Visual Studio
Creating the project
Function App Configuration Files
The local.settings.json file
The host.json file
Creating the function
Running the function locally
Deploying the application to Azure
Modifying the text scoring function
Updating the function code
Republishing the function to Azure
Storing the results
Setting up the SQL PaaS database in Azure
Creating the SQL Server
Managing the SQL database from Visual Studio
Creating the DocumentTextScore table
Integrating the ScoreText function with the SQL database
Modifying the function to store results in SQL
Setting up a web dashboard for scoring results
Connecting the ASP.NET Core Web Application to the database
Installing Entity Framework dependencies
Get the SQL Azure connection string
Updating the generated database context
Dependency injection
Creating the web UI
Creating the MVC controller
Creating the MVC view
Changing the application home page
Publishing to Azure
Tying it all together
Summary
Configuring Endpoints, Triggers, Bindings, and Scheduling
Triggers and bindings
Triggers
Binding
Trigger binding
Input binding
Output binding
Advanced bindings
Endpoints
Custom routes
Allowed methods
API definition
Proxies
Securing the endpoints
Blob Storage trigger
Creating a storage account for document upload
Getting the storage connection string
Creating a Blob trigger function
Function code
Triggering the function
Updating the function to process text
Timer trigger
CRON expression
Implementing the average result function
Defining the SQL table binding
Finding an existing entity
Full Function Code
Summary
Integrations and Dependencies
Processing a Twitter feed
Creating a WebHook Trigger function
Creating the Logic App to search Twitter
Sharing code between functions
Integrating with a Service Bus queue
Email processing using Service Bus
Creating a Service Bus queue
Configuring the access permissions
Listen Access Policy for the function
Send Access Policy for the Logic App
Creating the function
Other Service Bus configuration options
Sending email messages to Service Bus Queues
Testing the Service Bus flow
.NET dependency
Adding NuGet libraries
Summary
Integrating Azure Functions with Cognitive Services API
Using Microsoft Cognitive Services APIs to analyze text
Creating a Cognitive API account
Text sentiment API usage
The text sentiment analytics API call implementation
Constructing the payload
Calculating the overall score
Using Sentiment Analysis in Azure Functions
A short text analysis example
A long text analysis example
Storing the function results
Storing tweet score results using the shared code
Creating the TweetTextScore table
Updating the ScoreTweet function
Reflecting the new results in the Web dashboard
Updating the model from database
Updating controllers and views
Summary
Debugging Your Azure Functions
Software debugging
Logging events
Logging best practices
Logging adequately
Logging with context
Logging in a readable format
Logging at the proper level
Logging during normal operation
Debugging the functions locally
Debugging functions with Visual Studio
Triggering the functions
HTTP-triggered function
Triggering functions using Functions Core tools
Blob Storage triggered function
Triggering a function using custom code
Triggering a function using functions
Triggering the Service Bus queue function locally
Handling errors
Remote debugging in the cloud
Summary
Testing Your Azure Functions
The importance of testing
Software testing
Software assessment perspectives
Code correctness
Unit testing
Integration testing
System testing
Performance
Performance testing
Load testing
Usability
Testing the functions
Unit testing
Unit testing approach
Naming convention
Unit test best practices
What to cover
Mocking frameworks
Dependency injection
Creating a test project
Creating a unit test
The ProcessQueue function - input string, and logged output
Getting more granular
Asynchronous tests
Unit test examples
The EvaluateText class - API failure and method failure
Integration testing
The ScoreTweet function - updating an entity
Performance testing
Load testing
Summary
Configuring Continuous Delivery
Version Control System
Centralized VCS
Distributed VCS
Common practices
Committing best practices
Database versioning
Continuous Integration and delivery
Version control for functions
Configuring VSTS
Configuring the repository
Linking the Azure subscription to VSTS
Continuous delivery for functions
Deployment slots
Configuring the VSTS build process
Configuring the NuGet restore task
E-mail notifications
Configuring Release
Load testing with VSTS
Automating Function App deployment
Summary
Securing Your Application
Securing the application
Physical security
Host infrastructure
Networking security
Integrating functions with a private network
Networking concepts
Azure App Service
VNet integration
Hybrid connections
App Service Environment
ASE deployment modes
Public App Service Environment
Adding a Network Security Group
Adding an NVA
Private App Service Environment
Application-level security
Authorization and authentication
Anonymous mode
API key authorization
HTTP trigger
Webhook trigger functions
User-based authentication
Configuring Azure Active Directory
Third-party identity providers
Code quality
Managing keys and secrets
Data encryption at rest
Data encryption in transit
Configuring a custom domain
Configuring SSL
Configuring CORS
Administrative access
Role-based access control
Resource locks
Summary
Monitoring Your Application
Application performance management
Detect
Diagnose
Measuring and learning
Monitoring tools
Collecting logs
Creating charts
Setting up alerts
Avoiding alert fatigue
Alerting on actionable items
Functions monitoring tools
Functions Monitor tab
Application Insights
Monitoring functions with Application Insights
Setting up local configuration
Staging slot reporting
Application Insights dashboards
Live Stream
Metrics dashboards
Performance dashboard
Dashboard customization
Servers dashboard
Failures dashboard
Metrics Explorer
Custom telemetry
Analytics view
Setting up alerts
Smart Detection
Summary
Designing for High Availability, Disaster Recovery, and Scale
High Availability
Service downtime
Azure services SLAs
What is covered by the SLA
What is not covered by the SLA
Fault tolerance
Elimination of single points of failure
Transient fault handling
Setting timeouts
The Retry pattern
The Circuit Breaker pattern
Message Queuing
The Retry pattern in Azure Functions
The Retry logic in Logic Apps
Fault containment
Fault detection
Prevention of human errors
Disaster Recovery
What is a disaster?
Disaster Recovery planning in Azure
DR site type
Implementing a hot DR site
DR in Functions
Web Apps
Logic Apps
Azure Service Bus
Azure Storage
Azure SQL Database
The Text Analytics API
Scaling the application
Scaling serverless compute
Consumption plan
App Service plan
Scaling other application components
Web Apps
Azure SQL Database
Azure Service Bus
Azure Storage
Logic Apps
Text Analytics API
Load testing
Summary
Designing Cost-Effective Services
Pay for what you use
Azure Functions pricing
Consumption plan
The Consumption plan pricing based on load
App Service plan
The App Service plan pricing based on load
The overall application cost
Web Apps
Web App pricing based on load
Azure SQL Database
Azure SQL Database pricing based on load
Azure Blob Storage
Azure Blob Storage pricing based on load
Azure Logic Apps
Azure Logic App pricing based on load
Azure Service Bus
Azure Service Bus pricing based on load
The Text Analytics API
Text Analytics API pricing based on load
Network bandwidth
Application Insights
Application Insights pricing based on load
Visual Studio Team Services
Calculating the overall applications costs
Summary
C# Script-Based Functions
C# script-based functions
Using NuGet libraries
Option 1 - the #r directive
Option 2 - the project.json file
Sharing code between functions
Summary
Azure Compute On-Demand Options
Compute on-demand
Azure WebJobs
Azure Logic Apps
Azure Batch
Azure PaaS Cloud Services
Summary
Dear reader,
I firmly believe that time is the most valuable resource we have. Thank you for choosing to spend your time with me, learning about Azure serverless compute.
It is hard to believe how far cloud technologies have come in the last decade. Serverless compute is a type of technology that was unimaginable just a few years ago and is now rapidly gaining popularity. With this rise in popularity, it is timely to think about where serverless compute fits into application development in general. This book aims to provide a hands-on guide to implementing .NET-based Azure serverless functions, as well as looking at the bigger picture of designing and maintaining serverless applications.
With this rapid technology advancement, it is only fitting, perhaps, that even during the time of writing of this book, the Azure serverless technology was enhanced a number of times. Every effort has been made to keep the book’s content as current as possible. Please keep this in mind as you read, and do not hesitate to reach out if the text needs further revision.
I would like to express sincere gratitude to my technical reviewers, Donna Malayeri, and Mikhail Veselov, for their feedback and support. Additional thanks goes to Packt publishing for this tremendous opportunity, as well as the entire Packt editor team that worked with me, for their dedication and effort throughout the publishing process.
Finally, I would like to thank Lou for his unwavering support that helped me see this project to completion.
Chapter 1, Understanding Serverless Architecture, discusses the features of serverless computing and the types of workloads that are best suited to be hosted in it.
Chapter 2, Getting Started with the Azure Environment, provides us with a solid introduction to the Azure serverless computing environment and walks us through the deployment of our first Azure Functions application.
Chapter 3, Setting Up the Development Environment, gives us an understanding of how to develop a serverless computing application on a local computer using Visual Studio and, then, deploy it to Azure.
Chapter 4, Configuring Endpoints, Triggers, Bindings, and Scheduling, explores more advanced options to configure function triggers and input/output parameters as well as to configure custom routes for HTTP-triggered functions.
Chapter 5, Integrations and Dependencies, covers more about Azure Functions integrations and dependencies. We will describe how to share common code between different functions in the same Function App and the advantages of doing so.
Chapter 6, Integrating Azure Functions with Cognitive Services API, shows you how to use the Microsoft Cognitive Services Text Analytics API to analyze text sentiment from within Azure Functions, how to store the scoring results, and how to reflect them in the web dashboard.
Chapter 7, Debugging Your Azure Functions, discusses the process of debugging the serverless functions. We will discuss both local and online debugging processes, and how to enable cloud-triggered functions to be debugged locally.
Chapter 8, Testing Your Azure Functions, walks us through testing best practices in detail and covers the process of testing Azure Functions, focusing primarily on unit and integration testing using the MSTest framework.
Chapter 9, Configuring Continuous Delivery, reviews the benefits of using source control, and the benefits of continuous integration and delivery approaches in software development.
Chapter 10, Securing Your Application, reviews the different layers of application security as it pertains to serverless computing. We will review the authentication, authorization, and key management of Azure Functions in detail, and provide the steps for configuring different authentication and authorization types.
Chapter 11, Monitoring Your Application, explains how to monitor Azure serverless compute performance and application health using Azure native tools.
Chapter 12, Designing for High Availability, Disaster Recovery, and Scale, discusses the three major design considerations of building a reliable application: the application's high availability, disaster recovery readiness, and the ability to scale on demand and be prepared to handle high or fluctuating load.
Chapter 13, Designing Cost-Effective Services, discusses the pricing of Azure Functions. You will learn how to estimate the cost of serverless computing in Azure. We will also review the pricing of the other PaaS services used in the TextEvaluation application as a function of the expected user traffic load.
Appendix A, C# Script-Based Functions, reviews C# script-based functions and discusses the two main implementation differences between script-based and precompiled functions.
Appendix B, Azure Compute On-Demand Options, gives a brief overview of additional Azure services that provide compute on-demand capabilities and discusses the different workload types that are best suited for each one.
This book requires the following two things:
Access to a Microsoft Azure subscription (a trial account is sufficient)
Visual Studio 2017 IDE (any edition)
This book is for the following professionals:
Software engineers looking for a hands-on guide on .NET-based Azure Functions
Application architects looking to understand the pros and cons of serverless architecture
IT professionals looking to understand Azure serverless compute operations management from networking, security, monitoring, and continuous delivery standpoints
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning. Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Since the text variable was not previously defined, the log output will now display a compilation error." A block of code is set as follows:
{ "IsEncrypted": false, "Values": { "AzureWebJobsStorage":"DefaultEndpointsProtocol=https; AccountName=textsentimentstorage;AccountKey=<full account key>", "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https; AccountName=textsentimentstorage;AccountKey=<full account key>" } }
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
[default] exten => s,1,Dial(Zap/1|30) exten => s,2,Voicemail(u100)
exten => s,102,Voicemail(b100)
exten => i,1,Voicemail(s0)
Any command-line input or output is written as follows:
Install-Package Microsoft.Azure.Webjobs.Extensions.ApiHub -pre
New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "If you do not have any subscriptions listed, click on add subscription to add a new Pay-As-You-Go subscription, and follow the creation wizard."
Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. You can download the code files by following these steps:
1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Serverless-computing-in-Azure-with-Dot-NET. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.
This chapter provides a theoretical introduction to serverless computing and the types of workloads it is best suited for.
In this chapter, we will cover the following topics:
The features of serverless computing
Serverless compute best practices
Serverless computing advantages and disadvantages
The types of services and applications that are a good fit for serverless
Being a technical person, you might be tempted to skip the theory and dive into practice. It is highly advised, however, that you read the next few pages before diving into implementation details.
Being an emerging trend in the technology world, serverless computing is rapidly gaining popularity. The most widespread definition of serverless at this point is driven by the arrival of technologies such as AWS Lambda, Azure Functions, IBM OpenWhisk, and Google Cloud Functions:
This definition of serverless is synonymous with Functions as a Service (FaaS). We will use these terms interchangeably in this book.
In different programming languages, we may encounter the terms “function”, “procedure”, and “method” referring to different types of routines performing a task. In this context, the term function is not programming language specific, but rather conceptual:
Ironically, serverless computing does not actually run without servers. Rather, it involves outsourcing the server provisioning and management to a third-party.
Nearly all existing serverless computing technologies are provided by major public cloud vendors. The sheer scale of today's public cloud vendors allows for the following two things that make serverless more attractive than ever before:
Realizing the cost benefits of the economy of scale: For any specific development team, or even organization, it would be difficult to reach the scale at which outsourcing parts of the application to separately managed compute containers provides worthwhile cost benefits. At public cloud vendors' scale, serverless compute becomes inexpensive because the compute power allocation is balanced across thousands of servers and billions of executions, with each specific client application peaking at different times. The nature of software-defined data centers also allows for more efficient server allocation.
Minimizing the adverse effects of vendor lock-in: The modern IT world is rapidly coming to a consensus that the benefits of public cloud outweigh the disadvantages of any vendor lock-in that comes with it. With many IT services moving to public cloud, it becomes easier and more beneficial to leverage a cloud provider for hosting serverless applications.
By now, you are probably familiar with some variation of a "shared responsibility" diagram outlining the differences between IaaS, PaaS, and SaaS. Let's add a visual to show where Functions as a Service (FaaS) fits in:
As you can see from the diagram, FaaS takes vendor responsibility one step further, abstracting away the application context along with the physical hardware and virtual servers.
For this reason, despite the book title, I, personally, think that the term serverless is not completely accurate, and the actual architectural approach we are working with would be better described by the term Applicationless.
The Azure serverless offering is called Azure Functions.
The implementation details of serverless computing differ by vendor, and it is difficult to give an overview of serverless computing features without being vendor-specific. This book is dedicated to Azure Functions, and will thus focus on Azure-specific features whenever there is a difference between vendors.
To illustrate where serverless computing would come into your application, let's take a look at a classic three-tier architecture. In this commonly used approach, the application is broken down into the following tiers:
Presentation Tier: The presentation tier handles the user interface and typically operates as a thin client on a web or mobile device.
Logic Tier: The logic tier, also known as application tier, handles the functional process logic and the business rules of the application. This tier can serve one or more presentation tier clients and scale independently.
Data Tier: The data tier persists the application data in databases or file shares and handles the data access layer.
Any of these tiers can be further expanded and broken into separate services. For a deeper dive into three-tier architecture, please visit the following link:
https://en.wikipedia.org/wiki/Multitier_architecture#Three-tier_architecture
A basic three-tier architecture can be presented as the diagram below:
With the introduction of serverless computing, all, or parts, of your application's logic tier can be replaced by serverless computing containers, or FaaS.
Depending on the application, functions can handle all of the business logic, or work jointly with other types of services to comprise the logic tier.
A basic three-tier architecture with the logic tier fully handled by functions can be presented as the following diagram:
It is crucial to note that not all types of functionality typically handled by the business logic tier are well suited for FaaS. To see which functionality can be replaced by FaaS, let us discuss the inherent features of serverless computing.
The following list outlines the inherent features of serverless computing, which also dictate the implementation best practices. In some cases, the best practices are imposed by the serverless provider, while in others they remain a developer responsibility.
Serverless computing is event-triggered and asynchronous by nature. It is therefore important to use non-blocking, awaitable calls in functions.
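As a minimal sketch of this style (the class, method names, and endpoint URL are illustrative assumptions, not part of the Azure Functions SDK), an awaitable helper that calls an external API might look like the following:

using System.Net.Http;
using System.Threading.Tasks;

public static class ScoreTextExample
{
    // A single shared HttpClient; creating a new client per invocation can exhaust sockets
    private static readonly HttpClient client = new HttpClient();

    // The external call is awaited rather than blocked on, so the host thread is free
    // to serve other invocations while the request is in flight
    public static async Task<string> ScoreTextAsync(string text)
    {
        HttpResponseMessage response =
            await client.PostAsync("https://example.com/score", new StringContent(text));
        response.EnsureSuccessStatusCode();

        // Avoid .Result or .Wait() here; blocking calls defeat the purpose of the async model
        return await response.Content.ReadAsStringAsync();
    }
}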
Serverless computing is inherently stateless, meaning that no state should be maintained on the host machine. This also means not sharing state between any parallel or sequential function executions. Any required state needs to be persisted to a database, a file server, or a cache.
In recent years, the stateless approach was made popular by the Twelve-Factor methodology, and many applications have already been refactored to use stateless web and logic tiers. The following quote is from the Twelve Factor App Methodology, factor 6:
To learn more about the Twelve-Factor Methodology, please visit https://12factor.net.
While the Twelve-Factor Methodology is increasingly popular, and makes applications easy to deploy and scale, the restriction of local state is not always a good thing. The main benefit of local state is the low latency of access, and some applications cannot attain optimal performance without it. As an example, when building an application used to trade in a financial market, persisting state to a database or even a cache can become extremely costly. Applications that require local state would not be a good fit for serverless computing. To learn more about stateful alternatives, please look into Azure Service Fabric stateful services:
https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-introduction
Note that some of the serverless computing vendors completely prevent you from accessing the host machine. With Azure Functions, you do have read/write access to the host machine's virtual D drive; however, it is highly recommended that you don't use it to persist state.
To ensure consistency, serverless computing functions should be idempotent.
Mathematically, a function is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once, that is, ƒ(ƒ(x)) ≡ ƒ(x).
To give a simple example of a non-idempotent function, imagine a function with the task of calculating the square root of the input number. If the function is run a second time on an input value that has already been processed, it will produce an incorrect output, as √(√(x)) ≠ √(x). Thus, the only way to keep the overall processing idempotent is to make sure that the same input isn't processed twice.
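As a minimal sketch of this idea (the entity shape and helper names are hypothetical, not taken from the book's sample application), a processing step can record a processed marker together with the data and exit gracefully when it sees the same input again:

public class Document
{
    public string Id { get; set; }
    public string Text { get; set; }
    public bool IsScored { get; set; }   // processed marker persisted with the data
    public double? Score { get; set; }
}

public static class IdempotentScoring
{
    // Processing the same document twice leaves the stored result unchanged
    public static void ScoreDocument(Document doc)
    {
        if (doc.IsScored)
        {
            return; // this input has already been processed; nothing left to do
        }

        doc.Score = ComputeScore(doc.Text);
        doc.IsScored = true;
        // persist the document back to the database here
    }

    // Placeholder for the real scoring logic
    private static double ComputeScore(string text) =>
        string.IsNullOrEmpty(text) ? 0 : 0.5;
}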
In an asynchronous, highly parallelized environment run by ephemeral compute containers, we need to work extra hard to ensure that execution errors will not impact all of the subsequent events. What happens when a function crashes midway through encoding a large media file? What happens if a function tasked with processing 100 rows in a database crashes before finishing? Will the remainder of the input remain unprocessed, or will its already processed part be re-processed?
To ensure consistency, we need to store the required state information with our data, allowing a function to exit gracefully if no more processing is required. In addition, we need to implement a circuit-breaker pattern to ensure that a failing function will not retry infinitely. To learn more about the circuit-breaker pattern, please visit the following link:
https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker
Azure Functions in particular have some built-in defensive mechanisms that you can leverage. For instance, for a storage queue-triggered function, the processing of a queue message will be retried up to five times in case of failure, after which the message will be dropped to a poison-message queue.
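At the time of writing, the retry behavior for storage queue-triggered functions is controlled from the host.json file; the following is a sketch showing the default values (property names reflect the version 1 runtime and may change in later versions):

{
  "queues": {
    "maxDequeueCount": 5,
    "visibilityTimeout": "00:00:30"
  }
}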
In comparison to a traditional application, a FaaS environment has two very important execution restrictions: the length of time the function can run and the time it takes to start the first function execution after a period of inactivity.
In a FaaS environment, the runtime of each particular function execution should be as short as possible.
Some vendors impose hard limits on the functions' execution time, limiting the runtime to a few minutes. These limits impose a certain style of programming, but can get cumbersome to deal with.
Azure Functions are offered under two different hosting plans: a Consumption plan and an App Service plan. The Consumption plan scales dynamically on-demand, while an App Service plan always has at least one VM instance provisioned. Because of the different approaches to resource provisioning, these plans have different execution constraints.
Under the App Service plan there is no limit on the function execution time.
Under the Consumption plan there is a default limit of 5 minutes, which can be increased up to 10 minutes by making a change in the function configuration.
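The execution time limit is governed by the functionTimeout setting in host.json; as a sketch, the following raises a Consumption plan function to the 10-minute maximum (syntax as of the version 1 runtime):

{
  "functionTimeout": "00:10:00"
}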
Even under the App Service plan, however, it is highly recommended to keep the function execution time as short as possible. A long running function can be broken down into shorter functions that each perform a particular task.
For very long running and/or compute-intensive work, consider a different type of Compute as a Service: Azure Batch. You can refer to the following link for more information on Azure Batch:
https://docs.microsoft.com/en-us/azure/batch/batch-technical-overview.
In a FaaS environment, the functions should be kept as light as possible. Loading many explicit or implicit external dependencies (when a library you reference loads many additional modules it relies on) can increase the function load time and even cause timeouts. Thus, functions should keep their external dependencies to a minimum.
In addition, in most FaaS environments, functions face a significantly increased cold start latency. After a period of inactivity an unused function goes idle. The next time the function is loaded, compute and memory will need to be allocated to it, external dependencies will need to be loaded, and, in the case of compiled languages like C#, the code needs to be re-compiled. All of these factors can cause a significant delay in function startup time.
In Azure C# based functions specifically, the cold start problem has been alleviated with the release of .NET Class Library based functions, since the functions are precompiled and can be loaded more quickly. In addition, when running under the App Service plan (rather than a Consumption plan), the cold start problem is eliminated.
The advantages of FaaS can be grouped into a few categories.
Some of the advantages exist in most PaaS environments, however, they may be more pronounced in a FaaS environment.
Some of the advantages are similar to the advantages of the Microservices architecture, in which the application is structured as a collection of loosely coupled services, each of which handles a particular task. To learn more about Microservices architecture, please visit http://microservices.io/patterns/microservices.html.
Lastly, some of the advantages are specific to the FaaS environment only.
Serverless computing makes it very easy to scale the application out by provisioning more compute power as required, and deallocating it when the demand is low. This allows developers to avoid the risk of failing their users during peak demand, while also avoiding the cost of allocating massive standby infrastructure.
This makes serverless computing particularly useful for applications experiencing inconsistent traffic. Let's take a look at the following examples:
An application used during sporting events: In this case, your application is likely to experience highly variable traffic loads, with a significant difference between high and low traffic. Serverless can help mitigate the complexity and cost of providing adequate service.
A retail application: It is common for retail applications to experience extremely high loads during holiday seasons or during marketing campaigns. While these loads are predictable, they often differ so significantly from the day-to-day load that maintaining the required standby infrastructure can get very costly. Serverless can eliminate the need for standby infrastructure.
A periodic social media update application: Imagine an application which posts an update to a Twitter feed once every hour. This application requires very little compute power. In the traditional IT world, such an application would typically run on two servers to ensure resiliency, which is extremely wasteful from the compute power standpoint. Deploying multiple applications to the same server can often become problematic for operational/organizational reasons, and in most organizations, the on-premises compute power is heavily underutilized (on-premises, teams tend to significantly over-provision hardware because it is quite difficult to add more compute power in the future). Serverless computing fits very well to solve this problem.
It is important to note that the scalability advantage exists in every PaaS service; however, with serverless computing, the scaling is typically completely dynamic and handled by the vendor. In a typical PaaS service, you need to define metrics (such as high CPU or memory utilization) and, to an extent, define the scaling procedure (such as the number of additional nodes to provision, or whether or not the application needs to scale back down after the demand decreases). With serverless computing, the vendor will simply allocate additional compute to your function based on the number of requests coming in.
In serverless computing, you only pay for what you use. The Pay-As-You-Go model is likely to result in cost savings in most cases (remember the underutilized infrastructure), and becomes particularly beneficial in the inconsistent traffic scenarios described in the previous section. The model also means that any speed optimization of your service translates directly into cost savings.
Pay-As-You-Go is also an advantage of any PaaS service, however, most PaaS services do not get as granular in allocating compute power.
While the translation of execution time to cost is a lot more direct in a FaaS environment, it is wise to calculate whether or not the dynamic compute allocation is actually the best pricing model for your application. We will discuss cost-effective service design in more detail in Chapter 13, Designing Cost-Effective Services.
In a serverless computing environment, you do not need to provision, manage, patch, or secure servers. You are outsourcing the management of the physical hardware, virtual servers, operating systems, networking, and security to the serverless computing vendor. This provides cost savings in the following two ways:
Direct infrastructure cost
IT operations cost
This advantage also exists in any PaaS services, and for a FaaS service it may actually not be as straightforward as it seems. While there are very clear cost benefits to not managing servers, it is important to remember that operations typically cover a lot more than server management, including tasks such as application deployment, monitoring, and security. More on this in the next section.
Serverless computing makes it incredibly easy to go from an idea to execution. Whether you are proving the business value of an idea or need a sandbox to test a scenario, the ease of creating a new business logic layer with serverless computing provides an excellent ability to test drive your minimum viable product.
Similar to Microservices architecture, FaaS forces a pattern of breaking the logic layer into smaller, task-specific services. This provides the following tangible benefits:
Versioning the services independently of one another: In a monolithic application, changing even a small part of business logic will trigger a redeployment of the entire monolith. In a FaaS environment, each function handles a particular task, and thus the implementation of each function can be changed independently, as long as the contract with the services upstream and downstream of the function is maintained. This can have a tremendous effect on the agility and flexibility of the application update process.
Freedom to use a different technology stack for each service: In a monolithic application, the developer is committed to a particular technology stack, whether or not it is well suited for the task at hand. In a FaaS environment, the developer is free to implement each task in the way best suited for the job, and most serverless computing vendors provide a number of different languages/platforms to choose from. If part of your application can benefit from Python's powerful tooling for processing regular expressions, you can easily deploy a Python-based Azure Function along with your C#-based functions, either packaged in the same Function App or separately. This freedom can greatly improve code efficiency and simplicity.
Existing serverless frameworks are closely integrated with other services offered by the same public cloud vendor. They make it easy to trigger the functions based on events in other cloud services and store the outputs in cloud data stores. They are hosted on the same infrastructure, which makes for minimal latency. As such, serverless functions are ideal for augmentation of other cloud services with bits of custom code performing tasks that aren't offered as a fully managed service.
While they are fully managed by Microsoft engineering, Azure Functions are an open source offering based on the Azure WebJobs SDK, which means that as a developer you can contribute quality code and help develop required features or resolve issues.
To learn more about Azure Functions and the Azure WebJobs SDK, visit https://github.com/Azure/Azure-Functions.
The following section outlines the current disadvantages of leveraging serverless computing.
Some of these disadvantages arise from additional complexity of the application architecture. Others stem from the lack of maturity of current serverless environments tooling and the problems that come with outsourcing parts of your system.
Similar to the Microservices architecture, serverless introduces increased system complexity and a requirement for network communication between application layers. The added complexity centers around the following two main aspects:
Implicit interfaces between services: As discussed earlier, functions make application changes easier by allowing for separate versioning of services. This, however, introduces an implicit contract between different parts of the system that could be broken by either side. In a monolithic application, breaking changes can be easily caught by the compiler or integration testing. In a FaaS environment, a developer could make a breaking change without being aware of its impact.
Network and queueing: In a FaaS environment, parts of the application communicate with each other using HTTP requests or queueing mechanisms. This introduces additional latency, adds a dependency on queueing services, and makes handling errors and retries significantly more complex.
When relying on the inherent dynamic scalability of the serverless computing for the business logic layer, it is easy to miss the potential overload on the downstream components such as databases and file stores. During the design and testing phases of the application development, it is crucial to verify that downstream components are able to handle the potential high load created by the dynamic scaling of the serverless computing tier.
The assumption of the three-tier architecture is that the business logic tier can serve multiple different clients, such as various web and mobile devices, different consumer APIs, and so on. When the entire business logic tier is moved into serverless computing, certain functionality is likely to be moved upstream to client applications. This can introduce a situation in which each client application is implementing the same functionality.
As we've discussed, server administration and scaling out are fully handled by the serverless computing vendor. However, this benefit comes with a trade-off. You are still fully responsible for testing, deploying, and monitoring your application. You are also responsible for the application security, as well as for ensuring that it will perform correctly and consistently at scale. With serverless computing, you may be presented with a new set of tools for managing all of the preceding aspects, and these tools may not integrate well with your current ops stack. Needing to train your team on the new tool stack can be a drawback.
With serverless computing offerings being new, their security and monitoring tools are also new and often very specific to the serverless environment and the particular vendor. This introduces new complexity into the process of managing operations for the application overall, adding a new type of service to manage. Security and monitoring of Azure Functions will be discussed in depth in Chapter 10, Securing Your Application, and Chapter 11, Monitoring Your Application.
Testing can become more difficult in a serverless environment due to the following few aspects:
For the purposes of integration testing, it is sometimes difficult to replicate the full cloud-based flow on a testing machine
The more distributed the system becomes, the more dependencies and points of failure are introduced, and the harder it becomes to test for every possible variation of the flow
Load testing becomes an even more crucial aspect of testing the application, as some issues may only arise at scale
We will discuss testing of serverless applications in more detail in Chapter 8, Testing Your Azure Functions.
Unlike vendor lock-in, vendor control implies that by outsourcing a big part of your operations management to a third-party, you also relinquish control over how these operations are handled. This includes the service limitations, the scaling mechanism, and the potential optimization of hosting your application.
In addition, the vendor has the ultimate control over the environment and tooling, deciding when to roll out features and fix issues (although in the case of Azure Functions, you can help fix issues by contributing to the open source project).
Despite the theoretical portability of implementation code used in functions, the surrounding features and tooling make it relatively difficult to deploy the application with another vendor.
