Get started with designing your serverless application using optimum design patterns and industry standard practices
Key Features

Learn the details of popular software patterns and how they are applied to serverless applications
Understand key concepts and components in serverless designs
Walk away with a thorough understanding of architecting serverless applications

Book Description
Serverless applications handle many problems that developers face when running systems and servers. The serverless pay-per-invocation model can also result in drastic cost savings, contributing to its popularity. While it's simple to create a basic serverless application, it's critical to structure your software correctly to ensure it continues to succeed as it grows. Serverless Design Patterns and Best Practices presents patterns that can be adapted to run in a serverless environment. You will learn how to develop applications that are scalable, fault tolerant, and well-tested.
The book begins with an introduction to the different design pattern categories available for serverless applications. You will learn the trade-offs between GraphQL and REST and how they fare regarding overall application design in a serverless ecosystem. The book will also show you how to migrate an existing API to a serverless backend using AWS API Gateway. You will learn how to build event-driven applications using queuing and streaming systems, such as AWS Simple Queue Service (SQS) and AWS Kinesis. Patterns for data-intensive serverless applications are also explained, including the lambda architecture and MapReduce.
This book will equip you with the knowledge and skills you need to develop scalable and resilient serverless applications confidently.
What you will learn

Comprehend the popular design patterns currently being used with serverless architectures
Understand the various design options and corresponding implementations for serverless web application APIs
Learn multiple patterns for data-intensive serverless systems and pipelines, including MapReduce and Lambda Architecture
Learn how to leverage hosted databases, queues, streams, storage services, and notification services
Understand error handling and system monitoring in a serverless architecture
Learn how to set up a serverless application for continuous integration, continuous delivery, and continuous deployment

Who this book is for
If you're a software architect, engineer, or someone who wants to build serverless applications that are non-trivial in complexity and scope, then this book is for you. Basic knowledge of programming and serverless computing concepts is assumed.
Brian Zambrano is a software engineer and architect with a background in cloud-based SaaS application architecture, design, and scalability. Brian has been working with AWS consistently since 2009. For the past several years, he has focused on cloud architecture with AWS using serverless technologies, microservices, containers, and the vast array of AWS services. Brian was born and bred in the San Francisco Bay Area and currently resides in Fort Collins, CO with his wife and twin boys.
You can read this e-book in Legimi apps or in any app that supports the following format:

Page count: 298
Year of publication: 2018
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Richa Tripathi
Acquisition Editor: Sandeep Mishra
Content Development Editor: Akshada Iyer
Technical Editor: Mehul Singh
Copy Editor: Safis Editing
Project Coordinator: Prajakta Naik
Proofreader: Safis Editing
Indexer: Rekha Nair
Graphics: Jisha Chirayil
Production Coordinator: Shraddha Falebhai
First published: April 2018
Production reference: 1110418
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78862-064-2
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Daniel Paul Searles enthusiastically attempted to learn to program from a book at thirteen, only to be completely stumped by a technical error in one of the required coding exercises. A number of years later, he succeeded with another book, which propelled him to gain experience across many languages, operating systems, and tech stacks. The thought of what could have been had he been able to learn to program at a younger age energizes his work as a technical reviewer. He is currently pursuing machine learning and functional programming.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Serverless Design Patterns and Best Practices
Dedication
Packt Upsell
Why subscribe?
PacktPub.com
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
Introduction
What is serverless computing?
No servers to manage
Pay-per-invocation billing model
Ability to automatically scale with usage
Built-in availability and fault tolerance
Design patterns
When to use serverless
The sweet spot
Classes of serverless pattern
Three-tier web application patterns
ETL patterns
Big data patterns
Automation and deployment patterns
Serverless frameworks
Summary
A Three-Tier Web Application Using REST
Serverless tooling
System architecture
Presentation layer
Logic layer
Data layer
Logic layer
Application code and function layout
Organization of the Lambda functions
Organization of the application code
Configuration with environment variables
Code structure
Function layout
Presentation layer
File storage with S3
CDN with CloudFront
Data layer
Writing our logic layer
Application entrypoint
Application logic
Wiring handler.py to Lambda via API Gateway
Deploying the REST API
Deploying the Postgres database
Setting up static assets
Viewing the deployed web application
Running tests
Iteration and deployment
Deploying the entire stack
Deploying the application code
Summary
A Three-Tier Web Application Pattern with GraphQL
Introduction to GraphQL
System architecture
Logic layer
Organization of the Lambda functions
Organization of the application code
Function layout
Presentation layer
Writing the logic layer
Implementing the entry point
Implementing GraphQL queries
Implementing GraphQL mutations
Deployment
Viewing the deployed application
Iteration and deployment
Summary
Integrating Legacy APIs with the Proxy Pattern
AWS API Gateway introduction
Simple proxy to a legacy API
Setting up a pass-through proxy
Deploying a pass-through proxy
Transforming responses from a modern API
Method execution flow
Setting up example
Setting up a new resource and method
Setting up Integration Request
Setting up Integration Response
Complex integration using a Lambda function
Implementing the application code
Setting up a new resource and method
Migration techniques
Staged migration
Migrating URLs
Summary
Scaling Out with the Fan-Out Pattern
System architecture
Synchronous versus asynchronous invocation
Resizing images in parallel
Setting up the project
Setting up trigger and worker functions
Setting up permissions
Implementing the application code
Testing our code
Alternate implementations
Using notifications with subscriptions
Using notifications with queues
Summary
Asynchronous Processing with the Messaging Pattern
Basics of queuing systems
Choosing a queue service
Queues versus streams
Asynchronous processing of Twitter streams
System architecture
Data producer
Mimicking daemon processes with serverless functions
Data consumers
Viewing results
Alternate implementations
Using the Fan-out and Messaging Patterns together
Using a queue as a rate-limiter
Using a dead-letter queue
Summary
Data Processing Using the Lambda Pattern
Introducing the lambda architecture
Batch layer
Speed layer
Lambda serverless architecture
Streaming data producers
Data storage
Computation in the speed layer
Computation in the batch layer
Processing cryptocurrency prices using lambda architecture
System architecture
Data producer
Speed layer
Batch layer
AWS resources
Data producer
Speed layer
Batch layer
Results
Summary
The MapReduce Pattern
Introduction to MapReduce
MapReduce example
Role of the mapper
Role of the reducer
MapReduce architecture
MapReduce serverless architecture
Processing Enron emails with serverless MapReduce
Driver function
Mapper implementation
Reducer implementation
Understanding the limitations of serverless MapReduce
Memory limits
Storage limits
Time limits
Exploring alternate implementations
AWS Athena
Using a data store for results
Using Elastic MapReduce
Summary
Deployment and CI/CD Patterns
Introduction to CI/CD
CI
CD
Setting up unit tests
Code organization
Setting up unit tests
Setting up CI with CircleCI
Configuring CircleCI builds
Setting up environment variables
Setting up CD and deployments with CircleCI
Setting up Slack notifications
Setting up a CircleCI badge
Setting up deployments
Setting up AWS credentials
Setting up environment variables
Executing deployments
Summary
Error Handling and Best Practices
Error tracking
Integrating Sentry for error tracking
Integrating Rollbar
Logging
Structuring log messages
Digesting structured logs
Cold starts
Keeping cloud functions warm
AWS Lambda functions and VPCs
Start-up times for different languages
Allocating more memory
Local development and testing
Local development
Learning about testing locally
Managing different environments
Securing sensitive configuration
Encrypting variables
Decrypting variables
Trimming AWS Lambda versions
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
Serverless architectures are changing the way software systems are being built and operated. When compared with systems that use physical servers or virtual machines, many tools, techniques, and patterns remain the same; however, there are several things that can or need to change drastically. To fully capitalize on the benefits of serverless systems, tools, patterns, and best practices should be thought through carefully before embarking on a serverless journey.
This book introduces and describes reusable patterns applicable to almost any type of serverless application, whether it be web systems, data processing, big data, or Internet of Things. You will learn, by example and explanation, about various patterns within a serverless context, such as RESTful APIs, GraphQL, proxy, fan-out, messaging, lambda architecture, and MapReduce, as well as when to use these patterns to make your applications scalable, performant, and fault tolerant. This book will take you through techniques for Continuous Integration and Continuous Deployment as well as designs for testing, securing, and scaling your serverless applications. Learning and applying these patterns will speed up your development lifecycle, while also improving the overall application architecture when building on top of your serverless platform of choice.
This book is aimed at software engineers, architects, and anyone who is interested in building serverless applications using a cloud provider. Readers should be interested in learning popular patterns to improve agility, code quality, and performance, while also avoiding some of the pitfalls that new users may fall into when starting with serverless systems. Programming knowledge and basic serverless computing concepts are assumed.
Chapter 1, Introduction, covers the basics of serverless systems and discusses when serverless architectures may or may not be a good fit. Three categories of serverless patterns are introduced and briefly explained.
Chapter 2, A Three-Tier Web Application Using REST, walks you through a full example of building a traditional web application using a REST API powered by AWS Lambda, along with serverless technologies for hosting HTML, CSS, and JavaScript for the frontend code.
Chapter 3, A Three-Tier Web Application Pattern with GraphQL, introduces GraphQL and explains the changes needed to turn the previous REST API into a GraphQL API.
Chapter 4, Integrating Legacy APIs with the Proxy Pattern, demonstrates how it's possible to completely change an API contract while using a legacy API backend using nothing other than AWS API Gateway.
Chapter 5, Scaling Out with the Fan-Out Pattern, teaches you one of the most basic serverless patterns around, where a single event triggers multiple parallel serverless functions, resulting in quicker execution times over a serial implementation.
Chapter 6, Asynchronous Processing with the Messaging Pattern, explains different classes of messaging patterns and demonstrates how to put messages onto a queue using a serverless data producer, and process those messages downstream with a serverless data consumer.
Chapter 7, Data Processing Using the Lambda Pattern, explains how you can use multiple subpatterns to create two planes of computation, which provide views into historical aggregated data as well as real-time data.
Chapter 8, The MapReduce Pattern, explores an example implementation of aggregating large volumes of data in parallel, similar to the way systems such as Hadoop work.
Chapter 9, Deployment and CI/CD Patterns, explains how to set up Continuous Integration and Continuous Delivery for serverless projects and what to keep in mind when doing so, in addition to showing examples of continuous deployment.
Chapter 10, Error Handling and Best Practices, reviews the tools and techniques for automatically tracking unexpected errors as well as several best practices and tips when creating serverless applications.
Almost all of the examples in this book use the Serverless Framework to manage AWS resources and Lambda functions. Installation instructions for the Serverless Framework can be found at https://serverless.com/framework/docs/getting-started/.
In addition to the Serverless Framework, readers will need to have an AWS account to run the examples. For those new to AWS, you can create a new account, which comes with a year of usage in their Free Tier, at https://aws.amazon.com.
During the course of this book, you will need the following tools:
AWS Lambda
AWS RDS
AWS API Gateway
AWS DynamoDB
AWS S3
AWS SQS
AWS Rekognition
AWS Kinesis
AWS SNS
We will learn how to use these tools through the course of this book.
You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packtpub.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Serverless-Design-Patterns-and-Best-Practices. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Feedback from our readers is always welcome.
General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packtpub.com.
It's an exciting time to be in the software industry. Over the past few years, we've seen an evolution in architectural patterns, with a considerable movement away from large, monolithic applications toward microservices. As cloud computing has evolved, so too have the systems and services we software developers have at our disposal. One of the most revolutionary tools in this domain is lambda functions or, more accurately, Functions as a Service. A step beyond microservices, being able to run, manage, and deploy a single function as a distinct entity has pushed us into the realm of nanoservices.
Of course, this book focuses on design patterns for serverless computing. The best place to start then is: what are design patterns and what is serverless computing?
If you're just beginning your journey into the world of serverless systems and patterns, I encourage you to read other resources to get more details on these and related topics. The discussion that follows is intended to set the stage for building systems with patterns; it is not meant to explain the foundations of serverless platforms or their concepts in exhaustive detail.
In this chapter, I'll first define a few relevant terms and concepts before diving deeper into those topics. Then, I'll discuss when serverless architectures are or are not a good fit. Finally, I'll explain the various categories of serverless patterns that I'll present in this book. I presume that you, the reader, are somewhat familiar with these large topics, but absolute mastery is not required.
At the end of this chapter, you should be able to do the following:
Describe the term serverless in your own words
Know how design patterns relate to serverless architectures
Understand general classifications of serverless design patterns
Let's start with the simpler of the two questions first—what is serverless computing? While there may be several ways to define serverless computing, or perhaps more accurately serverless architectures, most people can agree on a few key attributes. Serverless computing platforms share the following features:
No operating systems to configure or manage
Pay-per-invocation billing model
Ability to automatically scale with usage
Built-in availability and fault tolerance
While there are other attributes that come with serverless platforms, they all share these common traits. Additionally, there are other serverless systems that provide functionality other than general computing power. Examples of these are DynamoDB, Kinesis, and Simple Queue Service, all of which fall under the Amazon Web Services (AWS) umbrella. Even though these systems are not pay-per-invocation, they fall into the serverless category since the management of the underlying systems is delegated to the AWS team, scaling is a matter of changing a few settings, fault-tolerance is built-in, and high availability is handled automatically.
Arguably, this is where the term serverless came from and is at the heart of this entire movement. If we look back not too long ago, we can see a time when operations teams had to purchase physical hardware, mount it in a data center, and configure it. All of this was required before engineers even had the chance of deploying their software.
Cloud computing, of course, revolutionized this process and turned it upside down, putting individual engineers in the driver's seat. With a few clicks or API calls, we could now get our very own virtual private server (VPS) in minutes rather than weeks or months. While this was and is incredibly enabling, most of the work of setting up systems remained. A short list of things to worry about includes the following:
Updating the operating system
Securing the operating system
Installing system packages
Dealing with dependency management
This list goes on and on. A point worth noting is that there may be hours and hours of configuration and management before we're in a position to deploy and test our software.
To ease the burden of system setup, configuration software such as Puppet, Chef, SaltStack, and Ansible arrived on the scene. Again, these were and are incredibly enabling. Once you have your recipes in place, configuring a new virtual host is, most importantly, repeatable and hopefully less error-prone than doing a manual setup. In systems that comprise hundreds or even thousands of virtual servers, some automation is a requirement rather than a mere convenience.
As lovely as these provisioning tools are, they do come with a significant cost of ownership and can be incredibly time-consuming to develop and maintain. Often, iterating on this infrastructure-as-code tooling requires making changes and then executing them. Starting up a new virtual host is orders of magnitude faster than setting up a physical server; however, we measure VPS boot time and provisioning time in minutes. Additionally, these are software systems in and of themselves that a dedicated team needs to learn, test, debug, and maintain. On top of this, you need to continually maintain and update provisioning tools and scripts in parallel with any changes to your operating systems. If you wanted to change the base operating system, it would be possible but not without significant investment and updates to your existing code.
When Lambda was launched by AWS in 2014, a new paradigm for computing and software management was born. In contrast to managing your virtual hosts, AWS Lambda provided developers the ability to deploy application code in a managed environment without needing to manage virtual hosts themselves. Of course, there are servers running somewhere that are operated by someone. However, the details of these servers are opaque to us as application developers. No longer do we need to worry about the operating system and its configuration directly. With AWS Lambda and other Functions as a Service (FaaS) platforms, we can now delegate the work of VPS management to the teams behind those platforms.
The most significant shift in thinking with FaaS platforms is that the unit of measure has shrunk from a virtual machine to a single function.
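To make that shift concrete, here is a minimal sketch of the new unit of deployment: a single Python function. The `(event, context)` signature and the `statusCode`/`body` response shape match what AWS Lambda's Python runtime and an API Gateway proxy integration expect, but the payload and greeting logic are invented for illustration only.

```python
import json

def handler(event, context):
    """A complete unit of deployment on a FaaS platform: one function.

    `event` carries the invocation payload; `context` exposes runtime
    metadata such as the request ID and remaining execution time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; on AWS Lambda, the platform calls
# handler() for you, and no server process is ever started here.
print(handler({"name": "serverless"}, None))
```

Deploying a function like this involves no operating system, package repository, or web server configuration; tools such as the Serverless Framework, used throughout this book, reduce it to a single deploy command.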
Another significant change with the invention of serverless platforms is the pay-per-invocation model. Before this, billing models were typically per minute or hour. While this was the backbone of elastic computing, servers needed to stay up and running if they were used in any production environment.
Paying for a VPS only while it's running is a great model when developing, since you can start it at the beginning of the day and terminate it at the end of the day. However, when a system needs to be available all the time, the price you pay is nearly the same whether its CPU is at 100% usage or 0.0001% usage.
Serverless platforms, on the other hand, bill only while your code is executing. They are designed for, and shine with, systems that are stateless and have a finite, relatively short duration. As such, billing is typically calculated based on total invocation time. This model works exceptionally well for smaller systems that may get only a few calls or invocations per day. On many platforms, it's possible to run a production system that is always available completely for free. There is no such thing as idle time in the world of serverless.
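The arithmetic behind this model is straightforward: a small per-request charge plus compute time measured in GB-seconds (memory allocated multiplied by seconds executed). The rates below are illustrative assumptions rather than a quote of any provider's current pricing, but they show why a low-traffic system can cost a fraction of a cent per month:

```python
# Illustrative pay-per-invocation rates -- assumptions for the sake of
# arithmetic, not current prices for any provider.
PRICE_PER_MILLION_REQUESTS = 0.20  # dollars
PRICE_PER_GB_SECOND = 0.00001667   # dollars

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a month's bill: request charge plus GB-seconds of compute."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# A low-traffic production system: 10,000 invocations per month,
# 200 ms per invocation, 128 MB of memory.
print(f"${monthly_cost(10_000, 200, 128):.4f} per month")
```

In practice, a provider's free tier would often absorb a bill this small entirely, which is why an always-available but rarely invoked system can run at no cost.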
Gone are the days of needing to overprovision a system with more virtual hosts than you typically need. As invocations ramp up, the underlying system automatically scales up, providing you with a known number of concurrent invocations. Moving this limit higher is merely a matter of making a support request to Amazon, in the case of AWS Lambda. Before this, managing horizontal scalability was an exercise for the team designing the system. Never has horizontal scalability for computing resources been so easy.
Different cloud providers provide the ability to scale up or down (that is, be elastic) based on various parameters and metrics. Talk to DevOps folks or engineers who run systems with autoscaling and they will tell you it's not a trivial matter and is difficult to get right.
Servers, real or virtual, can and do fail. Since the hosts that run your code are now of little or no concern for you, it's a worry not worth having.
Just as the management of the operating system is handled for you, so too is the management of failing servers. You can be guaranteed that when your application code should be invoked, it will be.
With a good understanding of serverless computing behind us, let's turn our attention to design patterns.
If you've spent any amount of time working with software, you will have heard the term design pattern and may very well be familiar with them to some degree. Stepping back slightly, let's discuss what a design pattern is.
I will assert that if you ask 10 different developers to define the term design pattern, you will get 10 different answers. While we all may have our own definition, and while those definitions may not be wrong, it's relatively simple to agree on the general spirit or idea of a software design pattern. Within the context of software engineering, design patterns are reusable solutions or forms of code organization applied to a frequently occurring problem. For example, the Model-View-Controller pattern evolved to solve a problem common to GUI applications: keeping application state, its presentation, and the input that changes it cleanly separated. It can be implemented in almost any language for nearly any GUI application.
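As a refresher, the Model-View-Controller separation can be sketched in a few lines of Python. This is a toy illustration with invented class and method names, not production GUI code, but the division of responsibilities is the reusable part of the pattern:

```python
class Model:
    """Holds application state and nothing else."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class View:
    """Renders state; knows nothing about how it changes."""
    def render(self, model):
        return "\n".join(f"- {item}" for item in model.items)

class Controller:
    """Translates user input into model updates."""
    def __init__(self, model):
        self.model = model

    def handle_input(self, text):
        self.model.add(text.strip())

model = Model()
controller = Controller(model)
controller.handle_input("  learn serverless patterns ")
print(View().render(model))  # - learn serverless patterns
```

Swapping the view (a terminal, a web page, a desktop widget) never touches the model, which is exactly the kind of reusable structure a design pattern provides.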
Software design patterns evolved as a solution to help software authors be more efficient by applying well-known and proven templates to their application code. Likewise, architectural patterns provide the same benefits but at the level of the overall system design, rather than at the code level.
In this book, we won't be focusing on software design, but rather architectural design in serverless systems. In that vein, it's worth noting that the context of this book is serverless architectures and our patterns will manifest themselves as reusable solutions that you can use to organize your functions and other computing resources to solve various types of problem on your serverless platform of choice.
Of course, there is an infinite number of ways to organize your application code and hundreds of software and architectural patterns you can use. The primary focus here is the general organization or grouping of your functions, how they interact with one another, the roles and responsibilities of each function, and how they operate in isolation but work together to compose a larger and more complex system.
As serverless systems gain traction and become more and more popular, I would expect serverless patterns such as those we will discuss in this book to grow in both popularity and number.
Many types of computing problem can be solved with a serverless design. Personally speaking, I have a hard time not using serverless systems nowadays due to the speed, flexibility, and adaptability they provide. The classes of problem that are suitable for serverless systems are extensive. Still, there is a sweet spot that is good to keep in mind when approaching new problems. Outside of the sweet spot, there are problems that are not a good fit.
Since serverless systems work on the basis of a single function, they are well suited to problems that are, or can be broken down into, the following subsystems:
Stateless
Computationally small and predictable
Serverless functions are ephemeral; that is, they have a known lifetime. Computation that is itself stateless is the type of problem where FaaS platforms shine. Application state may exist, and functions may store that state using a database or some other kind of data store, but the functions themselves retain no state between invocations.
In terms of computing resources, serverless functions have an upper bound on both memory and total duration. Your software should have an expected or predictable upper limit that is below that of your FaaS provider. At the time of writing, AWS Lambda functions have an upper bound of 1,536 MB of memory and 300 seconds of duration, while Google Cloud Functions advertises an upper limit of 540 seconds. Regardless of the actual values, systems where you can reliably stay within these bounds are good candidates for moving to a serverless architecture.
A good, albeit trivial, example of this would be a data transformation function—given some input data, transform it into a different data structure. It should be clear with such a simple example that no state needs to be or is carried between one invocation and the next. Of course, data comes in various sizes, but if your system is fed data of a predictable size, you should be able to process the data within a certain timeframe.
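A stateless transformation of this kind might look like the following handler. The record shape and field names are hypothetical; the point is that the output is derived entirely from the input event, so nothing needs to survive from one invocation to the next:

```python
def transform(event, context=None):
    """Reshape a list of raw records into a structure keyed by ID.

    Stateless by construction: no module-level or persisted state is
    read or written, so every invocation is independent of the last.
    """
    records = event.get("records", [])
    return {
        "count": len(records),
        "by_id": {r["id"]: {"name": r["name"].title()} for r in records},
    }

payload = {
    "records": [
        {"id": 1, "name": "ada lovelace"},
        {"id": 2, "name": "alan turing"},
    ]
}
print(transform(payload))
```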
In contrast, long-running processes that share state are not good fits for serverless. The reason for this is that functions die at the end of their life, leaving any in-memory state to die with them. Imagine a long-running process such as an application server handling WebSocket connections.
WebSockets are, by definition, stateful and can be compared to a phone call: a client opens a connection to a server, and that connection is kept open for as long as the client likes. Scenarios such as this are not a good fit for serverless functions for the following two reasons:
State exists (i.e., the state of the call is either connected or disconnected)
The process is long-lived because the connection can remain open for hours or days
Whenever I approach a new problem and begin to consider serverless, I ask myself these two questions:
Is there any global state involved that needs to be kept track of within the application code?
Is the computation to be performed beyond the system limits of my serverless platform?
The good news is that, very often, the answer to these questions is no and I can move forward and build my application using a serverless architecture.
In this book, we'll discuss four major classes of serverless design patterns:
Three-tier web application patterns
Extract, transform, load (ETL) patterns
Big data patterns
Automation and deployment patterns
Web applications with the traditional request/response cycle are a sweet spot for serverless systems. Because serverless functions are short-lived, they lend themselves well to problems that are themselves short-lived and stateless. We have seen stateful systems emerge and become popular, such as WebSockets; however, much of the web and web applications still run in the traditional stateless request/response cycle. In our first set of patterns, we'll build different versions of web application APIs.
While there are three different patterns to cover for web applications, they will all share a common basis, which is the three-tier model. Here, the tiers are made up of the following:
Content Delivery Network (CDN) for presentation code/static assets (HTML, JavaScript, CSS, and so on)
Database for persistence
Serverless functions for application logic
REST APIs should be a common and familiar tool for most web developers. In Chapter 2, A Three-Tier Web Application Using REST, we'll build out a fully featured REST API with a serverless design. This API will have all of the methods you'd expect in a classic REST API—create, read, update, delete (CRUD).
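To preview the shape such an API takes, a single function behind API Gateway's Lambda proxy integration can dispatch on the HTTP method roughly as follows. This is a sketch: the `httpMethod`, `pathParameters`, and `body` keys follow the API Gateway proxy event format, while the `db_*` helpers and the in-memory table are placeholders for a real persistence tier such as DynamoDB (function memory does not survive between invocations):

```python
import json

# In-memory stand-in for the persistence tier. A real deployment
# would use an external store, since function state is ephemeral.
_TABLE = {}

def db_create(item):
    _TABLE[item["id"]] = item
    return item

def db_read(item_id):
    return _TABLE.get(item_id)

def db_update(item_id, fields):
    _TABLE[item_id].update(fields)
    return _TABLE[item_id]

def db_delete(item_id):
    _TABLE.pop(item_id, None)

def handler(event, context):
    # API Gateway's proxy integration supplies the HTTP method and
    # path parameters directly on the event.
    method = event["httpMethod"]
    if method == "POST":
        item = db_create(json.loads(event["body"]))
    elif method == "GET":
        item = db_read(event["pathParameters"]["id"])
    elif method == "PUT":
        item = db_update(event["pathParameters"]["id"],
                         json.loads(event["body"]))
    elif method == "DELETE":
        db_delete(event["pathParameters"]["id"])
        return {"statusCode": 204, "body": ""}
    else:
        return {"statusCode": 405, "body": "method not allowed"}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Each branch maps one HTTP verb to one CRUD operation, which is the essence of the REST design we'll build out in full in Chapter 2.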
While REST APIs are common and well understood, they do face some challenges. After starting with a serverless REST API, we'll walk through the process of designing the changes needed to make that same API work as a single GraphQL endpoint that provides the same functionality in Chapter 3, A Three-Tier Web Application Pattern with GraphQL.
Finally, in Chapter 4, Integrating Legacy APIs with the Proxy Pattern, we'll use a proxy pattern to show how it's possible to completely change an API but use a legacy API backend. This design is especially interesting for those who would like to get started migrating an API to a serverless platform but have an existing API to maintain.
ETL patterns are another area of computing that lends itself very well to serverless platforms. At a high level, an ETL job comprises the following three steps:
Extracting data from one data source
Transforming that data appropriately
Loading the processed data into another data source
Often used in analytics and data warehousing, ETL jobs are hard to escape. Since this problem is again ephemeral, and since users generally want their ETL jobs to execute as quickly as possible, serverless systems are a great platform in this problem space. While serverless computation is typically short-lived, we will see how ETL processes can be designed to be long-running in order to work through large amounts of data.
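The three steps above can be sketched as a chain of pure functions, each one stateless and therefore runnable inside a single serverless invocation. In this sketch the source and destination are plain Python lists standing in for real stores such as S3 objects or database tables, and the record fields are hypothetical:

```python
def extract(source):
    # Extract: read raw rows from the source store.
    return list(source)

def transform(rows):
    # Transform: normalize names, parse amounts, and drop rows
    # without an amount. Purely a function of its input.
    return [
        {"name": r["name"].title(), "amount": float(r["amount"])}
        for r in rows
        if r.get("amount")
    ]

def load(rows, destination):
    # Load: write the processed rows into the destination store and
    # report how many were written.
    destination.extend(rows)
    return len(rows)

def run_etl(source, destination):
    # The whole job is a composition of the three stateless steps.
    return load(transform(extract(source)), destination)
```

Because each step is independent, the stages can later be split across separate functions and fanned out in parallel, which is exactly where the patterns in the coming chapters pick up.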
In the fan-out pattern, discussed in Chapter 5, Scaling Out with the Fan-Out Pattern,
