Master over 60 recipes to help you deliver completely scalable and serverless cloud-native applications
Key Features
Book Description
Cloud-native development is a modern approach to building and running applications that leverages the merits of the cloud computing model. With cloud-native development, teams can deliver faster, and in a leaner and more agile manner, than with traditional approaches. This recipe-based guide provides quick solutions for your cloud-native applications.
Beginning with a brief introduction, JavaScript Cloud-Native Development Cookbook guides you in building and deploying serverless, event-driven, cloud-native microservices on AWS with Node.js. You'll then move on to the fundamental patterns of developing autonomous cloud-native services and understand the tools and techniques involved in creating globally scalable, highly available, and resilient cloud-native applications. The book also covers multi-regional deployments and leveraging the edge of the cloud to maximize responsiveness, resilience, and elasticity.
In the later chapters, you'll explore techniques for building fully automated, continuous deployment pipelines and gain insights into polyglot cloud-native development on popular cloud platforms such as Azure and Google Cloud Platform (GCP). By the end of the book, you'll be able to apply these skills to build powerful cloud-native solutions.
What you will learn
Who this book is for
If you want to develop powerful serverless, cloud-native solutions, this book is for you. You are expected to have basic knowledge of concepts of microservices and hands-on experience with Node.js to understand the recipes in this book.
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Merint Mathew
Acquisition Editor: Alok Dhuri
Content Development Editor: Akshada Iyer
Technical Editor: Adhithya Haridas
Copy Editor: Safis Editing
Project Coordinator: Prajakta Naik
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Graphics: Jisha Chirayil
Production Coordinator: Aparna Bhagat
First published: September 2018
Production reference: 1260918
Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.
ISBN 978-1-78847-041-4
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
I have known John for a little over two years through his involvement in cloud technologies, especially in serverless architectures and building applications using the Serverless Framework. He took us on a cloud-native journey, thinking and reasoning about new paradigms in building software on the cloud in his previous book, Cloud Native Development Patterns and Best Practices.
Continuing on his journey to explore cloud-native systems, he brings us his new book, JavaScript Cloud Native Development Cookbook, to showcase recipes in a cookbook format that deliver lean and autonomous cloud-native services. In a nutshell, the recipes in the book demonstrate how to build cloud-native software at a global scale, utilize event-driven architectures, and build an autonomous development environment, from a developer committing code all the way to deploying the application through a continuous deployment pipeline.
Serverless computing is the way to go when building cloud-native applications. With no servers to manage or patch, pay-per-execution billing, no charges for idle time, auto-scaling, and a microservices/event-driven architecture, there is really no reason not to adopt serverless computing.
In this book, John presents practical Node.js recipes for building serverless cloud-native applications on AWS. The recipes are battle-tested and work around the pitfalls that present themselves in real-life scenarios. The recipes are built with best practices and practical development workflows in mind.
John takes the traditional cloud-native principles and shows you how to implement them with serverless technologies to give you the ultimate edge in building and deploying modern cloud-native applications.
The book covers building a stack from scratch and deploying it to AWS using the Serverless Framework, which automates a lot of the mundane work, letting you focus on building the business functionality. It goes on to incorporate event sourcing, CQRS patterns, and data lakes, and shows you how to implement autonomous cloud-native services. The recipes cover leveraging a CDN to execute code at the edge of the cloud and implementing security best practices. The book walks you through techniques for optimizing performance and observability while designing applications to manage failure. It showcases deployment at scale, using multiple regions to tackle latency-based routing, regional failovers, and regional database replication.
You will find extensive explanations on core concepts, code snippets with how-it-works details, and a full source code repository of these recipes, for easy use in your own projects.
This book has a special place on my bookshelf, and I hope you will enjoy it as much as I did.
Rupak Ganguly, Enterprise Relations and Advocacy, Serverless Inc.
John Gilbert is a CTO with over 25 years of experience of architecting and delivering distributed, event-driven systems. His cloud journey started more than five years ago and has spanned all the levels of cloud maturity—through lift and shift, software-defined infrastructure, microservices, and continuous deployment. He is the author of Cloud Native Development Patterns and Best Practices. He finds delivering cloud-native solutions to be by far the most fun and satisfying, as they force us to rewire how we reason about systems and enable us to accomplish far more with much less effort.
Max Brinegar is a principal software engineer with credentials that include a B.S. degree in Computer Science from the University of Maryland, College Park, experience in cloud-native development at Dante Consulting, Inc., and an AWS developer certification. His experience and expertise include web services development and deployment, modern programming techniques, information processing, and serverless architecture for software on AWS and Azure.
Joseph Staley is a senior software developer who currently specializes in JavaScript and cloud computing architecture. With over 20 years of experience in software design and development, he has helped create solutions for many companies across many industries. Originally building solutions utilizing on-premises Java servers, he has transitioned to implementing cloud-based solutions with JavaScript.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
JavaScript Cloud Native Development Cookbook
Dedication
Packt Upsell
Why subscribe?
PacktPub.com
Foreword
Contributors
About the author
About the reviewers
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Sections
Getting ready
How to do it…
How it works…
There's more…
See also
Get in touch
Reviews
Getting Started with Cloud-Native
Introduction
Creating a stack
Getting ready
How to do it...
How it works...
Creating a function and working with metrics and logs
How to do it...
How it works...
Creating an event stream and publishing an event
How to do it...
How it works...
Creating a stream processor
Getting ready
How to do it...
How it works...
Creating an API Gateway
How to do it...
How it works...
Deploying a single-page application
How to do it...
How it works...
Applying the Event Sourcing and CQRS Patterns
Introduction
Creating a data lake
Getting ready
How to do it...
How it works...
Applying the event-first variant of the Event Sourcing pattern
How to do it...
How it works...
Creating a micro event store
How to do it...
How it works...
Applying the database-first variant of the Event Sourcing pattern with DynamoDB
How to do it...
How it works...
Applying the database-first variant of the Event Sourcing pattern with Cognito datasets
How to do it...
How it works...
Creating a materialized view in DynamoDB
How to do it...
How it works...
Creating a materialized view in S3
How to do it...
How it works...
Creating a materialized view in Elasticsearch
How to do it...
How it works...
Creating a materialized view in a Cognito dataset
How to do it...
How it works...
Replaying events
Getting ready
How to do it...
How it works...
Indexing the data lake
How to do it...
How it works...
Implementing bi-directional synchronization
How to do it...
How it works...
Implementing Autonomous Services
Introduction
Implementing a GraphQL CRUD BFF
Getting ready
How to do it...
How it works...
Implementing a search BFF
How to do it...
How it works...
Implementing an analytics BFF
How to do it...
How it works...
Implementing an inbound External Service Gateway
How to do it...
How it works...
Implementing an outbound External Service Gateway
Getting ready
How to do it...
How it works...
Orchestrating collaboration between services
How to do it...
How it works...
Implementing a Saga
How to do it...
How it works...
Leveraging the Edge of the Cloud
Introduction
Serving a single-page application from a CDN
How to do it...
How it works...
Associating a custom domain name with a CDN
Getting ready
How to do it...
How it works...
Serving a website from the CDN
How to do it...
How it works...
Deploying a service behind a CDN
How to do it...
How it works...
Serving static JSON from a CDN
How to do it...
How it works...
Triggering the invalidation of content in a CDN
How to do it...
How it works...
Executing code at the edge of the cloud
How to do it...
How it works...
Securing Cloud-Native Systems
Introduction
Securing your cloud account
How to do it...
How it works...
Creating a federated identity pool
How to do it...
How it works...
Implementing sign up, sign in, and sign out
How to do it...
How it works...
Securing an API Gateway with OpenID Connect
Getting ready
How to do it...
How it works...
Implementing a custom authorizer
Getting ready
How to do it...
How it works...
Authorizing a GraphQL-based service
Getting ready
How to do it...
How it works...
Implementing a JWT filter
Getting ready
How to do it...
How it works...
Using envelope encryption
How to do it...
How it works...
Creating an SSL certificate for encryption in transit
Getting ready
How to do it...
How it works...
Configuring a web application firewall
How to do it...
How it works...
Replicating the data lake for disaster recovery
How to do it...
How it works...
Building a Continuous Deployment Pipeline
Introduction
Creating the CI/CD pipeline
Getting ready
How to do it...
How it works...
Writing unit tests
How to do it...
How it works...
Writing integration tests
Getting ready
How to do it...
How it works...
Writing contract tests for a synchronous API
How to do it...
How it works...
Writing contract tests for an asynchronous API
How to do it...
How it works...
Assembling transitive end-to-end tests
How to do it...
How it works...
Leveraging feature flags
Getting ready
How to do it...
How it works...
Optimizing Observability
Introduction
Monitoring a cloud-native system
Getting ready
How to do it...
How it works...
Implementing custom metrics
How to do it...
How it works...
Monitoring domain events
Getting ready
How to do it...
How it works...
Creating alerts
How to do it...
How it works...
Creating synthetic transaction tests
Getting ready
How to do it...
How it works...
Designing for Failure
Introduction
Employing proper timeouts and retries
How to do it...
How it works...
Implementing backpressure and rate limiting
Getting ready
How to do it...
How it works...
Handling faults
How to do it...
How it works...
Resubmitting fault events
How to do it...
How it works...
Implementing idempotence with an inverse OpLock
How to do it...
How it works...
Implementing idempotence with Event Sourcing
How to do it...
How it works...
Optimizing Performance
Introduction
Tuning Function as a Service
How to do it...
How it works...
Batching requests
Getting ready
How to do it...
How it works...
Leveraging asynchronous non-blocking IO
How to do it...
How it works...
Grouping events in stream processors
How to do it...
How it works...
Autoscaling DynamoDB
How to do it...
How it works...
Utilizing cache-control
How to do it...
How it works...
Leveraging session consistency
How to do it...
How it works...
Deploying to Multiple Regions
Introduction
Implementing latency-based routing
Getting ready
How to do it...
How it works...
Creating a regional health check
Getting ready
How to do it...
How it works...
Triggering regional failover
Getting ready
How to do it...
How it works...
Implementing regional replication with DynamoDB
Getting ready
How to do it...
How it works...
Implementing round-robin replication
How to do it...
How it works...
Welcoming Polycloud
Introduction
Creating a service with Google Cloud Functions
Getting ready
How to do it...
How it works...
Creating a service with Azure Functions
Getting ready
How to do it...
How it works...
Other Books You May Enjoy
Leave a review - let other readers know what you think
Welcome to the JavaScript Cloud Native Development Cookbook. This cookbook is packed full of recipes to help you along your cloud-native journey. It is intended to be a companion to another of my books, Cloud Native Development Patterns and Best Practices. I have personally found delivering cloud-native solutions to be, by far, the most fun and satisfying development practice. This is because cloud-native is more than just optimizing for the cloud. It is an entirely different way of thinking and reasoning about software systems.
In a nutshell, cloud-native is lean and autonomous. Powered by disposable infrastructure, leveraging fully managed cloud services and embracing disposable architecture, cloud-native empowers everyday, self-sufficient, full-stack teams to rapidly and continuously experiment with innovations, while simultaneously building global-scale systems with much less effort than ever before. Following this serverless-first approach allows teams to move fast, but this rapid pace also opens the opportunity for honest human error to wreak havoc across the system. To guard against this, cloud-native systems are composed of autonomous services, which creates bulkheads between the services to reduce the blast radius during a disruption.
In this cookbook, you will learn how to build autonomous services by eliminating all synchronous inter-service communication. You will turn the database inside out and ultimately turn the cloud into the database by implementing the event sourcing and CQRS patterns with event streaming and materialized views. Your team will build confidence in its ability to deliver because asynchronous inter-service communication and data replication remove the downstream and upstream dependencies that make systems brittle. You will also learn how to continuously deploy, test, observe, optimize, and secure your autonomous services across multiple regions.
To get the most out of this book, be prepared with an open mind to discover why cloud-native is different. Cloud-native forces us to rewire how we reason about systems. It tests all our preconceived notions of software architecture. So, be prepared to have a lot of fun building cloud-native systems.
This book is intended to help create self-sufficient, full-stack, cloud-native development teams. Some cloud experience is helpful, but not required. Basic knowledge of the JavaScript language is assumed. The book serves as a reference for experienced cloud-native developers and as a quick start for entry-level cloud-native developers. Most of all, this book is for anyone who is ready to rewire their engineering brain for cloud-native development.
Chapter 1, Getting Started with Cloud-Native, showcases how the ease of defining and deploying serverless, cloud-native resources, such as functions, streams, and databases, empowers self-sufficient, full-stack teams to continuously deliver with confidence.
Chapter 2, Applying the Event Sourcing and CQRS Patterns, demonstrates how to use these patterns to create fully autonomous services, by eliminating inter-service synchronous communication through the use of event streaming and materialized views.
Chapter 3, Implementing Autonomous Services, explores the boundary and control patterns for creating autonomous services, such as Backend for Frontend, External Service Gateway, and Event Orchestration.
Chapter 4, Leveraging the Edge of the Cloud, provides concrete examples of using a cloud provider's content delivery network to increase the performance and security of autonomous services.
Chapter 5, Securing Cloud-Native Systems, looks at leveraging the shared responsibility model of cloud-native security so that you can focus your efforts on securing the domain-specific layers of your cloud-native systems.
Chapter 6, Building a Continuous Deployment Pipeline, showcases techniques, such as task branch workflow, transitive end-to-end testing, and feature flags, that help teams continuously deploy changes to production with confidence by shifting deployment and testing all the way to the left, controlling batch sizes, and decoupling deployment from release.
Chapter 7, Optimizing Observability, demonstrates how to instill team confidence by continuously testing in production to assert the health of a cloud-native system and by placing our focus on the mean time to recovery.
Chapter 8, Designing for Failure, deals with techniques, such as backpressure, idempotency, and the Stream Circuit Breaker pattern, for increasing the robustness and resilience of autonomous services.
Chapter 9, Optimizing Performance, explores techniques, such as asynchronous non-blocking IO, session consistency, and function tuning, for boosting the responsiveness of autonomous services.
Chapter 10, Deploying to Multiple Regions, demonstrates how to deploy global autonomous services that maximize availability and minimize latency by creating fully replicated, active-active deployments across regions and implementing regional failover.
Chapter 11, Welcoming Polycloud, explores the freedom provided by choosing the right cloud provider one service at a time while maintaining a consistent development pipeline experience.
To follow along with the recipes in this cookbook, you will need to configure your development environment according to these steps:
Install Node Version Manager (https://github.com/creationix/nvm or https://github.com/coreybutler/nvm-windows).
Install Node.js with nvm install 8.
Install the Serverless Framework with npm install serverless -g.
Create a MY_STAGE environment variable with export MY_STAGE=<your-name>.
Create an AWS account (https://aws.amazon.com/free) and configure your credentials for the Serverless Framework (https://serverless.com/framework/docs/providers/aws/guide/credentials).
You can download the example code files for this book from your account at http://www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Log in or register at www.packt.com.
Select the SUPPORT tab.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/JavaScript-Cloud-Native-Development-Cookbook. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781788470414_ColorImages.pdf.
In this book, you will find several headings that appear frequently (Getting ready, How to do it..., How it works..., There's more..., and See also).
To give clear instructions on how to complete a recipe, use these sections as follows:
This section tells you what to expect in the recipe and describes how to set up any software or any preliminary settings required for the recipe.
This section contains the steps required to follow the recipe.
This section usually consists of a detailed explanation of what happened in the previous section.
This section consists of additional information about the recipe in order to make you more knowledgeable about the recipe.
This section provides helpful links to other useful information for the recipe.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, please email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
In this chapter, the following recipes will be covered:
Creating a stack
Creating a function and working with metrics and logs
Creating an event stream and publishing an event
Creating a stream processor
Creating an API Gateway
Deploying a single-page application
Cloud-native is lean. Companies today must continuously experiment with new product ideas so that they can adapt to changing market demands; otherwise, they risk falling behind their competition. To operate at this pace, they must leverage fully managed cloud services and fully-automated deployments to minimize time to market, mitigate operating risks, and empower self-sufficient, full-stack teams to accomplish far more with much less effort.
The recipes in this cookbook demonstrate how to use fully managed, serverless cloud services to develop and deploy lean and autonomous services. This chapter contains bare-bones recipes with no clutter in order to focus on the core aspects of deploying cloud-native components and to establish a solid foundation for the remainder of this cookbook.
Each autonomous cloud-native service and all its resources are provisioned as a cohesive and self-contained group called a stack. On AWS, these are CloudFormation stacks. In this recipe, we will use the Serverless Framework to create and manage a bare-bones stack to highlight the steps involved in deploying a cloud-native service.
Before starting this recipe, you will need to follow the instructions in the Preface for configuring your development environment with Node.js, the Serverless Framework, and AWS account credentials.
Create the project from the following template:
$ sls create --template-url https://github.com/danteinc/js-cloud-native-cookbook/tree/master/ch1/create-stack --path cncb-create-stack
Navigate to the cncb-create-stack directory with cd cncb-create-stack.
Review the file named serverless.yml with the following content:

service: cncb-create-stack
provider:
  name: aws
Review the file named package.json with the following content:

{
  "name": "cncb-create-stack",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "test": "sls package -r us-east-1 -s test",
    "dp:lcl": "sls deploy -r us-east-1",
    "rm:lcl": "sls remove -r us-east-1"
  },
  "devDependencies": {
    "serverless": "1.26.0"
  }
}
Install the dependencies with npm install.
Run the tests with npm test.
Review the contents generated in the .serverless directory.
Deploy the stack:
$ npm run dp:lcl -- -s $MY_STAGE
> cncb-create-stack@1.0.0 dp:lcl <path-to-your-workspace>/cncb-create-stack
> sls deploy -r us-east-1 "-s" "john"
Serverless: Packaging service...
Serverless: Creating Stack...
Serverless: Checking Stack create progress.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Service Information
service: cncb-create-stack
stage: john
region: us-east-1
stack: cncb-create-stack-john
api keys:
  None
endpoints:
  None
functions:
  None
Review the stack in the AWS Console.
Remove the stack once you have finished with npm run rm:lcl -- -s $MY_STAGE.
The Serverless Framework (SLS) (https://serverless.com/framework/docs) is my tool of choice for deploying cloud resources, regardless of whether or not I am deploying serverless resources, such as functions. SLS is essentially an abstraction layer on top of infrastructure as code tools, such as AWS CloudFormation, with extensibility features such as plugins and dynamic variables. We will use SLS in all of our recipes. Each recipe starts by using the SLS feature to create a new project by cloning a template. You will ultimately want to create your own templates for jump-starting your own projects.
This first project is as bare bones as we can get. It essentially creates an empty CloudFormation stack. In the serverless.yml file, we define the service name and the provider. The service name will be combined with the stage, which we will discuss shortly, to create a unique stack name within your account and region. I have prefixed all the stacks in our recipes with cncb to make it easy to filter for these stacks in the AWS Console if you are using a shared account, such as your development or sandbox account at work.
Our next most important tool is Node Package Manager (NPM) (https://docs.npmjs.com/). We will not be packaging any Node modules (also known as libraries), but we will be leveraging NPM's dependency management and scripting features. In the package.json file, we declared a development dependency on the Serverless Framework and three custom scripts to test, deploy, and remove our stack. The first command we execute is npm install, which will install all the declared dependencies into the project's node_modules directory.
Next, we execute the npm test script. This is one of several standard scripts for which NPM provides a shortcut alias. We have defined the test script to invoke the sls package command to assert that everything is configured properly and help us see what is going on under the covers. This command processes the serverless.yml file and generates a CloudFormation template in the .serverless directory. One of the advantages of the Serverless Framework is that it embodies best practices and uses a configuration by exception approach to take a small amount of declaration in the serverless.yml files and expand it into a much more verbose CloudFormation template.
Now, we are ready to deploy the stack. As developers, we need to be able to deploy a stack and work on it in isolation from other developers and other environments, such as production. To support this requirement, SLS uses the concept of a stage. Stage (-s $MY_STAGE) and region (-r us-east-1) are two required command-line options when invoking an SLS command. A stack is deployed into a specific region and the stage is used as a prefix in the stack name to make it unique within an account and region. Using this feature, each developer can deploy (dp) what I refer to as a local (lcl) stack with their name as the stage with npm run dp:lcl -- -s $MY_STAGE. In the examples, I use my name for the stage. We declared the $MY_STAGE environment variable in the Getting ready section. The double dash (--) is NPM's way of letting us pass additional options to a custom script. In Chapter 6, Building a Continuous Deployment Pipeline, we will discuss deploying stacks to shared environments, such as staging and production.
CloudFormation has a limit regarding the template body size in a request to the API. Typical templates easily surpass this limit and must be uploaded to S3 instead. The Serverless Framework handles this complexity for us. In the .serverless directory, you will notice that there is a cloudformation-template-create-stack.json file that declares a ServerlessDeploymentBucket. In the sls deploy output, you can see that SLS uses this template first and then it uploads the cloudformation-template-update-stack.json file to the bucket and updates the stack. It's nice to have this problem already solved for us because it is typical to learn about this limit the hard way.
At first glance, creating an empty stack may seem like a silly idea, but in practice it is actually quite useful. In a sense, you can think of CloudFormation as a CRUD tool for cloud resources. CloudFormation keeps track of the state of all the resources in a stack. It knows when a resource is new to a stack and must be created, when a resource has been removed from a stack and must be deleted, and when a resource has changed and must be updated. It also manages the dependencies and ordering between resources. Furthermore, when an update to a stack fails, it rolls back all the changes.
Unfortunately, when deploying a large number of changes, these rollbacks can be very time-consuming and painful when the error is in one of the last resources to be changed. Therefore, it is best to make changes to a stack in small increments. In Chapter 6, Building a Continuous Deployment Pipeline, we will discuss the practices of small batch sizes, task branch workflow, and decoupling deployment from release. For now, if you are creating a new service from a proven template, then initialize the new project and deploy the stack with all the template defaults all the way to production with your first pull request. Then, create a new branch for each incremental change. However, if you are working on an experimental service with no proven starting point, then an empty stack is perfectly reasonable for your first deployment to production.
In your daily development routine, it is important to clean up your local stacks when you have completed work on a task or story. The cost of a development account can creep surprisingly high when orphaned stacks accumulate and are rarely removed. The npm run rm:lcl -- -s $MY_STAGE script serves this purpose.
Function-as-a-Service is the cornerstone of cloud-native architecture. Functions enable self-sufficient, full-stack teams to focus on delivering lean business solutions without being weighed down by the complexity of running cloud infrastructure. There are no servers to manage, and functions implicitly scale to meet demand. They are integrated with other value-added cloud services, such as streams, databases, API gateways, logging, and metrics, to further accelerate development. Functions are disposable architecture, which empower teams to experiment with different solutions. This recipe demonstrates how straightforward it is to deploy a function.
The Serverless Framework handles the heavy lifting, which allows us to focus on writing the actual function code. The first thing to note is that we must define the runtime: nodejs8.10 in the serverless.yml file. Next, we define a function in the functions section with a name and a handler. All other settings take their defaults, following the configuration by exception approach. When you look at the generated CloudFormation template, you will see that over 100 lines were generated from just a handful of lines declared in the serverless.yml file. A large portion of the generated template is dedicated to defining boilerplate security policies. Dig into the .serverless/cloudformation-template-update-stack.json file to see the details.
We also define environment variables in the serverless.yml. This allows the functions to be parameterized per deployment stage. We will cover this in more detail in Chapter 6, Building a Continuous Deployment Pipeline. This also allows settings, such as the debug level, to be temporarily tweaked without redeploying the function.
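To make this concrete, a minimal serverless.yml along these lines would declare the runtime, a function, and an environment variable. The service name, function name, and variable shown here are illustrative placeholders rather than the exact contents of the recipe's project:

service: cncb-create-function
provider:
  name: aws
  runtime: nodejs8.10
  environment:
    DEBUG: '*'
functions:
  hello:
    handler: handler.hello

From this handful of lines, the framework expands the function definition, its IAM role, and its log group into the generated CloudFormation template.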
When we deploy the project, the Serverless Framework packages the function along with its runtime dependencies, as specified in the package.json file, into a ZIP file. Then, it uploads the ZIP file to the ServerlessDeploymentBucket so that it can be accessed by CloudFormation. The output of the deployment command shows when this is happening. You can look at the content of the ZIP file in the .serverless directory or download it from the deployment bucket. We will cover advanced packaging options in Chapter 9, Optimizing Performance.
The signature of an AWS Lambda function is straightforward. It must export a function that accepts three arguments: an event object, a context object, and a callback function. Our first function will just log the event, the context, and the environment variables so that we can peer into the execution environment a little bit. Finally, we must invoke the callback. It is a standard JavaScript callback: we pass an error as the first argument or the successful result as the second argument.
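As a sketch of that signature, assuming the handler is exported as hello from a handler.js module (names chosen for illustration, not necessarily the recipe's exact code), the function might look like this:

module.exports.hello = (event, context, callback) => {
  // Log the inputs and the execution environment so we can inspect them in CloudWatch
  console.log('event: %j', event);
  console.log('context: %j', context);
  console.log('env: %j', process.env);

  // Standard JavaScript callback: error first, successful result second
  callback(null, 'success');
};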
Logging is an important standard feature of Function as a Service (FaaS). Due to the ephemeral nature of cloud resources, logging in the cloud can be tedious, to put it lightly. In AWS Lambda, console logging is performed asynchronously and recorded in CloudWatch logs. It's a fully-managed logging solution built right in. Take the time to look at the details in the log statements that this function writes. The environment variables are particularly interesting. For example, we can see that each invocation of a function gets a new temporary access key.
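If you prefer the command line to the CloudWatch console, the Serverless Framework can also tail these logs. Assuming the function is named hello, as in the sketch above, a command along these lines would stream its log output:

$ sls logs -f hello -r us-east-1 -s $MY_STAGE -t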
Functions also provide a standard set of metrics out-of-the-box, such as invocation count, duration, errors, throttling, and so forth. We will cover this in detail in Chapter 7, Optimizing Observability.
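As a taste of what is available before we get to that chapter, the out-of-the-box invocation count can be pulled with the AWS CLI; the function name and time window here are hypothetical:

$ aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda --metric-name Invocations \
    --dimensions Name=FunctionName,Value=cncb-create-function-john-hello \
    --start-time 2018-09-26T00:00:00Z --end-time 2018-09-27T00:00:00Z \
    --period 3600 --statistics Sum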
