Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Vijin Boricha
Acquisition Editor: Heramb Bhavsar
Content Development Editor: Abhishek Jadhav
Technical Editor: Mohd Riyan Khan
Copy Editor: Safis Editing
Project Coordinator: Jagdish Prabhu
Proofreader: Safis Editing
Indexer: Tejal Daruwale Soni
Graphics: Tom Scaria
Production Coordinator: Nilesh Mohite
First published: October 2018
Production reference: 1311018
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78913-066-9
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Gabriel Ramirez is a passionate technologist with broad experience in the software industry. He currently works as an authorized trainer for Amazon Web Services and Google Cloud.
He holds all nine AWS certifications and does community work by organizing the AWS user groups in Mexico. He can be found on LinkedIn at linkedin.com/in/gramirezm/.
Stuart Scott is the AWS content lead at Cloud Academy where he has created over 40 courses reaching tens of thousands of students. His content focuses heavily on cloud security and compliance, specifically on how to implement and configure AWS services to protect, monitor and secure customer data in an AWS environment.
He has written numerous cloud security blog posts for Cloud Academy and other AWS Advanced Technology Partners. He has taken part in a series of cloud security webinars to share his knowledge and experience within the industry and to help those looking to implement a secure and trusted environment.
In January 2016, Stuart was awarded 'Expert of the Year' by Experts Exchange for sharing his knowledge of cloud services with the community.
Yohan Wadia is a client-focused evangelist and technologist with more than 8 years of experience in the cloud industry, focused on helping customers succeed with cloud adoption.
As a technical consultant, he provides guidance and implementation services to customers looking to leverage cloud computing through either Amazon Web Services, Windows Azure, or Google Cloud Platform by helping them come up with pragmatic solutions that make practical as well as business sense.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
AWS Certified Solutions Architect – Associate Guide
Dedication
Packt Upsell
Why subscribe?
Packt.com
Contributors
About the authors
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
Introducing Amazon Web Services
Technical requirements
Minimizing complexity
Conway's law
Cloud computing
Architecting for AWS
Cloud design principles
Cloud design patterns – CDP
AWS Cloud Adoption Framework – AWS CAF
AWS Well-Architected Framework – AWS WAF
Shared security model
Identity and Access Management
User creation
Designing an access structure
Create an administration group
Business case
Inline policies
IAM cross-account roles
Summary
Further reading
AWS Global Infrastructure Overview
Technical requirements
Introducing AWS global infrastructure
Becoming a service company
Data centers
10,000-feet view
Regions
100,000-feet view
Latency
Compliance
Supported services
Cost
Connectivity
Endpoint access
Global CDN
Amazon CloudFront
Single region / multi-region patterns
Rationale
Active-active
Active-passive
Network-partitioning tolerance
Complexity
CloudFront
Data replication and redundancy with managed services
Exercise
Replicating tags
Replicating ACLs
Distributed nature of S3
Metadata replication
Encryption replication
Hosting a static website with S3 and CloudFront
Summary
Further reading
Elasticity and Scalability Concepts
Technical requirements
Sources of failure
The cause
Dividing and conquering
Serial configuration
Parallel configuration
Reactive and proactive scalability
Horizontal scalability
Vertical scalability
Exercise
Virtualization technologies
LAMP installation
Scaling the web server
Resiliency
EC2 persistence model
Disaster recovery
Cascading deletion
Bootstrapping
Scaling the compute layer
Proactive scalability
Scaling a database server
Summary
Further reading
Hybrid Cloud Architectures
Effective migration to the cloud
Extending your data center
All in the cloud
VPC
Tenancy
Sizing
The default VPC
Public traffic
Private traffic
Security groups
Creating a security group
Chaining security groups
Bastion host
Hybrid deployment
Software VPNs
Static hardware VPNs
Dynamic hardware VPNs
Direct Connect (DX)
Storage gateway use cases
Network filesystems with file gateways
Block storage iSCSI with volume gateway – stored
Block storage iSCSI with volume gateway – cached
Virtual tape library iSCSI with a tape gateway
The Database Migration Service
Homogeneous migration
The AWS Schema Conversion tool
Heterogeneous migrations
Summary
Further reading
Resilient Patterns
Technical requirements
Route 53
Health checks
Record types
Summary
Further reading
Event Driven and Stateless Architectures
Technical requirements
Web application hosting
Route 53
Serverless application architecture
Streaming data architecture
Summary
Further reading
Integrating Application Services
Technical requirements
SQS as a reliable broker
Asynchrony
Creating a queue
Security
Durability
Message delivery
Message reception
Messaging patterns
Managing 1:N communications with SNS
Subscriber
Fanout
Authenticating your web and mobile apps with Cognito
Cognito user pools
Federated identities
API Gateway integration
Request flow
WebSockets in AWS
AWS IoT
AWS AppSync
Web app demo
Summary
Further reading
Disaster Recovery Strategies
Technical requirements
Availability metrics
The business perspective
Business impact analysis
Recovery Time Objective (RTO)
Recovery Point Objective (RPO)
Availability monitoring
Backup and restore
Preparation phase
In the case of a disaster
Trade-offs
Pilot light
The preparation phase
In the case of a disaster
Trade-offs
Warm standby
The preparation phase
In the case of a disaster
Trade-offs
Multi-site active-active
The preparation phase
In the case of a disaster
Trade-offs
Best practices
Summary
Further reading
Storage Options
Technical requirements
Relational databases
RDS
Managed capabilities
Instances
Parameter groups
Option groups
Snapshots
Events
Multi-AZ
Read replicas
Caching
Object storage
Simple storage service
Data organization
Integrity
Availability
Cost dimensions
Reducing cost
Durability
Maximum durability
Limited durability
Use cases
Consistency
Storage optimization
Creating objects from the CLI
Copy an existing object
Using a lifecycle policy
Lifecycle policies
Archiving with Glacier
Retrieval options
Workflow
NoSQL
DynamoDB
Control plane
Managed capabilities
Consistency
Local secondary index
Global secondary index
DynamoDB Streams
Global tables
Summary
Further reading
Matching Supply and Demand
Technical requirements
Elastic Load Balancing
Classic Load Balancer – CLB
Network Load Balancer – NLB
Application Load Balancer – ALB
Creating an Application Load Balancer
ELB attributes
Stateless versus stateful
Internet-facing versus internal-facing
TCP passthrough
Cross-zone load balancing
Connection draining
AWS Auto Scaling
Alternate flow
Create a launch configuration
Auto Scaling groups
Resiliency
Summary
Further reading
Introducing Amazon Elastic MapReduce
Technical requirements
Clustering in AWS
High performance computing
CfnCluster
Enhanced networking
Jumbo frames
Placement groups
Creating a placement group
Benchmarking
Elastic MapReduce
MapReduce
Analyzing a public dataset
Summary
Further reading
Web Scale Applications
Technical requirements
AWS Lambda
Summary
Further reading
Understanding Access Control
Technical requirements
Authentication, authorization, and access control
Authentication
Authorization
Access control
Authenticating via access control methods
Usernames and passwords
Multi-factor authentication
Programmatic access
Key pairs
IAM roles
Cross-account roles
Web identity and SAML federation
Federation of access
Web identity federation
SAML 2.0 federation
IAM authorization
Users
Groups
Roles
Identity-based policies
Managed policies versus inline policies
Writing policies from scratch by using a JSON policy editor
Using the visual editor within IAM
Copying an existing managed policy
Inline policies
Summary
Further reading
Encryption and Key Management
Technical requirements
An overview of encryption
Symmetric key cryptography
Asymmetric key cryptography
EBS encryption
Encrypting a new EBS volume
Encrypting a new EBS volume during the launch of a new EC2 instance
Encrypting an existing EBS volume
Amazon S3 encryption
Server-side encryption with S3 managed keys (SSE-S3)
Server-side encryption with KMS managed keys (SSE-KMS)
Server-side encryption with customer managed keys (SSE-C)
Client-side encryption with KMS managed keys (CSE-KMS)
Client-side encryption with KMS managed keys (CSE-C)
RDS encryption
How to enable encryption
Steps to encrypt an existing database
Key Management Service (KMS)
So, what is KMS?
Customer master keys
Data encryption keys (DEK)
Key policies
Grants
Key rotation
Manual key rotation
Summary
Further reading
An Overview of Security and Compliance Services
Technical requirements
AWS CloudTrail
Amazon Inspector
Installing the agent
Assessment templates, runs, and findings
AWS Trusted Advisor
Yellow warning under service limits
Red warning under service limits
AWS Systems Manager
Resource groups
Creating a resource group
Actions
Insights
Shared resource
AWS Config
Configuration item
Configuration streams
Configuration history
Configuration snapshot
Configuration recorder
Config rules
Resource relationship
High-level process overview
Summary
Further reading
AWS Security Best Practices
Technical requirements
Shared responsibility model
Data protection
Using encryption at rest for sensitive data
Taking advantage of encryption features built into AWS services
Using encryption in transit for sensitive data
Protecting against unexpected data loss
Using S3 MFA delete to prevent accidental deletion
Using S3 lifecycle policies
Implementing S3 versioning to protect against unintended actions
Virtual Private Cloud
Using security groups to control access at an instance level
Using NACLs to control access at a subnet level
Implementing the rule of least privilege
Implementing layers in your VPC
Creating Flow Logs to obtain deeper analysis of network traffic
Identity and Access Management
Avoid sharing identities
Using MFA for privileged users
Using roles
Password policy
Assigning permissions to groups instead of to individual users
Rotating your access keys
Assigning permissions according to the rule of least privilege
Re-evaluating permissions and deleting accounts
Do not use the root account as an operational user
EC2 security
Implementing a patching strategy
Controlling access with security groups
Encrypting sensitive data on persistent storage
Harden the operating system
Using Bastion hosts to connect to your EC2 instances
Security services
Summary
Further reading
Web Application Security
Technical requirements
AWS web application firewall
Conditions
Rules
Web ACL
Monitoring
AWS Shield
DDoS
Shield plans
AWS Firewall Manager
Before using AWS Firewall Manager
Amazon CloudFront security features
Summary
Further reading
Cost Effective Resources
Technical requirements
Reserved Instances
Standard Reserved Instances
Convertible Reserved Instances
Billing and cost management
Billing alarms
Service level alarms
Billing reports
Cost Explorer
Reserved Instances recommendations
QuickSight visualization
Cost Allocation Tags
AWS Organizations
Summary
Further reading
Working with Infrastructure as Code
Technical requirements
AWS CloudFormation
Template anatomy
Resources
Stack updates
Deletion policy
Outputs
Template reusability
Parameters
Mappings
Depends on
Helper scripts
Multi-tier web app
Best practices
Summary
Further reading
Automation with AWS
Technical requirements
Incident Response
CloudWatch Logs Agent
CloudWatch Metric Filters
Summary
Further reading
Introduction to the DevOps practice in AWS
Technical requirements
CI / CD pipeline
AWS CodeDeploy
AppSpec file
Summary
Further reading
Mock Test 1
Mock Test 2
Assessment
Mock Test 1
Mock Test 2
Another Book You May Enjoy
Leave a review - let other readers know what you think
Amazon Web Services (AWS) is currently the leader in the public cloud market. With an increasing global interest in leveraging cloud infrastructure, the AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications.
As the rate of cloud platform adoption increases, so does the need for cloud certification. The AWS Certified Solutions Architect – Associate Guide is your one-stop solution to gaining certification. Once you have grasped what AWS is and its prerequisites, you will get insights into different types of AWS services such as Amazon S3, EC2, VPC, SNS, and more, to get you prepared with the core Amazon services. You will then move on to understanding how to design and deploy highly scalable applications. Finally, you will study security concepts along with AWS best practices and mock papers to test your knowledge.
By the end of this book, you will not only be fully prepared to pass the AWS Certified Solutions Architect – Associate exam but also capable of building secure and reliable applications.
The AWS Certified Solutions Architect – Associate Guide is for you if you are an IT professional or Solutions Architect wanting to pass the AWS Certified Solutions Architect – Associate 2018 exam. This book is also for developers looking to start building scalable applications on AWS.
Chapter 1, Introducing Amazon Web Services, takes readers through the fundamental concepts of AWS, which provides a very rich set of services. This includes what the AWS Cloud is and how it enables both large organizations and small start-ups to leverage enterprise-class infrastructure.
Chapter 2, AWS Global Infrastructure Overview, teaches readers about the AWS global infrastructure: service endpoints and partitions, Availability Zones, regions, edge locations, and how they interact with high-availability patterns and resilient designs. This chapter also covers replication and synchronization of data at a global scale, with a special focus on security.
Chapter 3, Elasticity and Scalability Concepts, teaches readers how to match capacity and demand, design cost-efficient solutions, and understand how these two concepts play a role in cloud architecture. We'll focus on demand-, buffer-, and time-based approaches, automation, and serverless implementations.
Chapter 4, Hybrid Cloud Architectures, teaches readers how to integrate cloud services, deploy new applications, and interconnect and extend existing infrastructure to the cloud, using application services such as message queues, publish/subscribe messaging, API Gateway, and Lambda as a bridge or adapter.
Chapter 5, Resilient Patterns, teaches readers how to avoid complete service failures by absorbing the operational impact of a service failure through loosely coupled components and services, how to inject failure into our systems to make them fault tolerant and expose failure paths, and how to design reactive, autonomous monitoring systems in the cloud.
Chapter 6, Event Driven and Stateless Architectures, teaches readers how to design workflows such as ETL and image processing, leveraging storage and processing with Lambda. You will understand the pros and cons of maintaining servers versus using PaaS and abstract services such as S3 and DynamoDB.
Chapter 7, Integrating Application Services, teaches readers how to integrate services such as authentication, mobile backends, messaging, and persistence into their apps. You will use Backend as a Service (BaaS) to decouple the frontend from the middleware and to use third-party service providers.
Chapter 8, Disaster Recovery Strategies, teaches readers the main patterns in DR strategies using the cloud. You will learn to successfully implement backup and restore, and to use pilot light and multi-site active-active scenarios. You will be guided through implementing a full DR exercise in a hybrid environment.
Chapter 9, Storage Options, teaches readers about the different storage options available and how to evaluate the durability, cost, performance, size, and management tasks of each one. You will compare hot and cold solutions, with examples from EBS, S3, Glacier, Redshift, and DynamoDB.
Chapter 10, Matching Supply and Demand, teaches readers how to optimize for cost and use optimal resources on every layer, working with Auto Scaling and resizing RDS databases with CloudWatch alarms.
Chapter 11, Introducing Amazon Elastic MapReduce, gives readers insight into Elastic MapReduce, its use cases, and how to design clusters for high-performance computing on EC2, profiling your instances to maximize throughput and optimize network resources.
Chapter 12, Web Scale Applications, teaches readers how to build massive applications that reach millions of users with high levels of concurrency, offloading your backends with caching technologies such as CloudFront and ElastiCache and with NoSQL datastores.
Chapter 13, Understanding Access Control, familiarizes readers with the main security objectives and with granular access control for your users and applications. You will learn about security best practices and permission management through IAM.
Chapter 14, Encryption and Key Management, teaches readers how encryption works in the cloud, how to use custom means to encrypt sensitive information, how to leverage encryption mechanisms from different AWS services, and how to integrate with Marketplace solutions to design robust security schemes and comply with several international standards, regulations, and frameworks.
Chapter 15, An Overview of Security and Compliance Services, the chapter will provide an overview of some of the key AWS services that are used to secure, protect and govern data and resources within an AWS environment. It will define what each service is used for and the components that are used within each.
Chapter 16, AWS Security Best Practices, teaches readers how to implement the AWS security reference model, with an in-depth analysis of every service and configuration used to protect your application and data.
Chapter 17, Web Application Security, teaches readers how to protect web applications and take a proactive standpoint on application design. You will learn how to avoid cross-site scripting, man-in-the-middle attacks, and data integrity loss.
Chapter 18, Cost Effective Resources, teaches readers how to design cost-efficient resources and optimize services to improve ROI, build custom cost reports with custom tags, use consolidated billing with multiple accounts, and create budgets and alarms to avoid unexpected charges.
Chapter 19, Working with Infrastructure as Code, teaches readers how to manage infrastructure using a set of tools, practices, and software-like thinking to gain consistency, flexibility, reusability, and the many other advantages of this paradigm. We will work with CloudFormation, introduce you to OpsWorks, and also talk about many of the tools available in the industry that help you manage configurations and infrastructure.
Chapter 20, Automation with AWS, continues from the previous chapter, demonstrating how automation helps industries achieve more with less and how deployment strategies can help with consistency, availability, and business continuity. We show how to automate responses to application logs, CloudTrail, and configuration changes through AWS Config.
Chapter 21, Introduction to DevOps in AWS, the chapter will explain the principles, processes, toolchain and culture behind this practice. We'll take a holistic approach to apply SCM, Continuous Integration (CI) and Continuous Delivery (CD).
Chapter 22, Mock Test 1, gives readers hands-on experience of the real certification exam, covering questions on the services discussed in this book, along with important tips and tricks to build confidence for clearing the associate exam.
Chapter 23, Mock Test 2, gives readers further hands-on experience of the real certification exam, again covering questions on the services discussed in this book, along with important tips and tricks to build confidence for clearing the associate exam.
You should have access to an AWS account.
The detailed requirement for each chapter can be found in the Technical requirement section of the chapters.
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packt.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/AWS-Certified-Solutions-Architect-Associate-Guide. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "The public key is installed in the ~/.ssh/authorized_keys in the filesystem of the instance."
A block of code is set as follows:
{ "Tenancy": "default", "GroupName": "", "AvailabilityZone": "us-east-1a"}
Any command-line input or output is written as follows:
mkdir webApp && cd $_
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "In the EC2 console choose Launch Instance."
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
Welcome to the journey of becoming an Amazon Web Services (AWS) solutions architect. A path full of challenges, but also a path full of knowledge awaits you. To begin, I'd like to start by defining the role of a solutions architect in the software-engineering context. Architecture has a lot to do with technology, but it also has a lot to do with everything else; it is a discipline responsible for the nonfunctional requirements, and a model to design the Quality of Service (QoS) of the information systems.
Architecture is about finding the right balance and the midpoint of every circumstance. It is about understanding the environment in which problems are created, involving the people, the processes, the organizational culture, the business capabilities, and any external drivers that can influence the success of a project.
We will learn that part of our role as solutions architects is to evaluate several trade-offs, manage the essential complexity of things, their technical evolution, and the inherent entropy of complex systems.
The following topics will be covered in this chapter:
Understanding cloud computing
Cloud design patterns and principles
Shared security model
Identity and access management
Solution scripts are available in the book's repositories at the following URLs, if you get stuck with the examples:
https://github.com/PacktPublishing/AWS-Certified-Solutions-Architect-Associate-Guide
https://github.com/gabanox/Certified-Solution-Architect-Associate-Guide
A widely used strategy to solve difficult problems is to use functional decomposition, that is, breaking a complex system or process into manageable parts; a pattern for this is the multilayered architecture, also known as the n-tier architecture, by which we decompose big systems into logical layers focused only on one responsibility, leveraging characteristics such as scalability, flexibility, reusability, and many other benefits. The three-layer architecture is a popular pattern used to decompose monolithic applications and design distributed systems by isolating their functions into three different services:
Presentation Tier: This represents the component responsible for the user interface, in which user actions and events are generated via a web page, a mobile application, and so on.
Logic Tier: This is the middleware, the middle tier where the business logic is found. This tier can be implemented via web servers or application servers; here, every presentation tier event gets translated into service methods and business functions.
Data Tier: The persistence layer interacts with the logic tier to maintain user state and behavior; this is the central repository of data for the application. Examples of this are database management systems (DBMS) or distributed memory-caching systems.
This sentence shows the relevance of the way people organize to develop systems, and how this impacts every design decision we make. We will go into more depth in later chapters on microservices architectures, discussing how we can decouple and remove the barriers that prevent systems from evolving. Bear in mind that this book will show you a new way of systems thinking, and that with AWS you have the tools to solve any kind of problem and create very sophisticated solutions.
Cloud computing is a service model based on large pools of resources exposed through web interfaces, with the objective being to provide shareable, elastic, and secure services on demand with low cost and high flexibility:
Designing cloud-based architectures carries a different approach than traditional solutions, because physical hardware and infrastructure are now treated as software. This brings many benefits, such as reusability, high cohesion, a uniform service interface, and flexible operations.
It's easy to make use of on-demand resources when they are needed and to modify their attributes in a matter of minutes. We can also provision complex structures declaratively and adapt services to the demand patterns of our users. In this chapter, we will be discussing the design principles that make the best use of AWS.
These principles form the fundamental pillars on which well-architected and well-designed systems must be built:
Enable scalability:

Antipattern: Manual operations to add capacity reactively rather than proactively. Passive detection of failures and service limits can result in downtime for applications and is prone to human error due to limited reaction timespans:
From the diagram, we can see that instances take time to be fully usable, and the process is human-dependent.
Best practice: The elastic nature of AWS services makes it possible to manage changes in demand and adapt to consumer patterns, with the possibility of reaching global audiences. When a resource is not elastic, it is possible to use Auto Scaling or serverless approaches:
Auto Scaling automatically spins up instances to compensate for demand.
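As a rough sketch of what this looks like in practice (all names, sizes, and Availability Zones here are illustrative, and an existing launch configuration named web-lc is assumed), an Auto Scaling group with a target tracking policy can be created from the CLI:

# Create an Auto Scaling group that keeps between 2 and 10 instances
# (assumes a launch configuration named web-lc already exists).
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --min-size 2 --max-size 10 --desired-capacity 2 \
    --availability-zones us-east-1a us-east-1b

# Scale on average CPU: add or remove instances to hold utilization near 50%.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'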
Automate your environment:

Antipattern: Ignoring configuration changes and lacking a configuration management database (CMDB) can result in erratic behavior, loss of visibility, and a high impact on critical production systems. The absence of robust monitoring solutions results in fragile systems and slow responses to change requests, compromising security and the system's stability:
AWS Config records every change in the resources and provides a unique source of truth for configuration changes.
Best practice: Relying on automation, from scripts to specialized monitoring services, will help us gain reliability and make every cloud operation secure and consistent. The early detection of failures and deviations from normal parameters supports the process of fixing bugs and prevents issues from becoming risks. It is possible to establish a monitoring strategy that accounts for every layer of our systems. Artificial intelligence can be used to analyze the stream of changes in real time and react to these events as they occur, facilitating agile operations in the cloud:
Config can be used to durably store configuration changes for later inspection, react in real time with push notifications, and visualize real-time operations dashboards.
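As a minimal sketch of enabling Config from the CLI (the bucket name, account ID, and role ARN are placeholders; the bucket and role must already exist with the appropriate permissions):

# Define what the recorder tracks (all supported resource types, including global ones).
aws configservice put-configuration-recorder \
    --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/aws-config-role \
    --recording-group allSupported=true,includeGlobalResourceTypes=true

# Deliver configuration snapshots and history to an S3 bucket.
aws configservice put-delivery-channel \
    --delivery-channel name=default,s3BucketName=my-config-bucket

# Start recording configuration changes.
aws configservice start-configuration-recorder --configuration-recorder-name default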
Use disposable resources:

Antipattern: Running instances with low utilization or over capacity can result in higher costs. A poor understanding of every service feature and capability can result in higher expenses, lower performance, and administration overhead. Maybe you are not using the right tool for the job. Immutable infrastructure is a determinant aspect of using disposable resources, so you can create and replace components in a declarative way:
Tagging will help you gain control, and provide means to orchestrate change management. The previous diagram shows how tagging can help to discover compute resources to stop and start only in office hours for different regions.
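As a small, hedged example of that idea (the tag key and value are illustrative), a script could look up instances by tag and stop them outside office hours:

# Find running instances tagged for office-hours scheduling...
ids=$(aws ec2 describe-instances \
    --filters "Name=tag:Schedule,Values=office-hours" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId" \
    --output text)

# ...and stop them; a cron job or scheduled Lambda could run this at closing time.
if [ -n "$ids" ]; then
    aws ec2 stop-instances --instance-ids $ids
fi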
Best practice: Using Reserved Instances promotes a fast Return on Investment (ROI). Running instances only when they are needed, or using services such as Auto Scaling and AWS Lambda, enforces usage only when it is needed, thus optimizing costs and operations. Practices such as Infrastructure as Code (IaC) give us the ability to create full-scale environments and perform production-level testing. When tests are over, you can tear down the testing environment:
Auto Scaling lets the customer specify the scaling needs without incurring additional costs.
Loosely couple your components:

Antipattern: Tightly coupled systems prevent scalability and create a high degree of dependency at the component and service levels. Systems become more rigid, less portable, and it is very complicated to introduce changes and improvements. Coupling not only happens at the software or infrastructure level, but also with service providers, by using proprietary solutions or services that create dependencies on a brand, forcing us to consume their products with the inherent restrictions of the maker:
Software changes constantly, and we need to keep ourselves updated to avoid the erosion of the operating systems and technology stacks.
Best practice: Using levels of indirection avoids direct communication and reduces state sharing between components, thus keeping coupling low and cohesion high. Extracting configuration data and exposing it as services permits evolution, flexibility, and adaptability to changes in the future.
It is possible to replace a component with another if the interface has not changed significantly. It is fundamental to use standards-based technologies and protocols that bring interoperability, such as HTTP and RESTful web services:
A good strategy to decouple is to use managed services.
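As a brief, hedged illustration of decoupling through a managed service (the queue name is illustrative), two components can exchange work through SQS instead of calling each other directly:

# Create the queue that sits between the producer and the consumer.
queue_url=$(aws sqs create-queue --queue-name orders-queue \
    --query QueueUrl --output text)

# Producer side: publish a message without knowing anything about the consumer.
aws sqs send-message --queue-url "$queue_url" --message-body '{"orderId": 1234}'

# Consumer side: poll for work whenever capacity is available.
aws sqs receive-message --queue-url "$queue_url" --wait-time-seconds 10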
Design services, not servers:

Antipattern: Investing time and effort in the management of storage, caching, balancing, analysis, streaming, and so on is time-consuming and deviates us from the main purpose of the business: the creation of valuable solutions for our customers. We need to focus on product development and not on infrastructure and operations. Large-scale in-house solutions are complicated, and they require a high level of expertise and budget for research and fine-tuning:
Best practice: Cloud-based services abstract the problem of operating these specialized services at large scale and offload the operations management to a third party. Many AWS services are backed by Service Level Agreements (SLAs), and these SLAs are passed down to our customers. In the end, this improves the brand's reputation and our security posture; it will also enable organizations to reach higher levels of compliance:
Choose the right database solutions:

Antipattern: Taking a relational database management system (RDBMS) as a silver bullet and using the same approach to solve any kind of problem, from temporal storage to Big Data. Not taking into account the usage patterns of our data, not having a data governance model, and incorrect classification can leave data exposed to third parties. Working in silos will result in increased costs associated with the extract, transform, and load (ETL) pipelines needed for dispersed data, losing valuable knowledge:
Big Data and analytics workloads will require a lot of effort and capacity from an RDBMS datastore.
Best practice: Classify information according to its level of sensitivity and risk to establish controls that support clear security objectives, such as confidentiality, integrity, and availability (CIA). Design solutions with special-purpose services and ad hoc use cases, such as caching, distributed search engines, and non-relational databases. All of these will contribute to the flexibility and adaptability of the business. Understanding data temperature will improve the efficiency of storage and recovery solutions while optimizing costs. Working with managed databases makes it possible to analyze data at petabyte scale, process huge quantities of information in parallel, and create data-processing pipelines for batch and real-time patterns:
Avoid single points of failure:

Antipattern: A chain is only as strong as its weakest link. Monolithic applications, a low-throughput network card, or web servers without enough RAM can bring a whole system down. Non-scalable resources can represent single points of failure, as can a database license that prevents the use of more CPU cores. Points of failure can also be related to people, through unsupervised activities and processes without proper documentation; a lack of agility can also represent a constraint in critical production operations:
Best practice: Active-passive architectures avoid complete service outages, and redundant components enable business continuity by performing switchover and failover when necessary. The data and control planes must follow this N+1 design paradigm, avoiding bottlenecks and single-component failures that compromise the full operation. Managed services offer up to 99.95% availability regionally, offloading responsibilities from the customer. Experienced AWS solutions architects can design sophisticated solutions with SLAs of up to five nines:
Optimize for cost:

Antipattern: Using big servers for simple compute functions, such as authentication or email relay, can lead to elevated costs in the long term, as can keeping instances running 24/7 when traffic is intermittent. Adding bigger instances to improve performance without a performance objective and proper tuning won't solve the problem. Even poorly designed storage solutions will cost more than expected. Billing can go out of control if expenses are not monitored in detail.
It is common to provision resources for testing purposes or to leave instances idle, forgetting to remove them. Sometimes instances need to communicate with resources or services in another geographic region, increasing complexity and costs due to inter-region transfer rates:
Best practice: Replace traditional servers with containers or serverless solutions. Consider using Docker to maximize instance resources, and AWS Lambda to use recurring compute resources only when needed. Reserving compute capacity can decrease your costs significantly, by up to 95%. Leverage managed services that can store transient data such as sessions, messages, streams, system metrics, and logs:
Use caching:

Antipattern: Repeatedly accessing the same group of data, or storing this data in a medium not optimized for read workloads; applications dealing with the physical distance between the client and the service endpoint. The user gets a poor experience when the network is not available. Costs increase due to redundant read requests and cross-region transfer rates, compounded by not having a lifecycle for storage and having no metrics that can warn about the usage patterns of data:
Best practice: Identify the most-used queries and objects, and optimize access to this information by transferring a copy of the data to the location closest to your end users. By using in-memory storage technologies, it is possible to achieve microsecond latencies and the ability to retrieve huge amounts of data. It is necessary to use caching strategies at multiple levels, even storing commonly accessed data directly on the client, for example in mobile applications using search catalogs.
Caching services can lower your costs and offload backend stress by moving this data closer to the consumer. It is even possible to offer degraded experiences without total service disruption in the case of a backend failure:
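As a quick, hedged illustration of the in-memory caching idea (the cluster name and node type are illustrative, and the command assumes default VPC and subnet group settings), a small Redis cache can be provisioned from the CLI:

# Provision a single-node Redis cache for read offloading (illustrative values).
aws elasticache create-cache-cluster \
    --cache-cluster-id session-cache \
    --engine redis \
    --cache-node-type cache.t3.micro \
    --num-cache-nodes 1

# Retrieve its endpoint once available, to point application reads at the cache.
aws elasticache describe-cache-clusters \
    --cache-cluster-id session-cache \
    --show-cache-node-info \
    --query "CacheClusters[0].CacheNodes[0].Endpoint"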
Secure your infrastructure everywhere:

Antipattern: Trusting the operating system's firewalls and being naive about the idea that every workload in the cloud is 100% secure out of the box. Waiting until a security breach occurs to take corrective measures, perhaps thinking that only Fortune 500 companies are victims of Distributed Denial of Service (DDoS) attacks. Implementing HTTPS but no security at rest greatly compromises the organization's assets and makes it easy prey for ransomware attacks.
Using the root account and lacking a log management structure takes visibility and auditability away. Not having an InfoSec department or ignoring security best practices can lead to a complete loss of credibility, and of our clients:
Best practice: Security must be a holistic effort. It must be implemented at every layer using a systemic approach. Security needs to be automated so that it can react immediately when a breach or unusual activity is found; this way, it is possible to remediate as soon as it is detected. In AWS, it is possible to segregate network traffic and use managed services that help us protect valuable assets, with complete visibility of operations and management.
At-rest data can be safeguarded by using encryption with cryptographic keys generated on demand, delegating the management and usage of these keys to users designated for these jobs. Data in transit must be protected by standard transmission protocols such as IPsec and TLS:
Cloud design patterns are a collection of solutions for solving common problems using AWS technologies. It is a library of recurrent problems that solutions architects deal with frequently when implementing and designing for the cloud. This is curated by the Ninja of Three (NoT) team.
These patterns will be used throughout this book as a reference, so you can get acquainted with them and understand their rationale and implementation in detail.
The Cloud Adoption Framework offers six perspectives to help businesses and organizations create an actionable plan for the change management associated with their cloud strategies. It is a way to align business and technology to produce successful results:
The six perspectives are grouped as follows:
CAF business perspectives:
Business perspective: Aligns business and IT into a single model
People perspective: Personnel management to improve their cloud competencies
Governance perspective: Follows best practices to improve enterprise management and performance
CAF technical perspectives:
Platform perspective: Includes the strategic design, principles, patterns, and tools to help you with the architecture transition process
Security perspective: Focuses on compliance and risk management to define security controls
Operations perspective: Helps to identify the processes and stakeholders relevant to executing change management and business continuity
The AWS Well-Architected Framework takes a structured approach to design, implement, and adopt cloud technologies, and it works around five perspectives so you can look at a problem from different angles. These areas of interest are sometimes neglected because of time constraints and misalignment with compliance frameworks. This book relies strongly on the pillars that are listed here:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
This model is the way AWS frees the customer from the responsibility of establishing controls at the infrastructure, platform, and services levels by implementing them through its services. In some cases the customer must take full control of the implementation, or work in a hybrid model where the customer provides their own solutions to complement existing ones in the cloud:
The previous diagram shows that AWS is responsible for the security of the cloud; this involves the software and hardware infrastructure and core services. The customer is responsible for everything they run in the cloud and for the data they own.
To clarify this model, we will use a simple web server example and explain for every step which controls are in place for the customer and for AWS:
To create our web server, we will create an instance.
In the EC2 console choose Launch Instance:
Following are the details of the instance:
AWS/customer
In this example, let's create an instance (1); this image (Amazon Linux AMI) is managed by AWS. It is security hardened and comes preconfigured with software packages from trusted sources only
Instances run isolated from other clients by virtual interfaces that run on a custom version of the Xen hypervisor
Every disk block is zeroized and RAM memory is randomized
The previous step is an example of an inherited control (virtualization type) and a shared control (virtual image).
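The same launch can also be expressed through the CLI; a minimal sketch follows (the AMI ID, key pair name, and security group ID are placeholders for values from your own account):

# Launch one t2.micro instance from an Amazon Linux AMI (placeholder IDs).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-server}]'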
The next screen is for the configuration of the network attributes and the tenancy mode:
The following are the details of instance configuration:
Every instance runs in a virtual private cloud (Network) (1); the network is an infrastructure-protected service, and the customer inherits this protection, which enables workload isolation to the account level.
It is possible to segregate the network by means of public and private subnets; route tables function as a traffic control mechanism between networks, service endpoints, and on-premises networks.
Identity and Access Management is the service dedicated to user management and account access.
IAM Roles are meant to improve security from the customer perspective by establishing trust relationships between services and other parties. EC2AccessToS3Role (2) will allow an instance to invoke service actions on S3 securely to store and retrieve data.
AWS/customer
The Tenancy property (3) is a shared control by which AWS implements security at some layers and the customer will implement security in other layers. It is common to run your instance in shared hosts (multi-tenant), but it can be done on a dedicated host (single tenant); this will make your workloads compliant with FIPS-140 and PCI-DSS standards.
The virtual private cloud (VPC) is an example of an inherited control, since AWS runs the network infrastructure; nevertheless, segmentation and subnet configuration is an example of a hybrid control, because the client is responsible for the full implementation by performing a correct configuration and resource distribution.
IAM operations are customer-related, and this represents a specific customer control. IAM roles and all the account access must be managed properly by the client.
Making use of dedicated resources is an example of shared controls. AWS will provide the dedicated infrastructure and the client provides all the management from the hypervisor upwards (operating system, applications).
The highlighted components represent the ones relevant for this example. Add a persistent EBS volume to our EC2 instance:
Security at rest for EBS with KMS cryptographic keys
AWS/customer
EBS volumes can be encrypted on demand by using cryptographic keys provided by the Key Management Service (KMS); this way, all data at rest will be kept confidential
The EBS encryption attribute is an example of a shared control, because AWS provides these facilities as part of the EBS and KMS services, but the client must enable this configuration property because, by default, disks are not encrypted. The customer also has the ability to use specific controls such as Linux Unified Key Setup (LUKS) to encrypt EBS volumes with third-party tools:
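A hedged CLI sketch of this configuration follows (the Availability Zone and key alias are illustrative; if --kms-key-id is omitted, the default aws/ebs key is used):

# Create an encrypted 20 GiB gp2 volume using a customer-managed CMK alias.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 20 \
    --volume-type gp2 \
    --encrypted \
    --kms-key-id alias/my-ebs-key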
Create a security group to filter the network traffic:
Detail:
AWS/customer
Security groups act as firewalls at the instance level, denying all inbound traffic and opening access only to customer-specified IPs, networks, ports, and protocols. It is a best practice to compartmentalize access by chaining multiple security groups, restricting access at every layer. In this example, we create only one security group for the web server, which allows HTTP traffic from any IP address (0.0.0.0/0) and restricts access via SSH to a management machine only, in this case, my IP.
This is a hybrid control because the network traffic filtering function is provided by AWS, but the full implementation is done by the customer through the service API:
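An equivalent security group can be sketched from the CLI (the VPC ID and the management IP are placeholders):

# Create the security group for the web server in a given VPC.
sg_id=$(aws ec2 create-security-group \
    --group-name web-server-sg \
    --description "Web server security group" \
    --vpc-id vpc-0123456789abcdef0 \
    --query GroupId --output text)

# Allow HTTP from anywhere and SSH only from the management machine.
aws ec2 authorize-security-group-ingress \
    --group-id "$sg_id" --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
    --group-id "$sg_id" --protocol tcp --port 22 --cidr 203.0.113.10/32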
Create a key pair to access the EC2 instance:
Detail:
AWS/Customer
Every compute instance in EC2, whether Linux or Windows, is associated with a key pair, one public key and one private key. The public key is used to cipher the login information of a specific instance. The private key is guarded by the customer so they can provide their identity through SSH for Linux instances. Windows instances use the private key to decrypt the administrator's password.
This is a shared control because the customer and AWS share responsibility for guarding these keys, preventing access by third parties who do not have the private key in their possession:
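From the CLI, a key pair can be created and the private key saved locally (the key name and file path are illustrative; the private key material is only available at creation time):

# Create the key pair and store the private key material locally.
aws ec2 create-key-pair \
    --key-name web-server-key \
    --query KeyMaterial \
    --output text > ~/.ssh/web-server-key.pem

# Restrict permissions so SSH will accept the key file.
chmod 400 ~/.ssh/web-server-key.pem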
The last step has a dual responsibility:
The customer must protect the platform on which the application will be running, their applications, and everything related to identity and access management from the app or middleware perspective.
AWS is responsible for the storage and protection of the public key and the instance configuration.
Let's discuss the core services for managing security at the AWS account scope: Identity and Access Management (IAM) and CloudTrail. IAM is the service responsible for all user administration: credentials, access, and permissions with respect to the AWS service APIs. CloudTrail gives us visibility into how these accesses are used, since CloudTrail records all account activity at the API level.
To enable CloudTrail, you must access the AWS console, find CloudTrail in the services pane, and then click on Create trail:
The configuration is flexible enough to record events in one region only, or cross-region in the same account, and you can even record CloudTrail events across multiple accounts; it is recommended to choose All for Read/Write events:
It is important to enable this service initially in newly created accounts and always keep it active because it also helps in troubleshooting when configuration problems occur, when there are production service outages, or to attribute actions to IAM users.
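For reference, a trail with roughly these settings can be sketched from the CLI (the trail and bucket names are placeholders; the bucket needs a policy allowing CloudTrail to write to it):

# Create a multi-region trail that delivers logs to S3.
aws cloudtrail create-trail \
    --name account-trail \
    --s3-bucket-name my-cloudtrail-logs \
    --is-multi-region-trail

# Record both read and write management events.
aws cloudtrail put-event-selectors \
    --trail-name account-trail \
    --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true}]'

# Start logging.
aws cloudtrail start-logging --name account-trail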
We will create an IAM user; this user can be used by a person for everyday operations, but it can also be used by an application invoking service APIs.
Let's navigate to the IAM service in the console, choose Users in the left menu, and then Add user:
The user we will create can only access the web console, and for this, we will create credentials that consist of a username and a password:
In this case, check AWS Management Console access (1), as shown in the next screenshot. You have the option to assign a custom password for the user or to generate one randomly. It is a good practice to require a password reset on the next sign-in. This will automatically assign the IAMUserChangePassword policy (2), which allows the user to change their own password, as shown here:
Choose Next: Permissions; on this screen, we will leave everything as is. Select Create user to demonstrate the default access level that every new user has. The last screen shows us the login data, such as the URL. This URL is different from the root access URL:
Copy the URL, username, and password shown in the previous screenshot.
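The same user can be sketched from the CLI (the username and temporary password are placeholders):

# Create the IAM user.
aws iam create-user --user-name jane.admin

# Give it console access with a temporary password that must be changed on first sign-in.
aws iam create-login-profile \
    --user-name jane.admin \
    --password 'TemporaryP@ssw0rd!' \
    --password-reset-required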
The access URL can be customized; by default, it contains the account number, but it is up to the administrator to generate an alias by clicking Customize:
Close your current session and use the new access URL. You will see a screen like the following:
Use your new IAM administrator username and password; once logged in, search for the EC2 services. You will get the following behavior:
Let's validate the current access scope by trying to list the account buckets in S3:
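If the new user has also configured CLI credentials, the same test can be run from the command line; with no policies attached, the call is denied (the profile name is illustrative):

# List the account's buckets as the new user; expect an AccessDenied error.
aws s3 ls --profile new-iam-user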
This simple test reveals something fundamental about AWS security: every IAM user, once created, has no permissions; only when permissions are assigned explicitly will the IAM user be able to use the APIs, including the AWS console. Let's close this session and log in again, this time with the root account, to create a secure access structure.
We will work on the access structure using the following model:
This diagram shows different use cases for IAM users, IAM groups, IAM roles, and the IAM policies that apply to each of them.
We will create IAM groups solely for administration purposes, and a unique user that has wide access to this account.
Navigate to IAM | Groups and then choose the action Create New Group:
Use the name Administration, and select Next Step. On this screen, we will add an IAM policy. IAM policies are JSON-formatted documents that granularly specify the permissions that an entity (a user, group, or role) has.
In the search bar on this step, type the word administrator to filter some of the options available for administration policies; the policy we need is called AdministratorAccess. This policy is an AWS-managed policy and is available for every AWS customer to use when a full administration policy is required for a principal.
Select Next Step.
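The equivalent steps can be sketched from the CLI (the group name matches this example; the policy ARN is that of the AWS-managed AdministratorAccess policy):

# Create the administration group.
aws iam create-group --group-name Administration

# Attach the AWS-managed AdministratorAccess policy to the group.
aws iam attach-group-policy \
    --group-name Administration \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess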
