Make the most of the unparalleled opportunities AWS offers professionals seeking to reskill and future-proof their careers. This comprehensive guide serves as your strategic pathway to enhancing your career potential and validating your expertise with an AWS certification.
Michelle Chismon brings a unique blend of academic credentials, industry-spanning cloud consulting experience, and her role as an AWS Authorized Instructor training global audiences. Kate Gawron contributes an extensive career in applications and databases, along with AWS expertise gained helping clients optimize their AWS environments. Together, they give this exam guide the technical depth, practical insight, and teaching expertise to help you master AWS.
Packed with detailed explanations, chapter-end review questions, and exam-level mock exams, this all-in-one exam guide equips you to excel. From essential design and architectural principles, including building secure, resilient systems and optimizing costs, to key exam domains, such as VPCs, serverless computing, and database design, you’ll cover every aspect of the AWS SAA-C03 exam.
In addition to technical knowledge, this guide offers exam strategies and expert tips to build confidence and increase your chances of success. Begin your certification journey and turn your AWS certification into a springboard for success in cloud computing.
Page count: 532
Publication year: 2024
AWS Certified Solutions Architect - Associate (SAA-C03) Exam Guide
Aligned with the latest AWS SAA-C03 exam objectives to help you pass the exam on your first attempt
Michelle Chismon
Kate Gawron
Copyright © 2024 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Authors: Michelle Chismon and Kate Gawron
Reviewer: Saibal Ghosh
Publishing Product Manager: Sneha Shinde
Senior Development Editor: Ketan Giri
Development Editor: Kalyani S.
Digital Editor: M Keerthi Nair
Presentation Designer: Salma Patel
Editorial Board: Vijin Boricha, Megan Carlisle, Simon Cox, Saurabh Kadave, Alex Mazonowicz, Gandhali Raut, and Ankita Thakur
First Published: November 2024
Production Reference: 1291124
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB
ISBN: 978-1-83763-000-4
www.packtpub.com
Michelle Chismon is a Senior Cloud Architect with a diverse background that spans the fields of bioinformatics, consulting, and education. After completing her Ph.D. in genetics and molecular medicine, Michelle transitioned into the world of cloud consulting, where she leveraged her unique background to provide innovative cloud infrastructure solutions for clients in various industries. Michelle’s dedication to continuous learning and improvement drove her to master the AWS ecosystem, and she subsequently became an AWS Authorized Instructor, delivering training on behalf of AWS worldwide.
During the 2020 pandemic and lockdown, Michelle trained a successful cohort of students in the AWS re/Start program, giving several people from disadvantaged backgrounds a jumpstart into the tech industry. She now works full-time at AWS, partnering with some of the largest companies globally to solve their cloud infrastructure challenges.
LinkedIn profile: https://www.linkedin.com/in/beaumontmichelle/
Kate Gawron is a full-time Senior Cloud Consultant. She has worked with applications and databases for 18 years, and with AWS for five of those. She holds four AWS certifications, including the AWS Certified Solutions Architect – Associate certification, and two Google Cloud certifications. Kate currently works as a senior cloud architect, helping customers migrate and refactor their applications and databases to work optimally within the AWS cloud. She published a highly regarded exam guide for the AWS Certified Database – Specialty exam with Packt in 2022. Kate is also an aspiring racing driver: she competed in Formula Woman and aims to become a professional Gran Turismo (GT) racing driver.
LinkedIn profile: https://www.linkedin.com/in/katehollow
Saibal Ghosh is a seasoned professional with extensive expertise in databases, machine learning, cloud security, Docker, Kubernetes, and the AWS Cloud.
He previously specialized as an Oracle DBA but has since broadened his expertise to a wider range of databases. His current focus also encompasses data engineering and machine learning.
As a Senior Technical Account Manager at Amazon Web Services, Saibal leverages his deep knowledge of cloud technologies to simplify AWS Cloud's complexities for customers, empowering them to effectively address their business challenges.
He is also the author of Docker Demystified published by BPB Publications, which underscores his ability to convey complex technological concepts clearly. Throughout his career, Saibal has embraced diverse roles, including developer, database administrator focused on performance tuning, trainer, and technical writer. His recent work deals with cloud technology, cloud security, telecommunications, and database management. With over two decades of experience combining technical expertise and business acumen, Saibal excels at delivering solutions that balance technical excellence with organizational goals.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is a key certification for IT professionals looking to demonstrate their ability to design and deploy scalable, highly available, and secure systems on AWS. This book is crafted to provide essential knowledge, practical exercises, and the insight needed to confidently pass the SAA-C03 exam. Whether you are an experienced professional or new to cloud computing, this guide will help you navigate the complexities of the exam with ease.
The SAA-C03 exam covers a wide array of topics, from fundamental AWS services to advanced architectural concepts. This book simplifies these topics into digestible sections, providing real-world examples and hands-on labs to ensure that you can not only understand the material but also apply it effectively. By the end of this book, you will be well prepared to take the exam and implement your skills in real-world scenarios.
Chapter 1, Understanding Cloud Fundamentals, helps you get a clear understanding of the fundamentals of cloud computing.
Chapter 2, Virtual Private Cloud, teaches you about the intricacies of VPCs, giving you an in-depth understanding of their structure and functionality.
Chapter 3, Identity and Access Management, provides you with a comprehensive understanding of IAM’s capabilities and mechanisms.
Chapter 4, Compute, explores the diverse compute options available, ranging from traditional instances to modern container services.
Chapter 5, Storage, delves deeper into the specifics of AWS’s storage options, ensuring that you are able to choose the most appropriate solution for your needs.
Chapter 6, DNS and Load Balancing, covers the core concepts of DNS, Route 53, load balancing, and ELB to help you make optimal design decisions when architecting highly available applications on AWS.
Chapter 7, Data and Analytics, explores the various AWS data and analytics services, teaching you how to evaluate the choices available.
Chapter 8, Migrations and Data Transfer, teaches you about the processes and tools for migrating and transferring data to AWS.
Chapter 9, Serverless and Application Integration, delves deep into the core principles and services that underpin the serverless paradigm, focusing on equipping you with the skills required to design and implement efficient, cost-effective, and resilient serverless applications.
Chapter 10, Security, provides an overview of the key AWS security services.
Chapter 11, Management and Governance, explains how to create compliance rules so that you can be alerted when rules are broken, how to auto-remediate broken rules, and how to enforce permissions across an entire cross-region, multi-account platform, among other things.
Chapter 12, Design Secure Architectures, shows you how the services covered in previous chapters fit into the Design Secure Architectures exam domain.
Chapter 13, Design Resilient Architectures, covers the two task statements from the Design Resilient Architectures exam domain.
Chapter 14, Design High-Performing Architectures, focuses on the Design High-Performing Architectures exam domain and explores the key considerations across various components that contribute to building solutions that not only perform well under current loads but are also scalable.
Chapter 15, Design Cost-Optimized Architectures, covers the four task statements from the Design Cost-Optimized Architectures exam domain.
This book is crafted to equip you with the skills necessary to excel in the SAA-C03 exam through practical explanations of major domain topics. It covers the core domains critical to the expertise that candidates need to pass the exam. For each domain, you will work through content that reflects real-world challenges and also complete hands-on labs for some. At the end of each chapter, you will assess your understanding by taking chapter-specific quizzes. This not only prepares you for the SAA-C03 exam but also allows you to dive deeper into the topics.
With this book, you will unlock unlimited access to our online exam-prep platform (Figure 0.1). This is your place to practice everything you learn in the book.
How to access the resources
To learn how to access the online resources, refer to Chapter 16, Accessing the Online Practice Resources, at the end of this book.
Figure 0.1 – Online exam-prep platform on a desktop device
Sharpen your knowledge of AWS SAA-C03 concepts with multiple sets of mock exams, interactive flashcards, and exam tips accessible from all modern web browsers.
We also provide a PDF file that has color images of the screenshots/diagrams used in this book.
You can download it here: https://packt.link/SAAC03graphicbundle
Code words in the text, database table names, folder names, filenames, file extensions, screen text, pathnames, dummy URLs, user input, and X handles are shown as follows: “Type aws --version in the Terminal.”
A block of code is set as follows:
aws ec2 create-vpc --cidr-block 10.0.0.0/16

New terms and important words are shown like this: “In its early days, it offered Simple Storage Service (S3) for storage and Elastic Compute Cloud (EC2) for computing power.”
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details. We ensure that all valid errata are promptly updated in the GitHub repository, with the relevant information available in the Readme.md file. You can access the GitHub repository at https://packt.link/SAAC03github.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
To fully engage with the content and exercises in this book, you will need to meet the following technical requirements:
AWS account with root access: You will need an AWS account with root access to complete the exercises in this book. Most of the services and examples fall under the AWS Free Tier, allowing you to experiment without incurring costs, provided your account is within the first 12 months of creation. If you do not have an AWS account, you can create one at https://aws.amazon.com/free/.

Command-line interface (CLI) access: The AWS CLI will be used frequently throughout this book for interacting with AWS services from the command line. To set up the AWS CLI, do the following:

Download the AWS CLI: Get the latest version from the CLI Install page: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html.

Create an IAM user: Follow the steps in the User Creation Guide, https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html, to create an IAM user with administrative access and generate an access key.

Configure the AWS CLI: Use the aws configure command to set up your CLI profile with the necessary credentials. Detailed instructions can be found in the AWS CLI Configuration Guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html.

A basic understanding of AWS services: While this book will teach you everything you need to know for the SAA-C03 exam, having a basic understanding of core AWS services such as EC2, S3, RDS, and IAM will be beneficial.

In addition to these technical requirements, it is important to have hands-on practice. Passing the SAA-C03 exam requires not just theoretical knowledge but also practical experience. Be sure to complete the exercises, experiment with AWS services, and apply your learning to real-world scenarios.
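For reference, aws configure simply writes two plain-text files under ~/.aws. A sketch of their layout is shown below, with placeholder values (never store real keys in documentation or version control):

```ini
# ~/.aws/credentials -- written by `aws configure` (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config -- region and output format for the same profile
[default]
region = us-east-1
output = json
```

Knowing this layout is useful for troubleshooting: if a CLI command fails with credential errors, these files are the first place to check.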
With these technical requirements met, you will be ready to begin your journey toward passing the AWS Certified Solutions Architect - Associate (SAA-C03) exam. Let’s dive in and unlock your full potential in cloud architecture, starting with an overview of the exam.
To assist in your preparations for the exam, it is worth looking at both the format of the exam and the topics that will be covered. This can guide you through your revision by allowing you to focus on the areas you are least confident in.
In this section, you are going to read about the following:
Exam format: What types of questions there are and how long you will have during the exam

Exam domains: The areas you will be tested on during the exam

First, let’s look at the exam format so you know what to expect after you have booked the exam.
All AWS exams are taken electronically, either at a test center or remotely via an online proctoring session.
The exam lasts 130 minutes and there will be 65 questions. If English is your second language or you have a disability that may impact your ability to complete the exam in 130 minutes, you can request an additional 30 minutes of exam time.
The number of correct answers needed to pass varies slightly between exam sittings, but the passing score is always 720 out of 1,000. This variation is due to questions being rated with varying difficulty and weighted for fairness. As a rough guide, you should pass by answering around 50 questions correctly.
Each exam has 15 questions that are not scored. These are used to evaluate questions for future versions of the exam. These unscored questions are not identified in the exam, so you should answer every question.
You are not penalized for incorrect answers and therefore you should attempt to answer all questions, even if you do not know the answer.
When you start the exam, you will first need to confirm your details, check that you have the right exam, and then sign a Non-Disclosure Agreement (NDA) that you will not share the exam questions. Once this is done, you will be given a brief overview of the exam and shown how to navigate through the screens.
The majority of the questions are situational, requiring you to be able to interpret the question to work out the correct answer.
The questions are all multiple choice, with two different styles:
Multiple choice: One correct answer and three incorrect answers.

Multiple answer: Two or more correct answers out of five or more options. The question will state how many answers are expected.

You can mark any questions for review at the end.
At the end of the exam, there is a survey about the exam and your preparation for it. You must complete this before receiving your exam result.
You will not typically receive your pass or fail result immediately, and you will only receive your full results and score once they have been verified. This verification normally takes three working days. Once the verification is complete, you will receive an email to your registered address and you will be able to obtain your full score report, which shows you how well you performed in each domain. This is particularly useful if you do not meet the passing grade as you will be given areas to focus your studies on for the next attempt.
You have learned the exam format and style of the questions. Now, take a look at the topics that will be covered in the exam, which this book will guide you through.
The AWS Certified Solutions Architect – Associate (SAA-C03) exam covers four high-level topics encompassing a wide range of subjects and AWS services and solutions. These are as follows:
Domain | Percentage
Domain 1: Design Secure Architectures | 30%
Domain 2: Design Resilient Architectures | 26%
Domain 3: Design High-Performing Architectures | 24%
Domain 4: Design Cost-Optimized Architectures | 20%
TOTAL | 100%
Table 0.1: The four exam domains in the SAA-C03 exam
The percentage indicates the approximate proportion of exam questions drawn from each domain. You can expect roughly the following number of questions in each domain:
Domain | Questions
Domain 1: Design Secure Architectures | 19
Domain 2: Design Resilient Architectures | 17
Domain 3: Design High-Performing Architectures | 16
Domain 4: Design Cost-Optimized Architectures | 13
TOTAL | 65
Table 0.2: Rough number of questions from each domain
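The rough counts in Table 0.2 can be reproduced from the percentages in Table 0.1 with a short calculation. The sketch below is illustrative only (AWS does not publish an exact allocation); it uses largest-remainder rounding so that the per-domain counts sum to exactly 65 questions:

```python
import math

def apportion(total_questions, weights):
    """Split total_questions across domains in proportion to weights,
    using largest-remainder rounding so the counts sum exactly."""
    raw = {name: total_questions * w for name, w in weights.items()}
    counts = {name: math.floor(v) for name, v in raw.items()}
    leftover = total_questions - sum(counts.values())
    # Hand the remaining questions to the domains with the largest
    # fractional parts.
    for name in sorted(raw, key=lambda n: raw[n] - counts[n], reverse=True)[:leftover]:
        counts[name] += 1
    return counts

weights = {
    "Design Secure Architectures": 0.30,
    "Design Resilient Architectures": 0.26,
    "Design High-Performing Architectures": 0.24,
    "Design Cost-Optimized Architectures": 0.20,
}
print(apportion(65, weights))
# → counts of 19, 17, 16, and 13, summing to 65
```

This matches Table 0.2, but treat it as a planning guide only: the real exam states neither the per-question domain nor the exact split.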
The AWS Certifications team provides a high-level description of each domain, including the key AWS services and technologies you will need to know to pass the exam. However, this exam expects you to be able to use multiple services to architect solutions based on scenarios, so simply knowing the names of AWS services is unlikely to be enough to earn a pass. In the next section, you are going to learn what each domain really means and the key topics within each. This can be used to help guide you while you study and prepare for the exam. Let’s begin with domain 1: Design Secure Architectures.
Building secure AWS architectures is vital for protecting data, applications, and infrastructure from threats. This requires knowledge of AWS services, infrastructure, and security best practices, including access control, identity services, and flexible authorization. In this section, we will cover three key task statements for designing secure systems:
Design Secure Access to AWS Resources
Design Secure Workloads and Applications
Determine Appropriate Data Security Controls

Designing secure access to AWS resources requires understanding access controls, federated identity services, AWS infrastructure, security best practices, and the shared responsibility model. Key skills include applying IAM best practices, creating flexible authorization models, implementing role-based access control, managing security for multiple accounts, using resource policies effectively, and integrating directory services with IAM roles when needed.
You will need to know how to design and appropriately apply the following:
Adhering to AWS security best practices for IAM users and root users, which includes the use of multi-factor authentication (MFA) when appropriate
Designing a flexible authorization model, which includes IAM users, groups, roles, and policies
Creating a role-based access control strategy that incorporates AWS Security Token Service (AWS STS), role switching, and cross-account access
Creating a security strategy for multiple AWS accounts, including AWS Control Tower and service control policies (SCPs)
Deciding the right use of resource policies for AWS services
Deciding when to integrate a directory service with IAM roles

Designing secure workloads and applications requires understanding application security, AWS service endpoints, protocols, network traffic, secure access, and external threats. Key skills include creating secure VPC architectures, planning network segmentation, integrating AWS security services, and securing external connections to and from AWS.
This includes the following topics:
Creating virtual private cloud (VPC) architectures with security components, including security groups, route tables, network access control lists (NACLs), and network address translation (NAT) gateways
Planning network segmentation strategies, which involves determining how to structure your network using public and private subnets
Integrating various AWS services to enhance the security of applications, including AWS Shield, AWS Web Application Firewall (AWS WAF), AWS Single Sign-On (AWS SSO), and AWS Secrets Manager
Securing external network connections to and from the AWS cloud, including VPN and AWS Direct Connect

Determining appropriate data security controls requires knowledge of data access, governance, recovery, retention, classification, and encryption with key management. Key skills include meeting compliance requirements with AWS technologies, encrypting data at rest and in transit, managing access policies for encryption keys, implementing backups and data lifecycle policies, rotating encryption keys, and renewing certificates.
The following areas are covered in this section:
Aligning AWS technologies to meet compliance requirements
Using AWS Key Management Service (KMS) to encrypt data stored on AWS
Encrypting data in transit using AWS Certificate Manager (AWS ACM) and Transport Layer Security (TLS)
Setting up access policies for encryption keys
Setting up automated backup and data replication strategies
Implementing policies for data access, lifecycle, and protection
Regularly rotating encryption keys and renewing certificates to maintain security

In conclusion, domain 1 of the SAA-C03 exam covers the design of secure architectures on AWS. It requires knowledge of various AWS services, security best practices, and the shared responsibility model. It also tests your skills in designing secure access to AWS resources and secure workloads and applications. To succeed in this domain, you will need to have a deep understanding of AWS security, networking, and identity and access management.
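To make the cross-account access and role-switching skills in this domain concrete, here is a sketch of an IAM role trust policy that allows principals in another account to assume the role, with MFA required. The account ID is a placeholder, and the MFA condition is one option among several:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

A user in account 111122223333 (subject to their own IAM permissions) can then call sts:AssumeRole, or use Switch Role in the console, to obtain temporary credentials for this role.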
Let’s now look at the second domain in the exam, Design Resilient Architectures.
Designing resilient architectures is crucial for organizations utilizing AWS to ensure their systems can withstand failures and maintain high availability. Resilient architectures are designed to be scalable, fault-tolerant, and capable of handling disruption, allowing businesses to deliver reliable services to their users. In this section, you will explore two task statements within the domain of designing resilient architectures:
Design Scalable and Loosely Coupled Architectures
Design Highly Available and/or Fault-Tolerant Architectures

Creating scalable and loosely coupled architectures involves designing systems that can handle varying workloads and adapt to changing demands. It entails building components that can scale independently, enabling resource adjustments based on specific requirements. Important considerations in this area include the following:
Leveraging AWS services such as Auto Scaling to automatically scale resources based on workload fluctuations
Implementing loosely coupled architectures using services such as AWS Lambda, Amazon Simple Queue Service (SQS), or Amazon Simple Notification Service (SNS) to decouple components and enhance flexibility and scalability
Utilizing services such as Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) to manage containerized workloads efficiently and facilitate scaling

Designing highly available and fault-tolerant architectures ensures system operability even in the face of failure or disruption. It involves implementing redundancy, fault isolation, and automated failover mechanisms. Key considerations in this area include the following:
Deploying solutions such as Elastic Load Balancing (ELB) or Amazon Route 53 to distribute traffic across multiple instances or Regions, ensuring continuous availability
Utilizing AWS services such as Amazon RDS Multi-AZ, which provides automated synchronous replication of databases to ensure data availability during failures
Incorporating fault isolation principles using concepts such as Availability Zones (AZs) or multi-Region deployments to mitigate the impact of failures
Implementing automated failover mechanisms through services such as Amazon Route 53 DNS failover or AWS Elastic Beanstalk rolling deployments

In summary, domain 2 of the SAA-C03 exam focuses on designing resilient architectures on AWS. It requires expertise in designing multi-tier architectures for high availability and fault tolerance, as well as ensuring business continuity through disaster recovery and failover strategies. To succeed in this domain, you will need to have a thorough understanding of AWS services such as EC2, ELB, Route 53, and CloudFormation, as well as experience in designing highly available and fault-tolerant architectures.
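The automated-failover idea at the heart of this domain can be sketched in plain Python, independent of any AWS service: route each request to the first endpoint whose health check passes, mimicking what Route 53 DNS failover does with health checks. The endpoint names and the hard-coded status table below are hypothetical stand-ins for real HTTP health probes:

```python
# Minimal failover sketch (illustrative only): return the first endpoint
# whose health check passes, falling back to a standby when the primary fails.
def choose_endpoint(endpoints, is_healthy):
    """Return the first endpoint for which is_healthy(endpoint) is True."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoints available")

# Hypothetical primary/standby pair; a dict of booleans stands in for a
# real health probe such as an HTTP check against /healthz.
status = {
    "app.us-east-1.example.com": False,  # primary is down
    "app.us-west-2.example.com": True,   # standby is healthy
}

print(choose_endpoint(list(status), status.get))
# → app.us-west-2.example.com
```

Real failover systems add the pieces this sketch omits: repeated probes with thresholds to avoid flapping, and DNS TTLs that bound how quickly clients see the switch.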
Let’s now learn what domain 3, Design High-Performing Architectures, covers.
Domain 3: Design High-Performing Architectures
Designing high-performance architectures is vital for ensuring the smooth and efficient functioning of workloads on AWS. It involves identifying and selecting the right compute, storage, and networking solutions for your workload. To design high-performance architectures, you need to be familiar with various AWS services and understand their capabilities and limitations.
In this section, you will read about the five task statements related to designing high-performance architectures:
Determine High-Performance and/or Scalable Storage Solutions
Design High-Performance and Elastic Compute Solutions
Determine High-Performance Database Solutions
Determine High-Performance and/or Scalable Network Architectures
Determine High-Performance Data Ingestion and Transformation Solutions

Selecting the right storage solutions is essential to achieve high performance and scalability in your architecture, ensuring efficient data storage, retrieval, and durability. When designing high-performance architectures, you need to consider the specific requirements of your workload, including data volume, access patterns, latency needs, and durability expectations.
You will need to understand how to do the following:
Evaluate AWS storage services such as Amazon S3, Amazon EBS, and Amazon EFS based on the specific performance needs of your workload
Implement caching mechanisms using services such as Amazon ElastiCache and Amazon CloudFront to enhance storage performance
Utilize sharding or partitioning techniques to distribute data across multiple storage instances for improved scalability

Designing high-performance and elastic compute solutions involves a careful evaluation of various compute resources provided by AWS and optimizing their performance to meet the requirements of your workload. This includes considering factors such as computational power, memory capacity, storage options, and networking capabilities.
You will be tested on your knowledge of the following:
Choosing AWS compute services such as Amazon EC2, AWS Lambda, and AWS Fargate based on workload characteristics and performance requirements
Implementing auto-scaling configurations to dynamically adjust compute resources based on workload demands
Leveraging AWS services such as Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) to efficiently manage containerized workloads and enhance performance

Selecting the right database solutions is crucial for achieving high performance and scalability in your architecture, enabling efficient data storage, retrieval, and management. When designing high-performance architectures, it is essential to consider factors such as data volume, throughput requirements, latency sensitivity, and scalability needs.
The exam will feature questions on how to do the following:
Evaluate AWS database services such as Amazon RDS, Amazon DynamoDB, and Amazon Aurora based on the specific performance and scalability requirements of your workload
Implement read replicas or sharding techniques to distribute database load and improve performance
Utilize caching mechanisms using services such as Amazon ElastiCache to reduce database access latency and enhance performance

Designing high-performance and scalable network architectures is vital for achieving optimal performance across your infrastructure, ensuring reliable and efficient communication between various components of your system. A well-designed network architecture can minimize latency, reduce bottlenecks, and provide high bandwidth to support the demands of your workload.
You will need to learn how to do the following:
Design your network using Amazon VPC to provide isolated and secure communication between resources
Implement AWS services such as AWS Direct Connect and AWS Global Accelerator to optimize network connectivity and reduce latency
Utilize content delivery networks (CDNs) such as Amazon CloudFront to cache and deliver content closer to end users, improving performance

Efficiently handling data ingestion and transformation is crucial for high-performance architectures, enabling seamless and timely processing of data to drive actionable insights and meet business requirements. In today’s data-driven landscape, organizations need to effectively handle the continuous influx of data from various sources and transform it into valuable information.
You will be tested on how to do the following:
Evaluate AWS services such as Amazon Kinesis and AWS Data Pipeline for real-time or batch data ingestion
Utilize services such as AWS Glue or Amazon EMR for data transformation and processing at scale
Implement parallel processing techniques and distributed computing frameworks to optimize data ingestion and transformation performance

To summarize, domain 3 of the SAA-C03 exam delves into the design of high-performance architectures on AWS. This domain encompasses a broad range of topics, including determining high-performing and scalable storage solutions, designing high-performing and elastic compute solutions, selecting high-performing database solutions, crafting high-performing and scalable network architectures, and determining efficient data ingestion and transformation solutions. To excel in this domain, you need to possess a comprehensive understanding of AWS services such as Amazon S3, Amazon EC2, Amazon RDS, Amazon VPC, and AWS Glue. Additionally, hands-on experience in designing architectures that prioritize performance, scalability, and efficiency will prove invaluable.
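The sharding and partitioning techniques mentioned in this domain reduce to one idea: deterministically map each key to a shard, so reads and writes for the same key always land on the same storage instance. A minimal hash-based sketch (the shard names are hypothetical):

```python
import hashlib

# Minimal hash-based sharding sketch (illustrative only): hash the key and
# take the result modulo the number of shards. hashlib.md5 is used rather
# than the built-in hash() because the latter is randomized per process.
def shard_for(key, shards):
    """Deterministically map a string key to one of the given shards."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(shards)
    return shards[index]

shards = ["shard-0", "shard-1", "shard-2"]  # hypothetical shard names
for key in ("user-42", "user-43", "user-44"):
    print(key, "->", shard_for(key, shards))
```

Note the trade-off this sketch exposes: plain modulo sharding reshuffles most keys whenever the shard count changes, which is why production systems often use consistent hashing instead.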
We will now look at domain 4, Design Cost-Optimized Architectures, the final exam domain.
Designing cost-optimized architectures is an important aspect of cloud computing, as it can help organizations maximize the value of their AWS investments while reducing unnecessary expenses. In order to design cost-effective architectures, you need to be familiar with various AWS services, understand how to balance performance requirements with cost, and have expertise in data lifecycle management. In this section, we will cover four task statements related to designing cost-optimized architectures:
- Design Cost-Optimized Storage Solutions
- Design Cost-Optimized Compute Solutions
- Design Cost-Optimized Database Solutions
- Design Cost-Optimized Network Architectures

Designing cost-optimized storage solutions involves a meticulous approach to selecting the most suitable storage services and strategies that not only meet the performance requirements of the workload but also optimize costs. It requires a thorough understanding of the data access patterns, usage frequency, and expected growth of the storage needs. By considering these factors, organizations can make informed decisions to strike the right balance between performance and cost. Key considerations in this area include the following:
- Assessing data access patterns and leveraging appropriate storage classes, such as Amazon S3 Standard, Amazon S3 Glacier, and Amazon EBS, to match the needs of different data types
- Implementing data lifecycle management techniques, such as transitioning infrequently accessed data to lower-cost storage tiers or archiving data for long-term retention
- Utilizing AWS storage services such as Amazon S3 Intelligent-Tiering to automatically optimize costs by moving data between storage tiers based on usage patterns

Designing cost-optimized compute solutions involves a strategic approach to selecting compute resources that align with the performance requirements of the workload while optimizing costs. It entails understanding the specific needs of the application or workload and making informed decisions to maximize efficiency and cost-effectiveness. Consider the following:
- Choosing the appropriate instance types based on workload characteristics, such as CPU, memory, and networking requirements
- Utilizing AWS services such as Amazon EC2 Spot Instances, which offer cost savings by leveraging spare capacity
- Implementing auto-scaling configurations to dynamically adjust compute resources based on demand, avoiding over-provisioning and reducing costs

Designing cost-optimized database solutions requires the careful evaluation of database services and configurations to ensure they align with the performance needs of the workload while optimizing costs. It involves considering factors such as data volume, query patterns, and desired response times to make informed decisions that strike the right balance between performance and cost efficiency. Consider the following:
- Choosing the appropriate database service based on workload characteristics, such as Amazon RDS, Amazon DynamoDB, or Amazon Aurora
- Right-sizing database instances to match workload demands and avoid unnecessary costs
- Implementing database caching techniques, such as Amazon ElastiCache, to improve performance and reduce database load

Designing cost-optimized network architectures involves a comprehensive approach to optimizing network configurations and services in order to minimize costs while meeting the performance and security requirements of the workload. It requires a deep understanding of the network infrastructure and the specific needs of the applications or services running on it. Consider the following:
- Utilizing AWS networking services, such as Amazon VPC, to design efficient and cost-effective network topologies
- Implementing traffic management strategies, such as using a CDN like Amazon CloudFront, to reduce data transfer costs and improve content delivery performance
- Leveraging AWS Direct Connect or VPN connections effectively to optimize network connectivity costs

Domain 4 of the exam covers the design of cost-optimized architectures on AWS. This domain requires you to identify cost-effective compute and database services, use cost-effective storage solutions, and design solutions that can optimize costs for operational efficiency based on business requirements. To succeed in this domain, you will need to have a solid understanding of AWS pricing models, cost optimization strategies, and how to balance cost with performance and other business needs.
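As one concrete illustration of the storage lifecycle techniques discussed under cost optimization, the following Python sketch builds a lifecycle configuration in the JSON shape accepted by S3's put_bucket_lifecycle_configuration API. The rule ID, prefix, and transition days are illustrative assumptions, not recommendations for any particular workload.

```python
# Hypothetical S3 lifecycle configuration; the days and storage classes
# are examples only, chosen to show tiering followed by archival.
def build_lifecycle_rules(prefix):
    return {
        "Rules": [
            {
                "ID": "tier-down-and-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    # Move infrequently accessed objects to a cheaper tier...
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # ...then archive them for long-term retention.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

rules = build_lifecycle_rules("logs/")
print(rules["Rules"][0]["Transitions"][0]["StorageClass"])  # STANDARD_IA
```

A dictionary like this would be passed as the LifecycleConfiguration argument when applying the policy to a bucket; the point here is simply the structure of rules, filters, and tiered transitions.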
Now that you have learned about all the domains of the exam, it’s time to dive in and learn all about AWS.
Once you’ve read AWS Certified Solutions Architect - Associate (SAA-C03) Exam Guide, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.
The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
Follow these simple steps to get the benefits:
Scan the QR code or visit the link below:

https://packt.link/free-ebook/9781837630004
Submit your proof of purchase.

That’s it! We’ll send your free PDF and other benefits to your email directly.

In this chapter, we will delve into the fundamental concepts of cloud computing. Whether you are new to solutions architecture or have experience with traditional on-premises deployments, this chapter aims to provide you with a solid foundation to understand cloud computing and its key principles. While this chapter is part of an Amazon Web Services (AWS) exam guide, it provides a general overview of cloud computing concepts across all cloud providers, with a dedicated section on AWS specifics.
To become a successful cloud solutions architect, it is vital that you understand the reasons why cloud computing exists and what challenges it aims to resolve before you start diving into deeper technical implementations. In the exam, there are often questions that require you to understand the main benefits of migrating to the cloud from on-premises. By the end of this chapter, you will be able to confidently answer the exam questions focused on the benefits of cloud computing.
This book and its accompanying online resources are designed to be a complete preparation tool for your AWS Certified Solutions Architect - Associate (SAA-C03) exam.
The book is written in a way that means you can apply everything you’ve learned here even after your certification. The online practice resources that come with this book (Figure 1.1) are designed to improve your test-taking skills. They are loaded with timed mock exams, chapter review questions, interactive flashcards, case studies, and exam tips to help you work on your exam readiness from now till your test day.
Before You Proceed
To learn how to access these resources, head over to Chapter 16, Accessing the Online Practice Resources, at the end of the book.
Figure 1.1: Dashboard interface of the online practice resources
Here are some tips on how to make the most of this book so that you can clear your certification and retain your knowledge beyond your exam:
- Read each section thoroughly.
- Make ample notes: You can use your favorite online note-taking tool or a physical notebook. The free online resources also give you access to an online version of this book. Click the BACK TO THE BOOK link from the dashboard to access the book in Packt Reader. You can highlight specific sections of the book there.
- Chapter review questions: At the end of this chapter, you’ll find a link to review questions for this chapter. These are designed to test your knowledge of the chapter. Aim to score at least 75% before moving on to the next chapter. You’ll find detailed instructions on how to make the most of these questions at the end of this chapter in the Exam Readiness Drill – Chapter Review Questions section. That way, you’re improving your exam-taking skills after each chapter, rather than at the end of the book.
- Flashcards: After you’ve gone through the book and scored 75% or more in each of the chapter review questions, start reviewing the online flashcards. They will help you memorize key concepts.
- Mock exams: Revise by solving the mock exams that come with the book till your exam day. If you get some answers wrong, go back to the book and revisit the concepts you’re weak in.
- Exam tips: Review these from time to time to improve your exam readiness even further.

In this chapter, we are going to cover the following main topics:
- Cloud computing
- The AWS cloud
- AWS architecture and key infrastructure
- Cloud economics

Let’s get started.

Cloud computing is particularly important today due to its ability to offer scalability and flexibility, which are essential in our rapidly changing market environments. Organizations can scale their IT resources up or down based on demand, providing a critical competitive advantage in responding swiftly to opportunities or challenges, which, in turn, drives faster innovation. Additionally, cloud computing promotes cost efficiency by allowing businesses to minimize capital expenses. Instead of investing in extensive hardware setups and ongoing maintenance, companies can use cloud services to access advanced computing capabilities, paying only for what they use. This shift not only reduces overhead costs but also enables businesses to allocate resources more strategically to foster innovation and growth.
A possible definition of cloud computing is that it is a framework designed to offer ubiquitous, user-friendly, and instant access to a collectively available and adaptable set of computing resources, encompassing networks, servers, storage, applications, and services. These resources can be swiftly allocated and de-allocated, requiring minimal administrative oversight and interaction with service providers.
Cloud computing represents a significant shift in the way that organizations and individuals utilize computing resources. This means that rather than having to install a suite of software for each computer, users can access their applications and data from any device with an internet connection. This approach to computing offers enhanced flexibility and scalability, making it increasingly popular among businesses and individuals alike.
The evolution of cloud computing marks a significant departure from the traditional IT infrastructure, which was characterized by on-premises hardware and software. In the past, companies needed to invest heavily in physical servers and dedicated IT teams to manage and maintain them. This model was not only costly but also lacked flexibility and scalability. The advent of cloud computing revolutionized this, enabling businesses to access computing resources as a service via the internet. This shift meant that organizations could scale resources up or down based on their needs, without the need for significant upfront investment. The evolution of cloud computing is also marked by advancements in virtualization technology, which allows multiple virtual machines to operate on a single physical server, enhancing the efficiency and cost-effectiveness of computing resources. Take a look at Figure 1.2, which shows the basics of cloud computing:
Figure 1.2: Cloud computing basics
Cloud computing is defined by several key characteristics that distinguish it from traditional computing models. These include the following:
- On-demand self-service: Users, such as developers, can automatically provision computing resources, such as server time and network storage, as needed, without requiring manual intervention from the service provider. This allows companies to react faster, as they can get the resources they need without lengthy procurement processes.
- Network access: Services are accessible over a network and can be utilized through standard protocols (for example, Transmission Control Protocol/Internet Protocol or application programming interface calls) that support usage across a wide range of client platforms, whether thin or thick (e.g., mobile phones and laptops).
- Resource sharing: The provider’s computing resources are shared across multiple consumers using a multi-tenant model. Different physical and virtual resources are dynamically assigned and reassigned based on consumer demand. This generally makes cloud computing more cost-effective, as service providers can offer economies of scale that would be difficult for smaller organizations to match.
- Elasticity: Capabilities can be swiftly and elastically provisioned, sometimes automatically, to rapidly scale both outward and inward in alignment with fluctuating demand.
- Service charges: Cloud systems automatically optimize resource usage through metering, allowing a pay-per-use model and ensuring cost efficiency.

There is a common misconception that discussing cloud computing always means referring to a cloud service managed by someone else. This is not correct. Cloud computing architectures and philosophies can be created and managed within your existing data centers, but doing so would require a large amount of coding, automation, and expense. In fact, there are four different types of cloud deployment, which you will learn about next.
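The pay-per-use point can be made concrete with some back-of-the-envelope arithmetic. In this sketch, the hourly rate (in cents) and the demand curve are made-up numbers, not AWS pricing: a fixed fleet must be sized for the peak hour and paid for around the clock, while metered capacity is billed only for what each hour actually uses.

```python
# Illustrative pay-per-use arithmetic; the rate and demand are invented.
HOURLY_RATE_CENTS = 10  # assumed cost per server-hour, in cents

def fixed_fleet_cost(demand_by_hour):
    # On-premises style: provision for the peak, pay for it every hour.
    return max(demand_by_hour) * len(demand_by_hour) * HOURLY_RATE_CENTS

def metered_cost(demand_by_hour):
    # Metered style: pay only for the capacity each hour actually needs.
    return sum(demand_by_hour) * HOURLY_RATE_CENTS

demand = [2, 2, 3, 8, 10, 8, 3, 2]  # servers needed in each hour
print(fixed_fleet_cost(demand))  # 800 cents
print(metered_cost(demand))      # 380 cents
```

The spikier the demand curve, the larger the gap between the two numbers, which is why bursty workloads benefit most from metered cloud capacity.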
Understanding the various cloud deployment models is crucial for businesses and individuals looking to leverage cloud technology effectively. There are four different types of cloud computing available – private, community, public, and hybrid:
- Private cloud: This is designed for exclusive use by a single organization, offering enhanced control and security
- Community cloud: This serves a group of organizations with common goals and requirements
- Public cloud: This is the most common type, providing services over the internet to the public or large industry groups, often delivering scalability and cost-effectiveness
- Hybrid cloud: This blends elements of both the private and public clouds, offering a balanced approach that maximizes both security and flexibility, as shown in Figure 1.3:

Figure 1.3: Cloud deployment models
Let’s take a deeper look at the four cloud deployment models and how they work, starting with the private cloud.
A private cloud is a cloud computing environment dedicated solely to one organization. It offers the following:
- Exclusivity: Serves only one organization, providing tailored IT solutions
- Control and customization: Gives you full control over the cloud setup, enabling specific customizations for business needs
- Enhanced security: Offers higher security levels, beneficial for sensitive data and compliance with regulatory standards
- Reliable performance: With dedicated resources, it ensures efficient and stable performance
- Higher costs: Typically more expensive than public clouds due to the costs of infrastructure, maintenance, and management
- Deployment flexibility: Can be hosted either on-premises or by a third-party provider, but it is used exclusively by one organization
- Limited scalability: Offers scalability, although not as extensive as public clouds, as you are constrained to the servers you own

Private clouds are best suited for organizations needing specific control, high security, and customization in their cloud infrastructure, but they come with higher costs and limited scalability compared to public clouds. Organizations such as banks, government bodies, and the military may consider using a private cloud to meet their security requirements.
A community cloud is a cloud computing model shared by several organizations with common goals or requirements. Its main features include the following:
- Shared infrastructure: Designed for a specific community of users with similar needs, allowing cost and resource sharing
- A collaborative environment: Facilitates collaboration and data sharing among member organizations, often benefiting from collective expertise
- Customized security and compliance: Offers a level of security and compliance tailored to the specific community, often more focused than public clouds but less exclusive than private clouds
- Cost-effectiveness: More cost-efficient than private clouds, as expenses are shared among the participating organizations
- Scalability and flexibility: Provides scalability and flexibility to accommodate the needs of the community, although it may not match the scale of public clouds

Community clouds are ideal for groups of organizations with shared interests and requirements, offering a balance of security, collaboration, and cost savings.
A public cloud is where services and infrastructure are provided over the internet and shared among multiple users, offering limited customization. Key characteristics include the following:
- Shared resources: Operated by third-party providers, it serves multiple clients using the same shared infrastructure; however, there are strict guardrails between customer environments
- Scalability and flexibility: Offers high scalability, easily accommodating fluctuating demands
- Cost-effectiveness: Typically operates on a pay-as-you-go model, which can be more cost-effective than maintaining private infrastructure
- Ease of access: Users can access services and manage their accounts via the internet
- Minimal maintenance: Users are not responsible for hardware and software maintenance, as this is managed by the service provider

Public clouds are well suited for businesses seeking cost-effective, scalable, and easily accessible cloud services without the need for direct management of the infrastructure.
A hybrid cloud combines private and public cloud elements, offering a versatile cloud computing model. Its main features are as follows:
- The integration of private and public clouds: Blends the control and security of private clouds with the scalability and cost-efficiency of public clouds
- Flexibility and scalability: Allows businesses to keep sensitive data in a private cloud while leveraging the expansive resources of a public cloud for less sensitive operations
- Cost-effective and efficient: Provides a balance between cost and performance, allowing organizations to optimize their cloud spending
- Customizable security and compliance: Offers tailored security and compliance options, meeting specific organizational needs
- Complex management: Management can be more complex due to the integration of different cloud environments

Hybrid clouds are ideal for organizations that need both the security of a private cloud and the scalability and cost benefits of a public cloud.
Once you have chosen which cloud deployment model works best for your organization, you then need to choose the type of service you wish to exploit.
In addition to choosing which cloud deployment you want to use, you will also need to decide how best to run your services. The three fundamental service models – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) – are offered by most cloud providers and allow users to choose what level of control they need, versus the operational benefits of using a fully managed service. IaaS provides the most basic level of cloud services, offering fundamental computing infrastructure such as servers, storage, and networking resources on demand. PaaS builds upon this by adding a layer of tools and software, allowing developers to create and deploy applications without managing the underlying infrastructure. At the top is SaaS, delivering fully functional software applications over the internet, eliminating the need for users to install or run applications on individual devices. Unlike the cloud deployment model, you can choose a different type of service for each use case that you have, allowing you to customize your service to your specific business needs. Figure 1.4 shows the three service models:
Figure 1.4: IaaS, PaaS, and SaaS
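One way to remember the three models shown in Figure 1.4 is as a responsibility map: which layers the provider manages versus the customer. The layer names below are a simplified illustration for this sketch, not an official AWS taxonomy.

```python
# Simplified responsibility map for the three service models.
LAYERS = ["hardware", "virtualization", "operating system", "runtime", "application"]

PROVIDER_MANAGES = {
    "IaaS": {"hardware", "virtualization"},
    "PaaS": {"hardware", "virtualization", "operating system", "runtime"},
    "SaaS": set(LAYERS),  # the provider runs the whole stack
}

def customer_manages(model):
    # Everything the provider does not manage falls to the customer.
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGES[model]]

print(customer_manages("IaaS"))  # ['operating system', 'runtime', 'application']
print(customer_manages("SaaS"))  # []
```

Reading the map top-down matches the text: each step up from IaaS to SaaS shifts another layer of responsibility from the customer to the provider.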
So, now that you can explain the different cloud deployment models and the types of services available on them, you will learn how AWS handles its own services and offerings.
AWS entered the cloud computing arena in 2006 as the first major public cloud provider. It was initially created to support the growing Amazon.com business, but Amazon quickly realized that AWS could provide services for other businesses, too. In its early days, it offered Simple Storage Service (S3) for storage and Elastic Compute Cloud (EC2) for computing power. As the years went by, AWS expanded its portfolio to include cutting-edge technologies such as artificial intelligence, machine learning, and the Internet of Things (IoT). This growth trajectory was not just about diversifying services; it fundamentally reshaped how businesses approach scalability and adaptability, offering unprecedented efficiency and flexibility.
In today’s cloud computing landscape, AWS stands as a dominant force, consistently ranking as a top provider globally. Its comprehensive suite of services, known for reliability and scalability, has made it the preferred choice for a diverse spectrum of clients, ranging from emerging start-ups to established enterprises. AWS’s impact on the cloud computing sector is significant. It has not only captured a substantial market share but also played a pivotal role in driving cloud adoption across various industries, thus spearheading a wave of digital transformation and fostering a culture of continuous technological innovation.
We will now look at some of the key AWS services that you will need to know for the exam. All of them will be covered in much greater depth in later chapters.
AWS offers a wide range of services that form the backbone of its cloud computing platform, letting businesses choose from multiple robust and versatile tools. At the time of writing, AWS offers over 200 different services. A service may include a combination of hardware, software, storage, and tooling to support a business in its goals. Key services include Virtual Private Cloud (VPC) for secure and isolated network configuration, EC2 for scalable computing capacity, S3 for reliable data storage solutions, Lambda to run code in response to events without managing servers, and Relational Database Service (RDS) for the easy setup, operation, and scaling of databases. These services collectively provide a comprehensive, integrated cloud environment that supports a wide range of business applications and workflows, demonstrating AWS’s commitment to offering scalable, efficient, and flexible cloud solutions.
AWS VPC enables you to create a logically isolated area of the AWS cloud where you can deploy your workloads:
- Custom network configuration: Set up an IP address range, subnets, and gateways for secure and custom network environments
- Enhanced security controls: Control network access to instances and subnets for improved security
- Seamless AWS integration: Easily connect with other AWS services, maintaining a secure and efficient cloud ecosystem

EC2 provides resizable servers, or compute, in the AWS cloud, allowing you to rapidly deploy and scale your compute needs:
- Flexible compute options: A wide range of instance types for different computational needs
- Scalable resources: Easily scale capacity up or down as needed

RDS simplifies the setup, operation, and scaling of relational databases in the cloud:
- Automated management: Handles routine database tasks such as provisioning, patching, backup, and recovery
- Multiple database engine support: Compatible with engines such as MySQL and PostgreSQL
- Scalability: Adjust compute and storage resources with minimal downtime

S3 provides scalable object storage, ideal for a wide range of storage applications:
- High durability and availability: Ensures data is stored reliably across multiple facilities
- Simple and scalable: A user-friendly interface to store and retrieve vast amounts of data
- Cost-effective: Store large volumes of data at low cost, scaling as required

AWS Lambda enables you to run code without server management, with billing based on the compute time used:
- Serverless execution: Automatically manages computing resources
- Event-driven: Triggers execution in response to various events
- Scalable: Adjusts automatically to handle the workload

Now that you know some of the key services that AWS offers, you can start to imagine how you would use them to support the different applications your organization runs. You should also be able to see that Lambda is a PaaS offering, whereas EC2 is IaaS, as you have more control with EC2 than with Lambda.
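The subnet planning mentioned under VPC above is ordinary CIDR arithmetic, which can be sketched with Python's standard ipaddress module. The 10.0.0.0/16 range and the /24 subnet size are assumptions chosen for illustration.

```python
# Carving an assumed VPC CIDR block into equal-sized subnets.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # all possible /24 subnets

print(len(subnets))              # 256
print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256
```

Note that in a real VPC subnet, AWS reserves five IP addresses (the first four and the last), so usable capacity is slightly lower than num_addresses suggests.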
AWS has established a vast and robust global infrastructure to support its cloud services, ensuring high availability, low latency, and strong data sovereignty compliance for its users worldwide. This infrastructure is meticulously designed and strategically distributed across various geographical locations. It includes multiple components, such as Regions, Availability Zones (AZs), Edge Locations, and Outposts, each serving a specific purpose to enhance the performance, reliability, and scalability of AWS services. Figure 1.5 displays the AWS global infrastructure:
Figure 1.5: AWS global infrastructure
AWS Regions are geographical areas that host multiple AWS data centers. Each Region is a separate geographic area, isolated and independent from the other Regions to prevent service failures from affecting multiple Regions. This design enhances fault tolerance and stability, ensuring that even if there is a disaster, data integrity and service continuity are maintained. Regions also help you to adhere to data residency requirements, as customers can choose where their data is stored.
Within each AWS Region, there are AZs. An AZ is a cluster of data centers, each with its own independent power, networking capabilities, and connectivity, located in separate buildings that are far enough apart to be protected from a local event (for example, a flood) that could cause an outage. These AZs offer protection against failures of individual servers or entire data centers. By distributing resources across multiple AZs within a Region, AWS provides high availability and fault tolerance to applications and databases.
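The availability benefit of spreading across AZs follows from simple probability: if zone failures are independent, a workload running in n zones is down only when every zone fails at once. The 99% per-zone figure below is an invented illustration, not an AWS SLA.

```python
# Back-of-the-envelope availability math under an independence assumption.
def combined_availability(per_zone, zones):
    # The deployment fails only if all zones fail simultaneously.
    return 1 - (1 - per_zone) ** zones

print(round(combined_availability(0.99, 1), 6))  # 0.99
print(round(combined_availability(0.99, 2), 6))  # 0.9999
print(round(combined_availability(0.99, 3), 6))  # 0.999999
```

Each additional zone multiplies the residual downtime by the single-zone failure probability, which is why two or three AZs are usually enough for most workloads.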
Edge Locations are endpoints for AWS that are used to cache content. This aspect of AWS’s global infrastructure is primarily used by Amazon CloudFront (AWS’s content delivery network) to distribute content to end users with lower latency. These locations are positioned in major cities and highly populated areas around the world, and they bring AWS services closer to the end users, reducing latency and improving the speed of data delivery.
AWS Outposts brings multiple AWS services, including its infrastructure, operating methods, and APIs, to your own data center or on-premises facility. It is part of AWS’s hybrid cloud offering, allowing businesses with low-latency or high-security requirements to integrate their on-premises data centers with AWS’s cloud services and run local workloads as if they were on AWS.