A strategic guide that will help you make key decisions related to cloud-based architectures, modernize your infrastructure and applications, and transform your business using AWS with real-world case studies
Key Features
Learn cloud migration and modernization strategies on AWS
Innovate your applications, data, architecture, and networking by adopting AWS
Leverage AWS technologies with real-world use cases to implement cloud operations
Purchase of the print or Kindle book includes a free eBook in PDF format
Book Description
AWS cloud technologies help businesses scale and innovate; however, adopting modern architectures and applications can be a real challenge. This book is a comprehensive guide that ensures your switch to AWS services is smooth and hitch-free. It will enable you to make optimal decisions to bring out the best ROI from AWS cloud adoption.
Beginning with nuances of cloud transformation on AWS, you’ll be able to plan and implement the migration steps. The book will facilitate your system modernization journey by getting you acquainted with various technical domains, namely, applications, databases, big data, analytics, networking, and security. Once you’ve learned about the different operations, budgeting, and management best practices such as the 6 Rs of migration approaches and the AWS Well-Architected Framework, you’ll be able to achieve operational excellence in cloud adoption. You'll also learn how to deploy some of the important AWS tools and services with real-life case studies and use cases.
By the end of this book, you'll be able to successfully implement cloud migration and modernization on AWS and make decisions that best suit your organization.
What you will learn
Strategize approaches for cloud adoption and digital transformation
Understand the catalysts for business reinvention
Select the right tools for cloud migration and modernization processes
Leverage the potential of AWS to maximize the value of cloud investments
Understand the importance of implementing secure workloads on the cloud
Explore AWS services such as computation, databases, security, and networking
Implement various real-life use cases and technology case studies for modernization
Discover the benefits of operational excellence on the cloud
Who this book is for
If you are a cloud enthusiast, solutions architect, enterprise technologist, or a C-suite executive and want to learn about the strategies and AWS services to transform your IT portfolio, this book is for you. Basic knowledge of AWS services and an understanding of technologies such as computation, databases, networking, and security will be helpful.
Best practices for transforming your applications and infrastructure on the cloud
Mridula Grandhi
BIRMINGHAM—MUMBAI
Copyright © 2023 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Associate Group Product Manager: Preet Ahuja
Publishing Product Manager: Surbhi Suman
Senior Editor: Divya Vijayan
Technical Editor: Rajat Sharma
Copy Editor: Safis Editing
Project Coordinator: Ashwin Kharwa
Proofreader: Safis Editing
Indexer: Rekha Nair
Production Designer: Jyoti Chauhan
Marketing Coordinator: Rohan Dobhal
First published: July 2023
Production reference: 1070623
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80323-454-0
www.packtpub.com
To all those who inspire, influence, educate, motivate, and lead by example.
– Mridula Grandhi
Mridula Grandhi is a senior leader of solutions architecture specializing in the Amazon Web Services (AWS) Compute portfolio of services such as containers, serverless, Graviton, and hybrid services. She has more than 16 years of experience architecting and building distributed software systems across industry verticals such as the supply chain, the automotive industry, telecommunications, and financial services. In her current leadership position, she works with AWS customers and provides strategic guidance on optimal pathways to modernize their workloads and achieve their business objectives.
I want to thank all those in my life who see me for who I am, celebrate every dimension of who I am, and give me permission to be the way I am.
Mario Mercado has worked in software development for around 10 years in different roles, including developer, quality assurance (QA) engineer, DevOps engineer, and even director of cloud infrastructure. Mario has helped multiple companies in the last few years succeed on their cloud journeys while keeping cost optimization and automation at the forefront. In addition, Mario has worked with the community, sharing his knowledge on social media and on other platforms such as the AWS Community Builders program and working with the teaching platform, A Cloud Guru, to create courses about Amazon Elastic Kubernetes Service (EKS) and AWS CodeStar, for example. His result-oriented focus has been his most important tool, as he knows what businesses need and how it translates to topics in his area of expertise.
I am truly honored to have worked with Packt on this book, and I kept this in mind while reviewing each chapter, which I highly enjoyed. It was a huge pleasure for me to work with this amazing team, and I especially acknowledge the outstanding work of the author, Mridula. Thanks to all the Packt team members who coordinated my contribution to this book!
Vikas Kanwar is a solution strategist for cloud services and technology adoption who aims to create and capture value for clients. Vikas has 15 years of experience in IT and has worked with large organizations and global enterprises such as AWS, Wipro Ltd., Gartner Inc., and HCL. Vikas has been responsible for cloud pre-sales and delivery, architecture and design, cloud transformation, and data center migration for large clients across geographies. Vikas previously published papers on cloud computing while working as an analyst with Gartner.
I truly believe that technological advancement has changed our lives and that with the myriad of choices made available by cloud computing, it has become easier for people with a solutions-oriented mindset to innovate and make a positive impact. Thank you to all these people who make this area of technology exciting and rewarding.
In this part, we will introduce cloud migration and why companies are moving some or all of their data center capabilities to the cloud. We will cover the benefits of migrating to the cloud, the common cloud migration challenges, and how to make a business case for cloud migration. We will take a deeper dive into why you need a plan that covers the technicalities behind cloud migration and also gets buy-in from your stakeholders. As we perform the cloud migration analysis, you will learn to evaluate the current state of your business and IT environment and carry out business process mapping to improve the way your organization operates. You will learn about the various cloud migration strategies you can apply to fit your needs and achieve your business outcomes most efficiently.
This part comprises the following chapters:
Chapter 1, An Introduction to Cloud Transformation
Chapter 2, Understanding Cloud Migration
Chapter 3, Preparing for Cloud Migration
Chapter 4, Cloud Migration Strategies

Innovation, efficiency, and profitability are some of the main tenets that allow businesses to adapt to the changing needs of the world. Amazon, Microsoft, and Apple are organizations that continue to strive for innovation and have managed to repeat their successes at multiple turning points in their journeys. Netflix was able to reinvent its business and make the streaming experience more enjoyable by innovating its platforms so that they can withstand disruptions. Technology plays a crucial role in helping organizations increase their innovation and agility.
Cloud transformation has become a top priority for organizations that want to explore, improve their day-to-day operations, and succeed in their businesses in these constantly changing times. Businesses around the world are embracing the cloud to supercharge their growth and to innovate, run, scale, deliver, and optimize quickly and efficiently while mitigating business risks. Cloud transformation often poses barriers that are difficult to break down and requires a clear vision of where to start.
In this chapter, we will cover the following topics:
Introduction to the cloud
Key characteristics of cloud computing
Motivators for cloud adoption
Cloud service providers at a glance – AWS, GCP, Azure, and more
Service models (IaaS, PaaS, and SaaS)
Exploring the deployment models (private, public, hybrid, multi, and community)

Many aspects of our everyday life have been transformed by ever-evolving digital solutions. Technology is changing rapidly, and industries are adapting to these changes at a rapid pace. The cloud has become the dominant term in technology in the past few years, and the impact it’s brought to businesses is beyond resounding. Before we learn more about cloud transformation, let’s look at the cloud and what cloud computing is.
Cloud transformation
Cloud transformation is the step-by-step process of moving your workloads from local servers to the cloud. It is a process that brings technology and organizational processes together to accelerate the development, implementation, and delivery of new services.
The cloud is referred to as a collection of software, servers, storage, databases, networking, analytics, and intelligence that can be accessed via the internet instead of being locally available on your computer or device. These services are delivered through data centers located across the globe and linked through the internet.
The following diagram depicts the use of cloud computing and the accessibility of various devices via the cloud:
Figure 1.1 – Cloud computing
Before we dive deep into cloud computing, let’s learn how we got here and how cloud computing began.
The history of cloud computing began almost 70 years ago, when corporations and large organizations began exploring computers and mainframe systems. In the 1950s and 1960s, these were only a reality for organizations with sufficient financial resources. Computers were large, expensive machines that required human operators to interact with the mainframe through computer terminals to process complex data.
These early mainframe clients had limited computing power of their own and relied on the bulk of the physical server’s capacity to get the work done. This model of computing is the predecessor of cloud computing.
In the 1970s, hardware-assisted virtualization was first introduced by IBM, which allowed organizations to run many virtual servers on a single physical server at a given time. This was a milestone for mainframe owners, as they could run multiple virtual machines, each with its own operating system. Virtualization has come a long way, and virtual machine operating systems remain a deployment option for many organizations building and deploying applications. Today’s cloud computing model couldn’t have been possible without the concept of virtualization.
The concept of virtualization evolved alongside the internet as businesses started offering virtual private networks as a paid service. This gained momentum in the 1990s and laid a foundational block for modern cloud computing.
The term cloud computing signifies that the boundaries of computing are set by economic rationale rather than by technical limits alone.
Virtualization
Virtualization is the process of running a virtual instance by creating an abstraction layer over dedicated amounts of CPU, memory, and storage that are borrowed from a physical host computer.
In the following diagram, each VM runs an operating system (OS) of choice with its own software, libraries, and so on that are needed for its applications. Each VM silo runs on a hypervisor, which in turn runs on top of the bare-metal environment:
Figure 1.2 – Virtualization
This virtualization technique forms the foundational component of cloud computing, where a hypervisor runs on a real machine and creates virtual operating systems on that particular machine.
In 2006, Amazon launched Amazon Web Services (AWS), the first cloud provider to offer its computing infrastructure as online services to customers and other websites. In 2007, IBM, Google, and several other interested parties such as Carnegie Mellon University, MIT, Stanford University, the University of Maryland, and the University of California at Berkeley joined forces and developed research projects. Through these projects, they realized that computer experiments could be conducted faster and more cheaply by renting virtual computers rather than buying their own hardware, programs, and applications. The same year also saw the birth of Netflix’s video streaming service, which uses the cloud and has revolutionized the practice of binge-watching.
Cloud Computing
AWS states that “Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider”.
This completes our introduction to the cloud, the history of cloud computing, and its evolution. In the next section, we will look at the key characteristics of cloud computing to understand how cloud computing benefits businesses in this new computing era.
Building on the definition from the National Institute of Standards and Technology (NIST), cloud computing has six essential characteristics, as follows:
On-demand self-service
Wide range of network access
Multi-tenant model and resource pooling
Rapid elasticity
PAYG model
Measured service and reporting

We will discuss each of these in the following subsections.
In traditional enterprise IT settings, companies used to build the required infrastructure to run their applications locally; that is, on-premises. This means that the enterprises must set up the server’s hardware, software licenses, integration capabilities, and IT employees to support and manage these infrastructure components. Because the software itself resides within an organization’s premises, enterprises are responsible for the security of their data and vulnerability management, which entails training IT staff to be aware of security vulnerabilities and installing updates regularly and promptly.
Cloud computing is different from traditional IT hosting services since consumers don’t have to own the required infrastructure to run their applications. With the cloud, a third-party provider hosts and maintains all of this for you. Provisioning, configuring, and managing the infrastructure is automated in the cloud, which cuts turnaround time and lets you make decisions about capacity and performance in real time.
Cloud automation
Cloud automation is the process of automating tasks such as discovering, provisioning, configuring, scaling, deploying, monitoring, and backing up every component within the cloud infrastructure in real time. This involves streamlining tasks without human interaction and caters to the changing needs of your business.
On-demand makes it possible for consumers to benefit from the resources on the cloud as and when required. The cloud supplier caters to the demand in a real-time manner, enabling the consumers to decide on when and how much to subscribe for these resources. The consumers will have full control over this to help meet their evolving needs.
The self-service aspect allows customers to procure and access the services they want instantaneously. Cloud providers facilitate this via simple user portals to make it quick and easy. For example, a cloud consumer can request a new virtual machine and expect it to be provisioned and running within a few minutes. On-premises procurement of the same typically takes 90-120 days and also requires accurate forecasting to purchase hardware with the required RAM and associated specifications for a given business use case.
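To make the self-service aspect concrete, here is a minimal sketch, using the AWS SDK for Python (boto3), of programmatically requesting a virtual machine instead of raising a hardware procurement ticket. The region, AMI ID, and instance type are hypothetical placeholders, not recommendations.

import boto3

# A minimal sketch of on-demand self-service: requesting a virtual machine
# programmatically. The region, AMI ID, and instance type are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}; it is typically running within minutes.")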
Global reach capability is an essential tenet that makes cloud computing accessible and convenient. Consumers can access the cloud resources they need from anywhere and from any device over the network through standard mechanisms such as authentication and authorization. The availability of such resources from thin or thick client platforms such as tablets, PCs, smartphones, netbooks, personal digital assistants, laptops, and more helps the cloud touch every possible end user.
Multi-tenancy is one of the foundational aspects that makes cloud services practical. To help you understand multi-tenancy, think of the safe-deposit boxes located in banks, which are used to store your valuable possessions and documents. These assets are stored in isolated and secure vaults, even though they’re stored in the same location. Bank customers don’t have access to other customers’ deposit boxes, and they are not even aware of, nor do they interact with, one another. Bank customers rent these boxes throughout their lifetime and use security mechanisms to provide identification and gain access to their metal boxes. In cloud computing, the term multi-tenancy has a broader meaning, where a single instance of a piece of software runs on a server and serves multiple tenants.
Multi-tenancy
Multi-tenancy is a software architecture in which one or more instances of a piece of software are created and executed on a server that serves multiple, distinct tenants. It also refers to shared hosting, where server resources are divided and leveraged by end users.
The following diagram shows single-tenant versus multi-tenant models, both of which can be used to design software applications:
Figure 1.3 – Single-tenancy versus multi-tenancy
As an example of a multi-tenancy model, imagine an end user uploading content to social media application(s) from multiple devices.
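As a rough, hypothetical sketch of how a single software instance can serve multiple tenants while keeping their data isolated, consider the shared-database pattern below, where every query is scoped by a tenant identifier. The table and column names are illustrative assumptions, not a prescribed design.

import sqlite3

# A hypothetical sketch of multi-tenancy in a shared database: one schema
# serves all tenants, and every query is scoped by tenant_id, so customers
# never see each other's data (like separate safe-deposit boxes in one vault).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uploads (tenant_id TEXT, filename TEXT)")
conn.executemany(
    "INSERT INTO uploads VALUES (?, ?)",
    [("tenant-a", "vacation.jpg"), ("tenant-b", "invoice.pdf")],
)

def list_uploads(tenant_id: str) -> list[str]:
    # Every read is filtered by the caller's tenant ID.
    rows = conn.execute(
        "SELECT filename FROM uploads WHERE tenant_id = ?", (tenant_id,)
    )
    return [filename for (filename,) in rows]

print(list_uploads("tenant-a"))  # ['vacation.jpg'] -- tenant B's data is invisible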
Using the multi-tenant model, cloud resources are pooled via resource pooling. The intention behind resource pooling is that the consumers will be provided with ways to choose from an infinite pool of resources on demand. This creates a sense of immediate availability to those resources, without them being bound to any of the limitations of physical or virtual dependencies.
Resource pooling
Resource pooling is a strategy where cloud-based applications dynamically provision, scale, and control resource adjustments at the meta level.
Resource pooling can be used for services that support data, storage, computing, and many more processing technologies, thereby facilitating dynamic provisioning and scaling. This enables on-demand self-service for services where consumers can use these services and change the level of their usage as per their needs. Resource pooling, coupled with automation, replaces the following mechanisms:
Traditional mechanisms
Labor-intensive mechanisms

With new strategies that rely on increasingly powerful virtual networks and data handling technologies, cloud providers can provide an abstraction for resource administration, thereby enhancing the consumer experience of leveraging cloud resources.
Elasticity is one of the most important factors, and experts cite it as the major selling point for businesses to migrate away from their local data centers. End users can take advantage of seamless provisioning because of this capability in the cloud.
What is cloud elasticity? What are the benefits?
Before we answer these questions, let’s take a look at the definition of elasticity.
Elasticity in the cloud refers to the end user’s ability to acquire or release resources automatically to serve the varying needs of a cloud-based application while remaining operational.
Another criterion that is used in the cloud is scalability. Let’s look at what it is and how it differs from cloud elasticity.
Scalability in the cloud refers to the ability to handle changing demand by either adding or removing resources within the infrastructure’s boundaries.
Although the fundamental theme of these two concepts is adaptability, both of these differ in terms of their functions.
Scalability versus Elasticity
Scalability is a strategic resource allocation operation, whereas elasticity is a tactical resource allocation operation. Elasticity is a fundamental characteristic of cloud computing and involves taking advantage of the scalable nature of a specific system.
The inherent nature of dynamically adapting capacity helps businesses handle heavy workloads, as well as ensure that their operations go uninterrupted.
For example, take an online retail shopping website that experiences sudden bursts of popularity, with its volume of transactions peaking. To handle the workload, the website can leverage the cloud’s rapid elasticity by adding resources to meet the transaction spikes. When the workloads no longer have to meet such peaks, the services can be taken down just as quickly as they were added. You only pay for the services that you use at any given point.
Automatically commissioning and decommissioning resources is inherent to cloud elasticity and can be used to meet businesses’ scale-out and scale-in demands, thereby helping them manage and maintain their operating expenditure (OpEx) costs without having to put in any upfront capital expenditure (CapEx) costs or being locked into long-term contracts.
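As a hedged sketch of how such elasticity might be configured on AWS, the following uses boto3 to attach a target-tracking scaling policy to an assumed, pre-existing Auto Scaling group; the group name, policy name, and target value are placeholders rather than recommendations.

import boto3

# A minimal sketch of rapid elasticity on AWS: a target-tracking scaling policy
# that adds instances when average CPU rises above the target and removes them
# when traffic subsides. The Auto Scaling group name is a hypothetical placeholder.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="retail-web-asg",        # assumed, pre-existing group
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                       # scale out/in around 50% CPU
    },
)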
The pay-per-use or Pay As You Go (PAYG) pricing model is a major highlight that’s geared toward an economic model for organizations and end users. The per-second billing plans provided by cloud providers make it easy for businesses to witness a major shift from CapEx to OpEx. This frees businesses from worrying about the upfront capital they need to spend on on-premises infrastructure and the capacity planning required to meet ongoing demands. Traditional self-provisioning processes are often prone to extreme inefficiency and waste due to the complex supply chain model, which requires seamless communication between decision-makers and stakeholders.
However, cloud-based architectures and their inherent design models allow you to scale up your applications on the cloud during peak traffic and scale back down during periods where they’re not needed as much, without having to worry about annual contracts or long-term license termination fees.
What are CapEx and OpEx?
CapEx involves funds that have been incurred by businesses to acquire and upgrade a company’s fixed assets. This includes expenditures toward setting up the technology, the required hardware and software to run the services, and more.
OpEx involves the expenses that have been incurred by businesses through the course of their normal business operations. Such expenses include property maintenance, inventory costs, funds allocated for research and development, and more.
Businesses traditionally incur heavy OpEx on service and software procurement and management, tasks that are often expensive and inefficient. This model also often leads to complex payment structures and makes it difficult for businesses to vary their usage. With the PAYG model, you pay the resource charges for the services you use, rather than for an entire infrastructure. Once you stop using a service, there is typically no fee to terminate, and the billing for that service stops immediately.
Let’s look at an example of how the PAYG model is applied to cloud resources. A user provisioning a cloud compute instance is generally billed for the time that the instance is used. You can add or remove compute capacity based on your application’s demands and pay only for what you use, down to the second, depending on the cloud provider you choose.
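As a back-of-the-envelope illustration of per-second billing, the short calculation below uses a made-up hourly rate, not a published price, to show how a partial hour is charged.

# A hypothetical per-second billing calculation; the hourly rate is illustrative,
# not an actual published price.
HOURLY_RATE_USD = 0.10          # assumed on-demand price per instance-hour
runtime_seconds = 37 * 60       # the instance ran for 37 minutes, then was terminated

cost = HOURLY_RATE_USD / 3600 * runtime_seconds
print(f"Billed for {runtime_seconds} seconds: ${cost:.4f}")  # ~$0.0617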
The ability to measure cloud service usage is an important characteristic to ensure optimum usage and resource spending. This characteristic is key for both cloud providers and end users as they can measure and report on what services have been used and their purpose.
NIST states the following:
“Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”
The cloud provider’s billing component is mainly dependent on the capability to measure customers’ usage and calculate the billing invoices accordingly. Cloud providers can understand the overall consumption and potentially improve their infrastructure’s and service’s processing speeds and bandwidth.
Businesses get the visibility and transparency they need to track usage rates and costs across large enterprises, which is limited in traditional IT environments. This is especially helpful for usage accounting, reporting, chargebacks, and also for monitoring purposes for their key IT stakeholders. In addition to the billing aspect, rapid elasticity and resource pooling feed into this characteristic, where end users can leverage monitoring and trigger automation to scale their resources.
In this section, we learned about the essential characteristics of cloud computing: on-demand self-service, elasticity, resource pooling, the PAYG model, measured services, CapEx/OpEx, and reporting abilities. In the next section, we will look at what makes businesses inclined toward moving to the cloud.
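To illustrate the measured-service characteristic, here is a minimal sketch that pulls hourly average CPU utilization for a single instance from Amazon CloudWatch using boto3; the instance ID and region are hypothetical placeholders.

from datetime import datetime, timedelta, timezone

import boto3

# A minimal sketch of "measured service": retrieving an hourly average CPU metric
# for one instance from Amazon CloudWatch. The instance ID is a placeholder.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,                 # one data point per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')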
The cloud has numerous offerings that help many organizations run their workloads on the cloud. By enabling cloud adoption, companies can accelerate their business transformations and expansions. Operating on the cloud helps companies classify and find motivations to help evaluate the necessities of migrating to the cloud. Let’s look at some motivation-driven strategies that enterprises can expect as business outcomes upon performing cloud migration.
The cloud’s infrastructure is built on virtual servers that are built to handle substantial computing power and data volume changes. This helps cloud consumers leverage and build their applications so that they run without interruption. The cloud offers durable, redundant, pre-configured, and distributed resources that can be accessed from a variety of devices, such as laptops, smartphones, PCs, and more. The sophistication of this infrastructure allows you to build heterogeneous and multi-layer architectures that can withstand failures that are caused by unanticipated configuration changes or natural disasters when they’re built right.
The high levels of real-time monitoring and reporting that cloud environments provide to guarantee service-level agreements (SLAs) are nearly impossible for traditional data centers to build without substantial costs. This characteristic makes it easy for businesses to build robust and resilient applications with resource guarantees.
Service-level agreement (SLA)
An SLA is a measurement parameter (often expressed as a percentage) that defines a cloud service’s expected performance and often serves as an agreement between the cloud service provider and the cloud consumer.
Note that cloud resiliency still requires businesses to build their critical systems with the right design, architecture, monitoring, orchestration, reporting, and governance to continue to run the businesses in the event of a disruption. However, with the cloud’s underlying infrastructure, you can assess, evaluate, plan, implement, and manage your critical workloads and drive resiliency for your businesses as per your recovery time objective (RTO) and recovery point objective (RPO).
What are RTO and RPO?
RTO is a business continuity metric that measures how long a given application can be down before the business can no longer withstand the damage; it covers the time spent restoring the application and its data.
RPO is a business continuity metric that measures the amount of data loss the business can withstand, expressed as the period between a critical event and the most recent backup.
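As a simple worked example of how these two metrics relate to a backup schedule, the timestamps below are entirely hypothetical.

from datetime import datetime

# A hypothetical worked example of RPO and RTO. All timestamps are illustrative.
last_backup   = datetime(2023, 7, 1, 4, 0)   # last successful backup
failure_time  = datetime(2023, 7, 1, 9, 30)  # critical event occurs
restored_time = datetime(2023, 7, 1, 11, 0)  # application back in service

rpo_actual = failure_time - last_backup      # data written in this window is lost
rto_actual = restored_time - failure_time    # downtime experienced by the business

print(f"Data loss window (RPO): {rpo_actual}")   # 5:30:00
print(f"Downtime (RTO): {rto_actual}")           # 1:30:00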
Cloud offerings, when used the right way with the proper security controls, can bring increased security to cloud consumers. The cloud service providers architect their infrastructures according to security standards and best practices to provide secure computing environments. The security-specific tools that are offered by the cloud service providers use controls to build their data center and network architectures, which are designed for high security and tightly restrict access to your data.
Cloud computing continues to play a key role in reducing global energy consumption rates. Cloud computing is becoming an increasingly popular option for replacing on-premises server rooms and closets, which often lack the operational practices to consume energy efficiently, causing environmental impacts. The cloud enables organizations to share resources globally, resulting in higher efficiency and resource utilization compared to small private organizations that depend on standalone data centers.
As environment and climate awareness grows around the world, cloud service providers (CSPs) are continuously embracing and building their core physical infrastructure assets, which feed off of renewable energy. As the consumption of renewable energy increases, the overall carbon intensity will steadily decrease, resulting in energy transitions that help with the global climate and clean energy challenges.
At the macro level, cloud data centers invest in newer, more efficient equipment to achieve extremely high virtualization ratios, which are less likely to occur in typical enterprise data centers. Improving the equipment’s power consumption and cooling characteristics is an ever-evolving exercise that also helps reduce carbon emissions.
Cost savings is one of the key motivators for companies that are thinking of moving to the cloud. The setup and maintenance costs are usually reduced significantly by implementing cloud apps and their infrastructure. Surveys on cost savings and driving factors indicate that companies could save up to 50% on IT costs and cut down on the in-house equipment and the ongoing costs of maintaining IT departments with growing capacity needs.
Let’s discuss a few factors that can drive cost savings:
Underlying hardware costs: You don’t need to invest in in-house equipment with cloud computing. This is a major cost cut for companies, which no longer have to worry about upfront expenses to acquire underlying hardware and build on-premises server rooms or data centers. You can maximize real estate and office space, which also cuts down on costs.
IT operation costs: You don’t have to invest in employing in-house staff to repair or replace equipment, as this responsibility shifts from you to the cloud vendor when you migrate to the cloud. This is a major shift from capital expenditure to operational expenditure. You can free up your staff and focus on diversifying your workforce, who can work from anywhere with an internet connection.
Hardware maintenance: Labor and maintenance costs are significant when it comes to building and maintaining an in-house data center. Ongoing upgrades or repairs are not your responsibility anymore, given that your data is stored offsite. This task falls to the vendors, reducing installation time from weeks or months to hours.

Moving to the cloud alone doesn’t maximize cost savings. You have to establish a cadence, or a routine, of monitoring cloud spending with the available tools, shutting down idle resources or rightsizing them to realize the greatest cost savings (a minimal sketch of this routine follows).
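Here is a minimal sketch of one such cost-hygiene routine: stopping running instances that are tagged for an off-hours schedule. The tag convention and region are assumptions for illustration, not a standard AWS feature.

import boto3

# A minimal sketch of a cost-hygiene routine: find running instances tagged with
# a hypothetical "Schedule: off-hours" convention and stop them outside work hours.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["off-hours"]},   # assumed tag convention
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

idle_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if idle_ids:
    ec2.stop_instances(InstanceIds=idle_ids)
    print(f"Stopped {len(idle_ids)} idle instance(s): {idle_ids}")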
With cloud computing, companies can become more business-focused than IT-focused and drive programs where the benefits matter the most. Some of these benefits are as follows:
Faster time to market: Cloud-native platforms offer end-to-end automation that enables you to release code into production any number of times per day. As a result, businesses can adopt and bring new business use cases to the market about 40% faster.
Accelerated innovation of business offerings: Many popular cloud service providers have hundreds of native services in domains such as networking, databases, compute, machine learning (ML), security, storage, artificial intelligence (AI), business analytics, and many more. These can serve almost any industry, especially automotive, advertising and marketing, consumer packaged goods, education, energy, financial services, game tech, government, healthcare, and life sciences. The cloud offers a wide range of options for you to build, deploy, and host any application, empowering companies to innovate rapidly.

In this section, we looked at why many businesses are moving to the cloud. We learned about the various factors that help them reduce IT operation costs and increase their business agility. Next, we’ll learn about some of the leading cloud service providers and how their infrastructure is configured.
When it comes to the on-demand availability and accessibility of cloud computing, CSPs offer these resources in many forms and sizes to businesses and individuals. Cloud consumers can rent access to any form of computing resources from applications to storage through these CSPs.
What is a CSP?
A CSP is a third party that offers on-demand cloud computing in the form of computing resources to other businesses or individuals without having them manage anything directly.
Some of the prominent cloud service providers across the worldwide cloud market are AWS, Microsoft Azure, Google Cloud, IBM Cloud, Alibaba Cloud, Salesforce, SAP, Rackspace Cloud, and VMware.
Let’s take a look at a few of these cloud service providers and see what their offerings look like.
Launched in 2006, AWS is a cloud service provider that aims to offer a platform that is highly reliable and scalable. Over the years, AWS has strived to provide services that span geographical regions across the world. With over 170 fully featured services, AWS is the world’s most comprehensive and broadly adopted cloud platform.
Its service offerings feature across technical categories such as compute, databases, infrastructure management, data management, migration, networking, application development, security, AI, ML, and more.
As of 2022, AWS cloud spans 26 geographic regions and 84 availability zones around the world:
Regions and availability zones in AWS
An AWS region is a physical location around the world where data centers are clustered.
Each group of logical data centers is called an AWS availability zone.
Figure 1.4 – Magic Quadrant for Cloud Infrastructure and Platform Services
The preceding diagram shows the Magic Quadrant for Cloud Infrastructure and Platform Services, published by Gartner in 2021.
Note
Each CSP has terminology to indicate the cloud regions for the consumer’s needs based on technical and regulatory considerations.
Launched in 2010, Microsoft Azure is one of the fastest-growing clouds and offers hundreds of services across categories such as AI, ML, analytics, blockchain, compute, databases, and more.
Azure’s global infrastructure is made up of two key components – physical infrastructure and connective network components. The physical component comprises 200+ physical data centers, arranged into regions, and linked by one of the largest interconnected networks on the planet (source: https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).
As of 2022, Azure consists of over 60 regions worldwide across 140 countries.
Regions and availability zones in Microsoft
A region is a set of data centers deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
Unique physical locations within a region are called availability zones. Each zone is made up of one or more data centers.
Launched in 2008, Google Cloud Platform (GCP) is a suite of over 100 products and services offered by Google. Its core service offerings include compute, networking, storage and databases, AI, big data, identity and security, and more.
As of 2022, Google Cloud spans over 28 cloud regions, 85 zones, and 146 network edge locations across 200+ countries and territories.
Regions and zones in Google
A region is a specific geographical location that hosts physical assets such as virtual machines, hard disk drives, and more.
Each region is a collection of zones that are isolated from each other within the region.
Founded in 2009, Alibaba Cloud offers a wide range of high-performance cloud products, including large-scale computing, networking, databases, storage, security, Internet of Things (IoT), media services, and more.
As of 2022, Alibaba Cloud operates around the world with over 78 availability zones in 24 regions.
Regions and zones in Alibaba
A region is a geographic area where a data center resides.
A zone is a physical area with independent power grids and networks in a region. The network latency for access between instances within the same zone is shorter.
In this section, we looked at some of the popular companies that are managing cloud computing through their cloud technology offerings. Next, we will provide an overview of the cloud service models and discuss the level of management each model provides.
As you navigate your path to the cloud, there are key decisions that you must make that revolve around how much you want to manage yourself and how much you want your service provider to manage. These cloud service models can be put into three categories that match your current needs so that you’re prepared for the future:
Infrastructure as a Service (IaaS): IaaS is a service model that offers consumers on-demand access to virtualized compute, storage, and networking.
Platform as a Service (PaaS): PaaS is a service model that offers consumers on-demand access to a ready-to-use cloud-native platform for developing, running, hosting, managing, and maintaining applications.
Software as a Service (SaaS): SaaS is a service model that offers consumers on-demand access to ready-to-use software for cloud-hosted applications.

The following diagram shows what you manage for each type of model:
Figure 1.5 – Cloud models
Let’s discuss each of these in more detail.
Infrastructure services enable companies to acquire resources on-demand and as-needed. This gives users cloud-based alternatives instead of them having to buy the required hardware, which is often expensive and labor-intensive. The main offerings include computing resources such as storage, servers, and networking.
The features of IaaS are as follows:
Highly flexible and highly scalable
On-demand offerings that can be accessed via the internet
Highly redundant as data lives in the cloud
Zero management is needed for the virtualization tasks
Cost-effective
Ease of use
As shown in the following diagram, IaaS is often called an everything-as-a-service business model:
Figure 1.6 – The IaaS model
IaaS is suitable for companies of any size and complexity. However, there are some use cases where companies can find IaaS more beneficial:
High-performance computing: Performing groundbreaking complex calculations for batch processing workloads, media transcoding, scientific modeling, and gaming requires a high-performance computing architecture with clustered compute servers and data storage. IaaS can be leveraged to take advantage of its rapid scalability and support for networked compute resources.
Disaster recovery and backup solutions: Building a disaster recovery plan on-premises involves a complex infrastructure that requires fixed capital expenses. With IaaS, this can be achieved in a few steps; all you need to do is set up the required infrastructure services for disaster recovery and backup solutions.
Real-time data analytics: The ever-increasing requirement of applications to analyze data in real time requires decisions to be made in seconds. Collecting real-time data and processing it can be a time-consuming and expensive development endeavor. IaaS can be used to manage, store, and analyze big data and handle large workloads while easily incorporating business intelligence tools. Getting business insights out of this raw data and predicting trends can then be done effectively.
You should consider the following factors if you wish to choose an IaaS provider:
Security: Protecting sensitive data, standardizing identity management procedures, and evaluating compliance standards are some of the security procedures that can dramatically impact your security posture when you’re using an IaaS model. It’s important to make sure that the IaaS provider is protected against security risks.
Pricing model: In addition to the initial expenses, make sure that you understand your IaaS provider’s pricing structure and the different monitoring tools and mechanisms they provide for monitoring and tracking your spending. Sometimes, the initial pricing may convince you to migrate from your on-premises infrastructure, but laying out a long-term plan with expected savings will enable you to plan for resource provisioning effectively.
SLA and support process: Knowing your vendor’s SLAs to ensure any infrastructure issues are resolved promptly is crucial for your business to run without interruptions. Understanding the level of support they provide once you become a paying customer is equally important.
Integration capabilities: When you’re migrating to an IaaS model, it is important to understand how your current workflows can be incorporated into the cloud without major customizations. Without proper integration, your products may suffer from additional development and administrative effort, which often translates into higher costs and application support.
Latency requirements: Analyzing which IaaS provider will give your customers the lowest latency is also important (a rough measurement sketch follows this list). Also, if your data must be located physically in a particular country, make sure that the IaaS provider has facilities in that country. The following questions can help you address this:
Where are this IaaS provider’s closest data centers?
How many data centers are in the region I’m interested in?
Is there a region/data center/facility in the country that my data needs to live in?

In summary, IaaS represents general-purpose compute resources to support customer-facing websites and web applications, or customers that are heavy on data, analytics, and warehousing. IaaS supports a diverse set of workloads, and in later chapters, we will explore the emerging compute models that are positioned for modern application architectures such as microservices.
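Before moving on to platform services, here is a rough sketch of the kind of latency probing mentioned above: timing a TCP handshake against a few regional endpoints. The hostnames shown are AWS EC2 regional API endpoints used purely as reachable hosts, and the choice of regions is an assumption for illustration.

import socket
import time

# A rough sketch of probing network latency to candidate regions by timing a
# TCP handshake against each region's public endpoint.
ENDPOINTS = {
    "us-east-1": "ec2.us-east-1.amazonaws.com",
    "eu-west-1": "ec2.eu-west-1.amazonaws.com",
    "ap-south-1": "ec2.ap-south-1.amazonaws.com",
}

for region, host in ENDPOINTS.items():
    start = time.perf_counter()
    with socket.create_connection((host, 443), timeout=5):
        pass  # connection established; we only care about the handshake time
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{region}: {elapsed_ms:.0f} ms")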
Platform services enable developers to build applications by providing hardware and software tools that can be accessed from the internet. Businesses have the freedom to incorporate special software components while they are designing and creating applications. The cloud’s inherent characteristics enable these components to be highly scalable and available. PaaS provides application life cycle management tools and integrated development environments (IDEs) to help you select the best for your needs.
An important characteristic of PaaS is that it lets you manage how different tenants are isolated. So, if the load on one tenant becomes high, the demand is distributed to the right instances of the applications. This function enables high scaling and availability. Developers can build applications anywhere in the world and don’t have to worry about operating systems, storage, the underlying infrastructure, or software updates:
Figure 1.7 – The PaaS model
The features of PaaS are as follows:
Developer-friendly
Built on virtualization technology
Scalable and highly available
Quicker churn in coding and testing
Pluggable customizations
Available to multiple developers at the same time
Easy integration of web services and databases
Lower capital commitment
You can manage applications easily
Reusability

With PaaS, you get support for application development, operating software, deploying to IaaS infrastructure automatically, handling runbook scenarios automatically, and bringing many improvements, including monitoring your end-to-end applications. Let’s look at why companies usually implement PaaS:
Many forward-thinking companies, including large businesses, small start-ups, and everything in between, want to build projects and create an open source-like world where everybody inside the company has access to the code of all the other projects for reuse. They hope to use common code and services to increase their productivity and innovation.
With PaaS, developer efficiency can be tremendously enhanced by leveraging common tools so that they can realize the benefits of the open source technology approach.
A key success criterion is moving as many teams and projects to the new PaaS as quickly as possible. This ensures that there’s sufficient mass within the company to create the desired innovation and shared development benefits such as the following:
Cut down on costs: Many companies invest large amounts of capital to run existing legacy applications where