Hone your skills in AWS, Azure, and Google cloud computing and boost your career as a cloud engineer
David Santana
BIRMINGHAM—MUMBAI
Copyright © 2023 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Mohd Riyan Khan
Publishing Product Manager: Surbhi Suman
Senior Editor: Athikho Sapuni Rishana
Technical Editor: Arjun Varma
Copy Editor: Safis Editing
Project Coordinator: Ashwin Kharwa
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Production Designer: Alishon Mendonca
Marketing Coordinator: Gaurav Christian
Senior Marketing Coordinator: Nimisha Dua
First published: March 2023
Production reference: 1060323
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80324-331-3
www.packtpub.com
To my mother, Lucy, and to the memory of my father, Luis, for their sacrifices and for exemplifying perseverance in the face of all obstacles. And to my family and children, thank you for motivating me to dream and grind to make this and many more ambitions a reality. This work has been the culmination of decades of service to aspiring professionals, and of gratitude for educating and working alongside soldiers and veterans of the United States armed forces and for helping veteran services such as JVS SOCAL and DAV. Finally, to my mentor Richard Luckett, who inspired me to do great things and to continue moving forward toward the noble pursuit of helping humanity achieve greatness through education!
Thank you, God, for this blessing.
– David Santana
As someone rightly said, “Teachers are born and not made.” I absolutely agree with that when I think of David. His breadth and depth of knowledge never fail to delight me. I have had the pleasure of knowing David through our collaboration on workshops for nearly five years now, and from the start I have admired his passion for technology and teaching, and his sharing of a plethora of expertise and insights with the world.
If you’re reading this, you’re likely interested in learning more about cloud computing, a technology that has revolutionized the way we store, process, and access data. As an aspiring professional, you understand the importance of staying up to date with the latest developments in your field, and mastering the fundamentals of cloud computing is a crucial step in that journey.
In this book, you will learn the fundamentals of cloud computing and how it differs from traditional computing models. You will explore the different types of cloud computing services and understand how they can be used to support various business needs. The book also covers key technologies and concepts, such as virtualization, containers, and cloud security.
Whether you’re a complete beginner or have some experience with cloud computing, this book has something for you. It starts with an introduction to the core concepts and principles of cloud computing, explaining what it is, how it works, and why it’s so important. From there, it dives into the different types of cloud computing, including public, private, and hybrid clouds, and explains the pros and cons of each.
As an entertaining presenter, active community contributor, passionate advocate, and military veteran, David imparts the knowledge, experience, and discipline gained through this period of progressive innovation. With easy-to-understand yet comprehensive descriptions, step-by-step instructions, screenshots, snippets, real-world examples, and links to additional sources of information, Cloud Computing Demystified for Aspiring Professionals facilitates the enhancement of skills that will enable successful careers in cloud engineering and beyond.
In today’s fast-paced and competitive business environment, having a strong understanding of cloud computing is crucial for success. With Cloud Computing Demystified for Aspiring Professionals, you can gain the knowledge and skills you need to thrive in this exciting field. Having read it myself, I hope this book reaches as many learners as possible and lets the world know about David and his expertise.
Chief Operating Officer, Spektra Systems
David Santana is a multi-cloud engineer, certified trainer for Amazon, Microsoft, and Google, course developer, and cloud computing evangelist. His vast experience in cloud software engineering, data science, and managing training events has made him the Fast Lane US cloud director and lead subject matter expert. He has over 20 years of experience managing training events and developing and implementing application workloads, along with B2B consulting and leadership experience. He has supported Microsoft, Amazon, and Google cloud business partners such as Deloitte, Accenture, Humana, and ABB, as well as public government agencies such as CJIS, the Dept of State, the DoD, the DIA, and naval veteran services. He has authored other published works, such as Azure Resources for AWS Architects, which shows IT professionals how to adopt Microsoft and Amazon services, including using AWS and Azure infrastructure as code tools.
Amogh Raghunath is a software engineer with 5+ years of experience building data platforms and implementing data pipelines on AWS and Azure. He is currently working at Amazon (Audible group) as part of the data capture scrum team, helping build data solutions for data science, marketing, and BI teams. He has a master’s degree in data science, with a focus on data engineering, from Worcester Polytechnic Institute in Massachusetts.
If you want to upskill yourself in cloud computing domains to thrive in the IT industry, then you’ve come to the right place. Cloud Computing Demystified for Aspiring Professionals helps you to master the cloud computing essentials and the important technologies offered by cloud service providers that are needed to succeed in a cloud-centric job role.
This book begins with an overview of the transformation from traditional to modern-day cloud computing infrastructure, and various types and models of cloud computing. You’ll learn how to implement secure virtual networks, virtual machines, and data warehouse resources including data lake services used in big data analytics — as well as when to use SQL and NoSQL databases and how to build microservices using multi-cloud Kubernetes services across AWS, Microsoft Azure, and Google Cloud. You’ll also get step-by-step demonstrations of infrastructure, platform, and software cloud services and optimization recommendations derived from certified industry experts using hands-on tutorials, self-assessment questions, and real-world case studies.
By the end of this book, you’ll be ready to successfully implement standardized cloud computing concepts, services, and best practices in your workplace.
The book is mainly intended for those aspiring to become cloud engineers. Novice cloud practitioners, fresh college graduates, IT enthusiasts, and anyone looking to get into cloud computing or transform their career with a cloud engineering role in any industry will also benefit from this book. A basic understanding of networking concepts such as IP addressing, client and server devices, and communication protocols and experience in any programming language will be helpful for reading this book.
Chapter 1, Introduction to Cloud Computing, leads you through a journey from traditional infrastructure to the rise of cloud computing, and also describes in great detail what cloud computing is and its various advantages over traditional technology infrastructures.
Chapter 2, Unveiling the Cloud, demystifies the cloud by describing in detail the underlying technology that comprises cloud computing services. Here, you will learn how these technologies support core services such as compute, storage, and containers.
Chapter 3, Understanding the Benefits of Public Clouds (AWS, Azure, and GCP), helps you understand the benefits of the Azure, AWS, and GCP cloud infrastructure. Here, you will learn about their worldwide infrastructure presence, service availability, cloud scaling capability, built-in resilience, and how adhering to well-architected frameworks optimizes overall operational costs.
Chapter 4, Developing Infrastructure Services Using Public Cloud Providers (IaaS), takes you through how infrastructure as a service (IaaS) solutions are implemented. You will learn how to architect, deploy, and manage networking components, compute services, and storage resources, and you will gain the knowledge required to maintain IaaS workloads throughout their life cycle by understanding the responsibility you share with the cloud provider.
Chapter 5, Developing Platform Services Using Public Cloud Providers (PaaS), takes you through how platform as a service (PaaS) solutions are implemented. You will learn how to architect, configure, and manage core application services, serverless resources, object-level storage services, and database resource types, and you will gain the knowledge required to maintain PaaS workloads throughout their life cycle by understanding the responsibility you share with the cloud provider.
Chapter 6, Utilizing Turnkey Software Solutions (SaaS), takes you through how software as a service (SaaS) solutions are implemented. You will learn how to configure and utilize at a high level core Microsoft Office 365, Amazon WorkDocs, and Google Docs services. You will also learn about the role of the SaaS marketplace, and you will gain the knowledge required to maintain SaaS workloads throughout their life cycle by understanding the responsibility you share with the cloud provider.
Chapter 7, Implementing Virtual Network Resources for Security, takes you through implementing various fundamental networking services. You will also learn how to set up a public load balancer and a site-to-site (hybrid) virtual private network, and you will reinforce concepts and configuration procedures by completing review questions.
Chapter 8, Launching Compute Service Resources for Scalability, takes you through implementing various fundamental compute services. You will also learn how to set up a virtual machine, web application services, container services, and serverless function services. You will reinforce concepts and configuration procedures by completing review questions.
Chapter 9, Configuring Storage Resources for Resiliency, takes you through implementing various fundamental storage services. You will also learn how to set up object-level storage services, file-sharing services, key-value storage services, and message-queueing services. You will reinforce concepts and configuration procedures by completing review questions.
Chapter 10, Developing Database Services for APIs, takes you through utilizing key database services, including how to create relational databases and non-relational database resources. You will reinforce your learning by completing review questions.
Chapter 11, Building Data Warehouse Services for Scalability, takes you through building instrumental data warehouse databases and data lake storage resources. You will reinforce concepts and configuration procedures by completing review questions.
Chapter 12, Implementing Native Cyber Security Controls for Protection, takes you through implementing native-cloud cyber security features. You will also learn how to configure built-in database, storage, compute, and network security features, and you’ll learn about the concepts of defense-in-depth while exploring these capabilities.
Chapter 13, Managing API Tools for Agility, takes you through configuring native fundamental cloud management API tools. You will learn how to manage resources utilizing web-based portals and interfaces and web-based CLIs, and you will learn how to use cloud-native infrastructure as code tools to efficiently develop IaaS and PaaS resources.
Chapter 14, Accelerating the Continuous Learning Journey, takes you through utilizing supplemental learning resources to successfully master cloud computing. You will learn about online learning communities and self-paced, live instructor-led, and mentorship resources.
Chapter 15, Driving Growth, and the Future of the Cloud, the final chapter of the book, explores the significance of certifications, role requirements, examination preparation resources, and best practice testing strategies, which inevitably will lead you to a milestone in your successful journey in cloud computing.
You should have a basic understanding of networking and computer concepts, including networking topologies, IP addressing, routing, and DNS, and basic experience with command-line and programming languages such as Python and C#.
Software/hardware covered in the book: AWS, Microsoft Azure, and Google Cloud Platform
Operating system requirements: Windows, macOS, or Linux
If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Cloud-Computing-Demystified-for-Aspiring-Professionals. If there’s an update to the code, it will be updated in the GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/rmL2p.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system.”
Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the Administration panel.”
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you’ve read Cloud Computing Demystified for Aspiring Professionals, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.
Thanks for purchasing this book!
Do you like to read on the go but are unable to carry your print books everywhere?
Is your eBook purchase not compatible with the device of your choice?
Don’t worry! Now with every Packt book you get a DRM-free PDF version of that book at no cost.
Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.
The perks don’t stop there. You can get exclusive access to discounts, newsletters, and great free content in your inbox daily.
Follow these simple steps to get the benefits:
Scan the QR code or visit the link: https://packt.link/free-ebook/9781803243313
Submit your proof of purchase.

That’s it! We’ll send your free PDF and other benefits to your email directly.

This part provides an introduction to cloud computing, its foundation, and architecture.
This part comprises the following chapters:
Chapter 1, Introduction to Cloud Computing
Chapter 2, Unveiling the Cloud
Chapter 3, Understanding the Benefits of Public Clouds (AWS, Azure, and GCP)

As organizations today continue to move their customer-facing services and internal line-of-business (LOB) applications to the cloud, it becomes imperative that IT professionals, developers, and enthusiasts understand the essentials and tenets that formed cloud computing. Practitioners should understand and explore the core advantages ingrained in cloud computing that empower users of any skill level to confidently and optimally deploy, configure, and manage cloud-hosted applications, modern services, and core infrastructure resources.
This chapter will lead you through a historic journey from traditional infrastructures to the rise of cloud computing as we know it and then describe in detail cloud computing and its various advantages over traditional technology infrastructures.
In this chapter, we’re going to cover the following topics:
Genesis
Monolithic on-premises technology
The advent of cloud computing
Cloud computing explored
Advantages of cloud computing

The fact that you’re reading this implies you know that cloud computing is not some fad on its way out, to be discarded to the annals of time. It’s also an excellent modern technology to master, however improbable that may seem given its ambiguity and cosmic scale. I may be embellishing, but this is not a far cry from the undoubtable truth, which is that there are no cloud gurus to speak of in the literal sense. It’s an exciting, cutting-edge, service-oriented technology that will expand your existing capabilities, and successfully mastering cloud computing through attaining industry-standard certifications subsequently leads to sustainable careers with the promise of future growth in some of the most prestigious organizations in the private and public sectors. I speak from experience, and when I tell you it’s feasible, it is, because, like you, I too was seeking but not finding, and my hope is that you find your niche in modern technology by embracing the cloud.
In this section, you’ll learn about the history of key technology being used in cloud computing, and you will also learn how these technologies are derived from the traditional mainframes used in data centers. Lastly, you will come to understand a technology architecture pattern known as distributed systems.
I will describe Advanced Research Projects Agency Network (ARPANET), Multics, and mainframes, and introduce virtualization.
“Ever since the dawn of civilization, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world. Today we still yearn to know why we are here and where we came from. Humanity’s deepest desire for knowledge is justification enough for our continuing quest. And our goal is nothing less than a complete description of the universe we live in.” (Stephen Hawking)
Tomorrow’s technological wonders arose from the turbulent ’60s, arguably an era of counter-cultural beginnings. Through this chaos, humanity rose to accomplish impossible feats, such as the US and Soviet Union’s Space Race. There were also developments that may not have been as significant to the public, such as the discovery of a rapidly spinning neutron star, the automotive industry’s contribution to acceleration in the literal sense with vehicles such as the Ford Mustang, and the emergence of nuanced vehicle size classes, including compact, mid-sized, and full-sized.
The very first computers were connected in 1969, and this was only possible due to the beginnings of the Advanced Research Projects Agency Network (ARPANET) in 1966, the project that would go on to implement the TCP/IP protocol suite. Out of the project rose many innovations, such as remote login, file transfer, and email. The project continued to evolve over the years; internetworking research in the ’70s cultivated later versions of the Transmission Control Program, and these later TCP/IP versions were then adopted in 1983 by the Department of Defense.
Scientific communities in the early 1980s invested in supercomputing mainframes and supported network access and interconnectivity. In later years, interconnectivity flourished to develop what is known today as the internet.
Practical concepts for remote job entry were realized due to 1960s market demand for time-sharing solutions, led by vendors such as IBM. GE, along with Honeywell, RCA, and IBM, was another major computer company with a line of general-purpose computers originally designed for batch processing. Later, this extended into full time-sharing solutions developed by GE, Bell Laboratories, and MIT; the finished product was the Multics operating system, which ran on a mainframe computer. The facilities housing these mainframes became known as data centers, where users submitted jobs for operators to manage, a model that became prevalent in the years that followed.
The mainframes were mammoth physical infrastructures installed in what was later coined the server room. It became practical for multiple users to access the same data and utilize the CPU’s power from a terminal (computer). This gave enterprise organizations a better return on their investment.
Following the early achievements of mainframes, corporations such as IBM developed virtual machines (VMs). VMs supported multiple virtual systems on a single physical node; in layman’s terms, multiple computer environments coexisting in the same physical environment. VM operating systems such as CP-40 and CP-67 paved the way for future virtualization technologies. Consequently, mainframes could run multiple applications and processes in parallel, making the hardware even more efficient and cost-effective.
IBM wasn’t the only corporation that developed and leveraged virtualization technologies. In 1986, Compaq introduced the Deskpro 386, built on Intel’s 80386 microprocessor. The 80386 included a form of platform virtualization, virtual 8086 mode, which Windows/386 used to support Microsoft Windows running on top of the MS-DOS operating system. Virtual 8086 mode supported multiple virtual 8086 tasks, improving multitasking performance.
The virtualization functionality of the years to come can be traced back to these earliest implementations. Virtual infrastructures can support guest operating systems, including VM memory, CPU cores, disk drives, input devices, output devices, and shared networking resources.
Telecommunication pioneers in the 1990s such as AT&T and Verizon, who previously marketed point-to-point data circuits, began offering virtual networking resources with a similar quality of service, but at a lower cost. Various telecommunication providers began utilizing cloud symbols to denote demarcation points between the provider and the users’ network infrastructure responsibilities.
As distributed computing became more mainstream between 2000 and 2013, organizations examined ways to make scaled computing accessible to more users through time-sharing, underpinned by virtual internetworking. Corporations such as Amazon, Microsoft, and Google offered on-demand self-service, broad network access, resource pooling, and rapid elasticity, whereby compute, storage, app, and networking resources can be provisioned and released rapidly utilizing virtualization technology.
The advantages of virtualization go far beyond what I have written here, but most notably they include reduced electronic equipment costs, resource pooling (sharing), multi-user VM administration, accelerated site-to-site internetworking implementation, and, moreover, decreased exorbitant operational maintenance costs. Virtualization is one of the pivotal technologies that catalyzed organizations such as Microsoft, Amazon, and Google to introduce cloud resources in a fully managed service-oriented architecture (SOA) that presently spans the globe.
These virtualization ecosystems, now known as cloud computing services, are delivering turnkey innovations such as Amazon’s e-commerce software as a service (SaaS) offering, Amazon Prime, specializing in distributing goods and services. The business benefits are staggering: agility, flexible costs, and rapid elasticity to support highly available access. General statistics show that millions of consumers rely on Amazon services to facilitate daily shopping needs, especially during the holidays. Amazon has even surpassed retail giants such as Walmart as the world’s largest online public retailer. These facts conclusively support the benefits of services delivered using virtualization infrastructure resources. As the age-old adage proclaims, less is more!
In this section, you will learn about the core traditional data center resources. Moreover, I will describe data centers by type, maintenance, compliance, implementation, business continuity (BC), cost, energy, environmental controls, and administration.
To understand the advent of the cloud, we need to address the elephant in the room: the traditional on-premises data center. The metaphor is figurative, but it captures the enormity of the current subject matter. I’ll elaborate on the term on-premises in the current context in a later section of this chapter. For now, let’s focus on traditional data center architecture.
In an unembellished but detailed description, data centers are physical facilities that host the networking infrastructure services that manage data resources. A data center is known to house thousands of servers, the successors of the mainframe, and data centers are composed of various resources, such as computers and networking devices, including the utilities and environmental control systems supporting the physical infrastructure.
All data centers, regardless of type, include networking infrastructure resources such as media (cable), repeaters, hubs, bridges, switches, and routers to connect computer clients to servers. Networking devices support internetworking connectivity with internal and external networks. Even the traditional data center supported remote connectivity and could implement networking topologies such as site-to-site connectivity, using an array of networking technologies that let customers and remote workers connect securely to the enterprise organization from outside the company’s private local area network.
Storage system resources were prevalent and consisted of infrastructure resources such as storage area network (SAN), network-attached storage (NAS), and direct-attached storage (DAS) devices. Regardless of the data structure, volume, velocity, and accessibility pattern, data was stored in one of these primitives. Later, I’ll elaborate on the variances to help you not only differentiate but better understand the advantages and disadvantages, which may ultimately drive organizations of all sizes to adopt modern cloud computing data services.
It’s important to note that legacy data center infrastructure hosted services on physical servers, which served as the compute infrastructure and were typically mounted on physical racks and occasionally installed in cabinets. Here are some important facts: statistically, data centers are located in office buildings or similar physical edifices, and they have either raised or overhead flooring that houses additional equipment, such as the electrical wiring, cabling, and environmental controls required to sustain the data center’s services.
Overall maintenance is another important factor to consider for traditional data centers. This includes administering and maintaining industry-standard regulatory and non-regulatory business best practices, which is a perennial expense. Planning, preparing, and deploying new applications, as well as existing LOB applications or services, using monolithic infrastructures typically incurs substantial costs that directly impact capital and operational expenditures. And innovation, experimentation, and deployment iteration, while plausible, are not cost-effective in monolithic environments, which delays, if not prevents, new services from reaching general availability. Decommissioning these hardware infrastructure resources is a process in itself, and nigh impossible for some organizations that have neither the internal talent nor the budget to complete the project successfully. This more often than not leads companies to try other solutions based on different data center implementations.
The content herein only scratches the surface of traditional on-premises infrastructures’ compliance considerations, such as business requirements and system maintenance concerns. But make no mistake—whether your company is small or large, regulatory compliance policies are very important. I highly recommend reviewing governance and compliance documentation for any technology. Similarly, later sections will elaborate on various compliance controls that organizations such as Amazon, Microsoft, and Google must adhere to. Organizations that implement and manage data centers, traditional or cloud, must adhere to compliance controls set forth by various governmental or non-governmental agencies. My apologies—I have digressed a little from my previous paragraph’s subject. So, let us continue our journey regarding different data center implementations.
Did you know that on-premises data centers come in various implementations? Enterprise data centers are the most common. They are owned and operated by the company for its internal users and clientele. Managed data centers are operated by third parties on behalf of the organization; companies typically lease the equipment instead of owning it. Some organizations rent space within a data center that is owned and operated by third-party service providers (SPs); these off-premises implementations are known as colocation data center models. Each implementation includes redundant data center operational infrastructure resources, such as physical or virtual servers, storage systems, uninterruptible power systems, on-site direct current systems, networking equipment, cooling systems, data center infrastructure management resources, and, commonly, a secondary data center for redundancy.
High availability (HA) and disaster recovery (DR) are other important factors to weigh up. Data center infrastructures are categorized into tiers, which is an efficient way to describe the HA (or lack of HA) of the infrastructure components being utilized at each data center. Believe it or not, some organizations do not require the HA that a tier 4 data center provides. Organizations run a risk if they do not plan carefully. For example, investing in only a tier 1 infrastructure might leave a business vulnerable, whereas deciding on a tier 4 infrastructure might mean over-investing, depending on budget constraints.
Let’s have a look at the various HA data center tiers:
Tier 1 data centers have a single power and cooling system with little, if any, redundancy. They have an expected uptime of 99.671% (28.8 hours of downtime annually).

Tier 2 data centers have a single power and cooling system with some redundancy. They have an expected uptime of 99.741% (22 hours of downtime annually).

Tier 3 data centers have multiple power and cooling systems with redundancy in place to update and maintain them without taking them offline. They have an expected uptime of 99.982% (1.6 hours of downtime annually).

Tier 4 data centers are built fault-tolerant and have redundant components. They have an expected uptime of 99.995% (26.3 minutes of downtime annually).

The total cost of ownership (TCO) may be too costly for some start-ups. Most enterprise organizations look to offload these costs using third-party vendors but eventually learn that the upfront capital expenses are too much to bear, so they continue investing in their own on-premises data centers. The public cloud provides advantages regarding capital expenditures and HA. We will discuss these topics in more detail in the section titled The advantages of cloud computing.
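The downtime figures quoted for each tier follow directly from the uptime percentages. Here is a quick sketch in Python that reproduces the arithmetic:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def annual_downtime_hours(uptime_percent):
    """Convert an uptime percentage into expected annual downtime in hours."""
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

# The standard tier uptime figures quoted above
for tier, uptime in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    hours = annual_downtime_hours(uptime)
    print(f"Tier {tier}: {uptime}% uptime -> "
          f"{hours:.1f} hours ({hours * 60:.0f} minutes) of downtime per year")
```

Note that tier 2 computes to roughly 22.7 hours; published figures commonly round this to 22 hours.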
What about utility costs? Data centers use various IT devices to provide services, and the electricity used by these IT devices consequently converts to heat, which must be removed from the data center by heating, ventilation, and air conditioning (HVAC) systems, which also use electricity.
Did you know that utilities and environmental control systems are other important items to consider when reviewing on-premises data center costs? On-premises data center infrastructure systems, typically containing tens of thousands of network devices, require an abundant amount of energy. Several case studies illustrate that traditional data centers use enough electricity to power an estimated 80,000 homes in the US.
These traditional on-premises data centers’ HVAC infrastructure systems are also sometimes inefficient: they must be capable of delivering both central and distributed cooling, which is costly given some buildings’ older architectural designs. Moreover, becoming more cost- and energy-efficient requires newer modular data center models with optimal cooling paths, which is not always feasible in older structures. Achieving optimal performance from your computing infrastructure requires a modern modular design that can support today’s ongoing business demands.
Managing a traditional data center requires employing large teams and supervisors of varying skill sets. Operations team members are responsible for the maintenance and upkeep of the infrastructure within a data center. Governing data center standards for networking, compute, and storage throughout an organization’s application life cycle may not be efficient due to the monolithic architectural design of most traditional data centers. If a company needs to scale, it has to invest in expanding its data center resources by procuring more hardware. Upgrading the on-premises hardware technology also requires multi-vendor support and sometimes even granting those third parties access to the data center, which poses numerous risks. These concerns would be far fewer if the overall quantity of physical servers in a traditional data center were proportional to the services rendered. That idea became a reality once virtualization became prevalent.
Let’s summarize: traditional data centers come in types, such as the enterprise data center implemented and managed by the company itself. Data center locations are physical buildings and can include offices and closets. Data centers incur a myriad of costs, some functional and others non-functional. Data center governance includes conforming to policies, laws, regulations, and standards. A data center’s architectural design may impact energy use and efficiency, and data center designs, by type, have an impact on our environment. A data center’s power and cooling system design requires consistent monitoring and optimization, which will consequently decrease emissions, energy consumption, and TCO. Finally, the quintessential traditional data center has a 1:1 ratio between physical servers and the services published by an organization. This method of implementation is to be expected with a monolithic architecture, and consequently it incurs substantial capital and operational expenditures that have a direct negative impact on an organization’s return on investment (ROI).
In this section, I will introduce, describe, and define virtualization types and vendors, and I will describe how virtualization is different from physical servers. Then, I will explore the distributed computing API architecture. I will also describe how demand has driven technology. Finally, I will define cloud computing models.
This section’s objectives are the following:
From physical to virtual

Virtualization contributions by vendor

Distributed computing APIs

Exponential growth

Cloud computing technology emerged from a multitude of innovations and computing requirements. This emergence included computer science technology advancements that leveraged the underpinnings of mainframe computing, which changed the way we do business. Let us not forget the fickle customer service-level expectations related to IT BC.
Mainframe system features and architecture topology are among the important legacy technologies that, through evolution and several joint ventures among various stakeholders, contributed to the advent of cloud computing.
As described in the Genesis section, CP-40 provided a VM environment. Mainframes such as IBM’s System/360 hosted CP-40, which supported multiple VM operating system instances and is arguably the very first hardware virtualization prototype.
Let us define virtualization first before we explain how Amazon, Microsoft, and Google use this underpinning to drive their ubiquitous services.
In the Genesis section, we saw how achievements in virtualization technology played an important role in the emergence of cloud computing. Understanding the intricacies of VMs, often referred to as server virtualization, is critical in the grand scheme of things.
Virtualization abstracts physical infrastructure resources so that one or more VMs, each running a guest operating system that may be the same as or different from the host’s, can run on a single physical computer. This approach was pioneered in the 1960s by IBM, which developed products such as CP-40 and CP-67, arguably the very first virtualization technologies. While virtualization is one of the key technologies in the advent of cloud computing, this book will not delve into virtualization implementations, such as hardware-assisted virtualization, paravirtualization, and operating system-level virtualization.
Over the years, many technology-driven companies have developed different virtualization offerings of varying types.
VMware is a technology company known for virtualization. VMware launched VMware Workstation in the late ’90s, pioneering virtualization software that allowed users to run one or more instances of x86 or x86-64 operating systems on a single personal device.
Xen is another well-known virtualization technology: an open-source hypervisor that supports multiple computer operating systems running on the same hardware concurrently.
Citrix is a virtualization technology company that offers several virtualization products, such as XenApp (application virtualization) and XenDesktop (desktop virtualization). There is even a product for Apple devices that hosts Microsoft Windows desktops virtually. Citrix also offers XenServer, which delivers server virtualization. Additionally, Citrix offers the NetScaler product suite: in particular, software-defined wide area networking (SD-WAN) and the NetScaler SDX and VPX networking appliances that support virtual networking.
Microsoft, known for its personal and business computer software, has contributed to virtualization as well. Microsoft’s App-V delivered application virtualization, and soon thereafter, Microsoft developed Hyper-V, which supports server virtualization.
There are many more organizations that, through acquisition or development, have contributed to modern advancements in various virtualization nuances that are the foundation of cloud computing wonders today. But I would be remiss if I didn’t elaborate on the ubiquitous cloud’s distributed nature—or, more accurately denoted, distributed computing architecture.
Distributed computing, also known as distributed systems, emerged in the 1960s, and one of its earliest successful implementations was the ARPANET email infrastructure. Distributed computing architectures are categorized as either loosely coupled or tightly coupled. Client-server architectures are the best known and were prevalent during the traditional mainframe era. N-tier, or three-tier, architectures provide many of the characteristics of today’s modern cloud computing service architectures: in particular, sending message requests to middle-tier services that queue requests for other consuming services. For example, in a three-tier web, application, and database server architecture, the application server, or an application queue-like service, acts as the middle tier, queueing input messages for other distributed programs to consume (input) and, if required, send (output). Another distributed computing architecture is peer-to-peer (P2P), where all clients are peers that can provide either client or server functionality. Each peer or service communicates asynchronously, contains local memory, and can act autonomously. Distributed system architectures deliver cost efficiency and increased reliability, in part because distributed systems can utilize low-end hardware. Cloud computing SPs offer loosely coupled distributed services, delivering cost-efficient infrastructure resources as a service. The top three cloud computing providers are decreasing, if not eliminating, single points of failure (SPOFs), consequently providing highly available resources in a service-oriented architecture (SOA). These characteristics are derived from distributed computing.
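The middle-tier queueing pattern described above can be sketched with Python’s standard library. The tier names and messages here are purely illustrative, not taken from any particular product:

```python
import queue
import threading

# Middle tier: a queue that decouples the front tier from the back tier.
request_queue = queue.Queue()
processed = []

def web_tier(requests):
    """Front tier: accepts client requests and hands them to the middle tier."""
    for request in requests:
        request_queue.put(request)

def data_tier_worker():
    """Back tier: asynchronously consumes queued messages."""
    while True:
        request = request_queue.get()
        if request is None:  # sentinel value: no more work
            break
        processed.append(f"stored:{request}")

worker = threading.Thread(target=data_tier_worker)
worker.start()
web_tier(["order-1", "order-2"])
request_queue.put(None)  # signal the worker to stop
worker.join()
print(processed)
```

The queue lets the front tier return immediately while the back tier consumes messages at its own pace, which is the loose coupling that cloud services inherit from distributed computing.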
The rise of cloud computing is also arguably due to the exponential growth of the IT industry. This has a direct correlation with HA, scalability, and BC in the event of planned or unplanned failures. This growth also resulted in mass increases in energy consumption.
Traditional IT computing infrastructures must procure their own hardware as a capital expense. Additionally, they incur operating expenses, which include maintaining the computer operating systems and the costs of the people who run them. Here is something to ponder: both variable operational costs and fixed capital investments are to be expected. Fixed, or capital, costs are paid upfront, and the cost per user can be lowered by increasing the number of users; operational costs, however, may increase quickly as the user base grows. Consequently, the total cost rises rapidly with the number of users. Modern IT computing infrastructures such as the cloud offer a pay-per-use model, which gives cloud computing engineers and architects a degree of control over operational expenditures that is not feasible in a traditional data center.
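A back-of-the-envelope model makes the cost behaviour above concrete. The dollar figures here are entirely made up for illustration; only the shape of the two curves matters:

```python
def on_prem_total_cost(users, fixed_capex=500_000, opex_per_user=120):
    """Traditional model: a large upfront capital cost plus per-user operations."""
    return fixed_capex + users * opex_per_user

def cloud_total_cost(users, pay_per_use_per_user=150):
    """Pay-per-use model: no upfront capital; cost simply tracks consumption."""
    return users * pay_per_use_per_user

for users in (100, 1_000, 10_000):
    print(f"{users:>6} users: on-prem ${on_prem_total_cost(users):>9,} "
          f"vs pay-per-use ${cloud_total_cost(users):>9,}")
```

The fixed capital cost dominates at low user counts, while the pay-per-use model starts near zero and grows only with actual consumption, which is exactly the control over expenditure described above.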
Meeting the demands of HA and scalability becomes more and more important as the IT industry grows. Enterprise data centers, which are operated and managed by the corporation’s IT department, are known to procure expensive brand-name hardware and networking devices due to their traditional implementation and vendor familiarity. Cloud architectures, however, are built with commodity hardware and network devices. Amazon, Microsoft, and Google choose low-cost disks and Ethernet to build their modular data centers. Cloud designs emphasize the performance/price ratio rather than performance alone.
As the number of global internet users continues to rise, so too has the demand for data center services, giving rise to concerns regarding growing data center energy utilization. The quantity of data traversing the internet has increased exponentially, while global data center storage capacity has increased by several factors.
These growth trends are expected to continue as the world consumes more and more data. In fact, energy consumption is one of the main contributors to on-premises capital and operational expenses.
Inevitably, this leads to rising concern over electricity utilization and, consequently, over environmental issues such as carbon dioxide (CO2) emissions. Knowing the electricity use of data centers provides a useful benchmark for testing theories about the CO2 implications of data center services.
The cost of the energy consumed by IT devices has both environmental and economic impacts. Industrialized countries such as the US consume more energy than non-industrialized ones. The IT industry is essential to the global economy and plays a role in every sector and industry. Its ubiquity will no doubt continue to increase demand, which makes it important that we consider designing eco-friendly infrastructure architectures.
On-premises data centers, which are also referred to as enterprise data center types, require IT to handle and manage everything, including purchasing and installing the hardware, virtualization, operating system, and applications, and setting up the network, network firewall devices, and secure data storage. Furthermore, IT is responsible for maintaining the infrastructure hardware throughout an LOB app’s life cycle. This imposes both significant upfront costs for the hardware and ongoing data center operating costs for tasks such as patching. Don’t forget: you also pay for these resources regardless of utilization.
Cloud computing provides an alternative to the on-premises data center. Amazon, Microsoft, and Google are responsible for hardware procurement and overall maintenance costs and provide a variety of services you can use. By leasing whatever hardware capacity and services your LOB application needs, only when required, you convert what had been a fixed capital expense into an operational expense. This allows the cloud computing engineer to lease hardware capacity and deliver modern software services that would be too expensive to purchase traditionally.
In this section, you will learn about the cloud computing concepts derived from the National Institute of Standards and Technology (NIST). Then I will describe the cloud computing models used by cloud computing providers today. Additionally, I will describe cloud computing deployment models.
Cloud computing plays an increasingly important role in IT. Therefore, as an IT professional, you must be cognizant of the fundamental cloud principles and methods. There are three main cloud computing deployment models: public, private, and hybrid. Each provides a range of services but implements the resources differently. Equally important to consider are the cloud computing service models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). They are the core tenets from which cloud computing is defined.
By now, you should understand the historic journey that led us to cloud computing. However, if you are still unsure, I highly recommend you revisit the section titled Genesis and then correlate the researched data with the section titled The advent of cloud computing. Nevertheless, the cloud is the culmination of evolutionary technological advancements in human history.
To understand cloud services, we first refer to standards.
There are undoubtedly many organizations that develop standards built to ensure we measure, innovate, and lead following industry best practices to reach an overarching goal, and that is to improve our quality of life.
These standard entities may be deemed regulatory or non-regulatory. For simplicity’s sake, regulatory organizations such as the International Energy Agency (IEA) are appointed by international legislation to devise energy requirement standards, and entities such as NIST are non-regulatory because they define supplemental standards that are not official rules enforced by some regulation delegated through legislation. However, in cloud computing, NIST is the gold standard.
NIST proclaims the following regarding cloud computing services:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
The cloud computing models, which we will define in greater detail in later sections, are derived from NIST standards, which you can review at your leisure at http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.
Cloud computing provides a modern alternative to the traditional on-premises data center. Public cloud providers such as Amazon, Microsoft, and Google are responsible for hardware procurement and continual maintenance, and the public cloud provides on-demand resources. By renting hardware to support their software only when needed, organizations can convert what had been an upfront capital expenditure for hardware into an operating expense. This allows you to rent resources that would traditionally be too costly for some companies, and you pay only when the resources are being utilized.
Cloud computing typically provides an online portal experience, making it user-friendly for the administrators responsible for managing compute, storage, networking, and other resources. For example, an administrator can quickly define a VM by its compute size, which includes capacity settings such as virtual CPU core quantity, amount of RAM, disk size, and disk performance, along with an operating system image such as Linux, any preconfigured software, and the virtual network configuration. The administrator can then deploy a VM with that configuration anywhere in the world and, within several minutes, securely access the deployed compute instance to perform role-based IT pro or developer tasks. This illustrates the rapid deployment capability of cloud computing defined by NIST.
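The VM definition just described maps naturally onto a declarative configuration. The structure below is a generic illustration of the settings involved, not any provider’s actual API; all the field names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VMConfig:
    """Capacity and placement settings an administrator defines up front."""
    name: str
    vcpu_cores: int
    ram_gib: int
    disk_gib: int
    os_image: str
    region: str
    virtual_network: str

# The same definition can be submitted for deployment to any supported region.
web_vm = VMConfig(
    name="web-01",
    vcpu_cores=4,
    ram_gib=16,
    disk_gib=128,
    os_image="linux",
    region="us-east",
    virtual_network="vnet-prod",
)
print(web_vm)
```

Because the whole definition lives in one declarative object, redeploying the same VM to another region is a matter of changing a single field rather than re-racking hardware.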
Cloud computing supports various deployment options as well, such as public, private, and hybrid cloud. These options are known as cloud computing deployment models, not to be confused with IaaS, PaaS, and SaaS cloud computing models.
You will have a general understanding of the public cloud deployment model once you have completed reading this book. So, let me take a moment to elaborate on private and hybrid. In a private cloud, your organization creates a cloud environment in your on-premises data center and provides engineers in your organization with access to private resources. This deployment model offers services similar to the public cloud but exclusively for your users; however, your organization remains responsible for procuring the hardware infrastructure and for the ongoing maintenance of the software services and hardware. In a hybrid cloud deployment model, enterprise organizations integrate the public and private cloud, permitting you to host workloads in whichever cloud computing deployment model meets your current business requirements. For example, your organization can host highly available website services in the public cloud and then connect them to a non-relational database managed in your private cloud.
Planning, preparing, and implementing a cloud service model is as imperative as deciding whether to remain utilizing traditional systems built using monolithic architecture topology or choose an all-in cloud approach. From a consumer’s point of view, the myriad resources that cloud computing providers such as Amazon, Microsoft, and Google provide are daunting to the untrained eye. Thankfully, Amazon, Microsoft, and Google organize their distributed services into three major categories, referred to as cloud computing models.
One of the first cloud computing models is known as IaaS. In this model, the customer pays the cloud service provider (CSP) to host VMs in the public cloud. Customers are responsible for managing the VM guest operating system, including hosted services or applications. This cloud computing model offers the customer complete control and flexibility.
The second cloud computing model is known as PaaS. In this cloud computing model, customers are responsible for the deployment, configuration, and management of applications in an agile manner using the cloud platform. The CSP manages the application runtime environment and is responsible for managing and maintaining the platform’s underlying VM guest operating system.
Another widely utilized cloud computing model is known as SaaS. In this model, clients utilize turnkey online software services such as storage, or email software managed by the cloud computing provider. Customers access cross-platform installable and online apps. These products are typically pay-as-you-go.
Cloud computing providers Amazon, Microsoft, and Google offer all three cloud computing models: IaaS, PaaS, and SaaS. These services are made available as consumption-based offerings. The cloud computing service models form three pillars on top of which cloud computing resources are administered.
All three service models allow the cloud computing engineer to access the services over the internet. The service models are supported by the global infrastructure of the CSP. Every service includes a service-level agreement (SLA) between the provider and the user. The SLA is addressed in terms of the services’ availability, performance, and general security controls.
To help you better understand these service models, I’ll describe in detail IaaS and PaaS enterprise implementation by sharing real-world examples in later chapters from Amazon, Microsoft, and Google.
All three major cloud providers’ solutions are built on virtualization technology, which abstracts physical hardware as a layer of virtualized resources for networking, storage, and processing. Amazon, Microsoft, and Google add further abstraction layers to define the specific services that you can provision and manage. Regardless of the unique technology each of these organizations uses to implement its cloud computing solutions, the commonly observed characteristics remain the same: on-demand self-service, broad network access, shared resource pools, rapid elasticity, and metering capabilities, which allow enterprise organizations to track resource utilization at cloud scale.
Cloud computing resources are built in data centers that are commonly owned and operated by the cloud provider. The cloud core platform includes storage area networks (SANs), database systems, firewalls, and security devices. APIs enable programmatic access to the underlying resources. Monitoring and metering are used to track the utilization and performance of dynamically provisioned resources.
The cloud platforms handle resource management and maintenance automatically. Internal services detect each node or server joining or leaving and redistribute tasks accordingly. Cloud computing providers such as Amazon, Microsoft, and Google have built many economically efficient, eco-friendly data centers all over the world, each of which can house tens of thousands of servers.
Here is a layered example of the cloud computing architecture: infrastructure, platform, and application. These layers are implemented with virtualization and provisioned in adherence to each cloud provider’s well-architected framework, which will be explored in a later section for each cloud provider. The infrastructure layer is implemented first to support IaaS resources. This infrastructure layer serves as the foundation to build PaaS resources. In turn, the platform layer is a foundation to implement the application layer for the SaaS.
This begs the question: What are the benefits of said services?
In this section, you will learn to describe the advantages of cloud computing architecture. I will describe the benefits of trading capital expense for variable expenses, cloud economics, capacity planning, optimized agility, improved focus, and leveraging global resources in comparison to the traditional architecture, and will review and define HA and BC.
Cloud computing offers many advantages in comparison to traditional on-premises data centers. Let us review some of the key advantages.
Organizations generally consider moving their workloads to the cloud because of the expense advantages. Instead of having to invest in data centers and servers before knowing how they will be used, organizations pay only when they consume cloud computing resources, and only for how much they consume. This expense advantage allows any industry to get up and running rapidly while paying only for what is being utilized.
Using cloud computing, organizations can achieve a lower variable cost than they can get on their own. Because usage from tens of thousands of customers is collected and combined in the cloud, cloud computing providers such as Amazon, Microsoft, and Google can achieve higher economies of scale, which translates into lower subscription prices.
And cloud computing providers such as Amazon, Microsoft, and Google invest in low-end commodity devices optimized for large-scale clouds instead of purchasing high-end devices. The volume of subscription purchases coupled with lower-cost commodity hardware grants cloud computing providers the ability to lower prices for new customers.
As aforementioned, enterprise organizations only pay when utilizing cloud computing resources. Organizations access as much or as little as needed, and scale up and down, in and out as required on-demand.
Capacity planning is not only arduous but tedious and error-prone, particularly if you do not know how customers will respond. Customer demand fluctuates dynamically, and the capability to scale becomes critical. Cloud computing engineers can demand more capacity during real-time shifts and spikes in customer demand, and can reduce costs using commodity compute, storage, and networking resources that are pooled by the cloud computing provider and provisioned at a moment’s notice. If your LOB application needs more compute resources to meet increasing customer demand, hosting your workload in the cloud can help keep your customers satisfied. Does a decline in business mean that you no longer need all the capacity your cloud computing service is providing for your LOB applications? Cloud computing engineers can scale down compute capacity to control costs, a huge advantage over static on-premises data center solutions.
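The scale-out and scale-in decision above reduces to simple arithmetic: cover current demand with the fewest instances. A minimal sketch, with hypothetical load and capacity numbers:

```python
import math

def desired_instances(current_load, capacity_per_instance, min_instances=1):
    """Return the smallest instance count that covers current demand."""
    return max(min_instances, math.ceil(current_load / capacity_per_instance))

# Demand spikes, so scale out; demand falls, so scale back in to control costs.
print(desired_instances(950, capacity_per_instance=300))  # peak: 4 instances
print(desired_instances(50, capacity_per_instance=300))   # quiet: 1 instance
```

With an on-premises data center, the peak figure dictates the hardware you must own year-round; in the cloud, the quiet-period figure is all you pay for until demand returns.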
On-premises data centers can generally take several weeks to months to provision a server. With cloud computing ecosystems, organizations can provision tens of thousands of resources in minutes, and the ability to rapidly scale your workloads both horizontally and vertically allows you to address SLAs that are in constant flux. Developing new applications in the cloud can significantly decrease time to market (TTM), which is an improvement over traditional monolithic development for several reasons. You do not have to deploy, configure, and maintain the underlying hardware for the compute, storage, and networking on which your applications will run. Instead, you use the infrastructure resources made accessible to you by your cloud computing provider.
Another reason why cloud computing-developed applications are faster to deploy has to do with how modern applications are developed. In an enterprise setting, developers create and test their applications in a test environment that simulates the final production environment. For example, an application might be developed and tested on a single-instance VM, also known as the dev environment, for eventual deployment onto two VM instances clustered across different Availability Zones (AZs
