Google Cloud, the public cloud platform from Google, has a variety of networking options, which are instrumental in managing a networking architecture. This book will give you hands-on experience of implementing and securing networks in Google Cloud Platform (GCP).
You will understand the basics of Google Cloud infrastructure and learn to design, plan, and prototype a network on GCP. After implementing a Virtual Private Cloud (VPC), you will configure network services and implement hybrid connectivity. Later, the book focuses on security, which forms an important aspect of a network. You will also get to grips with network security and learn to manage and monitor network operations in GCP. Finally, you will learn to optimize network resources and delve into advanced networking. The book also helps you to reinforce your knowledge with the help of mock tests featuring exam-like questions.
By the end of this book, you will have gained a complete understanding of networking in Google Cloud and learned everything you need to pass the certification exam.
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 342
Year of publication: 2022
Design, implement, manage, and secure a network architecture in Google Cloud
Maurizio Ipsale
Mirko Gilioli
BIRMINGHAM—MUMBAI
Copyright © 2021 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Group Product Manager: Vijin Boricha
Publishing Product Manager: Mohd Riyan Khan
Senior Editor: Shazeen Iqbal
Content Development Editor: Rafiaa Khan
Technical Editor: Shruthi Shetty
Copy Editor: Safis Editing
Project Coordinator: Shagun Saini
Proofreader: Safis Editing
Indexer: Manju Arasan
Production Designer: Aparna Bhagat
First published: January 2022
Production reference: 1171121
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80107-269-4
www.packt.com
Writing this book was harder than I thought but more rewarding than I could have ever imagined. None of this would have been possible without the support of my wonderful children, Simone and Alessia, and my lovely wife, Liliana: I feel so grateful to have such a loving family. I would like to thank my dear parents, Pippo and Maria, and my brother, Marco, for everything I have learned from them.
Thanks to all my colleagues and co-workers in K Labs in Italy and ROI Training in the U.S.A. Everything I know about technology is also due to the ongoing cooperation with them.
Thanks to everyone in the Packt team who helped me so much in writing this book.
– Maurizio Ipsale
To my sweet daughter, Alessia, my lovely wife, Fiorenza, my dear parents, Mara and Mauro, and my sister, Elena, whose never-failing encouragement made this book possible. To all my colleagues and co-workers at K Labs (Italy) and ROI Training in the U.S.A. Thanks to everyone on the Packt team who helped me so much.
– Mirko Gilioli
Maurizio Ipsale was born in Messina (Italy) in 1978, where he graduated in electronic engineering at the age of 23 and obtained a PhD. Passionate about the ICT world, he has earned many certifications, including instructor certifications for official training courses from Cisco, Juniper, Huawei, AWS, and Google Cloud. He delivers training courses all around the world on many state-of-the-art technologies, including the cloud, machine learning, DevOps, data engineering, IoT, and Kubernetes. Maurizio currently lives in Modena (Italy) with his wife, Liliana, and two children, Simone and Alessia. He is a Training and Professional Service Engineer at K Labs and a Google Cloud Authorized Trainer at ROI Training.
Mirko Gilioli was born in Reggio Emilia (Italy) in 1983, where he graduated with an MSc in computer science and engineering after spending a year in the USA as a research assistant at the IHMC in Pensacola, Florida. At the age of 27, he started his career as an ICT instructor at K Labs, an Italian company focused on ICT training services. There, Mirko built an extensive training record in many technological areas, including networking and cloud technologies. He is a Cisco Certified Systems Instructor (CCSI #35749) and a Google Cloud Authorized Trainer.
Mirko currently lives in Sassuolo (Italy) with his wife, Fiorenza, and his daughter, Alessia. He is a passionate trainer at K Labs and a Google Cloud Authorized Trainer at ROI Training.
Fady Ibrahim is a Google Cloud Authorized Trainer certified by Google Cloud as a Professional Cloud Network Engineer. He holds other certificates from Google Cloud, such as Professional Cloud Security Engineer and Professional Cloud Architect. As a cloud consultant, he helps people and the community deploy their apps to Google Cloud.
He has volunteered as a Learning Community Ambassador for the Google Africa Developer Scholarship for Google Cloud Track for 3 consecutive years.
Fady holds a PhD in computer engineering; his thesis is titled Using Trusted Cloud Computing to Provide Trust in Multi-blockchain Ecosystems. He has more than 12 years of experience as an instructor at the Cisco Networking Academy, teaching CCNA and Linux Essentials.
I need to thank the Google Developer Group - Cairo Chapter team for their outstanding community. Special thanks to the chapter leaders, Bassant and Mo Nagy, for all their support, encouragement, and patience.
I need to thank the team at Cloud11, a Google Cloud Partner. Special thanks to Abdel-Rahman Wahid, the CEO of Cloud 11, for his understanding and support.
Finally, to all my friends and family, all the love and gratitude.
This Google Cloud certification book is for cloud network engineers, cloud architects, cloud engineers, administrators, and anyone who is looking to design, implement, and manage network architectures in Google Cloud Platform. You can use this book as a guide for passing the Professional Cloud Network Engineer certification exam. You need to have at least a year of experience in Google Cloud, basic enterprise-level network design experience, and a fundamental understanding of Cloud Shell to get started with this book.
Chapter 1, Google Cloud Platform Infrastructure, provides an overview on what cloud computing is and a description of Google Cloud Platform architecture and its main components. Moreover, Chapter 1 introduces Google Compute Engine, Cloud DNS, Cloud Load Balancing, Google Kubernetes Engine, and DevOps culture.
Chapter 2, Designing, Planning, and Prototyping a GCP Network, provides guidelines on how to design, plan, and prototype a Google Cloud network. It also discusses the main disaster recovery and failover strategies, as well as IP network planning in the Google Cloud Virtual Private Cloud (VPC). The chapter continues by describing the interconnection options between an on-premises network and a VPC in Google Cloud. Finally, it discusses Google Kubernetes Engine network design principles for large-scale application deployments.
Chapter 3, Implementing a GCP Virtual Private Cloud (VPC), describes how to implement VPC resources in Google Cloud. The main topics covered here are VPC subnets, Cloud Router, VPC interconnection, and Cloud NAT.
Chapter 4, Configuring Network Services in GCP, deep dives into Google Cloud Load Balancing and Cloud CDN services. Indeed, the chapter describes how to implement Global and Internal Network Load Balancing services with Google Cloud Platform. Moreover, the chapter covers how to implement Cloud CDN to reduce network latency for static content stored in Google Cloud Storage.
Chapter 5, Implementing Hybrid Connectivity in GCP, focuses on hybrid connectivity between on-premises and Google Cloud networks. The chapter describes how to implement Dedicated Interconnect, Partner Interconnect, and IPsec VPN in Google Cloud Platform as well as diving into Cloud Router.
Chapter 6, Implementing Network Security, deep dives into security implementation in Google Cloud Virtual Private Cloud. The chapter shows how to configure Identity and Access Management and Google Cloud Armor. Moreover, the chapter describes how to insert a third-party next-generation firewall into the VPC with multiple network interface cards.
Chapter 7, Managing and Monitoring Network Operations, describes how to use Google Cloud Logging and Monitoring to monitor network and security operations.
Chapter 8, Advanced Networking in Google Cloud Platform, describes Google Traffic Director, Service Directory, and Network Connectivity Center. Indeed, the chapter describes what Service Mesh networks are and how they fit into Traffic Director. Then, the chapter moves on to exploring how to discover services with Service Directory and its implementation in Google Cloud Platform. Finally, the chapter shows how to build Hub and Spoke network topologies with Google Cloud Network Connectivity Center.
Chapter 9, Professional Cloud Network Engineer Certification Preparation, provides a set of questions that serve as preparation for the Google Cloud Professional Cloud Network Engineer exam.
You should have a basic knowledge about the current IP networking technologies.
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781801072694_ColorImages.pdf.
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "To configure the host side of the network, you need the tunctl command from the User Mode Linux (UML) project."
A block of code is set as follows:
for ((i=0; i<10; i++)); do
  # Time each of ten requests; the format string must be quoted
  # so that the shell passes %{time_total}\n to curl intact
  curl -w "%{time_total}\n" -o /dev/null -s http://$LB_IP_ADDRESS/cdn.png
done
Any command-line input or output is written as follows:
gcloud compute networks peerings list
Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Click Flash from Etcher to write the image."
Tips or important notes
Appear like this.
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Once you've read Google Cloud Certified Professional Cloud Network Engineer Guide, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.
Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.
In the first part of the book, you will learn how to design, plan, and implement a Google VPC network starting from Google Cloud infrastructure fundamentals.
This part of the book comprises the following chapters:
Chapter 1, Google Cloud Platform Infrastructure
Chapter 2, Designing, Planning, and Prototyping a GCP Network
Chapter 3, Implementing a GCP Virtual Private Cloud (VPC)
To learn about Google Cloud Platform's infrastructure, you must have a good understanding of what cloud computing is and the cloud service models that are available, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Moreover, since Google Cloud Platform is a public cloud provider, we will provide a brief explanation of the differences between public, private, and hybrid cloud services.
Google Cloud Platform's physical architecture will be described. We will also specify the regions and zones, as well as the logical architecture that specifies the organizations, folders, projects, and resources.
A deep explanation of what a Google Compute Engine instance is, and how you can use one for your workload, will be provided in the second part of this chapter.
After introducing a few of the Google Cloud Platform (GCP) services, such as Cloud DNS, Cloud CDN, and Cloud Load Balancer, we will provide an overview of the DevOps culture, as applied to Kubernetes and the Google Cloud implementation of Kubernetes – Google Kubernetes Engine (GKE).
In this chapter, we are going to cover the following main topics:
Introducing cloud computing and virtualization
Introducing GCP
Getting started with GCP
Understanding virtual machines in the cloud
Exploring containers in the cloud
This section introduces the concepts of cloud computing and virtualization, which are fundamental to understanding how GCP works. We will go through the basic elements of cloud computing and virtualization that are required to dive into the chapter.
Whether you are new to the cloud or not, cloud computing can be defined as a model that enables ubiquitous, on-demand network access to a shared pool of configurable computing resources. These resources can be servers, storage, networks, applications, and services. A great advantage that cloud computing brings is that you can rapidly provision and de-provision computing resources with minimal management effort.
Cloud computing models can be oriented to private customers like you or to enterprises and public organizations. Many popular internet services have been introduced over the years. Think about Dropbox, Google Photos, Apple iCloud, and so on, which let you store your files or images in a private space that can be accessed anywhere, anytime. Additionally, Amazon Web Services, Microsoft Azure, and Google Cloud brought cloud services to the market to help enterprises and organizations scale their IT infrastructures and applications globally.
The cloud computing model is based on several important pillars:
Data center: This refers to a large building with independent power and cooling systems that hosts a large number of servers, storage, and networking devices.
Virtualization: This is an enabling technology that allows physical resources to be shared privately across multiple users.
Programmability: Every cloud resource (compute, storage, network, and so on) is software-driven. This means that no human interaction is needed to request, deploy, or release a resource, enabling a self-service model.
Global network: This refers to the global private physical network that interconnects all the data centers around the world.
Consumers can rent these services from cloud providers on demand in a self-service manner. This model allows cloud users to pay only for the resources they reserve and consume, thus reducing Capital Expenditure (CAPEX) and time-to-market.
More specifically, cloud computing is built on five fundamental attributes:
On-demand self-service: Cloud users can request cloud computing services with a self-service model when they need them. This can be achieved with automated processes without any human interaction.
Broadband network access: Cloud users can access their resources anytime, anywhere, through a broadband connection. This lets cloud users interact with remote resources as if they were on-premises.
Resource pooling: Cloud users can access a wide, almost infinite pool of resources without worrying about its size and location.
Rapid elasticity: Cloud users can rapidly scale their resources elastically based on their actual workload needs. This allows cloud users to increase resource utilization and reduce costs.
PAYG (Pay As You Go) model: Cloud users only pay for what they reserve or use. This allows them to greatly reduce CAPEX, increase agility, and reduce time-to-market.
There are three distinct kinds of cloud services that a user can choose from:
Infrastructure as a Service (IaaS): Cloud users can rent the entire IT infrastructure, including virtual machines, storage, network, and the operating system. With this type of service, the user has full access to and control over the virtual infrastructure and is responsible for it. The cloud provider is responsible for the physical architecture and virtualization infrastructure.
Platform as a Service (PaaS): This type of service is ideal for developers who want an on-demand environment for developing, testing, and delivering applications. Here, developers can quickly deploy their applications without worrying about the underlying infrastructure. There is no need to manage servers, storage, and networking (which is the responsibility of the cloud provider) since the focus is on the application.
Software as a Service (SaaS): Cloud providers can lease applications to users, who can use them without worrying about managing any software or hardware platforms.
The following diagram shows a comparison between these three cloud services:
Figure 1.1 – A comparison of the IaaS, PaaS, and SaaS services
Your cloud infrastructure can be deployed in two ways:
On-premises: This deployment refers to resources that are deployed in a private data center that belongs to a single organization.
On a public cloud: This deployment refers to resources that are deployed in third-party data centers owned by the cloud provider. These resources run in a virtual private space in a multi-tenant scenario, or in a sole-tenant scenario (https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes) with dedicated hardware.
It is quite common for cloud users to need to interconnect services running on-premises with services deployed in the public cloud. Thus, it is particularly important to be able to create hybrid cloud services that span both private and public cloud infrastructure. GCP offers many services to build public cloud infrastructure and interconnect it with services running on-premises.
Now that you have learned what cloud computing is, let's introduce virtualization.
Sometimes, in the Information Technology (IT) industry, there is a need to abstract hardware components into software components. Virtualization is the technology that does this. Today, virtualization is used on servers to abstract hardware components (CPU, RAM, and disk) for the virtual systems that require them to run. These virtual systems are commonly referred to as virtual machines, and the software that abstracts the hardware components is called a hypervisor. By using virtualization, IT administrators can consolidate their physical assets into multiple virtual machines running on one or a few physical servers. Hypervisors let you run multiple virtual machines with different hardware and operating system requirements. Moreover, the hypervisor isolates operating systems and their running applications from the underlying physical hardware; they run independently of each other.
The following diagram shows the architecture for virtualization:
Figure 1.2 – Virtualization architecture
As we can see, the hypervisor virtualizes the hardware and provides each operating system with an abstraction of it. The operating systems can only see the virtualized hardware that has been provisioned in the hypervisor. This allows you to maximize the hardware resource utilization and permits you to have different operating systems and their applications on the same physical server.
Virtualization brings several benefits compared to physical devices:
Partitioning: Virtualization allows you to partition virtual resources (vCPU, vRAM, and vDISK) and assign them to virtual machines. This improves physical resource utilization.
Isolation: Virtual machines are isolated from each other, thus improving security. Moreover, they can run operating systems and applications that could not coexist on the same physical server.
Encapsulation: Virtual machines can be backed up, duplicated, and migrated to other virtualized servers.
Now that we have introduced cloud computing and virtualization, let's introduce GCP.
This section will provide an overview of GCP and its services. Additionally, we will look at the Google Cloud global network infrastructure, which includes regions and zones. Finally, we will describe the concepts of projects, billing, and quotas in GCP.
Over the years, Google has invested billions of dollars to build its private network, which today carries around 40% of the world's internet traffic every day. Customers who decide to deploy their cloud services on GCP benefit from high throughput and low latency. Google offers connection to its cloud services from over 140 network edge locations (https://cloud.google.com/vpc/docs/edge-locations), as well as via private and public internet exchange locations (https://peeringdb.com/api/net/433). Thanks to Google's edge caching network sites, which are distributed all around the globe (https://cloud.google.com/cdn/docs/locations), latency can be reduced, allowing customers to interact with their cloud services in near real time. In the following diagram, you can see where Google's network has a presence in terms of regions and PoPs:
Figure 1.3 – GCP regions and global network
As you can see, GCP data centers are organized into regions and zones around the globe and are interconnected with Google's physical private network. Regions are independent geographic areas that include three or more zones. For example, the us-central1 region includes the us-central1-a, us-central1-b, and us-central1-c zones. In GCP projects, there are global resources such as static external IP addresses:
Figure 1.4 – GCP regions, zones, and global resources
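The region/zone naming convention can be seen in a quick shell sketch: a zone name is simply its region name plus a letter suffix, so the region can be derived by stripping the final component (the `us-central1-a` value here is just an illustrative example):

```shell
# A zone name is "<region>-<letter>", e.g. us-central1-a belongs to us-central1
zone="us-central1-a"
region="${zone%-*}"   # remove the trailing "-a" suffix
echo "$region"        # prints: us-central1
```

This convention holds for every zone, which makes it easy to group zones by region in scripts.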
To design a robust and failure-tolerant cloud infrastructure, it is important to deploy resources across zones, or even regions. This prevents an infrastructure outage from affecting all of your resources simultaneously. Thus, it is particularly important to know which of the following categories your resources belong to:
Zonal resource: This is a resource that is specific to a zone, such as a virtual machine instance.
Regional resource: This is a resource that is specific to a region and spans multiple zones, such as a static IP address.
Global resource: This is a location-independent resource, such as a virtual machine instance image.
Choosing the region and zone where your resources should be deployed, as well as where data should be stored, is an especially important design task. There are several reasons you should consider this:
High availability: Distributing your resources across multiple zones and regions helps mitigate outages. Google has designed zones to minimize the risk of correlated failures caused by power, cooling, or networking outages. In the case of a zone outage, it is easy to migrate to another zone to keep your service running. Similarly, you can mitigate the impact of a region outage by running backup services in another region, as well as by using load balancing services.
Decreased network latency: When latency is crucial for your application, it is very important to choose the zone or region closest to your point of service. For example, if the end users of a service are located mostly in western Europe, your service should be placed in a region or zone in that area.
At the time of writing, there are 24 available regions and 73 zones. Recently, Google announced that five new regions will be available soon in Warsaw (Poland), Melbourne (Australia), Toronto (Canada), Delhi (India), and Doha (Qatar). The full list of available regions can be queried from Cloud Shell, as shown in the following screenshot. Cloud Shell is a ready-to-use command-line interface available in GCP that allows the user to interact with all GCP products:
Figure 1.5 – GCP region list from Cloud Shell
The full list of available zones can also be queried from Cloud Shell, which is available in GCP, as shown in the following screenshot:
Figure 1.6 – GCP zone list from Cloud Shell
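For reference, the listings shown in the screenshots above can be produced with the following gcloud commands. This is a sketch: an authenticated Cloud Shell session is assumed, and the exact output columns may vary with your gcloud version:

```shell
# List all available regions
gcloud compute regions list

# List all zones, optionally restricted to one region
gcloud compute zones list --filter="region:us-central1"
```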
Each zone supports several CPU platforms, such as Ivy Bridge, Sandy Bridge, Haswell, Broadwell, Skylake, and Cascade Lake. This is important to know when you decide to deploy a virtual machine instance in one particular zone: you need to make sure that the zone you choose supports the instance that you intend to deploy. To find out which CPU platforms a zone supports, you can use Cloud Shell, as shown in the following screenshot:
Figure 1.7 – GCP CPU platform list from Cloud Shell
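The same information can also be queried non-interactively. As a sketch (assuming an authenticated gcloud session), a zone's `availableCpuPlatforms` field lists its supported CPU platforms:

```shell
# Show the CPU platforms available in one zone
gcloud compute zones describe us-central1-a \
    --format="value(availableCpuPlatforms)"
```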
When selecting zones, keep the following tips in mind:
Communication within and across regions has different costs: Generally, communication within a region will always be cheaper and faster than communication across different regions.
Apply multi-zone redundancy to critical systems: To mitigate the effects of unexpected failures on your instances, you should duplicate critical assets across multiple zones and regions.
Now, let's look at projects, billing, and quotas.
When cloud users request a resource or service in GCP, they need to have a project to track resources and quota usage. GCP projects are the basis for enabling and using GCP services. Inside a GCP project, users must enable billing to monitor, maintain, and address the costs of the GCP services running on the project itself.
Moreover, projects are separate compartments and are isolated from each other. GCP resources belong to exactly one project and cannot be shared across projects, except for Shared VPC networks, which can be shared with other projects. In addition, GCP projects can have different owners and users with different roles, such as project editor or project viewer. Projects are managed hierarchically using the Google Cloud resource manager, which will be described shortly.
GCP projects have three identifying attributes that uniquely distinguish them globally. These are as follows:
Project ID: This is a permanent, unchangeable identifier that is unique across GCP globally. GCP generates one at project creation time, but you can choose your own unique ID if needed. The project ID is a human-readable string that can be used as a seed for uniquely naming other GCP resources, such as Google Cloud Storage bucket names.
Project name: This is a nickname that you can assign to your project for your convenience. It does not need to be unique, and it can be changed over time.
Project number: This is a permanent, unchangeable number that is unique across GCP globally. This number is generated by GCP and cannot be chosen.
Projects can belong to a GCP organization for business scenarios, or they can exist without an organization, as happens with an individual private project. However, you can always migrate a private project into a GCP organization.
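As a sketch, the three identifying attributes of a project can be inspected from Cloud Shell. Replace PROJECT_ID with your own project ID; an authenticated session is assumed:

```shell
# Display the project's ID, name, and number in one table
gcloud projects describe PROJECT_ID \
    --format="table(projectId, name, projectNumber)"
```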
Projects must belong to a billing account, which is used as a reference for paying for Google Cloud resources. This billing account is linked to a payment profile, which contains the payment methods to which costs are charged. As shown in the following diagram, one billing account can have multiple projects assigned to it:
Figure 1.8 – GCP billing and payment profile relation
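Attaching a project to a billing account can also be done from the command line. This is a sketch with placeholder values for the project ID and the billing account ID; an authenticated session with billing permissions is assumed:

```shell
# Link a project to a billing account (placeholder IDs)
gcloud billing projects link PROJECT_ID \
    --billing-account=0X0X0X-0X0X0X-0X0X0X
```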
The cloud billing account is responsible for tracking all the costs that are incurred using the GCP resources for all the projects attached to it. In practice, cloud billing has the following key features:
Cost reporting: This lets you monitor, share, and print monthly costs and keep track of the cost trends of your resource spending, as shown in the following screenshot:
Figure 1.9 – Cost reporting in GCP
Cost breakdown: This shows how many discounts are applied to your base usage cost in a month. This is shown as a waterfall chart, starting from the base cost and subtracting discounts progressively until you see the final cost, as shown here:
Figure 1.10 – Cost breakdown in GCP
Budgets and alerts: It is very important to set budgets for your projects to avoid surprises at the end of the month. Here, you can decide the upper limit for a monthly expense and generate alerts for billing administrators to control costs once a threshold is reached. The following screenshot shows an example of a budget of 100 euros with the actual monthly expenses and three thresholds that trigger emails:
Figure 1.11 – Budgets and alerts in GCP
Resources in projects can be limited with quotas. Google Cloud uses two categories of quotas:
Rate quotas: These limit the number of API requests to a GCP resource within a time interval, such as a minute or a day, after which the resource is not available.
Allocation quotas: These limit the number of GCP resources that are available to the project at any given time. If this limit is reached, a resource must be released before you can request a new one.
Projects can have different quotas for the same services. This may depend on various factors; for example, the quota administrator may reduce the quota for certain resources to equalize the number of services among all the projects in one organization.
To find out what the quota is for the service you want to use in GCP, you can look it up on the Cloud IAM Quotas page. This page lists all the quotas assigned to your project, and you can request different quota sizes if needed. As shown in the following screenshot, you can display the actual usage of CPU quotas in all the project's regions:
Figure 1.12 – CPU quotas in the GCP project
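Quota usage can also be read from Cloud Shell. As a sketch (authenticated session assumed), each region reports its quota metrics with their current usage and limits; the grep at the end keeps only the header and the CPU rows:

```shell
# Show usage and limits for the CPUS quota in one region
gcloud compute regions describe us-central1 \
    --flatten="quotas[]" \
    --format="table(quotas.metric, quotas.usage, quotas.limit)" \
    | grep -E "METRIC|CPUS"
```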
In this section, you learned about the physical architecture of GCP. However, to start using it, you must understand how Google architects the resources that are available to users. This will be described in the next section.
In this section, we are going to describe how resources are organized inside GCP and how to interact with them. This is important, especially when the projects and their resources belong to large enterprises. Moreover, this section describes what tools users can use to interact with GCP.
The cloud resource hierarchy has two main functions inside GCP:
- To manage a GCP project life cycle hierarchically inside one organization.
- To apply organization and Identity and Access Management (IAM) policies for project and resource access control.

The best way to understand the GCP resource hierarchy is to look at it from the bottom up. Resources are grouped into projects, which may belong to a single folder or organization node. Thus, the resource hierarchy consists of four elements, as shown in the following diagram:
Figure 1.13 – Resource hierarchy in GCP
Let's understand what each of the four elements is, as follows:
- Organization node: This is the root node for your organization and it centralizes project management in a single structure. The organization node is associated with a Google Workspace or Cloud Identity account, which is mandatory.
- Folders: These are an additional grouping method that wraps projects and other folders hierarchically to improve separation and policy administration. You can apply an access control policy to a folder, or even delegate rights to all the sub-folders and projects it contains.
- Projects: These are the fundamental grouping method for containing GCP resources and enabling billing. Projects are isolated from each other.
- Resources: These are the GCP services that users can deploy.

With the resource hierarchy, it is easy to apply access control at various levels of your organization. Google uses IAM to assign granular access to a specific Google resource. IAM administrators can control who can do what on which resources. IAM policies can be applied at the organization level, folder level, or project level. Note that with multiple IAM policies applied at various levels, the effective policy for a resource is the union of the policy set on the resource itself and the policies inherited from its ancestors.
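As an illustration of applying an IAM policy at the project level, the following grants a role binding with gcloud; the project ID and user email are placeholders:

```shell
# Grant a user the Compute Network Admin role on a single project.
# my-project and alice@example.com are placeholder values.
gcloud projects add-iam-policy-binding my-project \
  --member="user:alice@example.com" \
  --role="roles/compute.networkAdmin"
```

Bound at a folder or organization node instead, the same role would be inherited by every project underneath, which is exactly the union behavior described above.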
There are five ways of interacting with GCP:
- Cloud Platform Console: This is a web user interface that allows you to use all GCP resources and services graphically.
- Cloud Shell and Cloud SDK: This is a command-line interface that allows you to use all GCP resources.
- RESTful API: This is an API that can be accessed via RESTful calls and allows you to access and use GCP resources and services.
- API client libraries: These are open libraries that are available in various programming languages and allow you to access GCP resources.
- Infrastructure as Code (IaC): Open source IaC tools such as Terraform or Google Deployment Manager can be used to deploy and manage IaaS and PaaS resources on GCP (https://cloud.google.com/docs/terraform).

The first two modes are more appropriate for cloud architects and administrators who prefer direct interaction with GCP. The RESTful API and the client libraries are typically chosen by programmers and developers who build applications that use GCP services, while IaC tools suit teams that automate their deployments. In this book, we will focus mostly on the Console and Cloud Shell to explain GCP features.
The following screenshot shows the main components of the Console:
Figure 1.14 – Main components of the GCP Console
Let's explore what's labeled in the preceding screenshot:
- The navigation menu lets you access all the GCP services and resources (1).
- The combo menu lets you select the project you want to work with (2).
- The search bar lets you search for resources and more within the project (3).
- The Cloud Shell button lets you start Cloud Shell (4).
- The Project Info card lets you control the project settings (5).
- The Resources card lets you monitor the active resources (6).
- The Billing card lets you monitor costs and cost estimates (7).

Cloud Shell is the preferred interaction method for administrators who want to use the command-line interface. Cloud Shell also has a graphical editor that you can use to develop and debug code. The following screenshot shows Cloud Shell:
Figure 1.15 – Cloud Shell
Cloud Shell Editor is shown in the following screenshot:
Figure 1.16 – Cloud Shell Editor
Cloud Shell comes with the Cloud SDK preinstalled, which allows administrators to interact with all GCP resources. gcloud, gsutil, and bq are the most important SDK tools; you will use them to manage Compute Engine instances, Cloud Storage, and BigQuery, respectively.
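As a quick taste of each tool, here is one representative listing command per SDK utility; the bucket and dataset names are placeholders:

```shell
# gcloud: list Compute Engine instances in the current project.
gcloud compute instances list

# gsutil: list objects in a Cloud Storage bucket (my-bucket is a placeholder).
gsutil ls gs://my-bucket

# bq: list tables in a BigQuery dataset (my_dataset is a placeholder).
bq ls my_dataset
```

All three tools read the active project from your gcloud configuration, so no project flag is needed once `gcloud config set project` has been run.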
In this section, you learned about the logical architecture of GCP. In the next section, you will understand how virtual machines work in Google Cloud.
In this section, you will learn about Compute Engine in GCP and its major features. This includes the virtual machine types that are available in GCP, disk options, and encryption solutions. Moreover, this section will introduce Virtual Private Cloud and its main characteristics. Finally, we will look at Load Balancing, DNS, and CDN in GCP.
IaaS in GCP is implemented with Compute Engine. Compute Engine allows users to run virtual machines in the cloud. The use cases for Compute Engine are as follows:
- Websites
- Legacy monolithic applications
- Custom databases
- Microsoft Windows applications

Compute Engine is a zonal service: when you deploy an instance, you must specify the instance name, the region, and the zone that the instance will run in. Note that the instance name must be unique within the zone. GCP allows administrators to deploy Compute Engine VMs with the same name, so long as they stay in different zones. We will discuss this in more detail when we look at internal DNS.
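As a minimal sketch of such a deployment (the instance name, zone, and image family are example values), a VM can be created with a single gcloud command:

```shell
# Create a VM named web-server-1 in zone europe-west1-b.
# The name must be unique within that zone.
gcloud compute instances create web-server-1 \
  --zone=europe-west1-b \
  --machine-type=e2-medium \
  --image-family=debian-12 \
  --image-project=debian-cloud
```

Running the same command with a different `--zone` would succeed even with the same instance name, illustrating the per-zone uniqueness rule described above.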
There are four virtual machine family types that you can choose from:
- General-purpose: This category is for running generic workloads such as websites or customized databases.
- Compute-optimized: This category is for running CPU-heavy workloads such as high-performance computing (HPC) or single-threaded applications.
- Memory-optimized: This category is for running memory-heavy workloads such as large in-memory databases or in-memory analytics applications.
- GPU: This category is for running compute-intensive workloads such as machine learning, graphics applications, or blockchain.

In the general-purpose category, you can choose between four different machine types, as illustrated in the following diagram:
Figure 1.17 – General-purpose Compute Engine machine types in GCP
To choose the appropriate machine type for your workload, let's have a look at the following table:
Each of the previous machine types can have different configurations in terms of vCPUs and memory. Here, you can select between predefined and custom machine types. A predefined machine type gives you a Compute Engine instance with a fixed amount of vCPUs and RAM, whereas a custom machine type allows you to select exactly the vCPUs and RAM that your workload requires. Predefined Compute Engine instances also offer additional options: you can run a virtual machine that shares a core with other users to save money, or choose an instance with a different balance of vCPUs and memory.
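The predefined versus custom choice can be sketched with two example gcloud invocations; the instance names, zone, and sizes are placeholders:

```shell
# Predefined machine type: n2-standard-4 comes with 4 vCPUs and 16 GB RAM.
gcloud compute instances create predefined-vm \
  --zone=us-central1-a \
  --machine-type=n2-standard-4

# Custom machine type: pick your own vCPU and memory combination.
gcloud compute instances create custom-vm \
  --zone=us-central1-a \
  --custom-cpu=6 \
  --custom-memory=20GB
```

The custom flags let you match the workload exactly instead of rounding up to the next predefined size, which can reduce cost for odd-shaped workloads.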
We can summarize all the machine type configurations with the following diagram:
Figure 1.18 – Machine type configurations in GCP
Another important aspect of Compute Engine is its boot disk. Each virtual machine instance requires a boot disk to run properly. In the boot disk, the operating system is installed, as well as the main partition. The boot disk is a permanent storage disk and it can be built from several types of images. GCP offers pre-built public images for both Linux and Windows operating systems. Some of them are license free such as CentOS, Ubuntu, and Debian. Others are premium images, and they incur license fees.
Boot disks can be divided into three types:
- Standard persistent disk: This is a magnetic hard disk drive (HDD) that can have up to 7,500 IOPS in reading and 15,000 IOPS in writing operations.
- Balanced persistent disk: This is the entry-level solid-state drive (SSD) and can have up to 80,000 IOPS in both reading and writing operations.
- SSD persistent disk: This is the second-level SSD and can have up to 100,000 IOPS in both reading and writing operations.

Boot disks are the primary disks of a Compute Engine instance. Additionally, you can attach more disks to your virtual machine if you need extra space or extremely high performance. For the latter, you can add a local SSD as a secondary block storage disk. Local SSDs are physically attached to the server that hosts your Compute Engine instance and can reach up to 0.9/2.4 million IOPS in reading and 0.8/1.2 million IOPS in writing (with SCSI and NVMe technology, respectively).
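Attaching an additional persistent disk is a two-step operation, sketched below; the disk name, VM name, zone, and size are placeholders, and the `--type` flag selects one of the disk types listed above (`pd-standard`, `pd-balanced`, or `pd-ssd`):

```shell
# Step 1: create a 500 GB SSD persistent disk in the VM's zone.
gcloud compute disks create data-disk-1 \
  --size=500GB \
  --type=pd-ssd \
  --zone=europe-west1-b

# Step 2: attach it to an existing instance as a secondary disk.
gcloud compute instances attach-disk web-server-1 \
  --disk=data-disk-1 \
  --zone=europe-west1-b
```

After attaching, the disk still needs to be formatted and mounted from inside the guest operating system before it can be used.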
Security is a particularly important feature when you design your Compute Engine instance. For this reason, Google lets you choose from three different encryption solutions that apply to all the persistent disks of your virtual machine, as follows:
Google-managed key