Easy-to-follow visual walkthrough of every important part of the Google Cloud Platform

The Google Cloud Platform incorporates dozens of specialized services that enable organizations to offload technological needs onto the cloud. From routine IT operations like storage to sophisticated new capabilities such as artificial intelligence and machine learning, the Google Cloud Platform offers enterprises the opportunity to scale and grow efficiently. In Visualizing Google Cloud: Illustrated References for Cloud Engineers & Architects, Google Cloud expert Priyanka Vergadia delivers a fully illustrated, visual guide to matching the best Google Cloud Platform services to your own unique use cases. After a brief introduction to the major categories of cloud services offered by Google, the author offers approximately 100 solutions divided into eight categories of services included in Google Cloud Platform:

Compute
Storage
Databases
Data Analytics
Data Science, Machine Learning and Artificial Intelligence
Application Development and Modernization with Containers
Networking
Security

You'll find richly illustrated flowcharts and decision diagrams with straightforward explanations in each category, making it easy to adopt and adapt Google's cloud services to your use cases. With coverage of the major categories of cloud models (infrastructure, containers, platforms, functions, and serverless) and discussions of storage types, databases, and machine learning choices, Visualizing Google Cloud: Illustrated References for Cloud Engineers & Architects is perfect for every Google Cloud enthusiast. It is for anyone planning a cloud migration or a new cloud deployment, anyone preparing for cloud certification, and anyone looking to make the most of Google Cloud, including cloud solutions architects, IT decision-makers, and cloud data and ML engineers. In short, this book is for YOU.
Page count: 331
Publication year: 2022
COVER
TITLE PAGE
INTRODUCTION
Reader Support for This Book
CHAPTER ONE: Infrastructure
CHAPTER TWO: Storage
CHAPTER THREE: Databases
Relational Databases
Nonrelational Databases
Which One Is Best?
How to Set Up Cloud SQL
Reliability and Availability
Migrating an Existing MySQL Database to Cloud SQL
Security and Compliance
Cloud SQL in Action
How Does Spanner Work?
How Does Spanner Provide High Availability and Scalability?
How Do Reads and Writes Work?
How Does Spanner Provide Global Consistency?
What Is Firestore?
Document-Model Database
How Do You Use Firestore?
Some Cloud Bigtable Features
Scale and High Availability (HA)
How Does It Optimize Throughput?
What Are Your Application's Availability Needs?
Features and Capabilities
Use Cases
CHAPTER FOUR: Data Analytics
5 Steps to Create a Data Analytics Pipeline
How Does Pub/Sub Work?
Pub/Sub Features
Pub/Sub Use Cases
Main Components
How Does Cloud IoT Core Work?
Design Principles of Cloud IoT Core
Use Cases
How Does Data Processing Work?
How to Use Dataflow
Dataflow Governance
How Does Dataproc Work?
Migrating HDFS Data from On-Premises to Google Cloud
What Is Data Preparation?
How Does Dataprep Work?
BigQuery Unique Features
How Does It Work?
BigQuery Storage Internals
Dremel: BigQuery's Query Engine
Security
Cost
Data Integration Capabilities
How Does Data Catalog Work?
Data Catalog Architecture
Data Governance
How Does Cloud Composer Work?
How to Run Workflows in Cloud Composer
Cloud Composer Security Features
How Does It Work?
Connectivity Options
Datastream Use Cases
Looker's Platform
In-Database Architecture
Semantic Modeling Layer
Cloud Native
Capture
Process
Store
Analyze
Use
Services Spanning the Pipeline
CHAPTER FIVE: Application Development and Modernization
Building and Modernizing Cloud Applications
Microservices or Monolith?
What Do Most Microservices Need?
Where to Begin?
Should You Migrate to Google Cloud?
Which Migration Path Is Right for You?
Common Cloud Migration Use Cases
Why Is Traditional Hybrid and Multicloud Difficult?
How Does Anthos Make Hybrid and Multicloud Easy?
Deployment Option 1: Google Cloud
Deployment Option 2: VMware vSphere
Deployment Option 3: Bare-Metal Servers
Deployment Option 4: Anthos Attached Clusters
Deployment Option 5: AWS
Deployment Option 6: Microsoft Azure
How Has the Application Development Landscape Changed?
What Is Microservices Architecture?
How Are Monolithic and Microservices Architectures Different?
Microservices Use Cases
Service Choreography and Service Orchestration
Google Cloud Support for Service Orchestration
Google Cloud Support for Service Choreography
Additional Services That Help with Both Choreography and Orchestration
What Is API Management?
What Is Apigee?
What Is API Gateway?
API Gateway Architecture
What's the Difference Between API Gateway and Apigee API Management Platform?
What Is the Operations Suite?
What Does Cloud Operations Include?
How Does Cloud Operations Work?
Sample Application Architecture
CHAPTER SIX: Networking
How Is the Google Cloud Physical Network Organized?
Cloud Networking Services
Premium Tier
Standard Tier
Choosing a Tier
Cloud Interconnect and Cloud VPN
Network Connectivity Center
Peering
CDN Interconnect
Features of VPC Networks
Shared VPC
VPC Network Peering
VPC Packet Mirroring
How Does DNS Work?
What Does Cloud DNS Offer?
Hybrid Deployments: DNS Forwarding
Hybrid Deployments: Hub and Spoke
What Is Cloud Load Balancing?
How Does Cloud Load Balancing Work?
How to Use Global HTTP(S) Load Balancing
How to Secure Your Application with Cloud Load Balancing
How to Choose the Right Load-Balancing Option
What Is Cloud CDN?
How Does Cloud CDN Work?
How to Use Cloud CDN
Security
How Is Cloud NAT Different from Typical NAT Proxies?
Benefits of Using Cloud NAT
NAT Rules
Basic Cloud NAT Configuration Examples
Network Topology
Connectivity Tests
Performance Dashboard
Firewall Insights
How Does a Typical Service Mesh Work in Kubernetes?
How Is Traffic Director Different?
How Does Traffic Director Support Proxy-less gRPC and VMs?
How Does Traffic Director Work Across Hybrid and Multicloud Environments?
Ingress and Gateways
Why Service Directory?
How Service Directory Works with Load Balancer
Using Cloud DNS with Service Directory
Connect
Scale
Secure
Optimize
Modernize
CHAPTER SEVEN: Data Science, Machine Learning, and Artificial Intelligence
Data Engineering
Data Analysis
Model Development
ML Engineering
Insights Activation
Orchestration
Prepackaged AI Solutions
Pretrained APIs
BigQuery ML
Vertex AI
End-to-End Model Creation in Vertex AI
What Does Vertex AI Include?
AutoML Behind the Scenes
How Do I Work with AutoML in Vertex AI?
What Is MLOps?
Vertex AI Pipelines
Vertex AI Pipelines Under the Hood
Vertex AI Pipelines Open Source Support
Benefits of BigQuery ML
Supported Models in BigQuery ML
How to Use Vision AI
What Can I Do with Vision API?
How to Use Video AI
What Can I Do with the Video Intelligence API?
Use Case Scenarios
What Is Translation AI?
What If Your Business Has Specific Terms?
AutoML Translation
What Is the Media Translation API?
How to Use Natural Language AI
What Can I Do with the Natural Language API?
What Can I Do with the Speech-to-Text API?
How to Use the Speech-to-Text API
What Is Contact Center AI?
How Does Contact Center AI Work?
What Is Document AI?
How to Use Document AI
Sample Document AI Architecture
Vertical Solutions
What Is Recommendations AI?
Sample Customer Journey with Recommendations AI
How Does Recommendations AI Work?
Data Engineering
Data Analysis
Model Development
ML Engineering
Insights Activation
Orchestration
CHAPTER EIGHT: Security
Cloud Security Is Shared Fate
Infrastructure Security
Network Security
Application Security
Software Supply Chain Security
Data Security
Identity and Access Management
Endpoint Security
Security Monitoring and Operations
Governance, Risk, and Compliance
Defense in Depth at Scale
End-to-End Provenance and Attestation
Application Security
Risk Points for a Software Supply Chain
How Does Google Secure the Software Supply Chain Internally?
What Is SLSA?
How Does Google Cloud Help You Secure Your Software Supply Chain?
Encryption
At-Rest Encryption Options
Other Data Security Services
What Is DLP?
How Does It Work?
A Variety of Deidentification Techniques
What Is Cloud Identity?
Authentication Options
User Experience
Advantages
What Is Cloud IAM?
Cloud IAM Best Practices
What Are Service Accounts?
Service Account Types
Service Account Credentials
Service Account Best Practices
What Is BeyondCorp?
What Is BeyondCorp Enterprise?
How Does BeyondCorp Enterprise Work?
What Is Security Command Center?
How Does Security Command Center Work?
Infrastructure Security
Network Security
Application Security
Software Supply Chain Security
Data Security
Identity and Access Management (IAM)
Endpoint Security
Security Monitoring and Operations
Governance, Risk, and Compliance
COPYRIGHT
DEDICATION
ACKNOWLEDGMENTS
ABOUT THE AUTHOR
END USER LICENSE AGREEMENT
Cover Page
Title Page
Copyright
Dedication
Acknowledgments
About the Author
Introduction
Table of Contents
Begin Reading
End User License Agreement
Shortly after I started creating and sharing visual explanations of Google Cloud concepts in 2020, I began receiving overwhelmingly positive feedback. That feedback led me to think about pulling the visual explanations together into a reference guide. So here it is!
This book provides an easy-to-follow visual walkthrough of every important part of Google Cloud, from table stakes — compute, storage, database, security, and networking — to advanced concepts such as data analytics, data science, machine learning, and AI.
Most humans are visual learners; I am definitely one of them. I think it is safe to assume that you are too, since you picked up this book. So, even though it might sound cliché, I am a big believer that a picture is worth (more than) a thousand words. With that in mind, this book is my attempt to make Google Cloud technical concepts fun and interesting. It covers the essentials of Google Cloud from end to end, with a visual explanation of each concept, how it works, and how you can apply it to your business use case.
Who is this book for? Google Cloud enthusiasts! It is for anyone who is planning a cloud migration or a new cloud deployment, preparing for cloud certification, or looking to make the most of Google Cloud. If you are a cloud solutions architect, an IT decision-maker, or a data or machine learning engineer, you will find this book a good starting point. In short, this book is for you!
I have read thousands of pages of Google Cloud documentation, experimented with virtually every Google Cloud product, and distilled that experience into this book of accessible, bite-sized visuals. I hope this book helps you on your Google Cloud journey by making it both easier and more fun. Are you ready? Let's go!
If you believe you've found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts an error may occur.
In order to submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line “Possible Book Errata Submission.”
Cloud computing is the on-demand availability of computing resources—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet. It eliminates the need for enterprises to procure, configure, or manage these resources themselves, while enabling them to pay only for what they use. The benefits of cloud computing include:
Flexibility: You can access cloud resources from anywhere and scale services up or down as needed.
Efficiency: You can develop new applications and rapidly get them into production, without worrying about the underlying infrastructure.
Strategic value: When you choose a cloud provider that stays on top of the latest innovations and offers them as services, it opens opportunities for you to seize competitive advantages and higher returns on investment.
Security: The depth and breadth of the security mechanisms provided by cloud providers offer stronger security than many enterprise data centers. Plus, cloud providers also have top security experts working on their offerings.
Cost-effectiveness: You only pay for the computing resources you use. Because you don't need to overbuild data center capacity to handle unexpected spikes in demand or sudden surges in business growth, you can deploy resources and IT staff on more strategic initiatives.
In this first chapter, you will learn about cloud computing models and dive into the various compute options that Google Cloud offers. The following chapters provide a closer look at specific cloud resources and topics, including storage, databases, data analytics, networking, and more.
To understand the cloud and the different models you can choose from, let's map it with an everyday analogy of housing:
On-Premises
— If you decide to build your house from scratch, you do everything yourself: you source the raw materials and tools, put them together, and run to the store every time you need anything. That is very close to running your application on-premises, where you own everything from the hardware to your applications and scaling.
Infrastructure as a Service
— Now, if you are busy, you might hire a contractor to build a custom house. You tell them how you want the house to look and how many rooms you want; they take the instructions and build you a house. IaaS is the same for your applications: you rent the hardware to run your application on, but you are responsible for managing the OS, runtime, scaling, and data. Example: GCE.
Containers as a Service
— If you know that owning is just too much work because of the maintenance it comes with, you decide to rent a house instead. The basic utilities are included, but you bring your own furniture and make the space yours. Containers are the same: you bring a containerized application, so you don't have to worry about the underlying operating system, but you still have control over scaling and the runtime. Example: GKE.
Platform as a Service
— If you just want to enjoy the space without even having to furnish it, you rent a furnished house. That is what PaaS is for: you bring your own code, deploy it, and leave scaling to the cloud provider. Example: App Engine and Cloud Run.
Function as a Service
— If you just need a small dedicated space in which to work that is away from your home, you rent a desk in a workspace. That is close to what FaaS offers; you deploy a piece of code or a function that performs a specific task, and every time a function executes, the cloud provider adds scale if needed. Example: Cloud Functions.
Software as a Service
— Now, you move into the house (rented or purchased), but you pay for upkeep such as cleaning or lawn care. SaaS is the same; you pay for the service, you are responsible for your data, but everything else is taken care of by the provider. Example: Google Drive.
Compute Engine is a customizable compute service that lets you create and run virtual machines on Google's infrastructure. You can create a virtual machine (VM) that fits your needs. Predefined machine types are prebuilt and ready-to-go configurations of VMs with specific amounts of vCPU and memory to start running apps quickly. With Custom Machine Types, you can create virtual machines with the optimal amount of CPU and memory for your workloads. This allows you to tailor your infrastructure to your workload. If requirements change, using the stop/start feature you can move your workload to a smaller or larger Custom Machine Type instance, or to a predefined configuration.
In Compute Engine, machine types are grouped and curated by families for different workloads. You can choose from general-purpose, memory-optimized, compute-optimized, and accelerator-optimized families.
General-purpose
machines are used for day-to-day computing at a lower cost and for balanced price/performance across a wide range of VM shapes. The use cases that best fit here are web serving, app serving, back office applications, databases, cache, media-streaming, microservices, virtual desktops, and development environments.
Memory-optimized
machines are recommended for ultra high-memory workloads such as in-memory analytics and large in-memory databases such as SAP HANA.
Compute-optimized
machines are recommended for ultra high-performance workloads such as High Performance Computing (HPC), Electronic Design Automation (EDA), gaming, video transcoding, and single-threaded applications.
Accelerator-optimized
machines are optimized for high-performance computing workloads such as machine learning (ML), massive parallelized computations, and High Performance Computing (HPC).
You can create a VM instance using a boot disk image, a boot disk snapshot, or a container image. The image can be a public operating system (OS) image or a custom one. Depending on where your users are, you can define the zone you want the virtual machine to be created in. By default, all traffic from the Internet is blocked by the firewall, and you can enable HTTP(S) traffic if needed.
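As a rough sketch, creating such a VM with the gcloud CLI might look like the following; the instance name, zone, machine type, and image family here are illustrative placeholders, not values from the book:

```shell
# Create a VM from a public Debian boot image (names and zone are examples).
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-11 \
    --image-project=debian-cloud

# Optionally tag the VM so a firewall rule can allow inbound HTTP traffic,
# which is blocked by default.
gcloud compute instances add-tags demo-vm \
    --zone=us-central1-a \
    --tags=http-server
```

The same creation flow is available in the Cloud Console, where the zone, machine type, and firewall checkboxes map onto these flags.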
Use snapshot schedules (hourly, daily, or weekly) as a best practice to back up your Compute Engine workloads. Compute Engine offers live migration by default to keep your virtual machine instances running even when a software or hardware update occurs. Your running instances are migrated to another host in the same zone instead of requiring your VMs to be rebooted.
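A snapshot schedule can be sketched in two steps: create the schedule policy, then attach it to a disk. The policy name, region, start time, and retention below are placeholder values:

```shell
# Create a daily snapshot schedule (region, time, and retention are examples).
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=14

# Attach the schedule to an existing disk so snapshots are taken automatically.
gcloud compute disks add-resource-policies demo-vm \
    --zone=us-central1-a \
    --resource-policies=daily-backup
```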
For High Availability (HA), Compute Engine offers automatic failover to other regions or zones in the event of a failure. Managed instance groups (MIGs) help keep the instances running by automatically replicating instances from a predefined image. They also provide application-based auto-healing health checks. If an application is not responding on a VM, the auto-healer automatically re-creates that VM for you. Regional MIGs let you spread app load across multiple zones. This replication protects against zonal failures. MIGs work with load-balancing services to distribute traffic across all of the instances in the group.
Compute Engine offers autoscaling to automatically add or remove VM instances from a managed instance group based on increases or decreases in load. Autoscaling lets your apps gracefully handle increases in traffic, and it reduces cost when the need for resources is lower. You define the autoscaling policy for automatic scaling based on the measured load, CPU utilization, requests per second, or other metrics.
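The autoscaling policy described above can be sketched as a single gcloud command against an existing MIG; the group name, zone, and thresholds are illustrative assumptions:

```shell
# Enable CPU-based autoscaling on an existing managed instance group
# (group name, zone, replica counts, and target utilization are examples).
gcloud compute instance-groups managed set-autoscaling demo-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.60 \
    --cool-down-period=90
```

Here the autoscaler adds instances when average CPU utilization rises above 60% and removes them, down to the two-instance floor, when load subsides.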
Active Assist's predictive autoscaling feature helps improve response times for your applications. When you enable predictive autoscaling, Compute Engine forecasts future load based on your MIG's history and scales out in advance of predicted load, so that new instances are ready to serve when the load arrives. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed changes in load in real time. With predictive autoscaling enabled, the autoscaler works with both real-time and historical data to cover the current and forecasted load. That makes predictive autoscaling ideal for apps with long initialization times and workloads that vary predictably in daily or weekly cycles. For more information, see the documentation on how predictive autoscaling works and whether it is suitable for your workload; to learn about other intelligent features, check out Active Assist.
You pay for what you use, but you can reduce cost by taking advantage of discounts. Sustained use discounts are applied automatically when you run instances for a significant portion of the month. If you know your usage up front, you can take advantage of committed use discounts, which can lead to significant savings without any upfront cost. And by using short-lived preemptible instances, you can save up to 80%; they are great for batch jobs and fault-tolerant workloads. You can also optimize resource utilization with automatic recommendations: for example, if you are using a bigger instance for a workload that can run on a smaller one, applying these recommendations saves cost.
Compute Engine provides you default hardware security. Using Identity and Access Management (IAM) you just have to ensure that proper permissions are given to control access to your VM resources. All the other basic security principles apply; if the resources are not related and don't require network communication among themselves, consider hosting them on different VPC networks. By default, users in a project can create persistent disks or copy images using any of the public images or any images that project members can access through IAM roles. You may want to restrict your project members so that they can create boot disks only from images that contain approved software that meet your policy or security requirements. You can define an organization policy that only allows Compute Engine VMs to be created from approved images. This can be done by using the Trusted Images Policy to enforce images that can be used in your organization.
By default all VM families are Shielded VMs. Shielded VMs are virtual machine instances that are hardened with a set of easily configurable security features to ensure that when your VM boots, it's running a verified bootloader and kernel — it's the default for everyone using Compute Engine, at no additional charge. For more details on Shielded VMs, refer to the documentation at https://cloud.google.com/compute/shielded-vm/docs/shielded-vm.
For additional security, you also have the option to use Confidential VM to encrypt your data in use while it's being processed in Compute Engine. For more details on Confidential VM, refer to the documentation at https://cloud.google.com/compute/confidential-vm/docs/about-cvm.
There are many use cases Compute Engine can serve in addition to running websites and databases. You can also migrate your existing systems onto Google Cloud, with Migrate for Compute Engine, enabling you to run stateful workloads in the cloud within minutes rather than days or weeks. Windows, Oracle, and VMware applications have solution sets, enabling a smooth transition to Google Cloud. To run Windows applications, either bring your own license leveraging sole-tenant nodes or use the included licensed images.
Containers are often compared with virtual machines (VMs). You might already be familiar with VMs: a guest operating system such as Linux or Windows runs on top of a host operating system with virtualized access to the underlying hardware. Like virtual machines, containers enable you to package your application together with libraries and other dependencies, providing isolated environments for running your software services. As you'll see, however, the similarities end here as containers offer a far more lightweight unit for developers and IT Ops teams to work with, bringing a myriad of benefits.
Instead of virtualizing the hardware stack as with the virtual machines approach, containers virtualize at the operating system level, with multiple containers running atop the OS kernel directly. This means that containers are far more lightweight: They share the OS kernel, start much faster, and use a fraction of the memory compared to booting an entire OS.
Containers help improve portability, shareability, deployment speed, reusability, and more. More importantly to the team, containers made it possible to solve the “it worked on my machine” problem.
The system administrator is usually responsible for more than just one developer. They have several considerations when rolling out software:
Will it work on all the machines?
If it doesn't work, then what?
What happens if traffic spikes? (System admin decides to over-provision just in case…)
With lots of developers containerizing their apps, the system administrator needs a better way to orchestrate all the containers that developers ship. The solution: Kubernetes!
The Mindful Container team had a bunch of servers and used to decide manually what ran on each one, based on what they knew would conflict if it ran on the same machine. If they were lucky, they might have some sort of scripted system for rolling out software, but it usually involved SSHing into each machine. Now with containers — and the isolation they provide — they can trust that in most cases, any two applications can fairly share the resources of the same machine.
With Kubernetes, the team can now introduce a control plane that makes decisions for them on where to run applications. And even better, it doesn't just statically place them; it can continually monitor the state of each machine and make adjustments to the state to ensure that what is happening is what they've actually specified. Kubernetes runs with a control plane, and on a number of nodes. We install a piece of software called the kubelet on each node, which reports the state back to the primary.
Here is how it works:
The primary controls the cluster.
The worker nodes run pods.
A pod holds a set of containers.
Pods are bin-packed as efficiently as configuration and hardware allows.
Controllers provide safeguards so that pods run according to specification (reconciliation loops).
All components can be deployed in high-availability mode and spread across zones or data centers.
Kubernetes orchestrates containers across a fleet of machines, with support for:
Automated deployment and replication of containers
Online scale-in and scale-out of container clusters
Load balancing over groups of containers
Rolling upgrades of application containers
Resiliency, with automated rescheduling of failed containers (i.e., self-healing of container instances)
Controlled exposure of network ports to systems outside the cluster
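The declarative model behind these capabilities can be sketched with a minimal Deployment and Service; the application name, labels, and sample image below are illustrative, not from the book:

```shell
# Declare a desired state: three replicas of a containerized app behind a
# load-balanced Service (names and image are examples).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
EOF

# A rolling upgrade is just a new desired state: change the image and
# Kubernetes replaces the pods gradually.
kubectl set image deployment/hello-app \
    hello=us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
```

Kubernetes continually reconciles the cluster toward this specification: if a pod dies, a controller re-creates it; if a node fails, its pods are rescheduled elsewhere.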
A few more things to know about Kubernetes:
Instead of flying a plane, you program an autopilot: declare a desired state, and Kubernetes will make it true — and continue to keep it true.
It was inspired by Google's tools for running data centers efficiently.
It has seen unprecedented community activity and is today one of the largest projects on GitHub. Google remains the top contributor.
The magic of Kubernetes starts happening when we don't require a sysadmin to make the decisions. Instead, we enable a build and deployment pipeline. When a build succeeds, passes all tests, and is signed off, it can automatically be deployed to the cluster gradually, blue/green, or immediately.
By far, the single biggest obstacle to using Kubernetes (k8s) is learning how to install and manage your own cluster. Check out Kubernetes the Hard Way, a step-by-step guide to installing a k8s cluster. You have to think about tasks like:
Choosing a cloud provider or bare metal
Provisioning machines
Picking an OS and container runtime
Configuring networking (e.g., IP ranges for pods, SDNs, LBs)
Setting up security (e.g., generating certs and configuring encryption)
Starting up cluster services such as DNS, logging, and monitoring
Once you have all these pieces together, you can finally start to use k8s and deploy your first application. And you're feeling great and happy and k8s is awesome! But then, you have to roll out an update…
Wouldn't it be great if Mindful Containers could start clusters with a single click, view all their clusters and workloads in a single pane of glass, and have Google continually manage their cluster to scale it and keep it healthy?
GKE is a secured and fully managed Kubernetes service. It provides an easy-to-use environment for deploying, managing, and scaling your containerized applications using Google infrastructure.
Mindful Containers decided to use GKE to enable development self-service by delegating release power to developers and software.
Production-ready with autopilot mode of operation for hands-off experience
Best-in-class developer tooling with consistent support for first- and third-party tools
Offers container-native networking with a unique BeyondProd security approach
Most scalable Kubernetes service; only GKE can run 15,000-node clusters, outscaling the competition by up to 15x
Industry-first to provide fully managed Kubernetes service that implements full Kubernetes API, 4-way autoscaling, release channels, and multicluster support
The GKE control plane is fully operated by the Google SRE (Site Reliability Engineering) team with managed availability, security patching, and upgrades. The Google SRE team not only has deep operational knowledge of k8s, but is also uniquely positioned to get early insights on any potential issues by managing a fleet of tens of thousands of clusters. That's something that is simply not possible to achieve with self-managed k8s. GKE also provides comprehensive management for nodes, including autoprovisioning, security patching, opt-in auto-upgrade, repair, and scaling. On top of that, GKE provides end-to-end container security, including private and hybrid networking.
As the demand for Mindful Containers grows, they now need to scale their services. Manually scaling a Kubernetes cluster for availability and reliability can be complex and time consuming. GKE automatically scales the number of pods and nodes based on the resource consumption of services.
Vertical Pod Autoscaler (VPA) watches resource utilization of your deployments and adjusts requested CPU and RAM to stabilize the workloads.
Node Auto Provisioning optimizes cluster resources with an enhanced version of Cluster Autoscaling.
In addition to the fully managed control plane that GKE offers, using the Autopilot mode of operation automatically applies industry best practices and can eliminate all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture.
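Getting started with the Autopilot mode described above can be sketched in two commands; the cluster name and region are placeholder values:

```shell
# Create an Autopilot cluster: Google manages nodes, scaling, patching,
# and upgrades (cluster name and region are examples).
gcloud container clusters create-auto demo-cluster \
    --region=us-central1

# Fetch credentials so kubectl commands target the new cluster.
gcloud container clusters get-credentials demo-cluster \
    --region=us-central1
```

From there, the same kubectl workflows you would use on any Kubernetes cluster apply, with node management handled for you.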
Cloud Run is a fully managed compute environment for deploying and scaling serverless HTTP containers without worrying about provisioning machines, configuring clusters, or autoscaling.
No vendor lock-in
— Because Cloud Run takes standard OCI containers and implements the standard Knative Serving API, you can easily port over your applications to on-premises or any other cloud environment.
Fast autoscaling
— Microservices deployed in Cloud Run scale automatically based on the number of incoming requests, without you having to configure or manage a full-fledged Kubernetes cluster. Cloud Run scales to zero — that is, uses no resources — if there are no requests.
Split traffic
— Cloud Run enables you to split traffic between multiple revisions, so you can perform gradual rollouts such as canary deployments or blue/green deployments.
Custom domains
— You can set up custom domain mapping in Cloud Run, and it will provision a TLS certificate for your domain.
Automatic redundancy
— Cloud Run offers automatic redundancy so you don't have to worry about creating multiple instances for high availability.
With Cloud Run, you write your code in your favorite language and/or use a binary library of your choice. Then push it to Cloud Build to create a container build. With a single command — gcloud run deploy — you go from a container image to a fully managed web application that runs on a domain with a TLS certificate and autoscales with requests.
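The build-and-deploy flow above comes down to two commands; the service name, region, and image path in this sketch are illustrative assumptions:

```shell
# Build a container image from the source in the current directory
# (requires a Dockerfile or buildpack-compatible source).
gcloud builds submit --tag gcr.io/PROJECT_ID/hello-app

# Deploy the image as a fully managed Cloud Run service; Cloud Run
# provisions the HTTPS endpoint and TLS certificate automatically.
gcloud run deploy hello-app \
  --image gcr.io/PROJECT_ID/hello-app \
  --region us-central1 \
  --allow-unauthenticated
```

The deploy command prints the stable HTTPS URL of the new revision; subsequent deploys create new revisions that can receive split traffic.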
A Cloud Run service can be invoked in the following ways:
HTTPS: You can send HTTPS requests to trigger a Cloud Run-hosted service. Note that all Cloud Run services have a stable HTTPS URL. Some use cases include:
Custom RESTful web API
Private microservice
HTTP middleware or reverse proxy for your web applications
Prepackaged web application
gRPC: You can use gRPC to connect Cloud Run services with other services — for example, to provide simple, high-performance communication between internal microservices. gRPC is a good option when you:
Want to communicate between internal microservices
Support high data loads (gRPC uses protocol buffers, which are up to seven times faster than REST calls)
Need only a simple service definition and you don't want to write a full client library
Use streaming gRPCs in your gRPC server to build more responsive applications and APIs
WebSockets: WebSockets applications are supported on Cloud Run with no additional configuration required. Potential use cases include any application that requires a streaming service, such as a chat application.
Trigger from Pub/Sub: You can use Pub/Sub to push messages to the endpoint of your Cloud Run service, where the messages are subsequently delivered to containers as HTTP requests. Possible use cases include:
Transforming data after receiving an event upon a file upload to a Cloud Storage bucket
Processing your Google Cloud operations suite logs with Cloud Run by exporting them to Pub/Sub
Publishing and processing your own custom events from your Cloud Run services
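When Pub/Sub pushes to a Cloud Run endpoint, the message arrives as a JSON envelope whose `data` field is base64-encoded. A minimal stdlib sketch of the decoding step a service would perform (the envelope below follows the Pub/Sub push format; the function name and attribute values are illustrative):

```python
import base64
import json

def extract_message(body: bytes) -> dict:
    """Decode a Pub/Sub push envelope delivered to a Cloud Run service.

    Returns the message attributes and the decoded payload text.
    """
    envelope = json.loads(body)
    message = envelope["message"]
    payload = base64.b64decode(message.get("data", "")).decode("utf-8")
    return {
        "attributes": message.get("attributes", {}),
        "payload": payload,
    }

# Example envelope shaped like what Pub/Sub POSTs to the endpoint:
example = json.dumps({
    "message": {
        "data": base64.b64encode(b"file uploaded").decode("ascii"),
        "attributes": {"bucket": "demo-bucket"},
    },
    "subscription": "projects/demo/subscriptions/run-push",
}).encode("utf-8")

result = extract_message(example)
# result["payload"] == "file uploaded"
```

The service should return a 2xx status after processing so Pub/Sub marks the message acknowledged; any other status causes redelivery.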
Running services on a schedule: You can use Cloud Scheduler to securely trigger a Cloud Run service on a schedule. This is similar to using cron jobs. Possible use cases include:
Performing backups on a regular basis
Performing recurrent administration tasks, such as regenerating a sitemap or deleting old data, content, configurations, synchronizations, or revisions
Generating bills or other documents
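Setting up a scheduled invocation like those above is a single Cloud Scheduler command; the job name, cron schedule, service URL, and service account in this sketch are illustrative assumptions:

```shell
# Invoke a Cloud Run service every night at 02:00, authenticating with
# a service account that has been granted the Cloud Run Invoker role.
gcloud scheduler jobs create http nightly-backup \
  --schedule="0 2 * * *" \
  --uri="https://backup-service-abc123-uc.a.run.app/run" \
  --http-method=POST \
  --oidc-service-account-email=scheduler@PROJECT_ID.iam.gserviceaccount.com
```

Using an OIDC service account identity means the Cloud Run service can remain private and still accept the scheduled calls.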
Executing asynchronous tasks: You can use Cloud Tasks to securely enqueue a task to be asynchronously processed by a Cloud Run service. Typical use cases include:
Handling requests through unexpected production incidents
Smoothing traffic spikes by delaying work that is not user-facing
Reducing user response time by delegating slow background operations, such as database updates or batch processing, to be handled by another service
Limiting the call rate to backend services like databases and third-party APIs
Events from Eventarc: You can trigger Cloud Run with events from more than 60 Google Cloud sources. For example:
Use a Cloud Storage event (via Cloud Audit Logs) to trigger a data processing pipeline
Use a BigQuery event (via Cloud Audit Logs) to initiate downstream processing in Cloud Run each time a job is completed
Cloud Run and Cloud Functions are both fully managed services that run on Google Cloud's serverless infrastructure, auto-scale, and handle HTTP requests or events. They do, however, have some important differences:
Cloud Functions lets you deploy snippets of code (functions) written in a limited set of programming languages, whereas Cloud Run lets you deploy container images using the programming language of your choice.
Cloud Run also supports the use of any tool or system library from your application; Cloud Functions does not let you use custom executables.
Cloud Run offers a longer request timeout duration of up to 60 minutes, whereas with Cloud Functions the request timeout can be set as high as 9 minutes.
Cloud Functions only sends one request at a time to each function instance, whereas by default Cloud Run is configured to send multiple concurrent requests on each container instance. This is helpful to improve latency and reduce costs if you're expecting large volumes.
App Engine is a fully managed serverless compute option in Google Cloud that you can use to build and deploy low-latency, highly scalable applications. App Engine makes it easy to host and run your applications, scaling them from zero to planet scale without you having to manage infrastructure. App Engine is recommended for a wide variety of applications, including web applications that require low-latency responses and web frameworks that support routes, HTTP methods, and APIs.
App Engine offers two environments; here's how to choose one for your application:
App Engine Standard — Supports specific runtime environments where applications run in a sandbox. It is ideal for apps with sudden and extreme traffic spikes because it can scale from zero to many requests as needed. Applications deploy in a matter of seconds. If your required runtime is supported and it's an HTTP application, then App Engine Standard is the way to go.
App Engine Flex — Is open and flexible and supports custom runtimes because the application instances run within Docker containers on Compute Engine. It is ideal for apps with consistent traffic and regular fluctuations because the instances scale from one to many. Along with HTTP applications, it supports applications requiring WebSockets. The max request timeout is 60 minutes.
No matter which App Engine environment you choose, the app creation and deployment process is the same. First write your code, next specify the app.yaml file with runtime configuration, and finally deploy the app on App Engine using a single command: gcloud app deploy.
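As a concrete sketch of the configuration step, a minimal app.yaml for the Standard environment might look like this; the runtime, instance class, and scaling bounds are illustrative assumptions:

```yaml
runtime: python39          # a supported Standard environment runtime
instance_class: F1

automatic_scaling:
  min_instances: 0         # scale to zero when idle
  max_instances: 10

handlers:
  - url: /.*
    script: auto           # route all requests to the app
```

With this file next to your code, gcloud app deploy builds, uploads, and serves a new version of the app.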
Developer friendly — A fully managed environment lets you focus on code while App Engine manages the infrastructure.
Fast responses — App Engine integrates seamlessly with Memorystore for Redis, enabling a distributed in-memory data cache for your apps.
Powerful application diagnostics — Cloud Monitoring and Cloud Logging help monitor the health and performance of your app, and Cloud Debugger and Error Reporting help diagnose and fix bugs quickly.
Application versioning — Easily host different versions of your app, and easily create development, test, staging, and production environments.
Traffic splitting — Route incoming requests to different app versions for A/B tests, incremental feature rollouts, and similar use cases.
Application security — Helps safeguard your application: define access rules with the App Engine firewall and leverage managed SSL/TLS certificates, provided by default on your custom domain at no additional cost.
Cloud Functions is a fully managed, event-driven, serverless function-as-a-service (FaaS) offering. It is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is a piece of code triggered when an event being watched is fired. Your code executes in a fully managed environment; there is no need to provision any infrastructure or worry about managing servers as traffic increases or decreases. Cloud Functions is also fully integrated with Cloud Operations for observability and diagnosis. Because Cloud Functions is based on an open source FaaS framework, it is easy to migrate.
To use Cloud Functions, just write the logic in any of the supported languages (Go, Python, Java, Node.js, PHP, Ruby, .NET); deploy it using the console, API, or Cloud Build; and then trigger it via an HTTP(S) request from any service, a file upload to Cloud Storage, events in Pub/Sub or Firebase, or even a direct call through the command-line interface (CLI).
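As a sketch of that flow, an HTTP-triggered function in Python might look like the following. The function and greeting logic are illustrative assumptions; in the deployed environment, `request` is a Flask request object supplied by the Cloud Functions runtime.

```python
def make_greeting(name: str) -> str:
    """Pure logic, kept separate from the trigger so it is easy to test."""
    return f"Hello, {name}!"

def hello_http(request):
    """HTTP entry point; in the Cloud Functions runtime, `request` is a
    flask.Request, so query parameters arrive via request.args."""
    name = request.args.get("name", "World")
    return make_greeting(name)
```

A function like this could then be deployed with gcloud functions deploy and invoked at its HTTPS URL, e.g. `?name=GCP` to vary the greeting.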
Cloud Functions augments existing cloud services and allows you to address an increasing number of use cases with arbitrary programming logic. It provides a connective layer of logic that lets you write code to connect and extend cloud services. Listen and respond to a file upload to Cloud Storage, a log change, or an incoming message on a Pub/Sub topic.
Pricing is based on the number of events, compute time, memory, and ingress/egress requests, and costs nothing while the function is idle. For security, you can use Identity and Access Management (IAM) to define which services or personnel can access the function, and VPC controls to define network-based access.
Data processing/ETL
