Description

When adopting cloud infrastructure, you are often looking to modernize the automation of workflows such as continuous integration and software delivery. Minimizing operational overhead via fully managed solutions such as Cloud Build can be tough. Moreover, learning Cloud Build’s API and build schema, scalability, security, and integrating Cloud Build with other external systems can be challenging. This book helps you to overcome these challenges by cementing a Google Cloud Build foundation.
The book starts with an introduction to Google Cloud Build and explains how it brings value via automation. You will then configure the architecture and environment in which builds run while learning how to execute these builds. Next, you will focus on writing and configuring fully featured builds and executing them securely. You will also review Cloud Build's functionality with practical applications and set up a secure delivery pipeline for GKE. Moving ahead, you will learn how to manage safe rollouts of cloud infrastructure with Terraform. Later, you will build a workflow from local source to production in Cloud Run. Finally, you will integrate Cloud Build with external systems while leveraging Cloud Deploy to manage rollouts.
By the end of this book, you’ll be able to automate workflows securely by leveraging the principles of Google Cloud Build.




Cloud Native Automation with Google Cloud Build

Easily automate tasks in a fully managed, scalable, and secure platform

Anthony Bushong

Kent Hua

BIRMINGHAM—MUMBAI

Cloud Native Automation with Google Cloud Build

Copyright © 2022 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Associate Group Product Manager: Rahul Nair

Publishing Product Manager: Niranjan Naikwadi

Senior Editor: Athikho Rishana

Content Development Editor: Divya Vijayan

Technical Editor: Rajat Sharma

Copy Editor: Safis Editing

Project Coordinator: Ashwin Kharwa

Proofreader: Safis Editing

Indexer: Hemangini Bari

Production Designer: Prashant Ghare

Marketing Coordinator: Nimisha Dua

First published: October 2022

Production reference: 1160922

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80181-670-0

www.packt.com

For my good amid these, Margaux and Benjamin, and the verse you help me contribute.

- Anthony

To my wife, Lien-ting, and children Mason and Madison, thank you for your support and patience to help make this a reality.

- Kent

Contributors

About the authors

Anthony Bushong is a senior developer relations engineer at Google. Formerly a field engineer and practice lead for Kubernetes and GKE, he worked with companies implementing automation for their Kubernetes infrastructure in Cloud Build – since version 1.3! He now focuses on distilling those experiences into multiple formats of technical content – all with the goal of teaching and enabling people, no matter their background or experience.

I want to thank my mom, dad, grandma, family, and friends – all without whom I am nothing. I also want to thank those who have taken a chance on me in this industry, recognizing that much of what I have been able to achieve has been built on your trust and teachings – especially Diane Anderson and Andrew Milo.

Kent Hua is a global solutions manager focused on application modernization. He has years of experience helping customers modernize enterprise applications, focusing on both business and technical challenges on-premises and in the public cloud. Over the years, he has helped organizations decompose monoliths and, wherever applicable, implement microservice patterns in containers running on Kubernetes. While enabling these organizations, he has identified culture and the automation of processes as critical elements in their modernization journeys.

I want to thank my parents, family, friends, and colleagues who have made me who I am today. Through our interactions and experiences, not a day passes without learning something new.

About the reviewers

Marcelo Costa is a technology lover who, over the past ten years, has worked across both software and data roles. Between company roles and personal projects, he has worked with multiple technologies and business areas, always looking for challenging problems to solve.

He is a cloud computing Google Developer Expert (GDE) and likes to help the community by sharing knowledge with others through articles, tutorials, and open source code. He currently works as a founding engineer at Alvin, an Estonian startup in the data space.

Damith Karunaratne is a group product manager at Google Cloud, who focuses on driving continuous integration and software supply chain security efforts. Prior to joining Google, Damith spent over a decade helping companies of all shapes and sizes solve complex software challenges, including leading various CI/CD and DevOps initiatives.

Damith earned a bachelor’s degree in science with a computer science emphasis from McMaster University.

Table of Contents

Preface

Part 1: The Fundamentals

1

Introducing Google Cloud Build

Technical requirements

The value of automation

Before there was the cloud

Making sure there are enough resources

Who needs to manage all of this?

Reducing toil with managed services

Cloud-native automation with Google Cloud Build

GCP service integrations

Summary

2

Configuring Cloud Build Workers

Technical requirements

How worker pools can be configured in Cloud Build

Prerequisites for running builds on worker pools

Using the default pool

Using private pools

Summary

3

Getting Started – Which Build Information Is Available to Me?

Technical requirements

How your build resources are accessed

Build submission and status

Using the GCP console

Build operations

Summary

Part 2: Deconstructing a Build

4

Build Configuration and Schema

Defining the minimum configuration for build steps

Setting up your environment

Defining your build step container image

Defining your build step arguments

Adjusting the default configuration for the build steps

Defining the relationships between individual build steps

Configuring build-wide specifications

Summary

5

Triggering Builds

Technical requirements

The anatomy of a trigger

Integrations with source code management platforms

Defining your own triggers

Webhook triggers

Manual triggers

Summary

6

Managing Environment Security

Defense in depth

The principle of least privilege

Accessing sensitive data and secrets

Secret Manager

Cloud Key Management

Build metadata for container images

Provenance

Attestations

Securing the network perimeter

Summary

Part 3: Practical Applications

7

Automating Deployment with Terraform and Cloud Build

Treating infrastructure as code

Simple and straightforward Terraform

The separation of resource creation and the build steps

Building a custom builder

Managing the principle of least privilege for builds

Human-in-the-loop with manual approvals

Summary

8

Securing Software Delivery to GKE with Cloud Build

Creating your build infrastructure and deployment target

Enabling foundational Google Cloud services

Setting up the VPC networking for your environment

Setting up your private GKE cluster

Securing build and deployment infrastructure

Creating private pools with security best practices

Securing access to your private GKE control plane

Applying POLP to builds

Creating build-specific IAM service accounts

Custom IAM roles for build service accounts

Configuring release management for builds

Integrating SCM with Cloud Build

Gating builds with manual approvals

Executing builds via build triggers

Enabling verifiable trust in artifacts from builds

Building images with build provenance

Utilizing Binary Authorization for admission control

Summary

9

Automating Serverless with Cloud Build

Understanding Cloud Functions and Cloud Run

Cloud Functions

Cloud Run

Cloud Functions 2nd gen

Comparing Cloud Functions and Cloud Run

Building containers without a build configuration

Dockerfile

Language-specific tooling

Buildpacks

Automating tasks for Cloud Run and Cloud Functions

Deploying services and jobs to Cloud Run

Deploying to Cloud Functions

Going from source code directly to containers running in Cloud Run

Progressive rollouts for revisions of a Cloud Run service

Securing production with Binary Authorization

Summary

10

Running Operations for Cloud Build in Production

Executing in production

Leveraging Cloud Build services from different projects

Securing build triggers even further

Notifications

Deriving more value from logs

Configurations to consider in production

Making builds more dynamic

Changes in Cloud Build related to secret management

Speeding up your builds

Summary

Part 4: Looking Forward

11

Looking Forward in Cloud Build

Implementing continuous delivery with Cloud Deploy

The relationship between Cloud Build and Cloud Deploy

Summary

Index

Other Books You May Enjoy

Preface

This book starts by discussing the value of managed services and how they can help organizations focus on the business problems at hand. Build pipelines are critical in organizations because they help build, test, and validate code before it is deployed into production environments.

We then jump right into Cloud Build: the fundamentals, configuration options, compute options, build execution, build triggering, and build security.

The book’s remaining chapters close with practical examples of how to use Cloud Build in automation scenarios. While Cloud Build can help with software build life cycles, it can also coordinate delivery to runtimes such as serverless and Kubernetes.

Who this book is for

This book is for cloud engineers and DevOps engineers who manage cloud environments and want to automate workflows in a fully managed, scalable, and secure platform. It is assumed that you have an understanding of cloud computing fundamentals, software delivery, and containerization.

What this book covers

Chapter 1, Introducing Google Cloud Build. Establishes the foundation of serverless and managed services, focusing on software build life cycles with Cloud Build.

Chapter 2, Configuring Cloud Build Workers. It's a managed service, but we still need compute; this chapter discusses the compute options available.

Chapter 3, Getting Started – Which Build Information Is Available to Me?. Kicking off the first build and discovering the information available once it has started, to help you confirm success or debug issues.

Chapter 4, Build Configuration and Schema. You can get started quickly with Cloud Build, but knowing the configuration options can help you save time.

Chapter 5, Triggering Builds. This is the critical component for automation: reacting when something happens to your source files or triggering builds from existing automation tools.

Chapter 6, Managing Environment Security. It's a managed service, but there are still shared responsibilities: determining who can execute pipelines, what pipelines have access to, and how to integrate securely with other services.

Chapter 7, Automating Deployment with Terraform and Cloud Build. We can also leverage Cloud Build to automate the application of Terraform manifests to build out infrastructure.

Chapter 8, Securing Software Delivery to GKE with Cloud Build. Discovering patterns and capabilities available in Google Cloud for secure container delivery to Google Kubernetes Engine.

Chapter 9, Automating Serverless with Cloud Build. It’s serverless, but we still need to automate getting from source code to something running.

Chapter 10, Running Operations for Cloud Build in Production. Additional considerations may be needed when preparing to run Cloud Build in production and when working with multiple teams.

Chapter 11, Looking Forward in Cloud Build. What’s next? For instance, we’ll look at how Cloud Deploy leverages Cloud Build under the hood.

To get the most out of this book

You need to have experience with software development, software delivery, and pipelines to get the most out of the book and Cloud Build.

You will need access to a Google Cloud account. Examples in the book can be performed in Google Cloud’s Cloud Shell, which has the majority of the tools and binaries noted in the book. If external resources are needed, they are noted in the respective chapters.

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

The book will leverage examples from different repositories noted in the respective chapters.

Code examples in the book can be found at https://github.com/PacktPublishing/Cloud-Native-Automation-With-Google-Cloud-Build. If there’s an update to the code, it will be updated in the GitHub repository.

Packt Publishing also has other code bundles from its rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/C5G3h.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “In this case, we will be creating a private pool of workers that have the e2-standard-2 machine type, with 100 GB of network-attached SSD, and located in us-west1.”

A block of code is set as follows:

  # Docker Build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t',
           'us-central1-docker.pkg.dev/${PROJECT_ID}/image-repo/myimage',
           '.']

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

...
INFO[0002] No cached layer found for cmd RUN npm install
INFO[0002] Unpacking rootfs as cmd COPY package*.json ./ requires it.
...
INFO[0019] Taking snapshot of files...

Any command-line input or output is written as follows:

$ project_id=$(gcloud config get-value project)

$ vpc_name=packt-cloudbuild-sandbox-vpc

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the Administration panel.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you've read Cloud Native Automation with Google Cloud Build, we'd love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Part 1: The Fundamentals

This part of the book will introduce you to Cloud Build. You will understand the context in which Cloud Build exists and brings users value, the core user journey when using Cloud Build, the architecture and environment in which builds run, and the means by which you can inspect and review a build execution.

This part comprises the following chapters:

Chapter 1, Introducing Google Cloud Build
Chapter 2, Configuring Cloud Build Workers
Chapter 3, Getting Started – Which Build Information Is Available to Me?

1

Introducing Google Cloud Build

To properly introduce Google Cloud Build and the value it provides to its users, it’s important to review the value that automation brings to IT organizations for common workflows such as cloud infrastructure provisioning and software delivery.

Automating these tasks can help increase developer productivity for organizations; doing so with a managed service enables this productivity at a lower cost of operation, allowing individuals and teams to focus on the business task at hand rather than managing all the infrastructure that runs the automation. There has also been an increase in automation needs for AI/ML workloads (https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning), which go beyond typical developer workflows. In this book, we will be focusing on the developer workflow automation (that is, continuous integration) aspects of Cloud Build.

In this chapter, we will review Google Cloud Build through this lens, specifically discussing the following:

The value of automation
Before there was the cloud
Reducing toil with managed services
Cloud-native automation with Google Cloud Build

Technical requirements

Data center and infrastructure concepts
Public cloud concepts
Software build concepts

The value of automation

The compilation of applications and services comes in all shapes and sizes. It may seem straightforward that code just becomes a packaged artifact, but for many scenarios, builds can have a number of complex steps with many dependencies. The steps involved in creating and testing an artifact may be manual, automated, or a combination of both to form a build pipeline. The following figure demonstrates an example build pipeline with a set of activities critical to the building of an application:

Figure 1.1 – Example build pipeline
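A pipeline such as this is typically captured in a declarative build configuration that lives alongside the source code. The following is a minimal Cloud Build sketch only, assuming a Node.js application and an illustrative Artifact Registry image path (neither is prescribed here), that installs dependencies, runs tests, and packages a container image:

# A minimal sketch; the npm-based app and the image path are assumptions
steps:
  # Install dependencies and run unit tests
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['test']
  # Package the application as a container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t',
           'us-central1-docker.pkg.dev/${PROJECT_ID}/image-repo/myimage',
           '.']
# Push the resulting image once all steps have succeeded
images:
  - 'us-central1-docker.pkg.dev/${PROJECT_ID}/image-repo/myimage'

Each step runs in its own container image, and checking this file into the same repository as the application keeps the pipeline itself versioned and repeatable.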

Running these builds manually could lead to careless mistakes and an outcome that may not be consistent or repeatable. When code is being built, it is very important to document even the smallest changes to the build that made the difference between something working and not working. This is where the use of a source code management (SCM) system, such as Git (https://git-scm.com), becomes critical in our overall pipeline. If a working build was actually the result of a build step changed locally, not being able to repeat that change can result in frustration and lost productivity.

This is especially relevant from the perspective of handing off work to a colleague. Having to understand and tweak a set of manual steps in a build would not be a good use of that colleague's time, when they could instead be focused on the code at hand. The time of each member of an organization is valuable, and it's best to allow that individual to focus on being productive. This could be during a production outage, where time is best spent trying to fix the root cause of the outage rather than analyzing how to actually build the code. Depending on the impact of the outage, every second could have a monetary impact on the organization. In scenarios ranging from a simple development handoff to a production outage, build automation would be very beneficial.

Imagine if a developer could solely focus on code development, rather than analyzing manual steps or difficult-to-execute builds. An organization might have automation in place, but it must be seamless for the developer in order to maximize productivity. This can be in the form of the following:

Coding standards
Boilerplate code
Blueprints on how to use the pipeline

The preceding reference points apply to the automation of both the inner loop and the outer loop of software development. The inner loop typically consists of local development by a developer. Once code is completed in the inner loop, a merge request is created to add it to an integration branch. Once merged into the integration branch, the typical build pipeline starts: this is the outer loop. Providing a starting point in the form of standards may not itself be automation; however, those standards can be baked into the configuration files, giving developers a foundation that still leaves them the flexibility to apply specific preferences.

Figure 1.2 – Example inner and outer loops

The ecosystem of tools and integrations that has been built around Git has helped drive the importance of version controlling not only source code but also the configurations that define a build pipeline. GitOps (https://www.weave.works/blog/the-history-of-gitops) primarily focuses on infrastructure as code (IaC), ensuring that a runtime environment reflects configurations declaratively stored in Git. The common use of Git tooling across developer and operations teams reduces the friction of onboarding, which also makes GitOps critical for end-to-end automation.

Automation helped reduce end-to-end deployment times for this organization: https://cloud.google.com/customers/reeport/.

Once automation is streamlined, the team that owns the pipeline is able to aggregate metrics in order to determine areas for improvement at scale. This is far more achievable than when builds are executed manually by each developer. As mentioned earlier, pipelines in an organization could also include manual steps. Metrics could identify patterns where manual steps could possibly be automated. Reducing manual steps would increase the efficiency of a pipeline while also reducing potential human errors. There may be situations where manual steps aren't automatable, but identifying them is key so that they can be reconsidered in the future or so that teams can focus on other steps that can be improved.

This can reduce developers’ frustration and improve overall productivity across teams, which can benefit the organization in the following ways:

Delivering features faster
Reducing the amount of time it takes to resolve issues
Allowing teams to focus on other business-critical activities
Feedback for continuous improvement of the pipeline

The value of automation can help an organization in many aspects. While metrics can be manually gathered, they can be most effective when aggregated in an automated pipeline. Decisions can be made to determine the most effective changes to a build pipeline. The metrics gathered from frameworks in place, such as GitOps, can also help feed into improving the end-to-end pipeline, not just the automation of source code compilation. Continuous improvement becomes more achievable when an organization can use metrics for data-driven decisions.

Before there was the cloud

There are a variety of tools on the market, ranging from open source to closed source and self-managed to hosted offerings, supporting build pipelines. Availability of the pipeline solution is critical in ensuring that code is built in a timely manner; otherwise, it may impact the productivity of multiple teams. Organizations may have separate teams that are responsible for maintaining the solution that executes the build pipeline.

Making sure there are enough resources

For self-managed solutions, the maintenance includes the underlying infrastructure, OS, tools, and libraries that make up the pipeline infrastructure. Scale is also a factor for build pipelines; depending on their complexity, organizations may have multiple builds occurring at the same time. Build pipelines need at least compute, memory, and disk to execute, provided by what are typically referred to as workers within the build platform. A build pipeline may consist of multiple jobs, steps, or tasks that must be completed for the pipeline to finish executing. The build platform assigns tasks to workers, so workers need to be available for those tasks to be assigned and executed. Similar to capacity planning and sizing for applications, enough compute, memory, storage, and any other resources for workers must be planned out.

There must be enough hardware to handle builds at peak. Peak is an important consideration because, in a data center scenario, hardware resources are somewhat finite; it takes time to acquire and set up new hardware. Technologies such as virtualization have given us the ability to overprovision compute resources, but at some point, physical hardware becomes the bottleneck for growth if our build needs become more demanding. While an organization needs to size for peak, builds are not constantly running at peak, so the allocated resources are not always fully used. Virtualization, as mentioned previously, may help by letting other workloads consume compute during off-peak time, but this may require significant coordination efforts throughout the organization. We may be left with underutilized and wasted resources.

Figure 1.3 – Under-utilized resources when allocating for peak utilization

Who needs to manage all of this?

A team typically maintains and manages the build infrastructure within an organization. This team may be dedicated to ensuring the environment is available, resources are kept up to date, and new capabilities are added to support organizational needs. Requirements can come from all directions, such as developers, operators, platform administrators, and infrastructure administrators. Different build and pipeline tools on the market do help to facilitate some of this by offering plugins and extensions to extend capabilities. For instance, Jenkins has 1,800+ community-contributed plugins (https://plugins.jenkins.io/) at the time of writing this book. While that is quite a number, it also means teams have to ensure plugins are updated and keep up with the plugins' life cycles. For instance, if a plugin is no longer being maintained, what are the alternatives? If multiple plugins perform similar functions, which one should be chosen? A rich community is beneficial, as popular plugins bubble up and may have better support.

As mentioned, not having enough capacity or improperly sizing the build infrastructure could lead to slower builds, impacting productivity. Builds come in all shapes; some run in seconds, while others can take hours. For builds that take hours, the developer and many downstream teams are left waiting. Just because a build is submitted successfully does not mean it completes successfully; it could fail at any point of the build, leading to lost time.

The team that is responsible for managing the build infrastructure is also likely responsible for maintaining a service-level agreement (SLA) with the users of the system. The solution itself may also have been designed by another team. As noted earlier, if builds are not running, there may be an associated cost, because developer productivity is impacted, product releases are delayed, or critical patches are slow to be pushed out to the system. This needs to be taken into account when self-managing a solution. While this was the norm for much of the industry before there was the cloud, vendors developed tools and platforms to ease the burden of infrastructure management in on-premises enterprises. Managed service providers (MSPs) also provided tooling layers to help organizations manage compute resources, but organizations still had to take into account resources that were being spun up or down.

Security is a critical factor to consider when organizations need to manage their own software components on top of infrastructure, or the entire stack. It's not just the vulnerability of the code being built; the underlying build system needs to be securely maintained as well. In the last few years, a significant vulnerability was exposed across all industries (https://orangematter.solarwinds.com/2021/05/07/an-investigative-update-of-the-cyberattack/).

Eventually, when public cloud resources became available, many of the same patterns discussed could be used – in this case, infrastructure as a service (IaaS) offerings from a cloud provider for handling the compute infrastructure. These eased the burden of having to deal with compute resources but, as with MSPs, the notion of workers still had to be determined and managed.

Organizations have had to deal with the software build pipeline platform regardless of whether the infrastructure was managed on-premises in their data center, in a co-location facility, or by an IaaS provider. It is critical to ensure that the platform is available and has sufficient capacity for workers to complete associated tasks. In many organizations, this consisted of dedicated teams that managed the infrastructure or teams that wore multiple hats to ensure the build platform was operational.

Reducing toil with managed services

In the previous section, we discussed the efforts involved in maintaining a platform for building applications and services. Many of the activities described in keeping the environment up and running involve some toil; Google's SRE handbook (https://sre.google/sre-book/eliminating-toil/) goes further into which elements of IT tasks can be considered toil.

If we are able to avoid toil and know that a provider manages the underlying build infrastructure, we are able to focus on what is more important: the application that helps drive our business. This is one of the goals of managed services: letting the provider handle the underlying details, providing a consistent syntax that becomes the common language between teams, providing compute resources as needed, and not billing when the service is not being utilized.

It is one less component of a build pipeline to consider, as the provider maintains the underlying infrastructure and can provide the team with scale whenever needed. The managed service provider would be responsible for making sure that there are enough workers to execute all the jobs in the build pipeline. However, managed services could also be seen as a form of lock-in to a particular vendor or cloud provider. In most cases, a managed service has the best integration with other services from the same provider. This is where adding capabilities becomes much more streamlined, including but not limited to the following (a secrets management example is sketched after this list):

Triggering mechanisms
Secrets management
Securing communication and data transfer between integrated services
Observability
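As one illustration of the secrets management integration, a build can reference secrets stored in Secret Manager directly from its configuration rather than hardcoding credentials. The following is a sketch only, assuming secrets named registry-username and registry-password already exist in the project; both names are placeholders:

steps:
  # Log in to an external registry without exposing credentials in the config;
  # secret values are referenced with $$ when a step is invoked through bash
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
    secretEnv: ['USERNAME', 'PASSWORD']
availableSecrets:
  secretManager:
    - versionName: 'projects/$PROJECT_ID/secrets/registry-username/versions/latest'
      env: 'USERNAME'
    - versionName: 'projects/$PROJECT_ID/secrets/registry-password/versions/latest'
      env: 'PASSWORD'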

These integrations are there to help save time and, in reference to the original theme of this book, allow an organization to focus on the application at hand. Though the preceding list notes important capabilities, the flexibility of a managed service and its ability to integrate third-party capabilities are also important considerations when choosing one.

As noted earlier, if an organization chooses to manage its own build solution, it may be responsible for the availability of the platform. In the case of a managed service, the provider is responsible for the availability and may establish an SLA with the customer using its services. The customer would have to determine whether the communicated SLA is acceptable to the business.

Managed services offered by providers reduce the amount of toil needed to keep the build platform up and running. They allow teams at an organization to focus on critical business functions or revenue-generating activities. Compared to on-premises environments, not having to wait for hardware procurement or setup allows for maximum business flexibility. The provider is responsible for making sure the platform is up to date, allowing fast-paced groups within the organization to experiment with newer capabilities.

Cloud-native automation with Google Cloud Build

This brings us to Cloud Build, a Google Cloud offering that is a serverless platform. This means