The Definitive Guide to Modernizing Applications on Google Cloud - Steve (Satish) Sangapu - E-Book

Description

Legacy applications, which comprise 75–80% of all enterprise applications, often end up being stuck in data centers. Modernizing these applications to make them cloud-native enables them to scale in a cloud environment without taking months or years to start seeing the benefits. This book will help software developers and solutions architects to modernize their applications on Google Cloud and transform them into cloud-native applications.
This book helps you to build on your existing knowledge of enterprise application development and takes you on a journey through the six Rs: rehosting, replatforming, rearchitecting, repurchasing, retiring, and retaining. You'll learn how to modernize a legacy enterprise application on Google Cloud and build on existing assets and skills effectively. Taking an iterative and incremental approach to modernization, the book introduces the main services in Google Cloud in an easy-to-understand way that can be applied immediately to an application.
By the end of this Google Cloud book, you'll have learned how to modernize a legacy enterprise application by exploring various interim architectures and tooling to develop a cloud-native microservices-based application.

This e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 488

Publication year: 2022




The Definitive Guide to Modernizing Applications on Google Cloud

The what, why, and how of application modernization on Google Cloud

Steve (Satish) Sangapu

Dheeraj Panyam

Jason Marston

BIRMINGHAM—MUMBAI

The Definitive Guide to Modernizing Applications on Google Cloud

Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Rahul Nair

Publishing Product Manager: Rahul Nair

Senior Editor: Arun Nadar

Content Development Editor: Nihar Kapadia

Technical Editor: Shruthi Shetty

Copy Editor: Safis Editing

Project Coordinator: Ajesh Devavaram

Proofreader: Safis Editing

Indexer: Subalakshmi Govindhan

Production Designer: Alishon Mendonca

First published: December 2021

Production reference: 2121121

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80020-979-4

www.packt.com

To the memory of my father, Ishwar, and my mother, Savithri, for their sacrifices and exemplifying that hard work has positive outcomes. To my wife, Sanju, for being my loving and supportive partner, and to my children, Riya and Risha, for keeping us on our toes.

– Steve (Satish) Sangapu

To my parents, who have encouraged me in writing this book, with their push for tenacity ringing in my ears.

– Dheeraj Panyam

Contributors

About the authors

Steve (Satish) Sangapu has been working with software since 2000. He specializes in migrating and modernizing applications from monoliths to containerized microservices as well as creating data engineering pipelines to mine vast amounts of structured and unstructured data.

He has extensive experience successfully leading large, cross-functional, geographically dispersed teams utilizing modern Agile development methodologies while collaborating effectively with product teams in creating high-performance, fault-tolerant, and high-availability systems.

He also holds seven patents from the United States Patent and Trademark Office and certifications from Carnegie Mellon Software Engineering Institute and Google Cloud.

I want to thank the people who have given me and the people around me love and support in different ways in my life.

Dheeraj Panyam has been working in the IT industry since 2000. His experience spans diverse domains (optical, telecom, retail, and healthcare) and covers all phases of the SDLC, including application development, production support, QA automation, and cloud architecture. He lives in India and collaborates with a Google Cloud consulting company, helping them design solutions and architecture set up on public cloud platforms.

He holds multiple Google Cloud certifications in addition to other certifications in networking and testing.

Jason Marston is a Cloud Solution Architect based in England. He was recruited by Microsoft because of his OSS background. Jason has worked with Java since version 1 and has a long history with open source. He has over 30 years of experience in developing software and now helps organizations migrate and modernize legacy applications to the cloud. Jason was an SME in the Worldwide Communities project at Microsoft and, as part of the leadership team for those communities, helped many people solve their problems by adopting Java on Azure. In his spare time, Jason reads science fiction books and has two children who think he is a geek/nerd.

About the reviewer

Radhakrishnan (Krishna) Gopal is a cloud evangelist, seasoned technology professional, and mentor with over 22 years of industry experience across all major cloud hyperscalers, including AWS, Azure, and Google Cloud. He is currently helping organizations drive business value through cloud adoption and innovation. He has worked in many facets of IT throughout his career and delivered high-quality, mission-critical, and innovative technology solutions leveraging multi-cloud, data, AI, and intelligent automation. He is a Google Cloud Certified Professional Cloud Architect, Google data engineer, Azure certified solutions architect expert, Azure data engineer, data science associate, AI engineer, and AWS Certified Solutions Architect Associate. He loves to explore new frontiers of technology and apply them in solutions that make his clients successful.

Table of Contents

Preface

Section 1: Cloud-Native Application Development and App Modernization in Google Cloud

Chapter 1: Cloud-Native Application Fundamentals

The cloud-native ecosystem

Benefits of cloud-native applications

Increased speed of delivery

Increased scalability

Increased resiliency

Mixed technology stack and workforce

Continuous integration and delivery

Increased automation

Principles of cloud-native architecture

Principle 1 – lightweight microservices

Principle 2 – leveraging automation

Principle 3 – DevOps culture

Principle 4 – better to go managed

Principle 5 – innovate

Limitations of microservices

Applying the 12-factor app principles on Google Cloud

Code base

Dependencies

Config

Backing services

Build, release, run

Processes

Port binding

Concurrency

Disposability

Dev/prod parity

Logs

Admin processes

Summary

Chapter 2: End-to-End Extensible Tooling for Cloud-Native Application Development

Moving past third-party services – the beauty of end-to-end tooling

Google Cloud Code

Features and benefits of Cloud Code

The role of Cloud Code in the cloud-native app development pipeline

Google Cloud Build

Features and benefits of Cloud Build

The role of Cloud Build in the cloud-native app development pipeline

Google Container Registry

Features and benefits of GCR

The next-gen container registry – Artifact Registry

The role of GCR in the cloud-native app development pipeline

Google Cloud Run

Features and benefits of Cloud Run

The role of Google Cloud Run in the cloud-native app development pipeline

Google Kubernetes Engine

Features and benefits of GKE

The role of GKE in the cloud-native app development pipeline

Operations suite

Features of Google Cloud Monitoring

Features of Google Cloud Logging

The role of the Cloud operations suite in the cloud-native app development pipeline

Summary

Chapter 3: Cloud-Native Architecture Patterns and System Architecture Tenets

Cloud-native patterns

The scope of cloud-native patterns

Solving challenges with cloud-native patterns

Be proactive, not reactive

Scaling and performance

Deployments

Resiliency and availability

Monitoring

Security

Cloud-native design patterns

Microservices

Strangler applications

Decomposition patterns

Event-driven patterns

Command Query Responsibility Segregation

The saga pattern

Multiple service instances

Canary deployments

Stateless services

Immutable infrastructure

Anti-corruption layer

API composition

Event sourcing

The Retry pattern

Circuit breaker pattern

The bulkhead pattern

Using the cloud-native pattern judiciously

Hybrid and multi-cloud architecture recommendations

Distributed deployment patterns

Redundant deployment patterns

Summary

Section 2: Selecting the Right Google Cloud Services

Chapter 4: Choosing the Right Compute Option

Five compute options… and Firebase

Firebase

Cloud Functions

GAE

Cloud Run

GKE

GCE

Pricing

How important is it to choose the right option?

Changing compute options

Making a decision

Summary

Chapter 5: Choosing the Right Database and Storage

Storage and database options on Google Cloud – the big three

GCS – basics

GCS

Cloud SQL

Cloud Firestore (previously Datastore)

Cloud Spanner

Cloud Bigtable

Wrapping up the big five

Additional storage and database options

BigQuery

Filestore

Persistent disks/local solid-state drive (SSD) (block storage)

MemoryStore

Security and flexibility

Summary

Chapter 6: Implementing a Messaging and Scheduling System

Understanding the requirements of a messaging system

Requirement #1: Scalability

Requirement #2: Extensibility

Requirement #3: Agility

Requirement #4: Resiliency

Introduction to asynchronous messaging

Messaging products available (open source and cloud native) on the market

Amazon SQS

Kafka

RabbitMQ

NATS Streaming

Advantages of asynchronous messaging

Introduction to Pub/Sub

What is Cloud Pub/Sub?

Pub/Sub key features

Additional benefits

Pub/Sub Concepts – topics and subscriptions

Pub/Sub model – fan-in and fan-out

Pull versus push types – differences and when to use each type

Getting started with Cloud Pub/Sub

Introduction to Cloud Tasks

Introduction to Cloud Scheduler

Summary

Chapter 7: Implementing Cloud-Native Security

The principles and concepts of cloud security

Economy of Mechanism

Defense in Depth

Principle of Least Privilege

Adding infrastructure security layers (revision)

Cloud IAM

Traditional access control versus Cloud IAM

Concepts of IAM

Entity

Identity

Permissions

Policy

Authentication and authorization

Cloud IAM on Google Cloud Platform

Features of Cloud IAM

Components of Cloud IAM

Members

All authenticated users

All users

Resources

Permissions

Roles

IAM policy bindings

Limitations of Cloud IAM

Cloud Identity

Features of Cloud Identity

Setting up Cloud Identity

Cloud Identity Platform

Features of Cloud Identity Platform

BeyondCorp (a new approach to enterprise security)

Cloud Identity-Aware Proxy (IAP)

Summary

Section 3: Rehosting and Replatforming the Application

Chapter 8: Introducing the Legacy Application

Technical requirements

The infrastructure architecture

The software architecture

Spring Boot

Thymeleaf

Bootstrap

jQuery

Explaining the software architecture

Implementing the software

Spring Boot configuration

Understanding the layers of the application

The presentation layer

The controller layer

The domain layer

Validation and defensive programming

Summary

Chapter 9: The Initial Architecture on Google Compute Engine

Technical requirements

The initial infrastructure design

Designing our network

Designing our network security

Creating the modernization project

Implementing the network

Implementing the VMs

Importing the data

Summary

Chapter 10: Addressing Scalability and Availability

Technical requirements

Designing for scalability and availability

Using instance templates

Using managed instance groups

Using an HTTP(S) load balancer

Summary

Chapter 11: Re-Platforming the Data Layer

Designing for scalability and availability

Using Cloud Memorystore

Provisioning a Cloud Memorystore instance

Updating the Regional Managed Instance Group

Using Cloud SQL

Using Cloud SQL Proxy

Using Cloud Spanner

Provisioning Cloud Spanner

Updating the build

Updating the application settings

Importing data into Cloud SQL

Exporting data from our MySQL virtual machine

Importing to Cloud SQL

Cloud SQL versus Cloud Spanner

Summary

Chapter 12: Designing the Interim Architecture

The infrastructure architecture

Google Identity Platform

Cloud Pub/Sub

The software architecture

Refactoring the frontend and exposing REST services

Adding Google Identity Platform for identity and authentication

Publishing events

Consuming events

Refactoring to microservices

Microservice boundaries

Summary

Chapter 13: Refactoring to Microservices

Technical requirements

Analyzing the structure of the application backend

Refactoring into microservices

Refactoring the database

The web frontend

The Strangler Pattern revisited

Google HTTP(S) Load Balancer Routing

Google App Engine Dispatcher

Apigee API Manager

Containerizing the deployment units with Docker

Summary

Section 4: Refactoring the Application on Cloud-Native/PaaS and Serverless in Google Cloud

Chapter 14: Refactoring the Frontend and Exposing REST Services

Technical requirements

Creating REST controllers

Creating an AngularJS web frontend

Modules

Components

Routing

Services

Authenticating in the web frontend

Setting up Firebase and Google Identity Platform

Initializing Firebase Authentication

Router updates

The authentication cycle

The signout controller

Validating the authentication token in the REST controllers

The authentication filter

Summary

Chapter 15: Handling Eventual Consistency with the Compensation Pattern

Technical requirements

The distributed transaction problem

The compensation pattern

Creating topics and subscriptions with Google Cloud Pub/Sub

Implementing eventual consistency and the compensation pattern

Deploying and testing the application

Summary

Chapter 16: Orchestrating Your Application with Google Kubernetes Engine

Technical requirements

Introducing GKE

Modes of operation

Creating a GKE cluster

Configuring the environment

Kubernetes ConfigMaps and Secrets

Deploying and configuring the microservices

Kubernetes Pods

Kubernetes ReplicaSets

Kubernetes Deployments

Kubernetes Horizontal Pod Autoscalers

Kubernetes Services

Automating the deployment of our components

Configuring public access to the application

Kubernetes-managed certificates

Kubernetes Ingress

When to use GKE

Summary

Chapter 17: Going Serverless with Google App Engine

Technical requirements

Introducing Google App Engine

Google App Engine standard environment

Google App Engine flexible environment

Components of App Engine and the hierarchy of an application deployed on App Engine

Deploying containers to the App Engine flexible environment

Application configuration updates

Deployment configuration

Automating deployment

When to use Google App Engine

Summary

Chapter 18: Future Proofing Your App with Google Cloud Run

Technical requirements

Cloud Run

The Knative stack

Cloud Run environments

Deploying containers to Google Cloud Run

Frontend configuration

Service manifest

Google Cloud Build

Domain name mapping

When to use Cloud Run

Summary

Appendix A: Choosing the Right Migration Strategy

Step 1 – assess

Cataloging existing applications

Educating teams

Choosing what to migrate first

Capable and innovative teams

The effort required for migration

License restrictions and compliance

Can afford downtime

Step 2 – plan

Migration paths versus migration strategies

Choosing the right migration path

Step 3 – migrate

Transferring your data

Deploying workloads

Setting up automated and containerized deployments

Step 4 – optimize

Letting internal teams takeover

Setting up monitoring

Leveraging managed services and automation

Cost and performance

Appendix B: Application Modernization Solutions

Modernizing Java apps

What is Google Anthos?

Preparing to modernize Java apps

Phase 1 – containerizing Java applications

Phase 2 – Refactoring and re-platforming

Modernization strategies (the 6 Rs of modernization)

Retire, retain, and re-architect

Other Books You May Enjoy

Section 1: Cloud-Native Application Development and App Modernization in Google Cloud

On completion of Section 1, you will have gained an understanding of the cloud-native ecosystem, principles and benefits of cloud-native architecture as well as how to apply the 12-factor app principles on Google Cloud. In addition, you will get insights into the end-to-end tooling that goes into developing a cloud-native application on Google Cloud, along with a sample reference architecture. Finally, you will learn how to solve common challenges with cloud-native design patterns as well as hybrid and multi-cloud architecture recommendations.

This part of the book comprises the following chapters:

Chapter 1, Cloud-Native Application Fundamentals

Chapter 2, End-to-End Extensible Tooling for Cloud-Native Application Development

Chapter 3, Cloud-Native Architecture Patterns and System Architecture Tenets

Chapter 1: Cloud-Native Application Fundamentals

Cloud computing brought about a paradigm shift in the world of software engineering and changed how we build applications. The cloud ecosystem powers some of the largest, most powerful, and most innovative applications using the same set of universal principles. However, some of these principles go against best practices in traditional application development, yet they are crucial to the success of a cloud-native application.

In this chapter, we are going to explore these fundamentals and core principles to help you utilize the full potential of the cloud-native ecosystem. After finishing this chapter, you'll have a clear understanding of the following topics and how they are used in day-to-day development on the cloud:

The cloud-native ecosystem

Benefits of cloud-native architecture

Principles of cloud-native architecture

Applying the 12-factor app principles on Google Cloud

The cloud-native ecosystem

The cloud-native ecosystem is a combination of three very basic elements: the cloud platform, the architecture, and, of course, the cloud-native application. Let's break them down one by one.

The cloud platform is what makes cloud-native applications possible. For instance, the virtually unlimited computing and storage capabilities of a cloud platform give cloud-native applications the following characteristics:

Scalability, a defining characteristic.

The pay-per-use model makes the applications cost-effective.

Managed services that make cloud-native applications not only versatile but also very developer-friendly.

There are ample reasons why the industry is choosing cloud-native architecture as the foundation for its applications. The architecture dictates how the software is engineered and with cloud-native architecture, developers have far more control. It enables developers to adopt DevOps, containers, automation, microservices, and more. Microservices, in particular, are one of the most important components of a cloud-native architecture and they are what give cloud-native applications the rest of their defining characteristics: agility and resiliency.

An application can be considered cloud-native when it can take advantage of the cloud platform, and in order to take full advantage, it usually needs to be built on a cloud-native architecture. Therefore, a cloud-native application should be the following:

Managed: Use the cloud platform as an infrastructure (be dependent on it to do all the computing).

Scalable: Quickly increase or decrease resources to match the demand.

Resilient: A single bug or crash should not take down the application.

Loosely coupled: Parts of the application should be isolated enough for them to be altered or removed without any downtime.

If the cloud-native ecosystem were a house, the cloud platform would be the underground foundation, the architecture would be the main pillars, and the cloud applications would be the rooms.

Benefits of cloud-native applications

Cloud-native applications have many benefits that make them superior to traditional applications in many ways. These benefits are why people build cloud-native applications, but not all the benefits are innate; they're not guaranteed automatically.

Simply rehosting to a cloud platform does not mean that the time to market will decrease or that the application will be more resilient. It's up to the developer to ensure that the characteristics of the cloud platform and architecture are carried over to the end user. So, before learning how to develop cloud-native applications, it's a good idea to learn what makes cloud-native applications so powerful and popular among businesses.

Increased speed of delivery

Simply building applications isn't enough – delivering the service to the market is just as important. Bringing a new service or product to the market before competitors has a huge advantage (first-mover advantage). Similarly, timely feature updates and bug fixes are incredibly important as well.

Cloud-native applications can be built in a very short time and are generally much faster at pushing updates as well. This is possible due to the way they are architected as well as because of the approach developers take. Let's take a look at some of the architectural benefits first.

Not monolithic

A decade ago, the trend was to make everything monolithic. Today, the trend is to break the monolith into microservices. This paradigm shift was driven by the need to be more agile and resilient, and monoliths were neither. The solution? Use a loosely coupled architecture that is not affected by the limitations of monolithic architecture.

Unlike a monolith, the cloud-native architecture supports an application being built in pieces and then joined together. These pieces are called microservices and they are completely isolated from each other in their own environments called containers. They communicate with each other through APIs. The popular saying breaking the monolith refers to breaking down the web of a complex and interconnected code base into neatly organized microservices that are much easier to maintain.
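This isolation can be sketched in miniature. The following toy Python example (standard library only; the service names, SKUs, and data are invented for illustration) shows one "inventory" microservice whose state is reachable only through an HTTP API, and a second service that consumes that API without ever touching the internals:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A toy "inventory" microservice: its state is private, reachable only via HTTP.
STOCK = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service (say, "orders") consumes only the API; it never reads STOCK.
def can_fulfil(sku: str) -> bool:
    with urlopen(f"http://127.0.0.1:{server.server_port}/{sku}") as resp:
        return json.loads(resp.read())["in_stock"] > 0

ok_in_stock = can_fulfil("sku-1")
ok_sold_out = can_fulfil("sku-2")
server.shutdown()
print(ok_in_stock, ok_sold_out)  # True False
```

In a real deployment each service would run in its own container, but the contract is the same: the only thing either side depends on is the API.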

A popular real-world example of breaking the monolith is Netflix. By the time it turned 1 in 2008, Netflix's monolithic architecture had already become a problematic mess that caused extremely long downtimes. So, after 3 years of refactoring, Netflix's engineers were able to break down their giant monolith into 500-700 microservices, reducing cost, time to market, and downtimes.

A microservices architecture also reduces the time to fix bugs as each microservice is monitored separately and buggy microservices can be quickly identified, replaced with an older version, or completely removed without any downtime.

Independent development of microservices

Another major advantage of microservices is that, because they are independent, developers can work on different microservices simultaneously. This gives teams the ability to build and update different parts of the application at once, without constantly worrying about app-breaking updates or having to shut down the entire server for a small bug fix. Although compatibility issues haven't been completely eliminated in cloud-native applications, they are far fewer and rarer.

Amazon's two pizza policy is a great example of the independent development of microservices. The policy states that a microservice is too big if the team working on it cannot be fed by two pizzas. Although not very scientific, it illustrates just how great microservices are for small, especially remote, teams.

Independent deployment of microservices

The loosely coupled design philosophy has given rise to a new breed of applications that are modular. As microservices are usually designed with functionality in mind, they can be thought of as modular features that can be changed, replaced, or completely taken out with minimum impact on the entire application. But they can also be introduced independently. When adding a new microservice to the main code base, no major refactoring is required, which significantly reduces the time to market.

Increased scalability

Scalability is one of the key characteristics of cloud-native applications. They are extremely scalable due to the vast (unlimited as far as most businesses are concerned) hardware capabilities of modern cloud platforms. However, cloud-native applications are not scalable in the same way as traditional applications.

Historically, businesses increased their capacity to serve concurrent users by vertically scaling or scaling up. This means that they went from 2 gigabytes of memory to 8 gigabytes and from a 1 GHz CPU to a 2.4 GHz one.

Cloud-native applications, on the other hand, scale up using a different approach: horizontal scaling or scaling out. Instead of increasing the raw computing and storage capabilities of each unit, cloud platforms increase the number of units. Instead of a single stick of 8 gigabytes, they have four sticks of 2 gigabytes.

Although vertical scaling is easier to implement, horizontal scaling opens up far more possibilities in how resources are allocated and how applications grow, providing much better scalability.

Additionally, cloud platforms provide a number of scalability benefits such as autoscaling and pay-per-use pricing schemes that make cloud-native applications much better investments.
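The arithmetic behind scaling out can be made concrete. Below is a toy Python sketch (not a real autoscaler; the request rates, per-instance capacity, and bounds are invented) of the core decision: meet demand by adding identical units rather than enlarging one:

```python
import math

def replicas_needed(load_rps, capacity_rps_per_instance,
                    min_replicas=1, max_replicas=10):
    """Replica count a horizontal autoscaler would target for a given load."""
    wanted = math.ceil(load_rps / capacity_rps_per_instance)
    # Clamp to the configured floor and ceiling, as managed autoscalers do.
    return max(min_replicas, min(max_replicas, wanted))

# 900 req/s against instances that each handle 250 req/s: add units, not a bigger box.
print(replicas_needed(900, 250))   # 4
print(replicas_needed(40, 250))    # 1  (never below the floor)
print(replicas_needed(9000, 250))  # 10 (capped at the ceiling)
```

The floor keeps the service warm during quiet periods; the ceiling bounds cost during spikes.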

Increased resiliency

Risks can never be completely eliminated, so instead of solely focusing on avoiding failures, cloud-native applications are architected to respond to failures – that is, to be resilient. The resiliency of a system refers to its ability to withstand failures and continue functioning.

Unlike monolithic architecture, where everything is interconnected and pinpointing errors takes time, a cloud-native architecture promotes isolation, which ensures that a single fault won't trigger a system-wide shutdown. Independent and fast deployments also ensure patches reach the end user in time.

The cloud platform, too, plays a role in making cloud applications more resilient than their traditional counterparts. For instance, an automated failsafe can take critical measures without human intervention. Additionally, developers can adopt various practices and mechanisms such as canary deployments, automated testing, and continuous integration and continuous delivery (CI/CD) tools to not only mitigate failures but also respond to them quickly when they do happen.
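One such mechanism fits in a few lines. Here is a minimal Python sketch of a retry with exponential backoff, a common way to absorb transient faults between services (the flaky dependency is simulated, and the delays are arbitrary):

```python
import time

def call_with_retries(operation, attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff instead of failing outright."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01 s, 0.02 s, 0.04 s, ...

# Simulate a dependency that fails twice and then recovers, as transient faults do.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky_service)
print(result, "after", calls["count"], "attempts")  # ok after 3 attempts
```

A production version would add jitter and cap the total wait, but the principle is the same: a single fault is absorbed locally instead of cascading.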

Mixed technology stack and workforce

One notable thing about the tech stack of cloud-native applications is its support for different programming languages within the same application, as well as various types of databases (such as a mix of SQL and NoSQL variants). Thanks to microservices, you do not need to write every part of the application in the same language.

The cloud platform will read and execute container images the same way, irrespective of the language, libraries, or dependencies used. This capability is often overlooked, but its practical value for a diverse workforce is enormous. Because a project is no longer limited to a single language, team members who are proficient in different languages can work on different microservices without any issues; as noted earlier, cloud development makes independent development very easy.

Continuous integration and delivery

CI/CD is a development model based on the DevOps philosophy of software engineering. Traditionally, the developers would write a piece of code, wait for it to be tested by the operations or QA team, and then use the feedback to make changes.

In hindsight, this was a counter-intuitive process that led to siloed teams and data and, consequently, slower development, increased costs, and often more bugs. Instead of keeping the development and operations teams on opposite sides, the CI/CD model, and DevOps in general, removes this "the ball's in your court" mindset and aims to make development and deployment concurrent and continuous.

The following are some of the practices that are part of the CI/CD model that you'll likely use:

Iterative development: Instead of building everything at once, cloud developers opt for an iterative process that makes testing more manageable and also reduces the number of bugs on release.

Not to mention, iterative development is faster and gives developers the flexibility to change priorities and pivot quickly (agility).

Automated testing: Cloud developers depend on automated testing for fast feedback before the code is deployed to customers. If a change in the code causes a failure, the tests also double as a debugging aid that can identify what caused it.

Most tests fall under one of five major categories: unit tests, integration tests, system tests, smoke tests, and performance tests. Each test serves a different purpose. That said, tests can be written by the developer to cover nearly all potential scenarios. Cloud platforms will also provide testing tools with existing tests and templates to make things easier and faster.

Continuous integration: With every new code change, there is a possibility that something else will fail. To prevent this, developers use continuous integration to constantly monitor and validate the main code base after each change to avoid any major failures.

There are different ways to implement CI, including setting up CI servers. These CI servers can run on the cloud platform itself or through on-premises software such as Jenkins.

Continuous deployment: CI acts as the stepping stone to the main actor in a CI/CD pipeline: continuous deployment (or delivery). Developers practice CD by automating the delivery process. After a change passes all of the tests, it is automatically deployed to the main (production) code base.

CD helps make the feedback cycles shorter, saves time, reduces the release cycles, and increases overall reliability.
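The automated tests mentioned above can be as simple as plain assertion functions. Here is a minimal, framework-free Python sketch (the discount function and its tests are invented for illustration; in practice a runner such as pytest would discover and execute the tests):

```python
# The unit under test: a small, pure function, the kind fast feedback relies on.
def apply_discount(price_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents - (price_cents * percent) // 100

# Each unit test checks one behaviour, runs in milliseconds, and needs no server.
def test_typical_discount():
    assert apply_discount(1000, 25) == 750

def test_zero_discount_is_identity():
    assert apply_discount(1000, 0) == 1000

def test_invalid_percent_is_rejected():
    try:
        apply_discount(1000, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the invalid input was rejected, as intended

for test in (test_typical_discount, test_zero_discount_is_identity,
             test_invalid_percent_is_rejected):
    test()
print("all tests passed")
```

In a CI pipeline, a failing assertion here would block the change from reaching the production code base.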

Increased automation

The cloud platform is built to promote automation and therefore a large part of the workflows and processes can be automated. Let's take a look at a few of them.

Environment creation and maintenance

To build your application, you need an infrastructure to build it on. Most cloud platforms give developers two options: they can either configure their own infrastructure and provision resources according to their exact requirements, or let the cloud do it for them. Cloud solutions that offer the second option are called managed services, and they are a big advantage.

In essence, automating environment creation and maintenance means you let the cloud do all the heavy lifting while you focus on your app. This results in benefits such as the following:

Not having to worry about overprovisioning resources and paying for more than you will use
Eliminating traditional server management and maintenance costs, which include upgrades, patching, and licensing
Getting a project up and running with a smaller team

Additionally, environment automation also gives you autoscaling. Autoscaling is an operations pattern that automatically increases or reduces resources depending on traffic. Cloud-native applications are built with autoscaling in mind, so changes in resources do not affect them. More importantly, autoscaling significantly reduces cloud costs and ensures your customers can always reach you, irrespective of traffic.

Event generation

Event-based cloud automation refers to process automation on the cloud triggered by specific events. Developers can automate a number of responses, from simple scenarios such as sending emails and doing scheduled tasks to more complex workflows including orchestration with external applications, real-time file processing, and even using machine learning for analysis.
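The routing idea behind event-based automation can be sketched generically. The dispatcher, event names, and handlers below are hypothetical illustrations; on Google Cloud, the equivalent trigger would typically be a Cloud Function or a Pub/Sub subscription, but the mapping of event types to automated responses is the same:

```python
from typing import Callable, Dict

# Registry mapping event types to their automated handlers.
HANDLERS: Dict[str, Callable[[dict], str]] = {}

def on_event(event_type: str):
    """Register a handler function for a given event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on_event("file.uploaded")
def process_file(event: dict) -> str:
    # A real handler might resize an image or parse an uploaded CSV.
    return "processing " + event["name"]

@on_event("user.signup")
def send_welcome_email(event: dict) -> str:
    # A real handler would call an email or notification service.
    return "emailing " + event["email"]

def dispatch(event_type: str, payload: dict) -> str:
    """Route an incoming event to its registered handler."""
    return HANDLERS[event_type](payload)
```

The complex workflows mentioned above (orchestration, real-time file processing, ML analysis) are just richer handler bodies behind the same dispatch pattern.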

Analytics

Cloud platforms such as Google Cloud offer fully managed data analytics solutions that can monitor hundreds of metrics and analyze them using machine learning in real time. These tools can analyze your resource usage, traffic patterns, and more to provide valuable insights into how your application is performing.

Client needs include a variety of use cases, some of which are mentioned here:

Analytics can be automated for warehouse and supply chain management, demand forecasting, and marketing analysis.
Interactions with external business intelligence tools can be automated for easier control.
Cloud platforms such as Google Cloud have decades of research and innovation in machine learning and AI that businesses can leverage for their day-to-day analytics.
Cloud platforms also provide stream analytics, a very powerful solution that automates real-time analytics and facilitates quick decision making.

To summarize, cloud-native app development and cloud computing in general have been among the biggest technological developments in software engineering in the past decade. They offer significant improvements in terms of speed, resiliency, collaboration, and scalability over their monolithic counterparts. However, there is one similarity between cloud-native and monolithic applications – the importance of implementation. In order to enjoy the benefits of cloud-native app development to the fullest, developers must leverage cloud best practices and principles. In the next section, we'll take a look at some of the core principles of cloud-native architecture that must be remembered during app development.

Principles of cloud-native architecture

Cloud-native architecture is the design or approach of building and deploying applications that exist in the cloud to take advantage of the aforementioned cloud delivery models. These models, along with cloud-native architecture, result in scalability, flexibility, and resiliency over their traditional counterparts. Traditional counterparts tend to optimize for a fixed, high-cost infrastructure that requires considerable manual effort to modify and doesn't allow immediate scaling of additional compute, storage, memory, or network resources.

Cloud-native architecture also has five principles that will help you use the cloud-native ecosystem to its fullest while helping you navigate a new development platform. Let's take a look.

Principle 1 – lightweight microservices

Cloud-native architecture is fundamentally a different approach from traditional monolithic applications. Rather than the wholesale development and deployment of applications, cloud-native-architected applications are based on self-contained and independently deployable microservices. Microservices are at the heart of a cloud-native architecture and are critical for a DevOps-focused pipeline because smaller teams are able to work on small portions of the application.

However, as microservices become more complex and larger, they lose their initial purpose of being agile and modular and become ineffective. Therefore, the first thing to remember when creating microservices is to keep them light and focused. The following are some additional factors to remember when working with microservices:

API-based communication: Microservices are completely isolated and packaged into their own portable environments called containers, but they communicate through APIs, so a cloud-native architecture relies on API-based communication.
Independent technology stack: As we mentioned earlier, microservices can be written in different languages, and since the microservices are independent of each other, this does not affect anything. So, it's a good idea to use this capability if different members of your development team are proficient in different languages, to save time and effort.
Independently deployable: Microservices do not need to be deployed all at once or one at a time. They can be deployed continuously and concurrently, which is great for mass automated testing. The ideal use of this characteristic is to set up CI/CD pipelines to automate deployment and testing.
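To make API-based communication concrete, the sketch below runs a tiny "inventory" service (the service name and data are hypothetical) on a local HTTP port and calls its JSON endpoint exactly the way another microservice would, using only the Python standard library:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-123": 7}  # stand-in for the inventory service's own datastore

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The API contract: GET /<sku> returns a JSON stock record.
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port, keeping the sketch runnable anywhere.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consuming microservice only needs the URL: it stays isolated from the
# inventory service's language, database, and internals.
url = f"http://127.0.0.1:{server.server_port}/sku-123"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())

server.shutdown()
```

The client code never touches STOCK directly; swapping the service's implementation or language would not affect it as long as the API stays the same.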

Principle 2 – leveraging automation

Cloud-native applications should be architected for automation. Both the architecture and the cloud platform (such as Google Cloud) are extremely automation-friendly, so it is very easy for developers to automate crucial but repetitive tasks involving repairing, scaling, deploying, monitoring, and so on:

Infrastructure setup and continual automation: Creating, maintaining, and updating the infrastructure can be automated with tools such as Google Cloud Deployment Manager or Terraform, so if you do not have very specific resource or configuration requirements, automation is the way to go.
Development automation: Google Cloud is full of development automation tools that boost productivity and help you focus on improving your app by taking care of more repetitive tasks. One of the most worthwhile investments on your end would be to set up a CI/CD pipeline using tools such as Google Cloud Build, Jenkins, or Spinnaker.
Monitoring and auto-healing: Monitoring app performance and health is crucial, especially in the early stages of app development, but it's not feasible to be on watch 24/7. That's why developers should integrate monitoring and logging systems into their applications right from the start. More importantly, machine learning can be used to analyze data streams in real time for faster decision making.

A cloud-native architecture is built to support automation at every step, so if a process can be automated, consider automating it.

Principle 3 – DevOps culture

The DevOps culture is a philosophy, a development method, and also a principle to abide by when working on a cloud-native project. Adopting DevOps boosts agility and your ability to work around problems, but there are also some important things to consider.

For instance, the use of small, independent teams to speed up development is all for nothing if the teams cannot work together. DevOps helps avoid this problem by reducing the friction between teams (especially between development and production teams) by introducing consistency in workflows, collaborative tools, and reducing the burden cross-functional teams traditionally put on each other.

Additionally, companies and teams that have implemented DevOps properly consistently outperform those who haven't. However, implementation isn't all about tools and platforms – it's equally about the people and the mindset. In order to promote the DevOps culture inside your team or company, you must promote innovation and the habit of refining and simplifying your cloud-native architecture.

Principle 4 – better to go managed

Managed services should almost always be chosen over manual operations. Modern managed solutions from cloud platforms are incredibly advanced and can reduce your responsibilities significantly. On top of the saved manpower and time, managed services will often result in cost savings by finding clever ways to reduce operational overhead.

Overall, when feasible, let the cloud do the heavy lifting because the benefits in cost and time savings will almost always outweigh any potential risks of letting the cloud manage things for you.

Principle 5 – innovate

Finally, it's important to always remember that cloud-native applications are very different from traditional application development in one way – they promote experimentation and innovation. First of all, cloud development won't punish developers the same way monoliths do if their experiments go wrong. There are so many protective measures in place that the chance of you damaging your code permanently is close to zero.

More importantly, though, cloud platforms give you the tools to innovate with. Integrate machine learning, conversational tech, IoT, and so much more. If you have a vision, chances are that you'll be able to make it a reality with cloud-native development.

Limitations of microservices

You might be thinking that microservices are the ultimate tool in modern software engineering, better than the monolith in every conceivable way – especially if your experience with microservices is limited or if you've recently learned about the wonders of microservices. However, you'll find that this is not the case.

Like everything else in life, microservices have their own set of limitations, which means they're not the be-all and end-all that some people might make them out to be. In fact, they won't even be the obvious choice when building a modern application; in certain cases, you still might be better off with a monolith. Furthermore, in order to make the most of microservices, you need to understand their challenges and know when additional measures need to be undertaken to make up for where they're lacking.

Management of microservices

The value of change is subjective. While most of the changes introduced by microservices are positive in that they help simplify operations for the business, for some businesses, microservices can cause new complications to arise. In essence, the very things that make microservices so useful for modern applications can also make them less functional in certain scenarios – this will also be a theme in all of the limitations that we'll discuss, the first of which is managing microservices.

One of the main objectives behind using microservices is that they add a degree of modularity, but to achieve that, we need to divide our application into lots of microservices, which, as the application grows, makes them easier to mismanage. Although there are additional tools and platforms available for easier microservices management (Google Cloud has one too), the point still stands – don't let your microservices get out of control.

Homogeneity of microservices

The mixed technological stack is a great feature of microservices, but ill-planned or irresponsible usage of this feature could mean that over time, you have microservices with multiple languages, databases, dependencies, and so on within the same project. While this may be convenient during initial application development, technologically complex and inconsistent microservices can become a major inconvenience when teams are switched or when a different developer starts working on a microservice with a language they aren't proficient in. Additionally, you may also have to use different tools to alter microservices within the same project.

Debugging and testing

The testing phase in a microservices architecture is almost always more complex than testing in a monolithic architecture, as you are testing tens or hundreds of individual components that may or may not be homogeneous in nature (meaning different technologies are used).

Furthermore, in addition to testing microservices individually (known as unit tests), developers are also required to test the entire application together (known as integration tests) while taking into consideration interdependencies and APIs. These tests can be automated to a certain degree, but the tests need to be written manually by the developer.

Microservices Death Star

Even though microservices are designed to be isolated and independent of each other, there will be a point in application development (especially in larger projects) where inter-service dependencies are introduced. In fact, this isn't rare at all and there are numerous ways in which dependencies can emerge in an application. As development continues, this can get out of hand and result in an extremely complex architecture that is very interdependent and thus prone to implosion – hence called the microservices Death Star.

However, it's not all bad. As we said, a microservices Death Star is almost always a result of poor management and planning. Similar problems occur in monolithic architectures as well, but microservices provide the benefit of visibility, meaning you can see your architecture becoming interdependent and thus can take steps to control this before it's too late.

DevOps limitations

DevOps and cloud-native applications go hand in hand due to a myriad of reasons, but when paired with microservices, a DevOps implementation can face a few challenges. For instance, microservices development thrives on smaller, independent teams (leading to faster development). However, the large number of teams can make it difficult to unify the goals of the development teams with the operations teams and keep everyone on track – which is one of the main objectives of DevOps.

Fortunately, this can be avoided by planning ahead and making use of the numerous tools at your disposal for DevOps implementation (primarily automation). Remember, at the end of the day, DevOps is here to increase developmental efficiency while reducing time to market and the microservices architecture is an effective way of achieving these goals.

It's true that the microservices architecture won't always be the answer. Despite its limitations, a traditional monolithic application still might make sense in certain cases. For instance, if your application is relatively simple with little to no scope for expansion, the added complexity of the microservices architecture might not be worth it. And overall, regardless of your project, it's important to remember the limitations of microservices to prevent vulnerabilities and administrative headaches in the long run.

Applying the 12-factor app principles on Google Cloud

The 12-factor app is a set of 12 principles, or best practices, for building software-as-a-service applications. Written in 2011, these 12 principles can be followed to minimize the time and cost of designing scalable and robust cloud-native applications.

The 12 principles can be applied to any programming language and any combination of backing services (database, queue, memory cache, and so on), and are useful on any cloud vendor platform. However, to make these principles easier to follow, and to help you apply them yourself, we'll discuss them in the context of Google Cloud and, more importantly, how you can apply the 12-factor app principles on Google Cloud.

The 12 factors are as follows.

Code base

One code base tracked in revision control, many deploys.

Tracking code in a version control system (VCS) such as Git or Mercurial has many benefits, such as the following:

Enabling different teams to work together by keeping track of all the changes to the code.
Providing developers with an intuitive way of resolving merge conflicts (and avoiding them to an extent).
Allowing developers to quickly and easily roll back the code to a previous version.
A single code base also helps simplify things when creating a CI/CD pipeline.

You can apply this principle to your process by using Google's Cloud Source Repositories, which helps you to collaborate with other members of your team as well as other developers while tracking and managing your code in a scalable, private, and feature-rich Git repository. It also integrates with other Google services, such as Cloud Build, App Engine, Cloud Logging, and more, which is quite handy.

Dependencies

Explicitly declare and isolate dependencies.

This principle translates into two best practices. First, developers should always explicitly declare dependencies in a manifest that is checked into version control. An explicit dependency declaration enables developers, especially those who are new to the project, to quickly get started without needing to set up too many things. It's also a good practice to keep track of changes made to dependencies.

The second practice suggested by this principle is to isolate an app by packaging it into a container. Containers are crucial to a microservices architecture as they are what keeps the app and its dependencies independent from the environment. As you package and isolate more and more dependencies, you can use the Container Registry tool to manage container images, perform vulnerability analysis, and grant access to users, among other things.

Config

Store config in the environment.

You might have only a handful of configurations for each environment when starting out, but as your application grows and develops, the number of configurations is going to increase significantly, which makes managing configurations for deployments a bit more complex.

To avoid this and ensure your application is architected to be as scalable as possible, you should store configuration in environment variables. Environment variables (or env vars) can be easily switched between deploys and work with any programming language and framework. If you're already using Google Kubernetes Engine to manage your microservices, you can also use ConfigMaps to attach various information, including configuration files, directly to the containers, as well as the Secret Manager service in Google Cloud to store sensitive information.
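The env-var pattern can be sketched in a few lines; the variable names and defaults below are hypothetical, but the shape is the point: every deploy-specific value comes from the environment, so the same code runs unchanged in dev, staging, and production:

```python
import os

def load_config(env=os.environ):
    """Build the app's configuration purely from environment variables."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# In a real deploy the platform sets the variables; here a plain dict
# simulates a staging environment for illustration.
staging = load_config({"DATABASE_URL": "postgres://staging-db/app",
                       "LOG_LEVEL": "DEBUG"})
dev = load_config({})  # falls back to the declared defaults
```

Switching environments means changing env vars, never the code or the container image.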

Backing services

Treat backing services as attached resources.

This principle states that developers should treat backing services (such as datastores, messaging systems, and SMTP services) as attached resources because we want these services to be loosely coupled to the deployments. This enables developers to seamlessly switch between third-party or local backing services without any changes to the code.

Build, release, run

Strictly separate build and run stages.

The software development process of creating a 12-factor app is divided into three stages: build, release, and run. Each stage produces output with a unique identifier that can be used to track the development process, with the main goal of creating an audit log.

So, at the first stage, a unique identification number is attached to the build. After that, we reach the release stage, where the build's identification number is combined with the configuration of the environment. Every release has a unique ID in chronological order, and since each change leads to a new release, these unique IDs can be used to track changes as well.
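The release bookkeeping described above can be sketched in a few lines. The ID scheme and in-memory audit log below are hypothetical illustrations, not a real deployment tool:

```python
import itertools

_release_counter = itertools.count(1)
RELEASES = []  # the audit log: every release is recorded, never mutated

def make_release(build_id: str, config: dict) -> dict:
    """Pair an immutable build artifact with a config under a unique ID."""
    release = {
        "id": f"v{next(_release_counter)}",  # unique, chronologically ordered
        "build": build_id,                   # which artifact
        "config": dict(config),              # which environment settings
    }
    RELEASES.append(release)
    return release

# The same build released twice with different config yields two releases.
r1 = make_release("build-abc123", {"DATABASE_URL": "postgres://prod-db/app"})
r2 = make_release("build-abc123", {"DATABASE_URL": "postgres://prod-db/app",
                                   "LOG_LEVEL": "WARN"})
```

Because every config change mints a new release ID, rolling back is simply re-running an earlier entry from the log.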

Processes

Execute the app as one or more stateless processes.

A 12-factor app completely avoids sticky sessions and instead uses stateless processes that can be created and destroyed without affecting the rest of the application. Developers can use backing services such as a database or Google Cloud Storage to persist any data that may need to be reused.
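A minimal sketch of the stateless pattern, with a plain dict standing in for a backing service such as a database or Cloud Storage (all names and data are hypothetical):

```python
# Stand-in for a backing service; in production this would be a
# database or object store shared by all process instances.
SESSION_STORE = {}

def handle_request(session_id: str, action: str) -> dict:
    """Serve a request without keeping any state in process memory.

    Any process instance can handle any request, because session state
    is loaded from (and saved to) the backing store every time.
    """
    session = SESSION_STORE.setdefault(session_id, {"cart": []})
    if action.startswith("add:"):
        session["cart"].append(action.split(":", 1)[1])
    return {"session": session_id, "cart": list(session["cart"])}

# Two requests for the same session succeed even if they were served by
# different process instances, because the state is externalized.
r1 = handle_request("s1", "add:book")
r2 = handle_request("s1", "add:pen")
```

Killing and restarting the process between the two calls would change nothing, which is exactly what makes processes disposable and horizontally scalable.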

Port binding

Export services via port binding.

Traditional web apps are written to run inside environments or servers such as Apache Tomcat, but since cloud-native applications are completely self-contained, they do not require such servers to listen for requests. Instead, they export HTTP as a service by binding to a port and listening on that port for requests.

When building apps on Google Cloud, it's best to provide port numbers through env vars instead of hardcoding them in your code, to keep your apps portable.
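The port-binding pattern can be sketched as follows. The PORT variable name follows common convention, and the raw socket code is a generic illustration of binding and listening, not any specific Google Cloud API:

```python
import os
import socket

def get_port(env=os.environ, default=8080):
    """Read the port to bind from the environment, with a fallback."""
    return int(env.get("PORT", default))

# Bind a plain TCP socket to the configured port. PORT=0 asks the OS
# for any free port, which keeps this sketch runnable anywhere.
port = get_port({"PORT": "0"})
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", port))
sock.listen()
bound_port = sock.getsockname()[1]  # the port the service is exported on
sock.close()
```

A real app would hand the bound socket to its HTTP library; the point is that the port is supplied by the environment, never baked into the code.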

Concurrency

Scale out via the process model.

12-factor apps are extremely scalable, and to achieve the same level of scalability, it's recommended to divide your app into different types of processes and assign these processes different types of work (background processes, web processes, worker processes, and so on).

App Engine, Compute Engine, Cloud Functions, and Kubernetes Engine all support concurrency, and thus it's highly recommended to follow this principle to make the most of your cloud-native application.

Disposability

Maximize robustness with fast startup and graceful shutdown.

A 12-factor app treats the cloud infrastructure, processes, and session data as disposable, and the application should be able to shut down and restart quickly and gracefully. This improves agility, scalability, performance, and user experience as processes can be moved between machines without any problems.

The level of disposability of your app depends on various factors, but you can do the following to make your app robust during startup and shutdown:

Use backing services as attached resources to decouple functionality.
Limit the amount of layering in your container images.
Use native features of Google Cloud to perform infrastructure tasks when possible.
Leverage SIGTERM (stop) signals to perform graceful shutdowns.
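The SIGTERM point can be sketched in plain Python: a signal handler flips a flag so a worker loop finishes its current item and exits cleanly instead of being killed mid-task. This is a generic illustration of the pattern, not production worker code:

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Flag the worker loop to stop after the current item."""
    global shutting_down
    shutting_down = True

# Register the handler so the platform's stop signal triggers the flag.
signal.signal(signal.SIGTERM, handle_sigterm)

def drain(queue):
    """Process items until the queue is empty or shutdown is requested."""
    processed = []
    while queue and not shutting_down:
        processed.append(queue.pop(0))
    return processed

done_before = drain(["a", "b"])       # normal run: both items processed
handle_sigterm(signal.SIGTERM, None)  # simulate SIGTERM being delivered
done_after = drain(["x", "y"])        # loop now exits before taking new work
```

In a real worker, the handler would also release locks and return unfinished jobs to the queue before the process exits.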

Dev/prod parity

Keep development, staging, and production as similar as possible.

With traditional applications, development and operations teams had very different environments. The same gap cannot be allowed in cloud-native applications because speed is of the essence. Everything must be fast and smooth, and no time or effort should be spent on altering apps to suit different tools in different environments.

This becomes a little easier with cloud platforms that have a large ecosystem of auxiliary services. For instance, you can use Google Cloud's services for development, testing, staging, and production to maintain consistency across environments and also to speed up collaboration between teams.

Logs

Treat logs as event streams.

Logs are a great source of information about the performance and health of your apps. During development, developers use logs as an important tool for monitoring the app's behavior. However, once your application is running on public clouds, managing log files yourself becomes impractical and gets in the way of dynamic scaling.

Therefore, it's best practice to decouple logs from the core logic and instead use other tools (such as the Cloud Logging agent) for the collection, processing, and analysis of logs.
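As a sketch of logs-as-event-streams, the function below writes one structured JSON event per line to stdout and leaves collection and routing entirely to the platform. The event and field names are hypothetical:

```python
import json
import sys
import time

def log_event(event: str, stream=sys.stdout, **fields) -> str:
    """Emit one structured log event as a single JSON line."""
    record = {"timestamp": time.time(), "event": event, **fields}
    line = json.dumps(record, sort_keys=True)
    # The app's only responsibility: write the event to the stream.
    # It never opens, rotates, or ships log files itself.
    stream.write(line + "\n")
    return line

line = log_event("order.created", order_id="ord-42", total=19.99)
```

Because the app only emits to stdout, an agent such as the Cloud Logging agent can capture, index, and route the stream without any changes to the application.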

Admin processes

Run admin/management tasks as one-off processes.

Admin processes should be decoupled from the core app to reduce maintenance and coordination. Google Cloud has many services built in to encourage this practice. For instance, you can use CronJobs in Google Kubernetes Engine to control the timing, execution, and frequency of admin processes using containers. Similarly, App Engine and Compute Engine have fully managed tools such as Cloud Tasks and Cloud Scheduler that help simplify admin processes.

The cloud-native platform (cloud vendor) and the cloud-native architecture have some very powerful benefits that developers must consider and leverage in order to utilize the full potential of cloud computing. To make this easier, developers can follow the framework of the 12-factor app until these principles and best practices become second nature.

Summary

Cloud-native app development is an extremely effective method for developing powerful applications that are based on relatively simple principles. However, despite the seemingly simple premise behind cloud-native app development, these applications, when scaled up, become increasingly complex and in order to maintain their core characteristics of resiliency, scalability, and agility, developers should follow the right principles, best practices, design patterns, and tools. The first part of this book (consisting of the first three chapters) goes through each of these factors in detail.

Now that you have a basic but strong understanding of how the cloud-native ecosystem works, the numerous benefits it offers over traditional app development, as well as its underlying principles, we can begin learning about the actual tools developers use to build cloud-native applications in the next chapter.

Chapter 2: End-to-End Extensible Tooling for Cloud-Native Application Development

One of the best things about using a cloud platform such as Google Cloud is having access to hundreds of services that make software engineering significantly faster and easier. Google Cloud provides end-to-end tooling for cloud-native application development, which starts with cloud-native integrated development environment (IDE) tools that aid in maximizing development productivity and extends all the way to setting up monitoring and logging for your application. For cloud-native application development, Google Cloud offers a wide range of extensible services that allow developers to simplify their workflows.

In this chapter, we will understand what these services are and the benefits and roles of each service. We'll also explore how these services interconnect with the rest of the Google Cloud services used in the pipeline. Finally, we'll look at a sample cloud-native architecture pipeline to better understand how these services fit into the day-to-day workflows of a cloud-native developer.

In this chapter, we will cover the following topics:

Moving past third-party services – the beauty of end-to-end tooling
Google Cloud Code
Google Cloud Build
Google Container Registry
Google Cloud Run
Google Kubernetes Engine
Operations suite

Moving past third-party services – the beauty of end-to-end tooling

Developers building applications on traditional architectures and on-premises infrastructure