Cloud Native Applications with Ballerina

Dhanushka Madushan
Description

The Ballerina programming language was created by WSO2 for the modern needs of developers where cloud native development techniques have become ubiquitous. Ballerina simplifies how programmers develop and deploy cloud native distributed apps and microservices.
Cloud Native Applications with Ballerina will guide you through Ballerina essentials, including variables, types, functions, flow control, security, and more. You'll explore networking as an in-built feature in Ballerina, which makes it a first-class language for distributed computing. With this app development book, you'll learn about different networking protocols as well as different architectural patterns that you can use to implement services on the cloud. As you advance, you'll explore multiple design patterns used in microservice architecture and use serverless in Amazon Web Services (AWS) and Microsoft Azure platforms. You will also get to grips with Docker, Kubernetes, and serverless platforms to simplify maintenance and the deployment process. Later, you'll focus on the Ballerina testing framework along with deployment tools and monitoring tools to build fully automated observable cloud applications.
By the end of this book, you will have learned how to apply the Ballerina language for building scalable, resilient, secured, and easy-to-maintain cloud native Ballerina projects and applications.

Cloud Native Applications with Ballerina

A guide for programmers interested in developing cloud native applications using Ballerina Swan Lake

Dhanushka Madushan

BIRMINGHAM—MUMBAI

Cloud Native Applications with Ballerina

Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Richa Tripathi

Publishing Product Manager: Sathyanarayanan Ellapulli

Senior Editor: Rohit Singh

Content Development Editor: Vaishali Ramkumar

Technical Editor: Karan Solanki

Copy Editor: Safis Editing

Project Coordinator: Deeksha Thakkar

Proofreader: Safis Editing

Indexer: Pratik Shirodkar

Production Designer: Shyam Sundar Korumilli

First published: September 2021

Production reference: 1210921

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80020-063-0

www.packt.com

To my mother and father

– Dhanushka Madushan

Contributors

About the author

Dhanushka Madushan is a senior software engineer at WSO2 and has a bachelor of engineering qualification from the Department of Computer Science and Engineering, University of Moratuwa. He has 5+ years' experience in developing software solutions for cloud-based platforms in different business domains. He has worked on the WSO2 integration platform for 3+ years and is responsible for building and maintaining integration products. He often writes blogs about the latest cutting-edge, cloud native-related technologies using his experience of working on many open source projects, including Micro-integrator, WSO2 ESB, and Apache Synapse, as well as development of the Choreo iPaaS platform and the Zeptolytic SaaS data analytics platform. Dhanushka's extensive exposure to cloud native-related technologies, including Docker, Kubernetes, Jenkins, AWS, and multiple observability tools, is a key area of expertise.

First and foremost, my thanks go to my loving parents who have supported me throughout the long journey of writing this book. Next, my thanks extend to Sameera Jayasoma and all Ballerina team members for supporting me whenever I had questions. I especially need to thank Lakmal Warusawithana and Anjana Fernando for the support given in the initial drafting and code sample creation process. Also, I would like to thank the technical reviewers, Nadeeshaan Gunasinghe, Joy Rathnayake, Shiroshica Kulatilake, and Shenavi de Mel, for the amazing work they have done. I would also like to extend my gratitude to the Packt team, who supported and guided me throughout the process of publishing this book. Finally, I would like to thank Dr. Sanjeewa Weerawarna for creating this awesome programming language.

About the reviewers

Nadeeshaan Gunasinghe is a technical lead at WSO2 with over 6 years' experience in enterprise integration, programming languages, and developer tooling. Nadeeshaan leads the Ballerina Language Server team and is also a key contributor to Ballerina, which is an open source programming language and platform for the cloud era, as well as being an active contributor to the WSO2 Enterprise Service Bus. He is also passionate about sports, football and cricket in particular. 

Joy Rathnayake is a solutions architect with over 16 years' industry experience and is part of the solution architecture team at WSO2, based in Colombo, Sri Lanka. He is primarily responsible for understanding customer requirements, identifying the products/technologies required, and defining the overall solution design/architecture. 

Joy has been recognized as both a Microsoft Most Valuable Professional (MVP) and Microsoft Certified Trainer (MCT). He was the first to hold both MVP and MCT recognitions in Sri Lanka. He has contributed to developing content for Microsoft Certifications and has worked as a Subject Matter Expert (SME) for many Microsoft exam development projects. He has contributed a lot to the community by presenting at various events, including Tech-Ed Europe, Tech-Ed Southeast Asia, Tech-Ed Sri Lanka, Tech-Ed India, Tech-Ed Malaysia, Southeast Asia SharePoint Conference, and SharePoint Saturday. He enjoys traveling, speaking at public events/conferences, and reading. 

Connect with him on LinkedIn at https://www.linkedin.com/in/joyrathnayake/.

Shiroshica Kulatilake is a solutions architect at WSO2 where she works with customers around the world to provide middleware solutions on the WSO2 stack for digital transformation projects. In her work, she is involved with real-world problems that organizations face in a rapidly changing digital world and gets the opportunity to help these organizations achieve what they require in their business from a technological standpoint. Her expertise lies in API Management, API Security, Integration, and EIPaaS. She started her career as a software engineer and is still passionate about the nitty-gritty aspects of building things, with her current focus being microservice architectures.

Shenavi de Mel is an experienced software solutions engineer with 7+ years' experience working in the computer software industry. She is passionate about building great customer relationships and enhancing her knowledge of the field. She has extensive hands-on experience in many development languages and technologies, including Java, Jaggery, Ballerina, JavaScript/jQuery, SQL, PHP, HTML, Docker, and Kubernetes. Currently, she is working as a lead solutions engineer as part of the solutions engineering team, assisting customers in implementing their solutions using the WSO2 platform. She is also very familiar with API management, integration, and identity protocols, having spent the majority of her career working at WSO2, one of the leading companies as regards middleware and open source technology.

Table of Contents

Preface

Section 1: The Basics

Chapter 1: Introduction to Cloud Native

Evolution from the monolithic to the microservice architecture

The N-tier architecture in monolithic applications

Monolithic application architecture

The ESB simplifies SOA

The emergence of microservices

Understanding what cloud native architecture is

Cloud computing

Serverless architecture

Definition of cloud native

Why should you select a cloud native architecture?

Challenges of cloud native architecture

Security and privacy

The complexity of the system

Cloud lock-in

Deploying cloud native applications

Design is complex and hard to debug

Testing cloud native applications

Placing Ballerina on cloud native architecture

Building cloud native applications

The twelve-factor app

Code base

Dependencies

Config

Backing services

Dev/prod parity

Admin processes

API-first design

The 4+1 view model

Building an order management system

Breaking down services

Impact on organizations when moving to cloud native

Challenges of moving to a cloud native architecture

Outdated technologies

Building cloud native delivery pipelines

Interservice communication and data persistence

Conway's law

Netflix's story of moving to cloud native architecture

Summary

Questions

Further reading

Answers

Chapter 2: Getting Started with Ballerina

Technical requirements

Introduction to the Ballerina language

A glimpse of Ballerina

The Ballerina compiler

The Ballerina threading model

Setting up a Ballerina environment

Downloading and installing Ballerina

Setting up VS Code

Using VS Code to develop a Ballerina application

Writing a simple "hello world" program

Building a Ballerina project

Understanding the Ballerina type system

Simple types

Structured types

Sequence types

Behavioral types

Other types

Working with types

Using types to build applications with Ballerina

Controlling program flow

Ballerina functions

Treating functions as variables

The Ballerina main function

Working with Ballerina classes

Ballerina objects

Error handling

Using custom errors

Summary

Questions

Answers

Section 2: Building Microservices with Ballerina

Chapter 3: Building Cloud Native Applications with Ballerina

Technical requirements

Ballerina cloud native syntaxes and features

The role of Ballerina in cloud native architecture

Building a Ballerina HTTP service

Passing data to HTTP resources

Using query parameters on an HTTP server

Passing structured data to HTTP services

Invoking remote HTTP services

Ballerina JSON support

Working with XML format in the Ballerina language

Ballerina remote methods

Containerizing applications with Ballerina

Introduction to containers

Containers versus VMs

Containerizing applications with Docker

Building a Docker image with Ballerina

Using Ballerina's development Docker image

Container orchestration with Kubernetes

Introduction to Kubernetes

Generating Kubernetes artifacts with Ballerina

Utilizing Kubernetes resources

Using config maps in the Kubernetes cluster

Configuring a Kubernetes health check for Ballerina services

Using Kustomize to modify Ballerina Kubernetes configurations

Summary

Questions

Further reading

Answers

Chapter 4: Inter-Process Communication and Messaging

Technical requirements

Communication between services in a microservice architecture

Communication over services in the Kubernetes cluster

Using environment variables for service discovery

Using the Kubernetes DNS resolver for service discovery

Using a service mesh to simplify inter-service communication

Service discovery in a service mesh by using Consul

Using resiliency patterns in Ballerina

Ballerina client-side load balancing

Synchronous communication

Handling HTML form data

Building a Ballerina backend with GraphQL

Using the OpenAPI Specification with Ballerina

Building a chat application with WebSocket

Building Ballerina services with the gRPC protocol

Asynchronous communication

Asynchronous communication over microservices

Building the publisher and subscriber pattern with Apache Kafka

Connecting Ballerina with Kafka

Connecting Ballerina with RabbitMQ

Summary

Questions

Further reading

Answers

Chapter 5: Accessing Data in Microservice Architecture

Technical requirements

Accessing data with Ballerina

Connecting the Ballerina application with MySQL

Querying from MySQL

Using parameterized queries

Using batch execution

MySQL data types and Ballerina data types

Connecting databases with JDBC drivers

Connecting Ballerina application with DaaS platforms

Managing transactions in Ballerina

Building an order management system

Initializing the MySQL database for the order management system

Building an order management system with Ballerina

Understanding ACID properties

Ballerina transaction management

The database-per-service design pattern

The role of Domain-Driven Design in cloud native architecture

Creating aggregates with Ballerina

Distributed transactions with saga

Building a saga orchestrator with Ballerina

Using event sourcing and CQRS in a distributed system

Using events to communicate among services

Developing with event sourcing

Creating aggregates with event sourcing

Creating snapshots

Querying in microservice architecture

Summary

Questions

Further reading

Answers

Section 3: Moving on with Cloud Native

Chapter 6: Moving on to Serverless Architecture

Technical requirements

Introducing serverless architecture

Developing Ballerina applications with Azure Functions

Introducing Azure cloud features

Building serverless applications with the Azure cloud

Building Ballerina applications with Azure Queue

Developing Ballerina applications with AWS Lambda functions

Understanding AWS Cloud services

Configuring the AWS CLI and AWS Console

Creating your first Ballerina AWS Lambda function

Adding triggers to invoke a Lambda function

Using AWS Step Functions with an AWS Lambda function

Building the Lambda function with Ballerina

Creating Step functions in the AWS Console

Creating an API gateway to invoke Step functions

Summary

Questions

Further reading

Answers

Chapter 7: Securing the Ballerina Cloud Platform

Technical requirements

Managing certificates in Ballerina applications

Securing the Ballerina service with HTTPS

Calling an HTTPS endpoint with Ballerina

Securing Ballerina interservice communication with mutual SSL

Authenticating and authorizing with LDAP user stores

Authenticating and authorizing users with the LDAP server

Setting up Apache Directory Studio

Authenticating and authorizing Ballerina services with LDAP

Token-based authorization with Ballerina

Securing Ballerina services with a JSON Web Token

Generating and validating a JWT with Ballerina

Generating JWT with WSO2 Identity Server

Authorizing Ballerina services with JWT

Authorizing Ballerina services with WSO2 IS-generated JWT claims

Building a Ballerina service with JWT authorization

OAuth2 authentication and authorization with WSO2 IS

Summary

Questions

Further reading

Answers

Chapter 8: Monitoring Cloud Native Applications

Technical requirements

Introduction to observability and monitoring

Observability versus monitoring

Ballerina logging

Printing logs with Ballerina

Using Logstash to collect logs

Using Logstash with Filebeat in container environments

Using Elasticsearch to collect logs

Using Kibana to analyze Ballerina logs

Tracing with Ballerina

Understanding the OpenTelemetry standard

Using Jaeger as a tracing platform

Monitoring the Ballerina application with Jaeger

Creating custom spans with the Ballerina language

Collecting and visualizing metrics with Ballerina

Exposing metrics from Ballerina

Collecting metrics with Prometheus

Visualizing metrics with Grafana

Creating custom metrics with Ballerina

Summary

Questions

Answers

Further reading

Chapter 9: Integrating Ballerina Cloud Native Applications

Technical requirements

Fronting Ballerina services with an API gateway

Building an API gateway with Ballerina

Setting up and using WSO2 API Microgateway

Using interceptors in Microgateway with Ballerina

Building Ballerina integration flows with Choreo

Introduction to Choreo low-code development

Building HTTP services with Choreo

Integrating services with Choreo

Summary

Questions

Answers

Chapter 10: Building a CI/CD Pipeline for Ballerina Applications

Technical requirements

Testing Ballerina applications

Testing cloud native applications

Writing a simple test in the Ballerina language

Writing test functions with the Ballerina test framework

Ballerina test reports

Understanding Ballerina's testing life cycle

Grouping Ballerina tests

Mocking functions with Ballerina

Automating a cloud native application's delivery process

CI/CD pipeline in a cloud native application

Using GitHub Actions with Ballerina

Setting up GitHub Actions

Building and deploying applications with Ballerina Central

Introduction to Ballerina Central

Building and publishing packages in Ballerina Central

Publishing packages to Ballerina Central with GitHub Actions

Using Ballerina Central packages

Summary

Questions

Further reading

Answers

Why subscribe?

Other Books You May Enjoy

Section 1: The Basics

This first section focuses on the basics of cloud native technology concepts and the basic building blocks of the Ballerina language. This section is necessary to understand the more advanced concepts that we are going to discuss in later sections.

First, we will discuss what cloud native is, the history of cloud-based software architecture, the definition of cloud native, and transforming an organization to using cloud native technologies. Here, we will focus on the theoretical aspects of building a cloud native system.

Next, we will discuss the architecture of the Ballerina language, setting up a development environment, fundamental Ballerina syntaxes, the Ballerina type system, error handling, and controlling the program flow. Here, we will learn about the practical aspects of using the Ballerina language and fundamental concepts that are needed to build complex cloud native applications.

This section comprises the following chapters:

Chapter 1, Introduction to Cloud Native
Chapter 2, Getting Started with Ballerina

Chapter 1: Introduction to Cloud Native

In this chapter, we will go through how developers arrived at cloud native architecture in response to the problems attached to the monolithic architecture. We will discuss older programming paradigms, such as the three-tier architecture, and their weaknesses. You will learn about the journey of shifting from an on-premises computing infrastructure model to a cloud-based computing architecture. Then we will discuss the microservice architecture and serverless architecture as cloud-based solutions.

Different organizations have different definitions of cloud native architecture. It is difficult to give a cloud native application a clear definition, but we will discuss the properties that cloud native applications should have in this chapter. You will see how the twelve-factor app plays a key role in building cloud native applications. When you are building a cloud native application, keep those twelve factors in mind.

Organizations such as Netflix and Uber are transforming the way applications are designed by replacing monolithic architecture with the microservice architecture. Later in this chapter, we will see how organizations are successful in their business by introducing cloud native concepts. It is not a simple task to switch to a cloud native architecture. We will address moving from a monolithic architecture to a cloud native architecture later in the chapter.

We will cover the following topics in this chapter:

Evolution from the monolithic to the microservice architecture
Understanding what cloud native architecture is
Building cloud native applications
The impact on organizations when moving to cloud native

By the end of this chapter, you will have learned about the evolution of cloud native applications, what cloud native applications are, and the properties that cloud native applications should have.

Evolution from the monolithic to the microservice architecture

The monolithic architecture dictated software development methodologies until cloud native conquered the realm of developers as a much more scalable design pattern. Monolithic applications are designed to be developed as a single unit. The construction of monolithic applications is simple and straightforward. There are problems related to monolithic applications though, such as scalability, availability, and maintenance.

To address these problems, engineers came up with the microservice architecture, which can be scalable, resilient, and maintainable. The microservice architecture allows organizations to develop increasingly flexible and scalable applications. The microservice architecture is the next step up from Service-Oriented Architecture (SOA). Both of these architectures use services for business use cases. In the next sections, we will follow the journey from the monolithic architecture to SOA and on to the microservice architecture. To start this journey, we will begin with the simplest form of software architecture, which is the N-tier architecture. In the next section, we will discuss what the N-tier architecture is and the different levels of the N-tier architecture.

The N-tier architecture in monolithic applications

The N-tier architecture allows developers to build applications on several levels. The simplest type of N-tier architecture is the one-tier architecture, in which all programming logic, interfaces, and databases reside on a single computer. As soon as developers understood the value of decoupling databases from an application, they invented the two-tier architecture, where databases are stored on a separate server. This lets multiple clients use a single database and allows applications to provide distributed services over a network.

Developers introduced the application layer to the two-tier architecture and formed the three-tier architecture. The three-tier architecture includes three layers, known as data, application, and presentation, as shown in the following diagram:

Figure 1.1 – Three-tier architecture

The topmost layer of the three-tier architecture is known as the presentation layer, which users directly interact with. This can be designed as a desktop application, a mobile application, or a web application. With the recent advancement of technology, desktop applications have been replaced with cloud applications. Computational power has moved from consumer devices onto the cloud platform with the recent growth of mobile and web technologies.

The three-tier architecture's middle layer is known as the application layer, in which all business logic falls. To implement this layer, general-purpose programming languages, along with supporting tools, are used. There are several languages and runtimes with which you can implement business logic, such as Java, Python, and Node.js, along with many different libraries. Developers also use domain-specific and configuration languages, such as the Apache Synapse configuration language, alongside general-purpose languages such as Java and Apache Groovy, while markup languages such as HTML belong to the presentation layer. In addition to these languages, developers can use ready-made components such as API gateways, load balancers, and message brokers to develop an application.

The bottom layer is the data layer, which stores data that needs to be accessed by the application layer. This layer consists of databases, files, and third-party data storage services used to read and write data. The databases are usually relational databases, which store the different entities in an application; MySQL, Oracle, MSSQL, and many more are used to build different applications. Other than these databases, developers can also choose file-based storage and third-party storage services.

Developers need to be concerned about security, observability, delivery processes, deployability, and maintainability across all these layers over the entire life cycle of application development. With the three-tier architecture, it is easy and efficient to construct simple applications. Separating the application layer allows the three-tier architecture to be language-independent and scalable. Developers can distribute traffic between multiple application layer instances to allow horizontal scaling of the application. A load balancer sitting in front of the application layer spreads the load between the application instance replicas. Let's discuss monolithic application architecture in more detail and see how we can improve it in the next section.

Monolithic application architecture

The term "monolithic" comes from the Greek words monos and lithos, together meaning a large stone block. In the context of IT, a monolithic software architecture is characterized by its uniformity, rigidity, and massiveness.

A monolithic code base is often written using a single programming language and framework, and all business logic is contained in a single repository.

Typical monolithic applications consist of a single shared database, which can be accessed by different components. Various modules are used to implement each piece of business logic, but all business logic is wrapped up in a single API that is exposed to the frontend. The user interface (UI) of the application is used to access backend data and present it to the user. Here's a visual representation of the flow:

Figure 1.2 – A monolithic application

Vertical scaling of monolithic applications is easy, as the developer can simply increase the processing and storage capacity of the host machine. Horizontal scalability can be accomplished by replicating application instances and spreading the client load across each of these instances.

Since the monolithic architecture is simple and straightforward, an application with this paradigm can be easily implemented. Developers can start by designing the application's entity model, which can be easily mapped to the database design and applied to the application. It's also easy for developers to track, log, and monitor applications. Unlike the microservice architecture, which we will discuss later in this chapter, testing a monolithic application is also simple.

Even though it is simple to implement a monolithic application, there are lots of problems associated with maintaining it when it comes to building large, scalable systems:

Monolithic applications are designed, built, and implemented in a single unit. Therefore, all the components of the architecture of the system should be closely connected. In most cases, point-to-point communication makes it more difficult to introduce a new feature component to the application later.
As a monolithic application grows, it takes more time to start the entire application. Changing existing components or adding new features to the system may require stopping the whole system. This makes the deployment process slower and much more unstable.
On the other hand, having an unstable service in the system means the entire system is fragile. Since the components of a monolithic application are tightly coupled with each other, all components should function as planned. Having a problem in one subsystem causes all the dependent components to fail.
It is difficult to adopt modern technology because a monolithic application is built on homogeneous languages and frameworks. This makes it difficult to move forward with new technologies, and inevitably the whole system will become a legacy system.

Legacy systems have a business impact due to problems with the management of the system:

Maintaining legacy systems is costly and ineffective over time. Even though developers continue to add features, the complexity of the system increases exponentially. This means that organizations spend money on increasing the complexity of the application rather than refactoring and updating the system.
Security gets weaker over time as the dependent library components are not upgraded and become vulnerable to security threats. Migrating to new versions of libraries is difficult due to the tight coupling of components. The security features of these libraries are not up to date, making the system vulnerable to attacks.
When regulations become stricter, it gets difficult to adjust the system according to the new criteria. With the General Data Protection Regulation (GDPR) in force, the system should be able to store information in a well-regulated way.
As technology evolves over time, it is often difficult to integrate old systems with new systems due to compatibility problems. Developers need to add more adapters to the system to make it compliant with new systems. This makes the system a lot more complicated and bulkier.

Due to these obstacles, developers came up with a more modular design pattern, SOA, which lets developers build a system as a collection of different services. When it comes to SOA, a service is the smallest deployment unit used to implement application components. Each service is designed to solve problems in a specific business domain.

Services can run on multiple servers and be connected over a network. Service interfaces are loosely coupled in that another service or client can access a service's features without understanding its internal architecture.

The core idea of SOA is to build loosely coupled, scalable, and reusable systems, where services work together to execute business logic. Services are the building blocks of SOA and are interconnected by protocols such as HTTP, JMS, TCP, and FTP. SOA commonly uses the XML format for communication between services. But interconnecting hundreds of services is a challenge if each service uses point-to-point communication: maintaining a system with thousands of interservice links becomes very difficult.

On the other hand, the system should be able to handle multiple different data formats, such as JSON, XML, Avro, and Thrift, which makes integration much more complicated. For engineers, observing, testing, migrating, and maintaining such a system is a nightmare. The Enterprise Service Bus (ESB) was introduced to SOA to simplify these complex messaging processes. In the next section, we will discuss what an ESB is and how it solves problems in SOA.

The ESB simplifies SOA

The ESB sits in the middle of the services, connecting all of them and offering various transport protocols for communication. This addresses the point-to-point connectivity issue that arises when many services need to communicate with each other. If all of these services are directly linked to each other, communication becomes a mess and is difficult to manage. The ESB decouples the service dependencies and offers a single bus to which all of the services can be connected. Let's have a look at an SOA with point-to-point communication versus one that uses an ESB for services to communicate:

Figure 1.3 – ESB architecture

On incoming service requests, the ESB may perform simple operations and forward them to another service. This makes it easier for developers to migrate to SOA and expose existing legacy systems as services so that they can be easily handled instead of creating everything from scratch.

An ESB is capable of managing multiple endpoints with multiple protocols. For example, an ESB can use the following features to facilitate the integration of services in SOA:

Security: An ESB handles security when connecting various services together. The ESB offers security features such as authentication, authorization, certificate management, and encryption to secure connected services.
Message routing: Instead of services calling each other directly, an ESB provides the modularity of SOA by handling routing on the ESB itself. Since all services call the ESB to reach other services, developers can substitute service components without modifying the callers.
Central communication platform: This avoids the point-to-point communication problem, since each service does not need to know the address of the other endpoints. The services blindly send requests to the ESB, and the ESB routes requests as specified in its routing logic. The ESB routes traffic between services and acts as smart pipes, which makes the service endpoints dumb.
Monitoring the whole message flow: Because the ESB is located in the center of the services, it is the perfect location to track the entire application. Logging, tracing, and metrics collection can be placed in the ESB to collect statistics on the overall system. This data can be used along with an analytical tool to analyze bugs, performance bottlenecks, and failures.
Integration over different protocols: The ESB ensures that services can be connected via different communication protocols, such as HTTP, TCP, FTP, JMS, and SMTP. Various data interchange formats, such as JSON and XML, are also supported.
Message conversion: If a service or client application needs to access another service, the message format may have to be converted from one format to another. The ESB offers support for the conversion of messages across various formats, such as XML and JSON. It also supports XSLT transformations and the modification of the message structure.
Enterprise Integration Patterns (EIP): These are used as building blocks for the SOA messaging system. These patterns cover messaging channels, message routing, message transformation, messaging endpoints, and system management. EIP helps developers build scalable and reliable SOA platforms.

SOA was used as mainstream cloud architecture for a long time until the microservice architecture came along as a new paradigm for building cloud applications. Let's discuss the emergence of the microservice architecture and how it solves problems with SOA in the next section.

The emergence of microservices

SOA provides solutions to most of the issues that monolithic applications face. But developers still have concerns about creating a much more scalable and flexible system. It's easy to construct a monolithic structure. But as it expands over time, managing a large system becomes more and more difficult.

With the emergence of container technology, developers have been able to provide a simple way to build and manage large-scale software applications. Instead of building single indivisible units, the design of microservices focuses on building components separately and integrating them with language-independent APIs. Containers provide an infrastructure for applications to run independently. All the necessary dependencies are available within a container. This solves a lot of dependency problems that can impact the production environment. Unlike virtual machines (VMs), containers are lightweight and easy to start and stop.

Each microservice in the system is designed to solve a particular business problem. Unlike monolithic architectures, microservices focus more on business logic than on technology-related layers and database designs. Even though microservices are small, determining how small they should be is a decision that should be taken in the design stage. The smaller the microservices, the higher the network communication overhead associated with them. Therefore, when developing microservices, choose the appropriate scope for each service based on the overall system design.

The architecture of microservices eliminates the concept of the ESB being the central component of SOA. Microservices prefer smart endpoints and dumb pipes, where the messaging protocol does not interfere with business logic. Messaging should be primitive in such a way that it only transports messages to the desired location. While the ESB was removed from the microservice architecture, the integration of services is still a requirement that should be addressed.

The following diagram shows a typical microservice architecture:

Figure 1.4 – Example microservice architecture

The general practice of the design of microservices is to provide a database for each service. The distributed system should be designed in such a way that disk space or memory is not shared between services. This is also known as shared-nothing architecture in distributed computing. Sharing resources creates a bottleneck for the system to scale up. Even if the number of processing instances increases, the overall system performance does not increase since accessing resources might become a bottleneck that slows down the whole process. As databases are shared resources for services, the microservice architecture forces the developer to provide separate databases for each service.

However, in traditional business applications, it is not feasible to split the schema into several mutually exclusive databases since different resources need to be shared by the same entities. This makes it important that services communicate with each other. Multiple protocols are available for interservice communication, which will be addressed in Chapter 4, Inter-Process Communication and Messaging. However, communication between services should also be treated with care, as consistency becomes another headache for the developer to solve.

There are multiple benefits of using microservices rather than monolithic architectures. One such advantage is the scalability of the system. When it comes to monolithic applications, the main way to scale is vertical scaling, where more resources are allocated to the host computer. In contrast with monolithic applications, microservices can be scaled not only vertically but also horizontally. The stateless nature of microservice applications makes microservices more independent. These independent stateless services can be replicated, and traffic can be distributed over them.

Microservices enable developers to use multiple programming languages to implement services. In short, we refer to these as being polyglot, where each service is designed in a different language to increase the agility of the system. This provides freedom to choose the technology that is best suited to solve the problem.

As well as having advantages, the microservice architecture also has disadvantages.

The biggest disadvantage of the microservice architecture is that it increases complexity. Microservice developers need to plan carefully and have strong domain knowledge of the design of microservices. The following problems are also associated with the microservice architecture:

Handling consistency: Because each service has its own database, sharing entities with other services becomes an overhead for the system compared to monolithic applications. Multiple design principles help to resolve this problem, which we will describe in Chapter 5, Accessing Data in Microservice Architecture.
Security: Unlike with monolithic applications, developers need new techniques to secure distributed applications. Modern authentication approaches, such as JWT and the OAuth protocols, help overcome these security issues. These methods will be explored in Chapter 7, Securing the Ballerina Cloud Platform.
Automated deployment and testing: Deploying a distributed system involves much more that needs to be grasped clearly. It is also not very straightforward to write test cases for a distributed system due to consistency and availability concerns. There are many techniques for testing a distributed system. These will be addressed in Chapter 10, Building a CI/CD Pipeline for Ballerina Applications.

Compared to an SOA, a microservice architecture offers system scalability, agility, and easy maintenance. But the complexity of building a microservice architecture is high due to its distributed nature. Programming paradigms also significantly change when moving from SOA to a microservice architecture. Here's a comparison between SOA and microservice architectures:

Table 1.1 – SOA versus microservices

Having understood the difference between monolithic and microservice architectures, next, we will dive into the cloud native architecture.

Understanding what cloud native architecture is

Different organizations have different definitions of cloud native architecture. Almost all definitions emphasize creating scalable, resilient, and maintainable systems. Before we proceed to a cloud native architecture definition, we need to understand what cloud computing is about.

Cloud computing

The simplest introduction to the cloud is the on-demand delivery of infrastructure, storage, databases, and all kinds of applications through a network. Simply put, the client outsources computation to a remote machine instead of doing it on a personal computer. For example, you can use Google Drive to store your files, Twitter to share images, or Firebase to manage mobile application data. Different vendors offer services at different levels of abstraction. The companies providing these services are considered to be cloud providers. Cloud computing pricing depends mainly on the utilization of resources.

The cloud can be divided into three groups, depending on the deployment architecture:

Public cloud: In the public cloud, the whole computing system is kept on the cloud provider's premises and is accessible via the internet to many organizations. Small organizations that need to save money on maintenance expenses can use this type of cloud service. The key issue with this type of cloud service is security.
Private cloud: Compared to the public cloud, the private cloud commits private resources to a single enterprise. It offers a more controlled environment with better security features. Usually, this type of cloud is costly and difficult to manage.
Hybrid cloud: Hybrid clouds combine both types of cloud implementation to offer a far more cost-effective and stable cloud platform. For example, public clouds may be used to communicate with customers, while customer data is secured on a private network. This type of cloud platform provides considerable security, as well as being cheaper than a private cloud.

Cloud providers offer services to the end user in several ways. These services can rely on various levels of abstraction. This can be thought of as a pyramid, where the top layers have more specialized services than the ones below. The topmost services are more business-oriented, while the bottom services contain programming logic rather than business logic. Selecting the most suitable service architecture is a trade-off between the cost of developing and implementing the system versus the efficiency and capabilities of the system. The types of cloud architecture are as follows:

Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)

We can generalize this to X as a service, where X is the abstraction level. Today, various companies have come up with a range of services that provide services over the internet. These services include Function as a Service (FaaS), Integration Platform as a Service (iPaaS), and Database as a Service (DBaaS). See the following diagram, which visualizes the different types of cloud architecture in a pyramidal layered architecture:

Figure 1.5 – Types of cloud platforms and popular vendors

Each layer in the cloud platform provides a different level of abstraction. IaaS platforms provide OS-level abstraction that allows developers to create virtual machines and host a program. On PaaS platforms, developers are only concerned with building applications rather than infrastructure implementations. SaaS is a final product that the end user can directly work with. Check the following diagram, which shows the different abstraction levels of cloud platforms:

Figure 1.6 – Levels of abstraction provided by different types of platform

SaaS offers ready-to-use platforms where developers just need to take care of business logic. The majority of SaaS platforms are designed to run in a web browser, along with a UI to work with. SaaS systems manage all programming logic and infrastructure-related maintenance, while end users just need to concentrate on the logic that needs to be applied.

In SaaS applications, users do not need to worry about installing, managing, or upgrading applications. Instead, they may use existing cloud resources to implement the business logic on the cloud. This also reduces expenses, as fewer individuals are needed to operate the system. The properties of SaaS applications are better suited to an enterprise that is just starting out, where the system is small and manageable. Once it scales up, the organization needs to decide whether to keep it on SaaS or switch to another cloud architecture.

On a PaaS cloud platform, vendors provide an abstracted environment where developers can run programs without worrying about the underlying infrastructure. The allocation of resources may be fixed or on demand. In this type of system, developers concentrate on programming the business logic rather than on the OS, software upgrades, infrastructure, storage, and so on.

Here is a list of the advantages of building cloud applications using PaaS:

Cost-effective development for organizations since they only need to focus on business logic.
Reduces the number of lines of code that are additionally required to configure underlying infrastructure rather than business use cases.
Maintenance is easy due to third-party support.
Deployment is easy since the infrastructure is already managed by the platform.

IaaS is a type of cloud platform that provides the underlying infrastructure, such as VMs, storage space, and networking services, that is required to deploy applications. Users can pick the resource according to the application requirements and deploy the system on it. This is helpful as developers can determine what resources the application can have and allocate more resources dynamically as needed.

The cost of the deployment is primarily dependent on the number of resources allocated. The benefit of IaaS is that developers have complete control of the system, as the infrastructure itself is under their control.

A list of the advantages of using IaaS is given here:

It provides flexibility in selecting the most suitable infrastructure that the system supports.
It provides the ability to automate the deployment of storage, networking, and processing power.
It can be scaled up by adding more resources to the system.
The developer has the authority to control the system at the infrastructure level.

Beyond these fundamental cloud architectures, there are additional architectures that address some of their challenges. FaaS is one such architecture. FaaS operates pretty much the same as a PaaS platform. On a FaaS platform, the programmer provides a function that needs to be evaluated and returns the result to the client. Developers do not need to worry about the underlying infrastructure or the OS.

Serverless architecture

There are a lot of components that developers need to handle in the design of microservices. The developer needs to create installation scripts to containerize and deploy applications. Engineering a microservice architecture carries these additional costs of managing the infrastructure layer. The serverless architecture offloads server management to the cloud provider, so that only business logic programming is of concern to developers.

FaaS is a serverless architecture implementation. Common FaaS platform providers include AWS Lambda, Azure Functions, and Google Cloud Functions. Unlike in microservice architectures, functions are the smallest FaaS modules that can be deployed. Developers build separate functions to handle each request. Hardware provisioning and container management are taken care of by cloud providers. A serverless architecture is a single toolkit that can manage deployment, provisioning, and maintenance. Functions are event-driven in that an event can be triggered by the end user or by another function.

Features such as AWS Step Functions make it easier to build serverless systems. There are multiple advantages associated with using a serverless architecture instead of a microservice architecture.

The price of this type of platform depends on the number of requests that are processed and the duration of the execution. FaaS can scale up with incoming traffic loads. This eliminates servers that are always up and running. Instead, the functions are in an idle state if there are no requests; when requests flow in, they are activated, and the requests are processed. A key issue associated with serverless architecture is cloud lock-in, where the system is closely bound to the cloud platform and its features. Also, you cannot run a long-running process on functions, as most FaaS vendors restrict execution time to a certain period. There are other concerns, such as security, multitenancy, and a lack of monitoring tools in serverless architectures. However, serverless provides developers with an agile and rapid method of development to build applications more easily than in microservice architectures.
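To make this concrete, here is a minimal sketch of how a Ballerina function can be exposed as an AWS Lambda handler. It assumes the ballerinax/awslambda module that shipped with Ballerina Swan Lake at the time of writing, and the handler simply echoes its input; Chapter 6, Moving on to Serverless Architecture, walks through the full build and deployment workflow:

import ballerinax/awslambda;

// A hedged sketch: the @awslambda:Function annotation marks this function as a
// deployable Lambda handler; this example just echoes the JSON event it receives.
@awslambda:Function
public function echo(awslambda:Context ctx, json input) returns json {
    return input;
}

Building the project then produces an artifact that can be uploaded to AWS Lambda, so the only code the developer writes is the business logic of the function itself.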

Definition of cloud native

In the developer community, cloud native has several definitions, but the underlying concept is the same. The Cloud Native Computing Foundation (CNCF) brings together cloud native developers from all over the world and offers a stage to create cloud native applications that are more flexible and robust. The cloud native definition from the CNCF can be found on their GitHub page.

According to the CNCF definition of cloud native, cloud native empowers organizations to build scalable applications on different cloud platforms, such as public, private, and hybrid clouds. Technologies such as containers, container orchestration tools, and configurable infrastructure make cloud native much more convenient.

Having a loosely coupled system is a key feature of cloud native applications that allows the building of a much more resilient, manageable, and observable system. Continuous Integration and Continuous Deployment (CI/CD) simplify and speed up the deployment process.

Other than the CNCF definition, pioneers in cloud native development have numerous definitions, and there are common features that cloud native applications should have across all the definitions. The key aim of being cloud native is to empower companies by offering a much more scalable, resilient, and maintainable application on cloud platforms.

By looking at the definition, we can see there are a few properties that cloud native applications should have:

Cloud native systems should be loosely coupled; each service is capable of operating independently. This allows cloud native applications to be simple and scalable.
Cloud native applications should be able to recover from failures.
Application deployment and maintenance should be easy.
Cloud native application system internals should be observable.

If we drill down a little further into cloud native applications, they all share the following common characteristics:

Statelessness: Cloud systems do not preserve the state of running instances. All state that is necessary to construct business logic is kept in the database. All services are expected to read data from the database, process it, and return it where it is needed. This characteristic is critical when resilience and scalability come into play. Services are created and destroyed based on what the system administrator has specified. If a service keeps its state on the running instance, it becomes a problem to scale the system up and down. Simply put, all services should be disposable at any time.
Modular design: Applications should be minimal and concentrate on solving a particular business problem. In the microservice architecture, services are the smallest business unit that solves a particular business problem. Services may be exposed and managed as APIs, so other modules do not need to know the internal implementation of each module. Interservice communication protocols can be used to communicate with each provider and perform a task (a minimal service sketch follows this list).
Automated delivery: Cloud native systems should be able to be deployed automatically. As cloud native systems are intended for large applications, there could be several independent services running. If a new version is released for a specific module, the system should be able to adapt to the change with zero downtime. Maintaining the system should be achieved with less effort at a low cost.
Isolation from the server and OS dependencies: Services run in isolated environments in which the host computer is not directly involved. This makes the services independent of the host computer and able to operate on any OS. Container technologies help to accomplish this by wrapping code in a container and offering an OS-independent platform to work with.
Multitenancy: Multi-tenant cloud applications isolate the data of different tenants, so users can view their own information only. Multi-tenant architectures greatly increase the security of cloud applications and let multiple entities use the same system.
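As a taste of how small such a modular service can be, the following is a minimal sketch of a Ballerina HTTP service that exposes a single business capability as an API. The service path, port, and message are illustrative only; Chapter 3, Building Cloud Native Applications with Ballerina, covers services in detail:

import ballerina/http;

// A minimal, stateless order-status service exposed as an HTTP API.
service /orders on new http:Listener(8080) {
    // GET /orders/status returns a simple status message.
    resource function get status() returns string {
        return "Order service is up and running";
    }
}

Because the service holds no state of its own, any number of identical replicas can be created or destroyed behind a load balancer, which is exactly the disposability described above.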

Why should you select a cloud native architecture?

The latest trend in the industry is cloud native applications, with businesses increasingly striving to move to the cloud due to the many benefits associated with it. The following are some of those benefits:

Scalability
Reliability
Maintainability
Cost-effectiveness
Agile development

Let's talk about each in detail.

Scalability

As applications are stateless by nature, the system administrator can easily scale the application up or down by simply increasing or decreasing the number of service instances. If the traffic is heavy, the system can be scaled up and the traffic distributed. On the other hand, if the traffic is low, the system can be scaled down to avoid consuming resources unnecessarily.

Reliability

If one service goes down, the load can be distributed to another service and the work can continue. Because cloud native applications are stateless, no particular service instance is special: in the event of failure, a service can easily be replaced by a new one. This stateless nature helps in building a reliable system and ensures fault tolerance for the entire application.

Maintainability

The whole system can be automated by using automation tools. Whenever someone wants to modify the system, it's as simple as sending a pull request to Git. When it's merged, the system upgrades to a new version. Deployment is also simple: as services are separate, developers only need to consider part of the system rather than the entire system. Developers can easily deploy changes to a development environment with CI/CD pipelines and then move on to the testing and production environments with a single click.

Cost-effectiveness

Organizations can easily offload infrastructure management to third parties instead of running on-site platforms that require a lot of money for management and maintenance. This allows the system to scale based on the pay-as-you-go model, so organizations simply don't need to keep paying for idle servers.

Agile development

In cloud native applications, services are built as various independent components. Each team that develops a service determines which technologies should be used for its implementation, such as programming languages, frameworks, and libraries. For example, developers can select the Python language to create a machine learning service and the Go language to perform some calculations. Development teams can deliver applications more regularly and efficiently with the benefit of automated deployment.

Challenges of cloud native architecture

While there are benefits of cloud native architecture, there are a few challenges associated with it as well. We will cover them in the following sections.

Security and privacy

Even though cloud service providers provide security for your system, your application should still be implemented securely to protect data from vulnerabilities. As there are so many moving components in cloud native applications, the risk of a security breach is greater, and it grows as the application becomes more and more complex. Design and modifications should always be done with security in mind. Always comply with security best practices and use security monitoring software for all releases to analyze security breaches. Use the security features of the language you use to implement services.

The complexity of the system

Cloud native is intended for developing large applications on cloud platforms. When applications get bigger and bigger, it's natural that they will get more complicated as well. Cloud native applications can have a large number of components, unlike monolithic applications. These components need to communicate with each other, and if this communication is not handled correctly, it degrades the whole system.

The complexity of cloud native applications is primarily due to communication between different services. The system should be built in a manner in which such network communications are well managed. Proper design and documentation make the system manageable and understandable. When developing a complex cloud native application that has a lot of business requirements, make sure to use design patterns intended for cloud native applications, such as API Gateway, Circuit Breaker, CQRS, or Saga. These patterns significantly reduce the complexity of your system.
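As an illustration of how little code such a pattern can require, the following is a minimal sketch of attaching a circuit breaker to an outbound HTTP call in Ballerina. It assumes the circuit breaker options of the Ballerina Swan Lake http module at the time of writing; the backend URL, thresholds, and status codes are illustrative values only, and Chapter 4, Inter-Process Communication and Messaging, covers resiliency patterns in depth:

import ballerina/http;
import ballerina/io;

public function main() returns error? {
    // The circuit trips when too many calls to the backend fail within the
    // rolling window, protecting the rest of the system from the outage.
    http:Client backend = check new ("http://localhost:9090", {
        circuitBreaker: {
            rollingWindow: {timeWindow: 10, bucketSize: 2, requestVolumeThreshold: 0},
            failureThreshold: 0.2, // trip after 20% of requests in the window fail
            resetTime: 10,         // seconds to wait before probing the backend again
            statusCodes: [500, 502, 503]
        }
    });
    string response = check backend->get("/status");
    io:println(response);
}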

Cloud lock-in

Technology lock-in is not specific to cloud native architectures; it can occur anywhere technology is constantly evolving. Each cloud provider has its own ways of deploying and maintaining applications. For example, the deployment of infrastructure, messaging protocols, and transport protocols might vary from one vendor to another, and moving to a different vendor later is therefore an issue. While building and designing a system, ensure compliance with community-based standards rather than vendor-specific standards. When you are selecting messaging protocols and transport protocols, check the community support for them and make sure they are commonly used, community-based standards.

Deploying cloud native applications

Cloud native systems involve a significant number of different deployments, unlike the deployment of a monolithic application. Cloud applications can be spread over multiple locations. Deployment tools should be able to handle the distributed nature of cloud applications. Compared to monolithic applications, you may need to think about infrastructure deployment as well as program development. This makes cloud native deployment even more complicated.

Deploying a new version of a service is also a problem that you need to focus on when building a distributed system. Make sure you have a proper plan to move from one version to another since, unlike monolithic applications, cloud native applications are designed to provide services continuously.

Design is complex and hard to debug

Each of the cloud native system's services is intended to address certain specific business use cases. But if there are hundreds of these processes interacting with each other to provide a solution, it is difficult to understand the overall behavior of the system.

Unlike in monolithic systems, there are many replicated processes, and debugging a cloud native application therefore often becomes more challenging.

Together with analytic tools, logging, tracing, and metrics make the debugging process easier. Use monitoring tools to keep track of the system continuously. Automated tools are available that collect logs, traces, and metrics and provide a dashboard for system analysis.

Testing cloud native applications

Another challenge associated with delivering cloud native applications is that the testing of applications is difficult due to consistency issues. Cloud native applications are designed with a view to the eventual consistency of data. When doing integration testing, you still need to take care of the consistency of the data. There are several test patterns that you can use to prevent this kind of problem. In Chapter 10, Building a CI/CD Pipeline for Ballerina Applications, we will discuss automated testing and deployment further.

Placing Ballerina on cloud native architecture

The main goal of the Ballerina language is to provide a general-purpose programming language that strongly supports all cloud native aspects. Ballerina's built-in features let programmers create cloud native applications with less effort. In the coming chapters, you will both gain programming knowledge of Ballerina and become familiar with the underlying principles that you should know about to create cloud native applications.

Ballerina is a free, open source programming language. All of its features and tools are free to use. Even though Ballerina is new to the programming world, it has a growing set of supporting libraries that you can find in Ballerina Central. Ballerina provides built-in functionality for creating Docker images and deploying them on Kubernetes, and these deployment artifacts can be kept along with the source code. Serverless deployment is also easy with Ballerina's built-in AWS Lambda and Azure Functions support.
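As a preview, the following is a minimal sketch of that built-in deployment support, assuming the code-to-cloud options available in Ballerina Swan Lake; the port, path, and image details are illustrative, and Chapter 3, Building Cloud Native Applications with Ballerina, covers the generated Docker and Kubernetes artifacts in detail:

import ballerina/http;

// Building this package with `bal build --cloud=docker` (or `--cloud=k8s`)
// asks the compiler to generate a Docker image (or Kubernetes YAML) for it;
// an optional Cloud.toml in the package can override details such as the image name.
service /hello on new http:Listener(9090) {
    resource function get greeting() returns string {
        return "Hello from a containerized Ballerina service!";
    }
}

In the chapters that follow, we will build up from these basics to fully deployed, observable cloud native applications.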