Site reliability engineering (SRE) is being touted as the leading paradigm for establishing and ensuring next-generation, high-quality software solutions.
This book starts by introducing you to the SRE paradigm and covers the need for highly reliable IT platforms and infrastructures. As you make your way through the next set of chapters, you will learn to develop microservices using Spring Boot and make use of RESTful frameworks. You will also learn about GitHub for deployment, containerization, and Docker containers. Practical Site Reliability Engineering teaches you to set up and sustain containerized cloud environments, and also covers architectural and design patterns and reliability implementation techniques such as reactive programming, and languages such as Ballerina and Rust. In the concluding chapters, you will become well-versed in service mesh solutions such as Istio and Linkerd, and understand service resilience test practices, API gateways, and edge/fog computing.
By the end of this book, you will have gained experience in working with SRE concepts and be able to deliver highly reliable apps and services.
Page count: 447
Year of publication: 2018
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Gebin George
Acquisition Editor: Rohit Rajkumar
Content Development Editor: Priyanka Deshpande
Technical Editor: Rutuja Patade
Copy Editor: Safis Editing
Project Coordinator: Drashti Panchal
Proofreader: Safis Editing
Indexer: Mariammal Chettiyar
Graphics: Tom Scaria
Production Coordinator: Aparna Bhagat
First published: November 2018
Production reference: 1301118
Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.
ISBN 978-1-78883-956-3
www.packtpub.com
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Pethuru Raj Chelliah (PhD) works as the chief architect at the Site Reliability Engineering Center of Excellence, Reliance Jio Infocomm Ltd. (RJIL), Bangalore. Previously, he worked as a cloud infrastructure architect at the IBM Global Cloud Center of Excellence, IBM India, Bangalore, for four years. He also had an extended stint as a TOGAF-certified enterprise architecture consultant in Wipro Consulting services division and as a lead architect in the corporate research division of Robert Bosch, Bangalore. He has more than 17 years of IT industry experience.
Shreyash Naithani is currently a site reliability engineer at Microsoft R&D. Prior to Microsoft, he worked with both start-ups and mid-level companies. He completed his PG Diploma from the Centre for Development of Advanced Computing, Bengaluru, India, and is a computer science graduate from Punjab Technical University, India. In a short span of time, he has had the opportunity to work as a DevOps engineer with Python/C#, and as a tools developer, site/service reliability engineer, and Unix system administrator. During his leisure time, he loves to travel and binge watch series.
Shailender Singh is a principal site reliability engineer and a solution architect with around 11 years' IT experience who holds two master's degrees, in IT and computer application. He has worked as a C developer on the Linux platform. He has had exposure to almost all infrastructure technologies, from hybrid to cloud-hosted environments. In the past, he has worked with companies including McKinsey, HP, HCL, Revionics, and Avalara, and these days he tends to use AWS, K8s, Terraform, Packer, Jenkins, Ansible, and OpenShift.
Pankaj Thakur has a master's degree in computer applications from Dr. A.P.J. Abdul Kalam Technical University, formerly known as Uttar Pradesh Technical University (UPTU), one of the most reputable universities in India. With over 13 years' experience and expertise in the field of IT, he has worked with numerous clients across the globe. Pankaj has a keen interest in cloud technologies, AI, machine learning, and automation. He has successfully completed several cloud migrations, converting monolithic applications to microservice architectures. With his knowledge and experience, he believes readers are going to gain a lot from this book and that it will enhance their SRE skills.
Ashish Kumar has an engineering degree in IT from Himachal Pradesh University, Shimla. He has been working in the field of DevOps consulting, container-based applications, development, monitoring, performance engineering, and SRE practices. He has been a core team member of DevOps implementation and SRE practice implementation. He is passionate about identifying toil work and automating it using software practices. During his free time, he loves to go trekking, play outdoor games, and meditate.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
Practical Site Reliability Engineering
Dedication
About Packt
Why subscribe?
Packt.com
Contributors
About the authors
About the reviewers
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Demystifying the Site Reliability Engineering Paradigm
Setting the context for practical SRE
Characterizing the next-generation software systems 
Characterizing the next-generation hardware systems 
Moving toward hybrid IT and distributed computing
Envisioning the digital era
The cloud service paradigm
The ubiquity of cloud platforms and infrastructures 
The growing software penetration and participation
Plunging into the SRE discipline
The challenges ahead
The need for highly reliable platforms and infrastructures 
The need for reliable software
The emergence of microservices architecture 
Docker enabled containerization
Containerized microservices
Kubernetes for container orchestration
Resilient microservices and reliable applications
Reactive systems 
Reactive systems are highly reliable
The elasticity of reactive systems
Highly reliable IT infrastructures
The emergence of serverless computing
The vitality of the SRE domain
The importance of SREs 
Toolsets that SREs typically use
Summary
Microservices Architecture and Containers
What are microservices?
Microservice design principles
Deploying microservices
Container platform-based deployment tools
Code as function deployment
Programming language selection criteria in AWS Lambda
Virtualization-based platform deployment
Practical examples of microservice deployment
A container platform deployment example with Kubernetes
Code as function deployment 
Example 1 – the Apex deployment tool
Example 2 – the Apex deployment tool
Example 3 – the Serverless deployment tool 
Virtual platform-based deployment using Jenkins or TeamCity
Microservices using Spring Boot and the RESTful framework
Jersey Framework
Representational State Transfer (REST)
Deploying the Spring Boot application
Monitoring the microservices
Application metrics
Platform metrics
System events
Tools to monitor microservices
Important facts about microservices
Microservices in the current market
When to stop designing microservices
Can the microservice format be used to divide teams into small or micro teams? 
Microservices versus SOA
Summary
Microservice Resiliency Patterns
Briefing microservices and containers
The containerization paradigm
IT reliability challenges and solution approaches
The promising and potential approaches for resiliency and reliability
MSA is the prominent way forward
Integrated platforms are the need of the hour for resiliency
Summary
DevOps as a Service
What is DaaS?
Selecting tools isn't easy
Types of services under DaaS
An example of one-click deployment and rollback 
Configuring automated alerts
Centralized log management
Infrastructure security
Continuous process and infrastructure development
CI and CD
CI life cycle
CI tools
Installing Jenkins
Jenkins setup for GitHub
Setting up the Jenkins job
Installing Git
Starting the Jenkins job
CD
Collaboration with development and QA teams
The role of developers in DevOps
The role of QA teams in DevOps
QA practices     
Summary
Container Cluster and Orchestration Platforms
Resilient microservices 
Application and volume containers 
Clustering and managing containers
What are clusters?
Container orchestration and management
What is container orchestration?
Summary
Architectural and Design Patterns
Architecture pattern
Design pattern
Design pattern for security
Design pattern for resiliency
Design pattern for scalability
Design pattern for performance
Design principles for availability
Design principles for reliability
Design patterns – circuit breaker
Advantages of circuit breakers
Closed state 
Open state 
Half-open state 
Summary
Reliability Implementation Techniques
Ballerina programming 
A hello program example
A simple example with Twitter integration 
Kubernetes deployment code
A circuit breaker code example
Ballerina data types
Control logic expression
The building blocks of Ballerina
Ballerina command cheat sheet
Reliability
Rust programming
Installing Rust
Concept of Rust programming
The ownership of variables in Rust
Borrowing values in Rust
Memory management in Rust
Mutability in Rust
Concurrency in Rust
Error-handling in Rust
The future of Rust programming
Summary
Realizing Reliable Systems - the Best Practices
Reliable IT systems – the emerging traits and tips
MSA for reliable software
The accelerated adoption of containers and orchestration platforms
The emergence of containerized clouds
Service mesh solutions
Microservices design – best practices
The relevance of event-driven microservices
Why asynchronous communication? 
Why event-driven microservices? 
Asynchronous messaging patterns for event-driven microservices
The role of EDA to produce reactive applications 
Command query responsibility segregation pattern
Reliable IT infrastructures
High availability
Auto-scaling
Infrastructure as code 
Summary
Service Resiliency
Delineating the containerization paradigm
Why use containerization? 
Demystifying microservices architecture 
Decoding the growing role of Kubernetes for the container era
Describing the service mesh concept
Data plane versus control plane summary
Why is service mesh paramount?
Service mesh architectures
Monitoring the service mesh
Service mesh deployment models
Summary
Containers, Kubernetes, and Istio Monitoring
Prometheus
Prometheus architecture
Setting up Prometheus
Configuring alerts in Prometheus
Grafana
Setting up Grafana
Configuring alerts in Grafana
Summary
Post-Production Activities for Ensuring and Enhancing IT Reliability
Modern IT infrastructure
Elaborating the modern data analytics methods
Monitoring clouds, clusters, and containers
The emergence of Kubernetes 
Cloud infrastructure and application monitoring
The monitoring tool capabilities
The benefits
Prognostic, predictive, and prescriptive analytics
Machine-learning algorithms for infrastructure automation
Log analytics
Open source log analytics platforms
Cloud-based log analytics platforms
AI-enabled log analytics platforms
Loom
Enterprise-class log analytics platforms 
The key capabilities of log analytics platforms
Centralized log-management tools
IT operational analytics 
IT performance and scalability analytics
IT security analytics
The importance of root-cause analysis 
OverOps enhances log-management
Summary
Further Readings
Service Meshes and Container Orchestration Platforms
About the digital transformation
Cloud-native and enabled applications for the digital era
Service mesh solutions
Linkerd
Istio
Visualizing an Istio service mesh
Microservice API Gateway
The benefits of an API Gateway for microservices-centric applications
Security features of API Gateways
API Gateway and service mesh in action
API management suite
Ensuring the reliability of containerized cloud environments
The journey toward containerized cloud environments
The growing solidity of the Kubernetes platform for containerized clouds
Kubernetes architecture – how it works
Installing the Kubernetes platform
Installing the Kubernetes client
Installing Istio on Kubernetes
Trying the application
Deploying services to Kubernetes
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
Increasingly, enterprise-scale applications are being hosted and managed in software-defined cloud environments. As cloud technologies and tools are quickly maturing and stabilizing, cloud adoption as the one-stop IT solution for producing and running all kinds of business workloads is rapidly growing across the globe. However, there are a few crucial challenges in successfully running cloud centers (public, private, hybrid, and edge). The aspects of automation and orchestration are being lauded as the way forward to surmount the challenges that are brewing in operating clouds and for realizing the originally envisioned benefits of the cloud idea. The widely expressed concern associated with the cloud is reliability (resiliency and elasticity). The other noteworthy trend is the emergence of web-scale and mobile-enabled operational, transactional, and analytical applications. It is therefore essential to ensure the stability, fault tolerance, and high availability of data and process-intensive applications as far as possible. The reliability concern is being overwhelmingly tackled through the smart leveraging of pioneering technologies.
This book articulates and accentuates how a suite of breakthrough technologies and tools blends well to ensure the highest degree of reliability, not only for professional and personal applications, but also for cloud infrastructures. Let's envisage and embrace reliable systems.
Practical Site Reliability Engineering helps software developers, IT professionals, DevOps engineers, performance specialists, and system engineers understand how the emerging domain of Site Reliability Engineering (SRE) comes in handy in automating and accelerating the process of designing, developing, debugging, and deploying highly reliable applications and services.
Chapter 1, Demystifying the Site Reliability Engineering Paradigm, introduces the new SRE domain and the need for SRE patterns, platforms, practices, programming models and processes, enabling frameworks, appropriate technologies, techniques, tools, and tips.
Chapter 2, Microservices Architecture and Containers, introduces concepts such as containerization, microservice architecture (MSA), and container management and clustering, which contribute to the realization of reliable applications and environments.
Chapter 3, Microservice Resiliency Patterns, focuses on various microservice resiliency patterns that intrinsically and insightfully enable the design, development, debugging, delivery, and deployment of reliable systems.
Chapter 4, DevOps as a Service, covers DevOps under SRE, since automation and DevOps play a big role in the SRE journey.
Chapter 5, Container Cluster and Orchestration Platforms, provides a detailed explanation of container clustering and orchestration platforms for ensuring the goals of SRE.
Chapter 6, Architectural and Design Patterns, explains how architecture and design are the ultimate building blocks during service or microservice development, giving you clarity and direction to implement any logic in the cloud era.
Chapter 7, Reliability Implementation Techniques, walks through reliability implementation techniques using emerging programming languages such as Ballerina and Rust.
Chapter 8, Realizing Reliable Systems – the Best Practices, includes the best practices arising from the expertise, experience, and education of site reliability engineers, DevOps people, and cloud engineers.
Chapter 9, Service Resiliency, explains how containerization, Kubernetes, and the service mesh concept contribute to service resiliency.
Chapter 10, Containers, Kubernetes, and Istio Monitoring, covers how we can monitor applications or services running on clusters, pods, and Kubernetes using Prometheus and Grafana.
Chapter 11, Post-Production Activities for Ensuring and Enhancing IT Reliability, looks at the various activities to be performed in order to prevent any kind of disaster, so as to fully guarantee the SLAs agreed with customers, clients, and consumers.
Chapter 12, Service Meshes and Container Orchestration Platforms, conveys what the multi-cloud approach is and why it is gaining unprecedented market and mind share.
Readers should have a basic knowledge of cloud infrastructure, Docker containers, MSA, and DevOps.
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packt.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Practical-Site-Reliability-Engineering. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781788839563_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Create a file using vim and run the hello.bal command:"
A block of code is set as follows:
fn main() {
    panic!("Something is wrong... Check for Errors");
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
import ballerina/config;
import ballerina/io;
import wso2/twitter;
endpoint http:Listener listener {
port:9090
}
Any command-line input or output is written as follows:
$ apex deploy auth
$ apex deploy auth api
Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "We can click on Istio Mesh Dashboard to see the global request volume and look at our success and failure rate."
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
To provide competitive and cognitive services to their venerable customers and clients, businesses across the globe are strategizing to leverage the distinct capabilities of IT systems. There is a widespread recognition that IT is the most crucial contributor and important ingredient for achieving the required business automation, augmentation, and acceleration. The unique advancements being harvested in the IT space directly enable the much-anticipated business productivity, agility, affordability, and adaptivity. In other words, businesses across the globe unwaveringly expect their business offerings, outputs, and operations to be robust, reliable, and versatile. This demand has a direct and decisive impact on IT, and hence IT professionals are striving hard and stretching further to put highly responsive, resilient, scalable, available, and secure systems in place to meet the varying needs and mandates of businesses. Thus, with the informed adoption of all kinds of noteworthy advancements being unearthed in the IT space, business houses and behemoths can fulfil the elusive goal of customer satisfaction.
Recently, there has been a widespread insistence for IT reliability that, in turn, enables business dependability. There are refined processes, integrated platforms, enabling patterns, breakthrough products, best practices, optimized infrastructures, adaptive features, and architectures toward heightened IT reliability.
This chapter will explain the following topics:
The origin
The journey so far
The fresh opportunities and possibilities
The prospects and perspectives
The impending challenges and concerns
The future
Precisely speaking, the charter for any Site Reliability Engineering (SRE) team in any growing IT organization is twofold: how to create highly reliable applications, and how to plan, provision, and put up highly dependable, scalable, available, performant, and secure infrastructures to host and run those applications.
It is appropriate to give some background information for this new engineering discipline to enhance readability. SRE is a quickly emerging and evolving field of study and research. The market and mind shares of the SRE field are consistently climbing. Businesses, having decisively understood the strategic significance of SRE, are keen to formulate and firm up a workable strategy.
Software applications are increasingly complicated yet sophisticated. Highly integrated systems are the new norm these days. Enterprise-grade applications ought to be seamlessly integrated with several third-party software components running in distributed and disparate systems. Increasingly, software applications are made out of a number of interactive, transformative, and disruptive services in an ad hoc manner on an as-needed basis. Multi-channel, multimedia, multi-modal, multi-device, and multi-tenant applications are becoming pervasive and persuasive. There are also enterprise, cloud, mobile, Internet of Things (IoT), blockchain, cognitive, and embedded applications hosted in virtual and containerized environments. Then, there are industry-specific and vertical applications (energy, retail, government, telecommunication, supply chain, utility, healthcare, banking, and insurance, automobiles, avionics, and robotics) being designed and delivered via cloud infrastructures.
There are software packages, homegrown software, turnkey solutions, scientific and technical computing services, and customizable and configurable software applications to meet distinct business requirements. In short, there are operational, transactional, and analytical applications running on private, public, and hybrid clouds. With the exponential growth of connected devices, smart sensors and actuators, fog gateways, smartphones, microcontrollers, and single board computers (SBCs), software-enabled data analytics increasingly moves to edge devices to accomplish real-time data capture, processing, decision-making, and action.
We are destined to move towards real-time analytics and applications. Thus, it is clear that software is purposefully penetrative, participative, and productive. Largely, it is quite a software-intensive world.
Similar to the quickly growing software engineering field, hardware engineering is also on the fast track. These days, there are clusters, grids, and clouds of IT infrastructures. There are powerful appliances, cloud-in-a-box options, hyper-converged infrastructures, and commodity servers for hosting IT platforms and business applications. The physical machines are touted as bare-metal servers. The virtual versions of the physical machines are virtual machines and containers. We are heading toward the era of hardware infrastructure programming. That is, bare-metal servers, which are closed, inflexible, and difficult to manage and maintain, are being partitioned into a number of virtual machines and containers that are highly flexible, open, easily manageable, and replaceable, not to mention quickly provisionable, independently deployable, and horizontally scalable. The infrastructure partitioning and provisioning gets sped up with scores of automated tools to enable the rapid delivery of software applications. The rewarding aspects of continuous integration, deployment, and delivery are being facilitated through a combination of containers, microservices, configuration management solutions, DevOps tools, and Continuous Integration (CI) platforms.
Worldwide institutions, individuals, and innovators are keenly embracing cloud technology with all its clarity and confidence. With the faster maturity and stability of cloud environments, there is a distinct growth in building and delivering cloud-native applications, and there are viable articulations and approaches to readily make cloud native software. Traditional and legacy software applications are being meticulously modernized and moved to cloud environments to reap the originally envisaged benefits of the cloud idea. Cloud software engineering is one hot area, drawing the attention of many software engineers across the globe. There are public, private, and hybrid clouds. Recently, we have heard more about edge/fog clouds. Still, there are traditional IT environments that are being considered in the hybrid world.
There are development teams all over the world working in multiple time zones. Due to the diversity and multiplicity of IT systems and business applications, distributed applications are being touted as the way forward. That is, the various components of any software application are being distributed across multiple locations to enable redundancy-based high availability. Fault tolerance, lower latency, independent software development, and no vendor lock-in are being given as the reasons for the rise of distributed applications. Accordingly, software programming models are being adroitly tweaked so that they deliver optimal performance in the era of distributed and decentralized applications. Multiple development teams working in multiple time zones across the globe have become the new norm in this hybrid world of on-shore and off-shore development.
With the big-data era upon us, we need the most usable and uniquely distributed computing paradigm through the dynamic pool of commoditized servers and inexpensive computers. With the exponential growth of connected devices, the days of device clouds are not too far away. That is, distributed and decentralized devices are bound to be clubbed together in large numbers to form ad hoc and application-specific cloud environments for data capture, ingestion, pre-processing, and analytics. Thus, there is no doubt that the future belongs to distributed computing. The fully matured and stabilized centralized computing is unsustainable due to the need for web-scale applications. Also, the next-generation internet is the internet of digitized things, connected devices, and microservices.
There are a bunch of digitization and edge technologies bringing forth a number of business innovations and improvisations. As enterprises embrace these technologies, the ensuing era is being touted as the digital transformation and intelligence era. This section tells you about all that needs to change through the absorption of these pioneering and path-breaking technologies and tools.
The field of information and communication technology (ICT) is rapidly growing with the arrival of scores of pioneering technologies, and this trend is expediently and elegantly automating multiple business tasks. Then, the maturity and stability of orchestration technologies and tools is bound to club together multiple automated jobs and automate the aggregated ones. We will now discuss the latest trends and transitions happening in the ICT space.
Due to the heterogeneity and multiplicity of software technologies such as programming languages, development models, data formats, and protocols, software development and operational complexities are growing continuously. There are several breakthrough mechanisms to develop and run enterprise-grade software in an agile and adroit fashion. There came a number of complexity mitigation and rapid development techniques for producing production-grade software in a swift and smart manner. The leverage of "divide and conquer" and "the separation of crosscutting concerns" techniques is being consistently experimented with and developers are being encouraged to develop risk-free and futuristic software services. The potential concepts of abstraction, encapsulation, virtualization, and other compartmentalization methods are being invoked to reduce the software production pain. In addition, there are performance engineering and enhancement aspects that are getting the utmost consideration from software architects. Thus, software development processes, best practices, design patterns, evaluation metrics, key guidelines, integrated platforms, enabling frameworks, simplifying templates, and programming models are gaining immense significance in this software-defined world.
Thus, there are several breakthrough technologies for digital innovations, disruptions, and transformations. Primarily, the IoT paradigm generates a lot of multi-structured digital data, and the famous artificial intelligence (AI) technologies, such as machine and deep learning, enable the extrication of actionable insights out of the digital data. Transitioning raw digital data into information, knowledge, and wisdom is the key differentiator for implementing digitally transformed and intelligent societies. Cloud IT is being positioned as the best-in-class IT environment for enabling and expediting the digital transformation.
With digitization and edge technologies, our everyday items become digitized to join in with mainstream computing. That is, we will be encountering trillions of digitized entities and elements in the years ahead. With the faster stability and maturity of the IoT, cyber physical systems (CPS), ambient intelligence (AmI), and pervasive computing technologies and tools, we are being bombarded with innumerable connected devices, instruments, machines, drones, robots, utilities, consumer electronics, wares, equipment, and appliances. Now, with the unprecedented interest and investment in AI (machine and deep learning, computer vision, and natural language processing) algorithms and approaches, IoT device data (collaborations, coordination, correlation, and corroboration) is meticulously captured, cleansed, and crunched to extricate actionable insights/digital intelligence in time. There are several promising, potential, and proven digital technologies emerging and evolving quickly in synchronization with a variety of data mining, processing, and analytics. These innovations and disruptions eventually lead to digital transformation. Thus, digitization and edge technologies, in association with digital intelligence algorithms and tools, lead to the realization and sustenance of digitally transformed environments (smarter hotels, homes, hospitals, and so on). We can easily anticipate and articulate digitally transformed countries, counties, and cities in the years to come with pioneering and groundbreaking digital technologies and tools.
The cloud era is setting in and settling steadily. The aiding processes, platforms, policies, procedures, practices, and patterns are being framed and firmed up by IT professionals and professors, to tend toward the cloud. The following sections give the necessary details for our esteemed readers.
The cloud applications, platforms, and infrastructures are gaining immense popularity these days. Cloud applications are of two primary types:
Cloud-enabled: The currently running massive and monolithic applications get modernized and migrated to cloud environments to reap the distinct benefits of the cloud paradigm.
Cloud-native: This is all about designing, developing, debugging, delivering, and deploying applications directly on cloud environments by intrinsically leveraging the non-functional capabilities of cloud environments.
The current and conventional applications that are hosted and running on various IT environments are being meticulously modernized and migrated to standardized and multifaceted cloud environments to reap all the originally expressed benefits of the cloud paradigm. Besides enabling business-critical, legacy, and monolithic applications to be cloud-ready, there are endeavors for designing, developing, debugging, deploying, and delivering enterprise-class applications in cloud environments, harvesting all of the unique characteristics of cloud infrastructure and platforms. These applications natively absorb the various characteristics of cloud infrastructures and act adaptively. There is microservices architecture (MSA) for designing next-generation enterprise-class applications. MSA is being deftly leveraged to enable massive applications to be partitioned into a collection of decoupled, easily manageable, and fine-grained microservices.
With the decisive adoption of cloud technologies and tools, every component of enterprise IT is being readied to be delivered as a service. The cloud idea has really and rewardingly brought in a stream of innovations, disruptions, and transformations for the IT industry. The days of IT as a Service (ITaaS) will soon become a reality, due to a stream of noteworthy advancements and accomplishments in the cloud space.
The other key aspect is to have reliable, available, scalable, and secure IT environments (cloud and non-cloud). We talked about producing versatile software packages and libraries. We also talked about setting up and sustaining appropriate IT infrastructures for successfully running various kinds of IT and business applications. Increasingly, the traditional data centers and server farms are being modernized through the smart application of cloud-enablement technologies and tools. The cloud idea establishes and enforces IT rationalization, the heightened utilization of IT resources, and optimization. There is a growing number of massive public cloud environments (AWS, Microsoft Azure, Google Cloud, IBM Cloud, and Oracle Cloud) encompassing thousands of commodity and high-end server machines, storage appliance arrays, and networking components to accommodate and accomplish the varying IT needs of the whole world. Government organizations, business behemoths, various service providers, and institutions are transforming their own IT centers into private cloud environments. Then, on an as-needed basis, private clouds are beginning to match the various capabilities of public clouds to meet specific requirements. In short, cloud environments are being positioned as the one-stop IT solution for our professional, social, and personal IT requirements.
The cloud is becoming pervasive with the unique contributions of many players from the IT industry, worldwide academic institutions, and research labs. We have plenty of private, public, and hybrid cloud environments. The surging popularity of fog/edge computing leads to the formation of fog/edge device clouds, which are contributing immensely to produce people-centric and real-time applications. The fog or edge device computing is all about leveraging scores of connected and capable devices to form a kind of purpose-specific as well as agnostic device cloud to collect, cleanse and crunch sensor, actuator, device, machine, instrument, and equipment poly-structured and real-time data emanating from all sorts of physical, mechanical, and electrical systems on the ground. With the projected billions of connected devices, the future beckons and bats for device clusters and clouds. Definitely, the cloud movement has penetrated every industry and the IT phenomenon is redefined and resuscitated by the roaring success of the cloud. Soon, cloud applications, platforms, and infrastructures will be everywhere. IT is all set to become the fifth social utility. The pertinent and paramount challenge is how to bring forth deeper and decisive automation in the cloud IT space.
There is a need for deeply automated and adaptive cloud centers. With clouds emerging as the most flexible, futuristic, and fabulous IT environments to host and run IT and business workloads, there is a rush to bring in as much automation as possible to speed up the process of cloud migration, software deployment and delivery, cloud monitoring, measurement and management, cloud integration and orchestration, cloud governance and security, and so on. There are several trends and transitions happening simultaneously in the IT space to realize these goals.
Marc Andreessen famously penned the article Why Software Is Eating the World several years ago. Today, we widely hear, read, and even sometimes experience buzzwords such as software-defined compute, storage, and networking. Software is everywhere and gets embedded in everything. Software has, unquestionably, been the principal business automation and acceleration enabler. Nowadays, on its memorable and mesmerizing journey, software is penetrating every tangible thing (physical, mechanical, and electrical) in our everyday environments to transform them into connected entities, digitized things, smart objects, and sentient materials. For example, every advanced car today has been sagaciously stuffed with millions of lines of code to be elegantly adaptive in its operations, outputs, and offerings.
Precisely speaking, the ensuing era sets the stage for having knowledge-filled, situation-aware, event-driven, service-oriented, cloud-hosted, process-optimized, and people-centric applications. These applications need to exhibit a few extra capabilities. That is, the next-generation software systems innately have to be reliable, rewarding, and reactive ones. Also, we need to arrive at competent processes, platforms, patterns, procedures, and practices for creating and sustaining high-quality systems. There are widely available non-functional requirements (NFRs), quality of service (QoS), and quality of experience (QoE) attributes, such as availability, scalability, modifiability, sustainability, security, portability, and simplicity. The challenge for every IT professional lies in producing software that unambiguously and intrinsically guarantees all the NFRs.
Agile application design: We have come across a number of agile software development methodologies. We read about extreme and pair programming, scrum, and so on. However, for the agile design of enterprise-grade applications, the stability of MSA activates and accelerates the application design.
Accelerated software programming: As we all know, enterprise-scale and customer-facing software applications are being developed speedily nowadays, with the faster maturity of potential agile programming methods, processes, platforms, and frameworks. There are other initiatives and inventions enabling speedier software development. There are component-based software assemblies, and service-oriented software engineering is steadily growing. There are scores of state-of-the-art tools consistently assisting component and service-based application-building phenomena. On the other hand, the software engineering aspect gets simplified and streamlined through the configuration, customization, and composition-centric application generation methods.
Automated software deployment through DevOps: There are multiple reasons for software programs to run well on the developer's machine but not so well in other environments, including production environments. There are different editions, versions, and releases of software packages, platforms, programming languages, and frameworks, so getting software suites to run consistently across different environments is a challenge. There is a big disconnect between developers and operations teams due to constant friction between development and operating environments.
Further on, with agile programming techniques and tips, software applications get constructed quickly, but their integration, testing, building, delivery, and deployment aspects are not automated. Therefore, concepts such as DevOps, NoOps, and AIOps have gained immense prominence and dominance, bringing in several automation capabilities for IT administrators. That is, these new arrivals have facilitated a seamless and spontaneous synchronization between software design, development, debugging, deployment, delivery and decommissioning processes, and people. The emergence of configuration management tools and cloud orchestration platforms enables IT infrastructure programming. That is, the term Infrastructure as Code (IaC) is facilitating the DevOps concept. That is, faster provisioning of infrastructure resources through configuration files, and the deployment of software on those infrastructure modules, is the core and central aspect of the flourishing concept of DevOps.
This is the prime reason why the concept of DevOps has started flourishing these days. This is quite a new idea that's gaining a lot of momentum within enterprise and cloud IT teams. Companies embrace this new cultural change with the leverage of multiple toolsets for Continuous Integration (CI), Continuous Delivery (CD), and Continuous Deployment (CD). Precisely speaking, besides producing enterprise-grade software applications and platforms, realizing and sustaining virtualized/containerized infrastructures with the assistance of automated tools to ensure continuous and guaranteed delivery of software-enabled and IT-assisted business capabilities to mankind is the need of the hour.
We have understood the requirements and the challenges. The following sections describe how the SRE field is used to bridge the gap between supply and demand. As explained previously, building software applications through configuration, customization, and composition (orchestration and choreography) is progressing quickly. Speedier programming of software applications using agile programming methods is another incredible aspect of software building. The various DevOps tools from product and tool vendors quietly ensure continuous software integration, delivery, and deployment.
The business landscape is continuously evolving, and consequently the IT domain has to respond precisely and perfectly to the changing equations and expectations of the business houses. Businesses have to be extremely agile, adaptive, and reliable in their operations, offerings, and outputs. Business automation, acceleration, and augmentation are being solely provided by the various noteworthy improvements and improvisations in the IT domain.
IT agility and reliability directly guarantee business agility and reliability. As seen previously, the goal of IT agility (software design, development, and deployment) is getting fulfilled through newer techniques. Nowadays, IT experts are looking out for ways and means of significantly enhancing IT reliability goals. Typically, IT reliability equals IT elasticity and resiliency. Let us refer to the following bullets:
IT elasticity: When an IT system is suddenly under a heavy load, how does the IT system provision and use additional IT resources to take care of extra loads without affecting users? IT systems are supposed to be highly elastic to be right and relevant for the future of businesses. Furthermore, not only IT systems but also the business applications and the IT platforms (development, deployment, integration, orchestration, brokerage, and so on) have to be scalable. Thus, the combination of applications, platforms, and infrastructures has to contribute innately to being scalable (vertically, as well as horizontally).
IT resiliency: When an IT system is under attack from internal as well as external sources, the system has to have the wherewithal to wriggle out of that situation to continuously deliver its obligations to its subscribers without any slowdown or breakdown. IT systems have to be highly fault-tolerant to be useful for mission-critical businesses. IT systems have to come back to their original state automatically, even if they are made to deviate from their prescribed path. Thus, error prediction, identification, isolation, and other capabilities have to be embedded into IT systems. Security and safety issues also have to be dexterously detected and contained to come out unscathed.
Thus, when IT systems are resilient and elastic, they are termed reliable systems. When IT is reliable, then the IT-enabled businesses can be reliable in their deals, deeds, and decisions that, in turn, enthuse and enlighten their customers, employees, partners, and end users.
We discussed cloud-enabled and cloud-native applications and how they are hosted on underlying cloud infrastructures to accomplish service delivery. Applications are significantly functional. However, non-functional requirements, such as application scalability, availability, security, reliability, performance/throughput, modifiability, and so on, are being demanded widely. That is, producing high-quality applications is a real challenge for IT professionals. There are design, development, testing, and deployment techniques, tips, and patterns to incorporate the various NFRs into cloud applications. There are best practices and key guidelines to come out with highly scalable, available, and reliable applications.
The second challenge is to set up and sustain highly competent and cognitive cloud infrastructures that exhibit reliable behavior. The combination of highly resilient, robust, and versatile applications and infrastructures leads to the implementation of highly dependable IT that delivers the required business productivity, affordability, and adaptivity.
Having understood the tactical and strategic significance and value, businesses are consciously embracing the pioneering cloud paradigm. That is, all kinds of traditional IT environments are becoming cloud-enabled to reap the originally expressed business, technical, and use benefits. However, the cloud formation alone is not going to solve every business and IT problem. Besides establishing purpose-specific and agnostic cloud centers, there are a lot more things to be done to attain business agility and reliability. The cloud center operation processes need to be refined, integrated, and orchestrated to arrive at optimized and organized processes. Each of the cloud center operations needs to be precisely defined and automated in order to fulfil the true meaning of IT agility. With agile and reliable cloud applications and environments, the business competency and value are bound to go up remarkably.
We know that the subject of software reliability is a crucial one for the continued success of software engineering in the ensuing digital era. However, it is not an easy thing to do. Because of the rising complexity of software suites, ensuring high reliability turns out to be a tough and time-consuming affair. Experts, evangelists, and exponents have come out with a few interesting and inspiring ideas for accomplishing reliable software systems. Primarily, there are two principal approaches; these are as follows:
Resilient microservices can lead to the realization of reliable software applications. Popular technologies include microservices, containers, Kubernetes, Terraform, API Gateway and Management Suite, Istio, and Spinnaker.
Reactive systems (resilient, responsive, message-driven, and elastic): This is based on the famous Reactive Manifesto. There are a few specific languages and platforms (http://vertx.io/, http://reactivex.io/, https://www.lightbend.com/products/reactive-platform, RxJava, the Play Framework, and so on) for producing reactive systems. Akka is a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala.
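To make the reactive style more concrete, here is a minimal sketch using RxJava 3 (one of the libraries listed above). It is an illustrative example only, assuming the RxJava 3 artifact is on the classpath, and it simply composes an asynchronous, non-blocking pipeline with a fallback so the stream stays responsive under failure:

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.schedulers.Schedulers;

public class ReactiveSketch {
    public static void main(String[] args) throws InterruptedException {
        // A (hypothetical) stream of order IDs processed asynchronously off the caller's thread
        Observable.just("order-1", "order-2", "order-3")
                .subscribeOn(Schedulers.io())           // message-driven: work happens on a background scheduler
                .map(String::toUpperCase)               // transform each event as it flows through the pipeline
                .onErrorReturnItem("FALLBACK-ORDER")    // resilient: emit a fallback instead of propagating failure
                .subscribe(
                        processed -> System.out.println("Processed " + processed),
                        error -> System.err.println("Pipeline failed: " + error));
        Thread.sleep(500);                              // give the background scheduler time to finish in this demo
    }
}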
Here are the other aspects being considered for producing reliable software packages:
Verification and validation of software reliability through various testing methods
Software reliability prediction algorithms and approaches
Static and dynamic code analysis methods
Patterns, processes, platforms, and practices for building reliable software packages
Let's discuss these in detail.
Mission critical and versatile applications are to be built using the highly popular MSA pattern. Monolithic applications are being consciously dismantled using the MSA paradigm to be immensely right and relevant for their users and owners. Microservices are the new building block for constructing next-generation applications. Microservices are easily manageable, independently deployable, horizontally scalable, relatively simple services. Microservices are publicly discoverable, network accessible, interoperable, API-driven, composed, replaceable, and highly isolated.
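To make this more concrete, here is a minimal sketch of such a network-accessible, API-driven microservice written with Spring Boot (the framework used for the microservice examples later in this book). The class name and endpoint are illustrative assumptions, and the spring-boot-starter-web dependency is assumed to be on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A self-contained microservice exposing a single REST endpoint over HTTP
@SpringBootApplication
@RestController
public class GreetingService {

    // A simple health endpoint that a platform such as Kubernetes could probe
    @GetMapping("/health")
    public String health() {
        return "UP";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingService.class, args);
    }
}

Packaged as a container image, many such small, single-responsibility services can be deployed, scaled, and replaced independently of one another.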
Future software development is primarily about finding appropriate microservices. Here are a few advantages of the MSA style:
Scalability: Any production-grade application typically can use three types of scaling. The x-axis scaling is for horizontal scalability. That is, the application has to be cloned to guarantee high availability. The second type is y-axis scaling. This is for splitting the application into various application functionalities. With microservices architecture, applications (legacy, monolithic, and massive) are partitioned into a collection of easily manageable microservices, and each unit fulfils one responsibility. The third is z-axis scaling, which is for partitioning or sharding the data. The database plays a vital role in shaping up dynamic applications. With NoSQL databases, the concept of sharding came into prominence.
Availability: Multiple instances of microservices are deployed in different containers (Docker) to guarantee high availability. Through this redundancy, service and application availability is ensured. With multiple instances of services being hosted and run through Docker containers, the load-balancing of service instances is utilized to ensure the high availability of services. The widely used circuit breaker pattern is used to accomplish the much-needed fault tolerance (a minimal sketch of this pattern appears just after this list). That is, the redundancy of services through instances ensures high availability, whereas the circuit breaker pattern guarantees the resiliency of services. Service registry, discovery, and configuration capabilities lead to the development and discovery of newer services to bring forth additional business (vertical) and IT (horizontal) services. With services forming dynamic and ad hoc service meshes, the days of service communication, collaboration, corroboration, and correlation are not too far away.
Continuous deployment: Microservices are independently deployable, horizontally scalable, and self-defined. Microservices are decoupled/lightly coupled and cohesive, fulfilling the elusive mandate of modularity. The dependency-imposed issues get nullified by embracing this architectural style. This leads to the deployment of any service independently of the others for faster and more continuous deployment.
Loose coupling: As indicated previously, microservices are autonomous and independent, innately providing the much-needed loose coupling. Every microservice has its own layered architecture at the service level and its own database at the backend.
Polyglot microservices: Microservices can be implemented through a variety of programming languages. As such, there is no technology lock-in. Any technology can be used to realize microservices. Similarly, there is no compulsion to use certain databases. Microservices work with any filesystem, SQL databases, NoSQL and NewSQL databases, search engines, and so on.
Performance: There are performance engineering and enhancement techniques and tips in the microservices arena. For example, high-blocking-call services are implemented in a single-threaded technology stack, whereas high CPU usage services are implemented using multiple threads.
There are other benefits for business and IT teams by employing the fast-maturing and stabilizing microservices architecture. The tool ecosystem is on the climb, and hence implementing and involving microservices gets simplified and streamlined. Automated tools ease and speed up building and operationalizing microservices.
The Docker idea has shaken the software world. A bevy of hitherto-unknown advancements are being realized through containerization. The software portability requirement, which has been lingering for a long time, gets solved through the open source Docker platform. The real-time elasticity of Docker containers hosting a variety of microservices enabling the real-time scalability of business-critical software applications is being touted as the key factor and facet for the surging popularity of containerization. The intersection of microservices and Docker containers domains has brought in paradigm shifts for software developers, as well as for system administrators. The lightweight nature of Docker containers along with the standardized packaging format in association with the Docker platform goes a long way in stabilizing and speeding up software deployment.
A container is a way to package software along with the configuration files, dependencies, and binaries required to run the software in any operating environment. There are a number of crucial advantages; they are as follows:
Environment consistency: Applications/processes/microservices running in containers behave consistently in different environments (development, testing, staging, replica, and production). This eliminates any kind of environmental inconsistencies and makes testing and debugging less cumbersome and less time-consuming.
Faster deployment