Building CI/CD Systems Using Tekton

Develop flexible and powerful CI/CD pipelines using Tekton Pipelines and Triggers

Joel Lord

BIRMINGHAM—MUMBAI

Building CI/CD Systems Using Tekton

Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Vijin Boricha

Publishing Product Manager: Shrilekha Malpani

Senior Editor: Shazeen Iqbal

Content Development Editor: Romy Dias

Technical Editor: Shruthi Shetty

Copy Editor: Safis Editing

Project Coordinator: Shagun Saini

Proofreader: Safis Editing

Indexer: Vinayak Purushotham

Production Designer: Joshua Misquitta

First published: August 2021

Production reference: 1050821

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80107-821-4

www.packt.com

To Mom, who taught me the importance of books, and to Dad, for bringing that Hyperion home so many years ago.

Contributors

About the author

Joel Lord (joel__lord on Twitter) is passionate about the web and technology in general. He likes to learn new things, but most of all, he wants to share his discoveries. He does so by traveling to various conferences all across the globe.

He graduated from college with a degree in computer programming in the last millennium. Apart from a little break to get his BSc in computational astrophysics, he has always worked in the industry.

In his daily job, Joel is a developer advocate with MongoDB, where he connects with software engineers to help them make the web better by using best practices around JavaScript.

In his free time, he can be found stargazing on a campground somewhere or brewing a fresh batch of beer in his garage.

Before I started writing this book, I never realized how many people were involved in the process. So many people participated, making the end result what it is today. Big thanks to the team of illustrators, content editors, and technical reviewers. Special kudos to Romy Dias, who was a great help during the whole process. Most importantly, many thanks to my wonderful spouse, Natacha, for giving me the support I've needed to write this book.

About the reviewers

Jonas Pettersson is an independent software consultant with experience in different businesses and environments, from telecommunications to finance and start-ups. He is a software developer with experience mainly in Java-based environments. Over the last few years, he has been working in a Kubernetes-based platform team and has helped with continuous delivery and configuration management problems. Jonas has contributed to the Tekton project and is enthusiastic about the growing cloud-native landscape.

Brian Nguyen is an application architect and software engineer with a specialization in cloud-native applications and machine learning. Throughout his career, he has been responsible for entire software product developments, including collecting new requirements from the product manager, architecting custom solutions for the customer, and ensuring quality and security for the whole system. After several years in software development, in 2019, Brian joined Red Hat to work as an architect, where he is responsible for machine learning activities on the Kubernetes platform. Brian currently holds a Bachelor of Science in computer engineering from the University of Florida and a Master of Science in computer science from the Georgia Institute of Technology.

Table of Contents

Preface

Section 1: Introduction to CI/CD

Chapter 1: A Brief History of CI/CD

The early days

Waterfall model

Understanding the impacts of Agile development practices

Here be testing

Deploying in the era of the cloud

Works on my machine!

The cloud today – cloud native

The future of the cloud

Demystifying continuous integration versus continuous delivery versus continuous deployment

Continuous integration

Continuous delivery

Continuous deployment

CI/CD in the real world

Summary

Chapter 2: A Cloud-Native Approach to CI/CD

Being a software developer in the age of cloud-native development

Understanding cloud-native CI/CD

Containers

Serverless

DevOps

Introducing Tekton

Tekton CLI

Tekton Triggers

Tekton Catalog

Tekton Dashboard

Exploring Tekton's building blocks

Steps, tasks, and pipelines

Where to use a step, a task, or a pipeline

Workspaces

Understanding TaskRuns and PipelineRuns

TaskRuns

PipelineRuns

Summary

Section 2: Tekton Building Blocks

Chapter 3: Installation and Getting Started

Technical requirements

Setting up a developer environment

Git

Node.js

VS Code

Installing a container runtime and setting up a registry

Docker

Docker Hub

Picking a Kubernetes distribution (local, cloud, hosted)

minikube

Connecting to your Kubernetes cluster

Preparing the Tekton tooling

Tekton Dashboard

Summary

Chapter 4: Stepping into Tasks

Technical requirements

Introducing tasks

Understanding Steps

Building your first task

Adding additional Steps

Using scripts

Adding task parameters

Making the Hello task more reusable

Using array type parameters

Adding a default value

Sharing data

Accessing the home directory

Using results

Using Kubernetes volumes

Using workspaces

Visualizing tasks

The VS Code Tekton Pipelines extension

Tekton Dashboard

Digging into TaskRuns

Getting your hands dirty

More than Hello World

Build a generic curl task

Create a random user

Summary

Chapter 5: Jumping into Pipelines

Technical requirements

Introducing pipelines

Building your first pipeline

Parameterizing pipelines

Reusing tasks in the context of a pipeline

Ordering tasks within pipelines

Using task results in pipelines

Introducing pipeline runs

Getting your hands dirty

Back to the basics

Counting files in a repo

Weather services

Summary

Chapter 6: Debugging and Cleaning Up Pipelines and Tasks

Technical requirements

Debugging pipelines

Running a halting task

Adding a finally task

Getting your hands dirty

Fail if root

Make your bets

Summary

Chapter 7: Sharing Data with Workspaces

Technical requirements

Introducing workspaces

Types of volume sources

emptyDir

ConfigMap

Secret

Persistent volume claims and volume claim templates

Using your first workspace

Using workspaces with task runs

Adding a workspace to a pipeline

Persisting data within a pipeline

Cleaning up with finally

Using workspaces in pipeline runs

Using volume claim templates

Getting your hands dirty

Write and read

Pick a card

Hello admin

Summary

Chapter 8: Adding when Expressions

Technical requirements

Introducing when expressions

Using when expressions with parameters

Using the notin operator

Using when expressions with results

Getting your hands dirty

Hello Admin

Critical Hit

Not working on weekends

Summary

Chapter 9: Securing Authentication

Technical requirements

Introducing authentication in Tekton

Authenticating into a Git repository

Basic authentication

SSH authentication

Authenticating in a container registry

Summary

Section 3: Tekton Triggers

Chapter 10: Getting Started with Triggers

Technical requirements

Introducing Tekton Triggers

Installing Tekton Triggers

Configuring your cluster

Using a local cluster

Cloud-based clusters

Defining new objects

Trigger templates

Trigger bindings

Event listeners

Summary

Chapter 11: Triggering Tekton

Technical requirements

Creating a pipeline to be triggered

Creating the trigger

TriggerBinding

TriggerTemplate

EventListener

Configuring the incoming webhooks

Creating a secret

Exposing a route

Making the route publicly available

Configuring your GitHub repository

Triggering the pipeline

Summary

Section 4: Putting It All Together

Chapter 12: Preparing for a New Pipeline

Technical requirements

Cleaning up your cluster

Installing the necessary tooling

Exploring the source code

Creating the container

Deploying the application

Updating the application manually

Summary

Chapter 13: Building a Deployment Pipeline

Technical requirements

Identifying the components

Using the task catalog

Adding an additional task

Creating the pipeline

Creating the trigger

Summary

Assessments

Technical requirements

Chapter 4

More than Hello World

Build a generic curl task

Create a random user

Chapter 5

Back to the basics

Counting files in a repo

Weather services

Chapter 6

Fail if root

Make your bets

Chapter 7

Write and read

Pick a card

Hello admin

Chapter 8

Hello Admin

Critical Hit

Not working on weekends

Other Books You May Enjoy

Preface

Tekton is a powerful yet flexible Kubernetes-native open source framework for creating continuous integration and continuous delivery (CI/CD) systems. It lets you build, test, and deploy across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details.

Building CI/CD Systems Using Tekton covers everything you need to know to start building your pipeline and automate application delivery in a cloud-native environment. Using a hands-on approach, you will learn about the basic building blocks that you can use to compose your CI/CD pipelines. You will then learn how to use these components in conjunction with Tekton Triggers to automate the delivery of your application in a Kubernetes cluster.

By the end of this book, you will know how to compose Tekton Pipelines and use them with Tekton Triggers to build powerful CI/CD systems.

Who this book is for

This book caters to everyone who wants to learn about one of the most powerful Kubernetes-native CI/CD systems: Tekton. It is aimed at software developers who want to use Kubernetes Custom Resource Definitions (CRDs) and Tekton to run pipeline tasks in order to build and own their application delivery pipelines.

What this book covers

Chapter 1, A Brief History of CI/CD, takes you a step back in time and explains where CI/CD comes from and why it is so important nowadays. This will help you understand the importance of building robust pipelines for quicker delivery of your application.

Chapter 2, A Cloud-Native Approach to CI/CD, explains that Tekton is different from other CI/CD solutions because of its cloud-native approach. In this chapter, you will learn what cloud-native development is and what it means in the context of CI/CD pipelines.

Chapter 3, Installation and Getting Started, explains how to prepare your environment for the exercises that will be presented in the book.

Chapter 4, Stepping into Tasks, explains that tasks are the basic building block of Tekton pipelines. They are at the heart of the Tekton philosophy. In this chapter, you will learn how to build and use tasks that are reusable.
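
As a first taste of what this looks like, here is a minimal sketch of a Tekton task; the task name, image, and message are illustrative placeholders rather than exercises from the book:

```yaml
# A minimal Tekton task with a single step.
# The v1beta1 API version matches the Tekton releases current
# at the time of writing.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello            # illustrative name
spec:
  steps:
    - name: say-hello
      image: alpine      # any small image with a shell will do
      script: |
        #!/bin/sh
        echo "Hello from Tekton"
```

You would apply this manifest with kubectl apply -f and execute the task through a TaskRun.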

Chapter 5, Jumping into Pipelines, explains that a Tekton pipeline is composed of multiple tasks. In this chapter, you will learn how to use the tasks you learned about in the previous chapter to build pipelines.
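
As a sketch of the idea, a pipeline that runs two existing tasks in order could look like the following; the task names are illustrative, not from the book's exercises:

```yaml
# A minimal Tekton pipeline that references two existing tasks
# and enforces an ordering between them.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    - name: say-hello
      taskRef:
        name: hello          # a task already defined in the cluster
    - name: say-goodbye
      taskRef:
        name: goodbye        # illustrative second task
      runAfter:
        - say-hello          # run only after say-hello completes
```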

Chapter 6, Debugging and Cleaning Up Pipelines and Tasks, demonstrates that when authoring tasks, things don’t always work as expected. This chapter introduces concepts to help find issues with Tekton pipelines and tasks. It also introduces a new concept called finally, which helps to clean up after a pipeline has been executed.
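
As a sketch of the concept, finally tasks are declared alongside the regular tasks of a pipeline and run even when an earlier task fails; the task names here are illustrative:

```yaml
# Excerpt from a pipeline spec: the cleanup task in the finally
# section runs whether run-tests succeeds or fails.
spec:
  tasks:
    - name: run-tests
      taskRef:
        name: test-task      # illustrative task name
  finally:
    - name: cleanup
      taskRef:
        name: cleanup-task   # illustrative task name
```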

Chapter 7, Sharing Data with Workspaces, explains that in order to share data across the various tasks in a pipeline, there was originally a concept of pipeline resources. In the latest iteration of Tekton, workspaces are now the recommended way to do this.

Chapter 8, Adding when Expressions, explains that in order to add conditional statements in the execution of a pipeline, when expressions can be used. These expressions control the flow of the pipeline based on conditions.
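
As a sketch, a when expression guards a single task in a pipeline, which only runs when the expression evaluates to true; the parameter and task names are illustrative:

```yaml
# Excerpt from a pipeline spec: the deploy task only runs when
# the environment parameter is set to production.
    - name: deploy
      taskRef:
        name: deploy-task          # illustrative task name
      when:
        - input: "$(params.environment)"
          operator: in             # notin is also supported
          values:
            - production
```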

Chapter 9, Securing Authentication, demonstrates that for certain operations, it is necessary to authenticate into a service. This can be done without exposing credentials by using secrets.
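
For Git over HTTPS, for example, Tekton picks up credentials from an annotated Kubernetes secret attached to the service account that runs the task. A sketch, where the username and token are placeholders you would supply:

```yaml
# A basic-auth secret for Git. The tekton.dev/git-0 annotation
# tells Tekton which host these credentials apply to.
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: <YOUR_USERNAME>
  password: <YOUR_ACCESS_TOKEN>
```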

Chapter 10, Getting Started with Triggers, covers Tekton Triggers, a sister project of Tekton Pipelines that adds the ability to automatically trigger a pipeline by opening a route on your Kubernetes cluster and listening for incoming requests. In this chapter, you will learn about the new objects that are introduced by Tekton Triggers and how to install and prepare a local minikube cluster to listen for incoming requests.

Chapter 11, Triggering Tekton, explains how to create the required objects for the cluster to listen for a GitHub webhook and trigger a pipeline on certain actions.

Chapter 12, Preparing for a New Pipeline, prepares you to deploy a full real-world example of a Tekton pipeline. You will start by cleaning up the cluster and installing all the required components on a fresh new installation of minikube. You will then be invited to explore the application that is about to be deployed. This will be a Node.js Express server with a few basic routes. Finally, you will be guided through the process of manually deploying and updating the application in the local cluster.

Chapter 13, Building a Deployment Pipeline, shows you how to build the tasks that are required for the pipeline and link them together. You will also need to create conditions, secrets, and workspaces to fully deploy the application.

To get the most out of this book

In order to use Tekton Pipelines, you will need access to a Kubernetes cluster. All the examples in this book are running on minikube. The installation instructions are provided in the book.

If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Building-CI-CD-systems-using-Tekton. In case there’s an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Code in Action

Code in Action videos for this book can be viewed at https://bit.ly/2VmDYy0.

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: http://www.packtpub.com/sites/default/files/downloads/9781801078214_ColorImages.pdf.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “Mount the downloaded WebStorm-10*.dmg disk image file as another disk in your system.”

A block of code is set as follows:

apiVersion: apps/v1

kind: Deployment

...

   spec:

     containers:

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

     - name: tekton-pod

       image: <YOUR_USERNAME>/tekton-lab-app

       ports:

       - containerPort: 3000

Any command-line input or output is written as follows:

$ kubectl apply -f ./deploy.yaml

deployment.apps/tekton-deployment created

service/tekton-svc created

ingress.networking.k8s.io/tekton-ingress created  

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: “On the Add webhook screen on GitHub, fill in the form.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share your thoughts

Once you've read Building CI/CD Systems Using Tekton, we'd love to hear your thoughts! Please visit https://packt.link/r/1801078211 to leave a review for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we're delivering excellent quality content.

Section 1: Introduction to CI/CD

This section serves as an introduction to continuous integration and continuous delivery (CI/CD), why they exist, and what they mean in the context of cloud-native development. You will start by learning about the history of the agile methodology and how it led to the creation of CI/CD principles.

Then, you will learn where Tekton fits into the CI/CD landscape. The main components that are used to create Tekton pipelines will be introduced to you.

By the end of this section, you will have an understanding of why you should be using CI/CD systems to automate your application delivery and where Tekton comes in to help you with CI/CD.

The following chapters will be covered in this section:

Chapter 1, A Brief History of CI/CD

Chapter 2, A Cloud-Native Approach to CI/CD

Chapter 1: A Brief History of CI/CD

Application development has not always worked the way it does today. Not so long ago, the processes were much different, as was the available technology for software engineering. To understand the importance of continuous integration/continuous deployment (CI/CD), it is essential to take a step back and see how it all started. In this chapter, you will learn how CI/CD came to where it is right now and where it might go in the future. You will take a small trip back in time – about 20 years ago – to see how application deployment was done back then, when I was still a junior developer. We will then look at various turning points in the history of software development practices and how this impacted the way we deploy applications today.

You will also learn about how cloud computing changed the way that we deliver software compared to how it was done about two decades ago. This will set the foundations for learning how to build powerful CI/CD pipelines with Tekton.

Finally, you will start to understand how CI/CD can fit into your day-to-day life as a software developer. Pipelines can be used at various stages of the application life cycle, and you will see some examples of their usage.

In this chapter, we are going to cover the following main topics:

The early days

Understanding the impacts of Agile development practices

Deploying in the era of the cloud

Demystifying CI versus CD versus CD

The early days

It doesn't seem that long ago that I had my first job as a software developer. Yet, many things have changed since. I still remember my first software release in the early 2000s. I had worked for months on software for our customer. I had finished all the requirements, and I was ready to ship all this to them. I burned the software and an installer on a CD-ROM; I jumped in my car and went to the customer's office. As you've probably guessed, when I tried to install the software, nothing worked. I had to go back and forth between my workplace and the customer's office many times before I finally managed to get it up and running.

Once the customer was able to test out the software, he quickly found that some parts of the software were barely usable. His environment was different and caused issues that I could not have foreseen. He found a few bugs that slipped through our QA processes, and he needed new features since his requirements had changed between the time he'd listed them and now.

I received the list of new features, enhancements, and bugs and got back to work. A few months later, I jumped into my car with the new CD-ROM to install the latest version on their desktop and, of course, nothing worked as expected again.

Those were the times of Waterfall development. We'll learn what this is about in the next section.

Waterfall model

The Waterfall methodology consists of a series of well-planned phases. Each phase required thorough planning and the gathering of requirements. Once all these needs were established, shared with the customer, and well documented, the software development team would start working on the project. The engineers then deployed the software according to the specifications from the planning phase. Each of these cycles would vary in length but would typically be measured in months or years. Waterfall software development consists of one long main cycle, while agile development is all about smaller cycles based on feedback from the previous iteration.

The following diagram demonstrates the Waterfall methodology:

Figure 1.1 – Waterfall versus Agile

This model worked well on some projects. Some teams could do wonders using the Waterfall model, such as the Apollo space missions. They had a set of rigorous requirements, a fixed deadline, and were aiming for zero bugs.

In the early 2000s, though, the situation was quickly changing. More and more enterprises started to bloom on the internet and having a shorter time to market than the competition was becoming even more important. Ultimately, this is what led to the agile manifesto of 2001.

So far, you've learned how software development was done at the turn of the millennium. You've learned how those long cycles caused the releases to be spread apart. It sometimes took months, if not years, for two releases of a piece of software to be released. In the next section, you will see how agile methodologies completely revolutionized the way we build software.

Understanding the impacts of Agile development practices

At the same time as I was making all those round trips to my customer, a group of software practitioners met at a conference. These thinkers came out of this event with the foundation of what became the "Agile Alliance." You can find out more about the Agile Alliance and the manifesto they wrote at http://agilemanifesto.org.

The agile manifesto, which lists the main principles behind the methodology by the same name, can be summarized as follows:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

Those principles revolutionized software engineering. It was a significant change from the Waterfall model, and it is now the method that's used for most modern software development projects.

Had agile methodologies been used when I originally wrote my first piece of software, there are many things I would have done differently.

First, I would have fostered a much closer working relationship with my customer. Right from my first release, it was apparent that we had a disconnect in the project's vision. Some of the features that he needed were not implemented in a way that made sense for his day-to-day usage. Even though our team provided many documents and charts to him to explain what I was about to implement, it would probably have been easier to discuss how they were planning to use the software. Picking up a phone or firing off an email to ask a question will always provide a better solution than blindly following a requirements document. Nowadays, there is tooling to make it easier to collaborate more closely and get better feedback.

One part of the software that I delivered that made me immensely proud was an advanced templating engine that would let the customer automate a mail-out process. It used a particular syntax, and I provided a guide that was a few pages long (yes, a hard copy!) for the users to be able to use it. They barely ever used it, and I ultimately removed the engine in a future version favoring a hardcoded template. They filled in one or two fields, clicked Submit, and they were done. When the template needed to be changed, I would update the software, and within a few hours, they had a patch for the new template. In this specific case, it didn't matter how well-written my documentation was; the solution did not work for them.

This over-engineered feature is also a great example of where customer collaboration is so important. In this specific situation, had I worked more closely with the customer, I might have better understood their needs. Instead, I focused on the documentation that was prepared in advance and stuck to it.

Finally, there's responding to change over following a plan. Months would go by between my updates. In this day and age, this might seem inconceivable. The planning processes were long, and it was common practice to publish all the requirements beforehand. Not only that, but deploying software was a lot harder than it is nowadays. Every time I needed to push an update, I needed to meet with the system administrators a couple of weeks before the installation. This sysadmin would check the requirements, test everything out, and eventually prepare the desktop to receive the software's dependencies. On the day of installation, I needed to coordinate with the users and system administrators to access those machines. I was then able to install the latest version on their device manually. It required many people's intervention, and no one wanted me to come back in 2 days with a new update, which made it hard to respond to changes.

Those agile principles might seem like the norm nowadays, but the world was different back then. A lot of those cumbersome processes were required due to technological limitations. Sending large files over the internet was tricky, and desktop applications were the norm. It was also the beginning of what came to be known as Web 2.0. With the emergence of new languages such as PHP and ASP, more and more applications were being developed and deployed to the web.

It was generally easier to deploy applications to run on the web; it simply consisted of uploading files to an FTP server. It didn't require physical access to a computer and much fewer interactions with system administrators. The end users didn't need to update their application manually; they would access the application as they always would and notice a change in a feature or an interface. The interactions were limited between the software developers and the system administrators to get a new version of the application up and running.

Yet, the Waterfall mentality was still strong. More and more software development teams were trying to implement agile practices, but the application deployment cycle was still somewhat slow. The main reason for this was that they were scared of breaking a production build with an update.

Here be testing

Software engineers adopted many strategies to mitigate the risk associated with deploying a new version of the application. One such method was unit testing and test-driven development. With unit testing, software developers were able to run many tests on their code base, ensuring that the software was still working. By executing a test run, developers could be reassured that the new features they implemented didn't break a previously developed component.

Having those tests in place made it much easier to build in small iterations and show the changes to a customer, knowing that the software didn't suffer from any regressions. The customer was then able to provide feedback much earlier in the development loop. The development teams could react to those comments before they invested too much time in a feature that would end up not satisfying the users in the end.

It was a great win for the customers, but it also turned out to be a great way to help the system administrators. With software that was tested, there were far fewer chances of introducing regressions into the current application. Sysadmins were more confident in the builds and more willing to deploy the applications regularly. Administrators also began automating parts of these processes with bash scripts to make deployments easier.

Still, some changes were harder to push. When changes needed to be made to a database or an upgrade was required for a runtime, operators were usually more hesitant to implement those changes. They would need to set up a new environment to test out the new software and ensure that those changes would not cause problems with the servers. That reality changed in 2006 when Amazon first introduced AWS.

Cloud computing was to technology what agile methodologies were to software development processes. The changes that they brought changed the way developers did their jobs. Now, let's dig deeper to see how the cloud impacted software engineering.

Deploying in the era of the cloud

The cloud brought drastic changes to the way applications were built and maintained. Until then, most software development shops or online businesses had their own servers and team to maintain said servers. With the advent of AWS, all of this changed. It was now possible to spin up a new environment and use that new environment directly on someone else's infrastructure. This new way of doing things meant less time managing actual hardware and the capability to create reproducible environments easily.

With what was soon known as the cloud, it was easier than ever to deploy a new application. A software developer could now spin up a virtual machine that had the necessary software and runtimes, and then execute a batch of unit tests to ensure that it was running on that specific server. You could also create an environment for the customers to see the application changes at the end of each iteration, which helped them approve those new features or provide feedback on a requested enhancement.

With server environments that were easier to start, faster to scale, and cheaper than actual hardware, more and more people moved to cloud-based software development. This move also facilitated the automation of many processes around software deployment practices. Using a command-line tool, it was now possible to start a new staging environment, spin up a new database, or take down a server that wasn't needed.

More and more companies were having a presence on the web, and the competition to get out new features or implement the same features as your competition became a problem. It was no longer acceptable to deploy every few months. If a competitor released a new feature, your product also needed to implement it as soon as possible due to the risk of losing a market share. If there was a delay in fixing a bug, that also meant a potentially significant revenue loss.

These fast changes were at the heart of a revolution in how teams worked to build and deploy applications. Until now, enterprises had teams of software engineers who oversaw designing new features, fixing bugs, and preparing the next releases. On the other hand, a group of system administrators oversaw that all the infrastructures were running smoothly and that no bugs were introduced in the system. Despite having the same goal of making the applications run better, those two teams ended up contradicting each other due to the nature of their work.

The programmers were under pressure to release faster, but each release could introduce bugs or require software upgrades on the servers. Sysadmins were under pressure to keep the environment stable and pushed back on changes to avoid breaking the fragile equilibrium of the systems in place. This dichotomy led to a new philosophy in enterprises: DevOps.

DevOps' idea was to bridge that gap between the two teams so that deploying better software quicker was finally possible. Lots of tools aim to make DevOps easier, and containers are one of those technologies.

Works on my machine!

One problem that had always existed in software engineering became more prevalent with the cloud – the "Works on my machine" syndrome. A programmer would install all the required software to run an application on their development machine, and everything ran smoothly. As soon as this software was shipped to a different device, though, everything stopped working.

This is a widespread problem at larger companies, where multiple teams have various environments. A programmer would have Apache 2.4 running PHP 8.0, while someone on the QA team would be running Apache 2.3 with PHP 7.2.
This is a widespread problem at larger companies where multiple teams have various environments. A programmer would have Apache 2.4 running PHP 8.0, while someone on the QA team would be running Apache 2.3 with PHP 7.2