Unleash the combination of Docker and Jenkins in order to enhance the DevOps workflow
The combination of Docker and Jenkins improves your Continuous Delivery pipeline while using fewer resources. It also helps you scale up your builds, automate tasks, and speed up Jenkins performance with the benefits of Docker containerization.
This book will explain the advantages of combining Jenkins and Docker to improve the continuous integration and delivery process of app development. It will start with setting up a Docker server and configuring Jenkins on it. It will then provide steps to build applications with Dockerfiles and integrate them with Jenkins using Continuous Delivery practices such as Continuous Integration, automated acceptance testing, and configuration management.
Moving on, you will learn how to ensure quick application deployment with Docker containers, as well as how to scale Jenkins using Docker Swarm. Next, you will learn how to deploy applications using Docker images and test them with Jenkins.
By the end of the book, you will be enhancing the DevOps workflow by integrating the functionalities of Docker and Jenkins.
The book is aimed at DevOps engineers, developers, and IT operations professionals who want to enhance the DevOps culture using Docker and Jenkins.
Page count: 351
Year of publication: 2017
BIRMINGHAM - MUMBAI
Copyright © 2017 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: August 2017
Production reference: 1230817
ISBN 978-1-78712-523-0
www.packtpub.com
Author
Rafał Leszko
Copy Editors
Ulka Manjrekar
Laxmi Subramanian
Reviewers
Michael Pailloncy
Mitesh Soni
Zhiwei Chen
Project Coordinator
Shweta H Birwatkar
Commissioning Editor
Pratik Shah
Proofreader
Safis Editing
Acquisition Editor
Prachi Bisht
Indexer
Pratik Shirodkar
Content Development Editor
Deepti Thore
Graphics
Tania Dutta
Technical Editor
Sneha Hanchate
Production Coordinator
Arvindkumar Gupta
Rafał Leszko is a passionate software developer, trainer, and conference speaker living in Krakow, Poland. He has spent his career writing code, designing architecture, and tech leading in a number of companies and organizations such as Google, CERN, and AGH University. Always open to new challenges, he has given talks and conducted workshops at more than a few international conferences such as Devoxx and Voxxed Days.
I would like to thank my wife, Maria, for her support. She was the very first reviewer of this book, always cheering me up, and taking care of our baby to give me time and space for writing. I also give deep thanks and gratitude to the Zooplus company, where I could first experiment with the Continuous Delivery approach and especially, to its former employee Robert Stern for showing me the world of Docker. I would also like to make a special mention of Patroklos Papapetrou for his trust and help in organizing Continuous Delivery workshops in Greece. Last but not the least, thanks to my mom, dad, and brother for being so supportive.
Michael Pailloncy is a developer tending toward the 'Ops' side, constantly trying to keep things simple and as automated as possible. Michael is passionate about the DevOps culture, has strong experience in Continuous Integration, Continuous Delivery, automation, and big software factory management, and loves to share his experiences with others.
Mitesh Soni is an avid learner with 10 years of experience in the IT industry. He is an SCJP, SCWCD, VCP, IBM Urbancode, and IBM Bluemix certified professional. He loves DevOps and cloud computing and also has an interest in programming in Java. He finds design patterns fascinating and believes that "a picture is worth a thousand words."
He occasionally contributes to etutorialsworld.com. He loves to play with kids, fiddle with his camera, and take photographs at Indroda Park. He is addicted to taking pictures without knowing many technical details. He lives in the capital of Mahatma Gandhi's home state.
Mitesh has authored the following books with Packt:
DevOps Bootcamp
Implementing DevOps with Microsoft Azure
DevOps for Web Development
Jenkins Essentials
Learning Chef
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www.packtpub.com/mapt
Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.
Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser
Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1787125238.
If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!
To my wonderful wife Maria, for all of her love, wisdom, and smile.
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
Introducing Continuous Delivery
What is Continuous Delivery?
The traditional delivery process
Introducing the traditional delivery process
Shortcomings of the traditional delivery process
Benefits of Continuous Delivery
Success stories
The automated deployment pipeline
Continuous Integration
Automated acceptance testing
The Agile testing matrix
The testing pyramid
Configuration management
Prerequisites to Continuous Delivery
Organizational prerequisites
DevOps culture
Client in the process
Business decisions
Technical and development prerequisites
Building the Continuous Delivery process
Introducing tools
Docker ecosystem
Jenkins
Ansible
GitHub
Java/Spring Boot/Gradle
The other tools
Creating a complete Continuous Delivery system
Introducing Docker
Configuring Jenkins
Continuous Integration Pipeline
Automated acceptance testing
Configuration management with Ansible/Continuous Delivery pipeline
Clustering with Docker Swarm/Advanced Continuous Delivery
Summary
Introducing Docker
What is Docker?
Containerization versus virtualization
The need for Docker
Environment
Isolation
Organizing applications
Portability
Kittens and cattle
Alternative containerization technologies
Docker installation
Prerequisites for Docker
Installing on a local machine
Docker for Ubuntu
Docker for Linux
Docker for Mac
Docker for Windows
Testing Docker installation
Installing on a server
Dedicated server
Running Docker hello world
Docker components
Docker client and server
Docker images and containers
Docker applications
Building images
Docker commit
Dockerfile
Complete Docker application
Write the application
Prepare the environment
Build the image
Run the application
Environment variables
Docker container states
Docker networking
Running services
Container networks
Exposing container ports
Automatic port assignment
Using Docker volumes
Using names in Docker
Naming containers
Tagging images
Docker cleanup
Cleaning up containers
Cleaning up images
Docker commands overview
Exercises
Summary
Configuring Jenkins
What is Jenkins?
Jenkins installation
Requirements for installation
Installing on Docker
Installing without Docker
Initial configuration
Jenkins hello world
Jenkins architecture
Master and slaves
Scalability
Vertical scaling
Horizontal scaling
Test and production instances
Sample architecture
Configuring agents
Communication protocols
Setting agents
Permanent agents
Configuring permanent agents
Understanding permanent agents
Permanent Docker agents
Configuring permanent Docker agents
Understanding permanent Docker agents
Jenkins Swarm agents
Configuring Jenkins Swarm agents
Understanding Jenkins Swarm agents
Dynamically provisioned Docker agents
Configuring dynamically provisioned Docker agents
Understanding dynamically provisioned Docker agents
Testing agents
Custom Jenkins images
Building Jenkins slave
Building Jenkins master
Configuration and management
Plugins
Security
Backup
Blue Ocean UI
Exercises
Summary
Continuous Integration Pipeline
Introducing pipelines
Pipeline structure
Multi-stage Hello World
Pipeline syntax
Sections
Directives
Steps
Commit pipeline
Checkout
Creating a GitHub repository
Creating a checkout stage
Compile
Creating a Java Spring Boot project
Pushing code to GitHub
Creating a compile stage
Unit test
Creating business logic
Writing a unit test
Creating a unit test stage
Jenkinsfile
Creating Jenkinsfile
Running pipeline from Jenkinsfile
Code quality stages
Code coverage
Adding JaCoCo to Gradle
Adding a code coverage stage
Publishing the code coverage report
Static code analysis
Adding the Checkstyle configuration
Adding a static code analysis stage
Publishing static code analysis reports
SonarQube
Triggers and notifications
Triggers
External
Polling SCM
Scheduled build
Notifications
Group chat
Team space
Team development strategies
Development workflows
Trunk-based workflow
Branching workflow
Forking workflow
Adopting Continuous Integration
Branching strategies
Feature toggles
Jenkins Multibranch
Non-technical requirements
Exercises
Summary
Automated Acceptance Testing
Introducing acceptance testing
Docker registry
Artifact repository
Installing Docker registry
Docker Hub
Private Docker registry
Installing the Docker registry application
Adding a domain certificate
Adding an access restriction
Other Docker registries
Using Docker registry
Building an image
Pushing the image
Pulling the image
Acceptance test in pipeline
The Docker build stage
Adding Dockerfile
Adding the Docker build to the pipeline
The Docker push stage
Acceptance testing stage
Adding a staging deployment to the pipeline
Adding an acceptance test to the pipeline
Adding a cleaning stage environment
Docker Compose
What is Docker Compose?
Installing Docker Compose
Defining docker-compose.yml
Using the docker-compose command
Building images
Scaling services
Acceptance testing with Docker Compose
Using a multi-container environment
Adding a Redis client library to Gradle
Adding a Redis cache configuration
Adding Spring Boot caching
Checking the caching environment
Method 1 – Jenkins-first acceptance testing
Changing the staging deployment stage
Changing the acceptance test stage
Method 2 – Docker-first acceptance testing
Creating a Dockerfile for acceptance test
Creating docker-compose.yml for acceptance test
Creating an acceptance test script
Running the acceptance test
Changing the acceptance test stage
Comparing method 1 and method 2
Writing acceptance tests
Writing user-facing tests
Using the acceptance testing framework
Creating acceptance criteria
Creating step definitions
Running an automated acceptance test
Acceptance test-driven development
Exercises
Summary
Configuration Management with Ansible
Introducing configuration management
Traits of good configuration management
Overview of configuration management tools
Installing Ansible
Ansible server requirements
Ansible installation
Docker-based Ansible client
Using Ansible
Creating inventory
Ad hoc commands
Playbooks
Defining a playbook
Executing the playbook
Playbook's idempotency
Handlers
Variables
Roles
Understanding roles
Ansible Galaxy
Deployment with Ansible
Installing Redis
Deploying a web service
Configuring a project to be executable
Changing the Redis host address
Adding calculator deployment to the playbook
Running deployment
Ansible with Docker
Benefits of Ansible
Ansible Docker playbook
Installing Docker
Running Docker containers
Using Docker Compose
Exercises
Summary
Continuous Delivery Pipeline
Environments and infrastructure
Types of environment
Production
Staging
QA
Development
Environments in Continuous Delivery
Securing environments
Nonfunctional testing
Types of nonfunctional test
Performance testing
Load testing
Stress testing
Scalability testing
Endurance testing
Security testing
Maintainability testing
Recovery testing
Nonfunctional challenges
Application versioning
Versioning strategies
Versioning in the Jenkins pipeline
Complete Continuous Delivery pipeline
Inventory
Acceptance testing environment
Release
Smoke testing
Complete Jenkinsfile
Exercises
Summary
Clustering with Docker Swarm
Server clustering
Introducing server clustering
Introducing Docker Swarm
Docker Swarm features overview
Docker Swarm in practice
Setting up a Swarm
Adding worker nodes
Deploying a service
Scaling service
Publishing ports
Advanced Docker Swarm
Rolling updates
Draining nodes
Multiple manager nodes
Scheduling strategy
Docker Compose with Docker Swarm
Introducing Docker Stack
Using Docker Stack
Specifying docker-compose.yml
Running the docker stack command
Verifying the services and containers
Removing the stack
Alternative cluster management systems
Kubernetes
Apache Mesos
Comparing features
Scaling Jenkins
Dynamic slave provisioning
Jenkins Swarm
Comparison of dynamic slave provisioning and Jenkins Swarm
Exercises
Summary
Advanced Continuous Delivery
Managing database changes
Understanding schema updates
Introducing database migrations
Using Flyway
Configuring Flyway
Defining the SQL migration script
Accessing database
Changing database in Continuous Delivery
Backwards-compatible changes
Non-backwards-compatible changes
Adding a new column to the database
Changing the code to use both columns
Merging the data in both columns
Removing the old column from the code
Dropping the old column from the database
Separating database updates from code changes
Avoiding shared database
Preparing test data
Unit testing
Integration/acceptance testing
Performance testing
Pipeline patterns
Parallelizing pipelines
Reusing pipeline components
Build parameters
Shared libraries
Creating a shared library project
Configure the shared library in Jenkins
Use shared library in Jenkinsfile
Rolling back deployments
Adding manual steps
Release patterns
Blue-green deployment
Canary release
Working with legacy systems
Automating build and deployment
Automating tests
Refactoring and introducing new features
Understanding the human element
Exercises
Summary
Best practices
Practice 1 – own process within the team!
Practice 2 – automate everything!
Practice 3 – version everything!
Practice 4 – use business language for acceptance tests!
Practice 5 – be ready to roll back!
Practice 6 – don't underestimate the impact of people
Practice 7 – build in traceability!
Practice 8 – integrate often!
Practice 9 – build binaries only once!
Practice 10 – release often!
I've observed software delivery processes for years. I wrote this book because I know how many people still struggle with releases and get frustrated after spending days and nights getting their products into production. This all happens even though a lot of automation tools and processes have been developed over the years. After I saw for the first time how simple and effective the Continuous Delivery process was, I knew I would never go back to the tedious, traditional, manual delivery cycle. This book is a result of my experience and of the many Continuous Delivery workshops I have conducted. I share the modern approach using Jenkins, Docker, and Ansible; however, this book is more than just the tools. It presents the idea and the reasoning behind Continuous Delivery and, most importantly, my main message to everyone I meet: the Continuous Delivery process is simple, use it!
Chapter 1, Introducing Continuous Delivery, presents how companies traditionally deliver their software and explains the idea to improve it using the Continuous Delivery approach. This chapter also discusses the prerequisites for introducing the process and presents the system that will be built throughout the book.
Chapter 2, Introducing Docker, explains the idea of containerization and the fundamentals of the Docker tool. This chapter also shows how to use Docker commands, package an application as a Docker image, publish Docker container's ports, and use Docker volumes.
Chapter 3, Configuring Jenkins, presents how to install, configure, and scale Jenkins. This chapter also shows how to use Docker to simplify Jenkins configuration and to enable dynamic slave provisioning.
Chapter 4, Continuous Integration Pipeline, explains the idea of pipelining and introduces the Jenkinsfile syntax. This chapter also shows how to configure a complete Continuous Integration pipeline.
Chapter 5, Automated Acceptance Testing, presents the idea and implementation of acceptance testing. This chapter also explains the meaning of artifact repositories, the orchestration using Docker Compose, and frameworks for writing BDD-oriented acceptance tests.
Chapter 6, Configuration Management with Ansible, introduces the concept of configuration management and its implementation using Ansible. The chapter also shows how to use Ansible together with Docker and Docker Compose.
Chapter 7, Continuous Delivery Pipeline, combines all the knowledge from the previous chapters in order to build the complete Continuous Delivery process. The chapter also discusses various environments and the aspects of nonfunctional testing.
Chapter 8, Clustering with Docker Swarm, explains the concept of server clustering and the implementation using Docker Swarm. The chapter also compares alternative clustering tools (Kubernetes and Apache Mesos) and explains how to use clustering for dynamic Jenkins agents.
Chapter 9, Advanced Continuous Delivery, presents a mixture of different aspects related to the Continuous Delivery process: database management, parallel pipeline steps, rollback strategies, legacy systems, and zero-downtime deployments. The chapter also includes best practices for the Continuous Delivery process.
Docker requires a 64-bit Linux operating system. All examples in this book have been developed using Ubuntu 16.04, but any other Linux system with kernel version 3.10 or above is sufficient.
This book is for developers and DevOps engineers who would like to improve their delivery process. No prior knowledge is required to understand this book.
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: docker info
A block of code is set as follows:
pipeline {
     agent any
     stages {
          stage("Hello") {
               steps {
                    echo 'Hello World'
               }
          }
     }
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install -y python
Any command-line input or output is written as follows:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu_with_python latest d6e85f39f5b7 About a minute ago 202.6 MB
ubuntu_with_git_and_jdk latest 8464dc10abbb 3 minutes ago 610.9 MB
New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "Click on New Item".
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.
To send us general feedback, simply send an e-mail to [email protected], and mention the book title via the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files by following these steps:
1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Continuous-Delivery-with-Docker-and-Jenkins. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/ContinuousDeliverywithDockerandJenkins_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you valuable content.
You can contact us at [email protected] if you are having a problem with any aspect of the book, and we will do our best to address it.
The common problem faced by most developers is how to release the implemented code quickly and safely. The delivery process used traditionally is, however, a source of pitfalls and usually leads to the disappointment of both developers and clients. This chapter presents the idea of the Continuous Delivery approach and provides the context for the rest of the book.
This chapter covers the following points:
Introducing the traditional delivery process and its drawbacks
Describing the idea of Continuous Delivery and the benefits it brings
Comparing how different companies deliver their software
Explaining the automated deployment pipeline and its phases
Classifying different types of tests and their place in the process
Pointing out the prerequisites to the successful Continuous Delivery process
Presenting tools that will be used throughout the book
Showing the complete system that will be built throughout the book
The most accurate definition of Continuous Delivery is stated by Jez Humble and reads as follows: "Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes, and experiments—into production, or into the hands of users, safely and quickly in a sustainable way." That definition covers the key points.
To understand it better, let's imagine a scenario. You are responsible for the product, let's say, the email client application. Users come to you with a new requirement—they want to sort emails by size. You decide that the development will take around one week. When can the user expect to use the feature? Usually, after the development is done, you hand over the completed feature first to the QA team and then to the operations team, which takes additional time ranging from days to months. Therefore, even though the development took only one week, the user receives it in a couple of months! The Continuous Delivery approach addresses that issue by automating manual tasks so that the user could receive a new feature as soon as it's implemented.
To better explain what to automate and how, let's start by describing the delivery process that is currently used for most software systems.
The traditional delivery process, as the name suggests, has been in place for many years now and is implemented in most IT companies. Let's define how it works and comment on its shortcomings.
Any delivery process begins with the requirements defined by a customer and ends with a release to production. The differences are in between. Traditionally, it looks as presented in the following release cycle diagram:
The release cycle starts with the requirements provided by the Product Owner, who represents the Customer (stakeholders). Then there are three phases, during which the work is passed between different teams:
Development: Here, the developers (sometimes together with business analysts) work on the product. They often use Agile techniques (Scrum or Kanban) to increase the development velocity and to improve the communication with the client. Demo sessions are organized to obtain a customer's quick feedback. All good development techniques (like test-driven development or extreme programming practices) are welcome. After the implementation is completed, the code is passed to the QA team.
Quality Assurance: This phase is usually called User Acceptance Testing (UAT) and it requires a code freeze on the trunk codebase, so that no new development would break the tests. The QA team performs a suite of Integration Testing, Acceptance Testing, and Non-functional Testing (performance, recovery, security, and so on). Any bug that is detected goes back to the development team, so developers usually also have their hands full of work. After the UAT phase is completed, the QA team approves the features that are planned for the next release.
Operations: The last phase, usually the shortest one, means passing the code to the Operations team, so that they can perform the release and monitor the production. If anything goes wrong, they contact developers to help with the production system.
The length of the release cycle depends on the system and the organization, but it usually ranges from a week to a few months. The longest I've heard about was one year. The longest I worked with was quarterly-based, and each part took the following amount of time: development, 1.5 months; UAT, 1 month and 3 weeks; release (and strict production monitoring), 1 week.
The traditional delivery process is widely used in the IT industry, and it's probably not the first time you've read about such an approach. Nevertheless, it has a number of drawbacks. Let's look at them explicitly to understand why we need to strive for something better.
The most significant shortcomings of the traditional delivery process include the following:
Slow delivery: Here, the customer receives the product long after the requirements were specified. It results in the unsatisfactory time to market and delays of the customer's feedback.
Long feedback cycle: The feedback cycle is not only related to customers, but also to developers. Imagine that you accidentally created a bug and you learn about it during the UAT phase. How long does it take to fix something you worked on two months ago? Even minor bugs can consume weeks.
Lack of automation: Rare releases don't encourage the automation, which leads to unpredictable releases.
Risky hotfixes: Hotfixes can't usually wait for the full UAT phase, so they tend to be tested differently (the UAT phase is shortened) or not tested at all.
Stress: Unpredictable releases are stressful for the operations team. What's more, the release cycle is usually tightly scheduled, which imposes an additional stress on developers and testers.
Poor communication: Work passed from one team to another represents the waterfall approach, in which people start to care only about their part, rather than the complete product. In case anything goes wrong, that usually leads to the blaming game instead of cooperation.
Shared responsibility: No team takes the responsibility for the product from A to Z. For developers: "done" means that requirements are implemented. For testers: "done" means that the code is tested. For operations: "done" means that the code is released.
Lower job satisfaction: Each phase is interesting for a different team, but other teams need to support the process. For example, the development phase is interesting for developers but, during the two other phases, they still need to fix bugs and support the release, which usually is not interesting for them at all.
These drawbacks represent just the tip of the iceberg of the challenges related to the traditional delivery process. You may already feel that there must be a better way to develop software, and this better way is, obviously, the Continuous Delivery approach.
"How long would it take your organization to deploy a change that involves just one single line of code? Do you do this on a repeatable, reliable basis?" These are the famous questions from Mary and Tom Poppendieck (authors of Implementing Lean Software Development), which have been quoted many times by Jez Humble and other authors. Actually, the answer to these questions is the only valid measurement of the health of your delivery process.
To be able to deliver continuously, and not to spend a fortune on the army of operations teams working 24/7, we need automation. That is why, in short, Continuous Delivery is all about changing each phase of the traditional delivery process into a sequence of scripts, called the automated deployment pipeline or the Continuous Delivery pipeline. Then, if no manual steps are required, we can run the process after every code change and, therefore, deliver the product continuously to the users.
Continuous Delivery lets us get rid of the tedious release cycle and, therefore, brings the following benefits:
Fast delivery: Time to market is significantly reduced as customers can use the product as soon as the development is completed. Remember, the software delivers no revenue until it is in the hands of its users.
Fast feedback cycle: Imagine you created a bug in the code, which goes into the production the same day. How much time does it take to fix something you worked on the same day? Probably not much. This, together with the quick rollback strategy, is the best way to keep the production stable.
Low-risk releases: If you release on a daily basis, then the process becomes repeatable and therefore much safer. As the saying goes, "If it hurts, do it more often."
Flexible release options: In case you need to release immediately, everything is already prepared, so there is no additional time/cost associated with the release decision.
Needless to say, we could achieve all these benefits simply by eliminating all the delivery phases and proceeding with development directly in production. It would, however, cause the quality to decline. Actually, the whole difficulty of introducing Continuous Delivery is the concern that quality would decrease when manual steps are eliminated. In this book, we will show how to approach it in a safe manner and explain why, contrary to common beliefs, products delivered continuously have fewer bugs and are better adjusted to the customer's needs.
My favorite story on Continuous Delivery was told by Rolf Russell at one of his talks. It goes as follows. In 2005, Yahoo acquired Flickr, and it was a clash of two cultures in the developers' world. Flickr, at that time, was a company with a start-up approach in mind. Yahoo, on the contrary, was a huge corporation with strict rules and a safety-first attitude. Their release processes differed a lot. While Yahoo used the traditional delivery process, Flickr released many times a day. Every change implemented by developers went to production the same day. They even had a footer at the bottom of their page showing the time of the last release and the avatars of the developers who made the changes.
Yahoo deployed rarely, and each release brought a lot of well-tested and carefully prepared changes. Flickr worked in very small chunks; each feature was divided into small incremental parts, and each part was deployed quickly to production. The difference is presented in the following diagram:
You can imagine what happened when the developers from the two companies met. Yahoo obviously treated Flickr's colleagues as junior, irresponsible developers, "a bunch of software cowboys who don't know what they are doing." So, the first thing they wanted to change was to add a QA team and the UAT phase to Flickr's delivery process. Before they applied the change, however, Flickr's developers had only one wish. They asked to evaluate the most reliable products in the whole Yahoo company. What a surprise when it turned out that, of all the software in Yahoo, Flickr had the lowest downtime. The Yahoo team didn't understand it at first, but let Flickr stay with their current process anyway. After all, they were engineers, so the evaluation result was conclusive. Only after some time did they realize that the Continuous Delivery process could be beneficial for all products in Yahoo, and they started to gradually introduce it everywhere.
The most important question of the story remains: how was it possible that Flickr was the most reliable system? Actually, the reason was what we already mentioned in the previous sections. A release is less risky if:
The delta of code changes is small
The process is repeatable
That is why, even though the release itself is a difficult activity, it is much safer when done frequently.
The story of Yahoo and Flickr is only an example of many successful companies for which the Continuous Delivery process proved to be right. Some of them even proudly share details from their systems, as follows:
Amazon: In 2011, they announced reaching 11.6 seconds (on average) between deployments
Facebook: In 2013, they announced deployment of code changes twice a day
HubSpot: In 2013, they announced deployment 300 times a day
Atlassian: In 2016, they published a survey stating that 65% of their customers practice Continuous Delivery
Keep in mind that the statistics get better every day. However, even without any numbers, just imagine a world in which every line of code you implement goes safely into production. Clients can react quickly and adjust their requirements, developers are happy because they don't have to solve that many bugs, and managers are satisfied because they always know the current state of the work. After all, remember, the only true measure of progress is released software.
We already know what the Continuous Delivery process is and why we use it. In this section, we describe how to implement it.
Let's start by emphasizing that each phase in the traditional delivery process is important. Otherwise, it would never have been created in the first place. No one wants to deliver software without testing it first! The role of the UAT phase is to detect bugs and to ensure that what the developers created is what the customer wanted. The same applies to the operations team—the software must be configured, deployed to production, and monitored. That is beyond question. So, how do we automate the process while preserving all the phases? That is the role of the automated deployment pipeline, which consists of three stages, as presented in the following diagram:
The automated deployment pipeline is a sequence of scripts that is executed after every code change committed to the repository. If the process is successful, it ends up with the deployment to the production environment.
Each step corresponds to a phase in the traditional delivery process as follows:
Continuous Integration: This checks to make sure that the code written by different developers integrates together
Automated Acceptance Testing: This replaces the manual QA phase and checks if the features implemented by developers meet the client's requirements
Configuration Management: This replaces the manual operations phase; it configures the environment and deploys the software
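To give a rough picture of how these three phases map onto an automated pipeline, here is a minimal sketch of a Jenkins declarative pipeline (the syntax itself is covered in detail in Chapter 4). The command and script names used below (the Gradle wrapper, acceptance_test.sh, deploy.yml) are only placeholders for illustration; the real stages and commands are built step by step throughout the book.
pipeline {
     agent any
     stages {
          stage("Continuous Integration") {
               steps {
                    // compile, run unit tests, and check the code quality
                    sh "./gradlew build"
               }
          }
          stage("Automated Acceptance Testing") {
               steps {
                    // deploy to a staging environment and run the acceptance test suite
                    sh "./acceptance_test.sh"
               }
          }
          stage("Configuration Management") {
               steps {
                    // configure the production environment and release the application
                    sh "ansible-playbook deploy.yml"
               }
          }
     }
}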
Let's take a deeper look at each phase to understand its responsibilities and the steps it includes.
The Continuous Integration phase provides the first feedback to developers. It checks out the code from the repository, compiles it, runs unit tests, and verifies the code quality. If any step fails, the pipeline execution is stopped and the first thing the developers should do is fix the Continuous Integration build. The essential aspect of this phase is time; it must be executed in a timely manner. For example, if this phase took an hour to complete, developers would commit code faster than the pipeline could verify it, which would result in a constantly failing pipeline.
The Continuous Integration pipeline is usually the starting point. Setting it up is simple because everything is done within the development team and no agreement with the QA and operations teams is necessary.
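As a sketch of what such a starting point might look like, the following Jenkinsfile splits the Continuous Integration phase into the steps just described: checkout, compilation, unit tests, and a code-quality check. It assumes a Gradle-based Java project with the Checkstyle plugin applied (an assumption for illustration; the exact stages are developed in Chapter 4).
pipeline {
     agent any
     stages {
          stage("Checkout") {
               steps {
                    // fetch the latest code committed to the repository
                    checkout scm
               }
          }
          stage("Compile") {
               steps {
                    sh "./gradlew compileJava"
               }
          }
          stage("Unit test") {
               steps {
                    sh "./gradlew test"
               }
          }
          stage("Static code analysis") {
               steps {
                    // verify the code quality, here with the Gradle Checkstyle plugin
                    sh "./gradlew checkstyleMain"
               }
          }
     }
}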
The automated acceptance testing phase is a suite of tests written together with the client (and QAs) that is supposed to replace the manual UAT stage. It acts as a quality gate to decide whether a product is ready for the release or not. If any of the acceptance tests fail, then the pipeline execution is stopped and no further steps are run. It prevents movement to the Configuration Management phase and therefore the release.
The whole idea of automating the acceptance phase is to build quality into the product instead of verifying it later. In other words, when a developer completes the implementation, the software is delivered together with acceptance tests that verify it is what the client wanted. That is a large shift in thinking about testing software. There is no longer a single person (or team) who approves the release; instead, everything depends on passing the acceptance test suite. That is why creating this phase is usually the most difficult part of the Continuous Delivery process. It requires close cooperation with the client and creating tests at the beginning (not at the end) of the process.
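A hypothetical pipeline fragment illustrating this quality-gate behavior is shown below: if the acceptance test suite returns a non-zero exit code, the stage fails and Jenkins never reaches the release stage. The deploy_to_staging.sh, acceptance_test.sh, and release.sh scripts are assumptions used only to show the flow.
pipeline {
     agent any
     stages {
          stage("Deploy to staging") {
               steps {
                    // start the application in a staging environment
                    sh "./deploy_to_staging.sh"
               }
          }
          stage("Acceptance test") {
               steps {
                    // a failing test suite stops the pipeline here,
                    // so the release stage below is never executed
                    sh "./acceptance_test.sh"
               }
          }
          stage("Release") {
               steps {
                    sh "./release.sh"
               }
          }
     }
}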
There is usually a lot of confusion about the types of tests and their place in the Continuous Delivery process. It's also often unclear how to automate each type, what should be the coverage, and what should be the role of the QA team in the whole development process. Let's clarify it using the Agile testing matrix and the testing pyramid.
Brian Marick, in a series of blog posts, classified software tests in the form of the so-called Agile testing matrix. It places tests along two dimensions: business-facing or technology-facing, and supporting programmers or critiquing the product. Let's have a look at that classification:
Let's comment briefly on each type of test:
Acceptance Testing (automated): These are tests that represent functional requirements seen from the business perspective. They are written in the form of stories or examples by clients and developers to agree on how the software should work.
Unit Testing (automated): These are tests that help developers to provide the high-quality software and minimize the number of bugs.
Exploratory Testing (manual): This is the manual black-box testing, which tries to break or improve the system.
Non-functional Testing (automated): These are tests that represent system properties related to the performance, scalability, security, and so on.
This classification answers one of the most important questions about the Continuous Delivery process: what is the role of a QA in the process?
