


Jenkins Administratorʼs Guide

Install, manage, and scale a CI/CD build and release system to accelerate your product life cycle

 

 

Calvin Sangbin Park

Lalit Adithya

Samuel Gleske

 

 

BIRMINGHAM—MUMBAI

Jenkins Administratorʼs Guide

Calvin Sangbin Park, Lalit Adithya, and Samuel Gleske

Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of cited brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Production reference: 1111121

Group Product Manager: Vijin Boricha

Publishing Product Manager: Vijin Boricha

Senior Editor: Hayden Edwards

Content Development Editor: Nihar Kapadia

Technical Editor: Nithik Cheruvakodan

Copy Editor: Safis Editing

Project Coordinator: Neil Dmello

Proofreader: Safis Editing

Indexer: Pratik Shirodkar

Production Designer: Aparna Bhagat

Senior Designer: Joseph Runnacles

First published: December 2021

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-83882-432-7

www.packt.com

 

 

 

 

To my parents, who will have the hardest time bragging about their son’s book on Jenkins.

– Calvin Sangbin Park

 

I dedicate this to my wife, Kristie, whose support for me made this possible. Also, my mother, Audrey, and Kristie's mother, Tina, who helped us tremendously during this period of our pregnancy and the birth of our son, Cory.

– Samuel Gleske

Foreword

In 2019, I had the pleasure of working closely with Kohsuke Kawaguchi, creator of Jenkins. That year, Jenkins was celebrating its 15th anniversary, and Kohsuke reached out to the community to solicit Jenkins success stories. Each response was more impressive than the last, with user testimonies of how Jenkins had transformed industries, as well as success stories of how Jenkins had helped propel the careers of individual practitioners. My favorite was learning how Jenkins was used to build the World Anti-Malarial Network.

I came to see firsthand how Jenkins is one of the most used and loved technologies in the software industry. For over 15 years, Jenkins has had a huge impact on developer productivity in a wide range of industries, everything from aerospace to retail, through to education and finance. Jenkins is an open source project that is backed by one of the most dedicated open source communities, enabling it to continue a very long trend of pioneering work in the CI/CD space. As a result, Jenkins is continuously evolving and improving to serve its users’ needs.

Software is increasingly playing a key role in various organizations and industries. We are delivering more software than ever before, and software delivery is a key differentiator for every organization. Jenkins remains at the heart of this transformation. As Jenkins continuously innovates, it is important to be able to keep up with the latest changes and understand the best ways to use this powerful technology. That is why I am delighted that Calvin Park, Lalit Adithya, and Sam Gleske have come together to write this book, with support from Vijin Boricha and all at Packt. I had the pleasure of meeting Sam Gleske at one of the Jenkins World contributor summits a few years ago and I appreciate his many and continuous contributions to the project.

In Jenkins Administratorʼs Guide, many key topics are covered, from the ever-important script security to shared libraries, as well as the ever-so-powerful Jenkins Configuration as Code. The book also covers emerging trends such as GitOps and very practical information for optimizing your Jenkins setup (complete with warnings not to overengineer your setup if you don’t need to!).

This book is an invaluable resource for those who want to make the most of Jenkins, keep up with its recent improvements, and unlock the amazing productivity of an optimal Jenkins setup.

Tracy Miranda

Executive Director, Continuous Delivery Foundation

Contributors

About the authors

Calvin Sangbin Park is a CI/CD DevOps engineer at NVIDIA. He's been using Jenkins throughout his career to automate builds for Arduino maker boards, Android tablets, enterprise software packages, and even firmware for an industrial laser for etching CPUs. Lately, he's been focusing on Kubernetes, monitoring, and process visualizations. He plans to contribute to the open source community by developing a plugin that optimizes Kubernetes cluster management.

Behind every great book is a great family. Thank you, Eunyoung and Younghwan. This book is as much yours as it is mine. I love you.

Also, many thanks to my brother and editor, Sangyoon; my brilliant coauthors, Lalit and Sam; the insightful technical reviewers, Huo, Dominic, and Ray; and the wise mentors, Madhusudan N and Sebass van Boxel.

Lalit Adithya is a software engineer with the DevOps team at NVIDIA. He has built code-commit-to-production pipelines using Jenkins and GitHub Actions. He has built and scaled business-critical applications that serve several thousand requests every minute. He has also built frameworks that have boosted developer productivity by abstracting away the complexities of networking, request/response routing, and more. He knows the ins and outs of several public cloud platforms and can architect cost-effective and scalable cloud-native solutions.

I thank my parents for all their love, support, and encouragement. I also thank all my mentors and friends who supported me, encouraged me to be the best version of myself, and helped me strive for perfection.

Samuel Gleske has been a Jenkins user since 2011, actively contributing to documentation and plugins, and discovering security issues in the system. Some notable plugins that Sam has maintained include the Slack plugin, the GHPRB plugin, the GitHub Authentication plugin, and a half dozen others. Sam has presented on and shared scripts for the Script Console documentation and is the primary author of its Wiki page. Since 2014, Sam has been developing Jervis – Jenkins as a service – which enables Jenkins to scale to more than 4,000 users and 30,000 jobs in a single Jenkins controller. Jervis emphasizes full self-service within Jenkins for users while balancing security.

I thank my wife, Kristie, whose support for me made this possible.

About the reviewers

Huo Bunn has been working professionally in the tech field for over 15 years. He started as a computer specialist at a help desk center, progressed to web development, and eventually moved into the DevOps space. At the time of reviewing this book, he is a senior DevOps engineer who works with and supports multiple teams of software developers. He provides account and security governance via automation for the three major cloud providers: Google Cloud Platform, Azure, and Amazon Web Services. Huo is also responsible for maintaining a Kubernetes cluster for a testing framework that relies heavily on Jenkins as the interface for other teams to run workload test suites against their code changes.

I would like to thank Calvin, the author of this book, for giving me the opportunity to be a part of this new experience. I have learned a great deal from reviewing the book and know that what I have learned will come in handy in my everyday operations as a DevOps engineer. I would also like to thank the Packt team for coordinating and providing an easy way to review the book.

Dominic Lam is a senior manager in cloud infrastructure at NVIDIA. He has worked in software development for more than 20 years in different disciplines ranging across kernel drivers, application security, medical imaging, and cloud infrastructure. He has been using Jenkins (formerly known as Hudson) since 2007 and has an interest in making software development seamless through CI/CD. He served as a board member on the Industry Advisory Board of the computer science department at San Jose State University for 2 years. Dominic holds a master’s degree in computer science from Stanford University and a bachelor’s degree in electrical engineering and computer science from the University of California, Berkeley.

Raymond Douglass III holds a BSc in computer science and an MSc in information technology, specializing in software engineering. He has been in the software development industry for over 10 years, in positions ranging from software developer to DevOps engineer to manager. He has more than 5 years of experience as a Jenkins administrator as well as experience writing and maintaining Jenkins plugins.

I’d like to thank both my parents for all their love and support throughout my life. I’d also like to thank Calvin and Packt for giving me the opportunity to review this fantastic book. Finally, I’d like to thank all the co-workers I’ve had through the years for helping me to learn new things and grow as a developer and as a person.

Contents

Foreword

Contributors

About the authors

About the reviewers

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Chapter 1: Jenkins Infrastructure with TLS/SSL and Reverse Proxy

Technical requirements

Why Jenkins?

Searching for answers online with Jenkins keywords

Understanding the Jenkins architecture

Controller

Domain name, TLS/HTTPS, load balancer, and reverse proxy

Agents

Bringing it all together

AWS: FAQs, routing rules, EC2 instances, and EIPs

EC2 instance types and sizes

Regions and Availability Zones

Routing rules

EC2 instances and EIPs

Installing Docker on our VMs

Acquiring domain names and TLS/SSL certificates

Domain names

TLS/SSL certificates

Storage concerns

IOPS benchmarks using fio

EC2 and EBS

The IT VM's disk

NFS/SAN

Physical disks

Review

Summary

Chapter 2: Jenkins with Docker on HTTPS on AWS and inside a Corporate Firewall

Technical requirements

Running a Jenkins controller with Docker on HTTPS

Custom image to match the UID/GID for a bind mount

Running Jenkins

Reverse proxy and TLS/SSL termination options

TLS termination at the reverse proxy

Terminating the TLS certificate directly on the Jenkins controller

Installing plugins and configuring Jenkins

Installing more plugins

Configure System

Configure Global Security

Configure Global Credentials

Installing even more plugins

Attaching SSH and inbound agents

SSH agent

Inbound agent

Labels and Usage

Creating a secure Docker Cloud

Generating a CA, server certificates, and client certificates

Storing the certificates

Configuring the Docker service

Configuring Jenkins

Summary

Chapter 3: GitOps-Driven CI Pipeline with GitHub

Technical requirements

Project overview

Creating two sets of projects and users in Jenkins

Creating a static pipeline for build and unit tests

Displaying test results and a code coverage report

Creating a premerge CI pipeline with GitHub PR hooks

GitHub personal access token

GitHub Pull Request Builder System Configuration

Configuring the premerge trigger

Testing the premerge trigger

Building the PR branch

Building an arbitrary branch

Requiring a successful build for a merge

Summary

Chapter 4: GitOps-Driven CD Pipeline with Docker Hub and More Jenkinsfile Features

Technical requirements

Project overview

Packaging the Docker image and running integration tests

Versioning Git and Docker using Semantic Versioning

Using more Jenkinsfile features with DooD and bare-metal agents

agent none, buildDiscarder options, and credentials in environment variables

Using a custom Dockerfile for a dockerfile agent and running Groovy code in a script block

Docker-outside-of-Docker in Jenkins

Variable handling, Docker Hub login, and docker push

Bare-metal agents, Groovy language features, and alternate ways to run Docker and handle credentials

post

Saving the files, making a PR, and merging

Creating a static pipeline for packaging, integration tests, and delivery

Creating a postmerge CD pipeline with a GitHub webhook and polling

Configuring the postmerge trigger

Testing the postmerge trigger

Summary

Chapter 5: Headfirst AWS for Jenkins

Technical requirements

Logging in to AWS

Navigating the AWS console

Important notes

EC2 instances and EIPs

Step 1 – Create an SSH key pair

Step 2 – Create a security group

Step 3 – Create an EC2 instance

Step 4 – Create and attach an EIP

Let's Encrypt

Manual verification

Automated verification for AWS Route 53

Setting up an application ELB for the AWS Jenkins controller

Step 1 – Create a TLS certificate in AWS Certificate Manager

Step 2 – Create a security group

Step 3 – Create an ALB

Other DNS providers

Summary

Chapter 6: Jenkins Configuration as Code (JCasC)

Technical requirements

Downloading and understanding the current configuration

User passwords aren't codified

Secrets aren't portable

Most entries are auto-generated defaults

Converting controller configuration to JCasC

Converting agent configuration to JCasC

Converting Docker cloud configuration to JCasC

Converting the pipeline configurations to JCasC

Redeploying Jenkins using JCasC

Reverting back to the original Jenkins

Retrospective

Advanced: CasC Plugin – Groovy Scripting Extension

Summary

Chapter 7: Backup and Restore and Disaster Recovery

Technical requirements

A small change for testing backup and restore

Backup strategies

Snapshotting the entire disk as an image

Saving the directory content as files

Backing up a large Jenkins instance

Deciding which files to back up and at what frequency

Directories for live backup

Backing up and restoring with the ThinBackup plugin

Moving the backup archives out of the disk

Experimenting with ThinBackup

Restoring a backup using ThinBackup

Configuring ThinBackup

Disaster recovery from a user mistake

Disaster

Recovery

Disaster recovery from an infrastructure failure

Summary

Chapter 8: Upgrading the Jenkins Controller, Agents, and Plugins

Technical requirements

Understanding the challenges of plugin version management

Upgrading to the next immediate LTS version of Jenkins

Upgrading while skipping many versions of LTS releases

Pitfalls of preinstalling failed plugins

Upgrade strategies

Upgrade strategy for a small- to medium-scale Jenkins instance

Upgrade strategy for a large-scale Jenkins instance

Upgrading plugins using Plugin Manager

Upgrading the controller

Announcing the upgrade plans to the users

Building a new controller image

Pre-upgrade checklist

Finally, the actual upgrade

Summary

Chapter 9: Reducing Bottlenecks

Technical requirements

Recommendations for hosting Jenkins to avoid bottlenecks

General server recommendations

How to keep Jenkins memory footprint light

Memory and garbage collection tuning

Periodic triggers versus webhook triggers

Tracking operational costs in the cloud

Quick performance improvements in an existing Jenkins instance

GitHub Pull Request Builder plugin boot optimization

Frontpage load delay due to the "weather" health display

Pipeline speed/durability settings

Improving Jenkins uptime and long-term health

What is a periodic maintenance job and how do you create one?

Terminating long-running pipelines

Releasing stale locks in lockable resources from force killing builds

Log cleanup for beginners

Log cleanup for multibranch pipeline job types

Avoiding and reducing the use of echo step

CPU bottleneck: NonCPS versus CPS pipeline as code

Pre-compiling all NonCPS code as an external jar

Including a NonCPS library as a plugin

Controller bottlenecks created by an agent

Defining agent and controller interaction bottlenecks

Agent booting start up bottleneck

Stashing and archiving artifacts

Storing controller and agent logs in CloudWatch

Pipeline Logging over CloudWatch plugin

Controller logging over CloudWatch

AWS IAM roles for controller and agent CloudWatch logging

Other ways to reduce agent log output

Strategy – Writing logs to the agent disk

Drawbacks: Writing logs to the agent disk

Summary

Chapter 10: Shared Libraries

Technical requirements

Understanding the directory structure

Creating a shared library

Providing shared libraries

Folder-level shared libraries

Global shared libraries

Using shared libraries

Static loading

Dynamic loading

Use cases

Code reuse via global variables – Pre-formatted Slack messages

Advanced – Custom DSL

Summary

Chapter 11: Script Security

Technical requirements

Administrator versus non-administrator

Outside the Groovy sandbox

Direct pipeline

Global shared library

Inside the Groovy sandbox

Approve assuming permission check

Identity crisis – everyone is a SYSTEM user

Where the SYSTEM user can do things

What the SYSTEM user can do everywhere

Understanding why the Authorize Project plugin is needed

Configuring the Authorize Project plugin

Thoughts on disabling Script Security

Summary

Index

Preface

Jenkins is a renowned name among build and release CI/CD DevOps engineers because of its usefulness in automating builds, releases, and even operations. Despite its capabilities and popularity, it's not easy to scale Jenkins in a production environment. Jenkins Administratorʼs Guide will not only teach you how to set up a production-grade Jenkins instance from scratch, but also cover management and scaling strategies.

This book will guide you through the steps for setting up a Jenkins instance on AWS and inside a corporate firewall, while discussing design choices and configuration options, such as TLS termination points and security policies. You’ll create CI/CD pipelines that are triggered through GitHub pull request events, and also understand the various Jenkinsfile syntax types to help you develop a build and release process unique to your requirements. For readers who are new to Amazon Web Services, the book has a dedicated chapter on AWS with screenshots. You’ll also get to grips with Jenkins Configuration as Code, disaster recovery, upgrading plans, removing bottlenecks, and more to help you manage and scale your Jenkins instance.

By the end of this book, you’ll not only have a production-grade Jenkins instance with CI/CD pipelines in place, but also knowledge of best practices from industry experts.

Who this book is for

This book is for both new Jenkins administrators and advanced users who want to optimize and scale Jenkins. Jenkins beginners can follow the step-by-step directions, while advanced readers can join in-depth discussions on Script Security, removing bottlenecks, and other interesting topics. Build and release CI/CD DevOps engineers of all levels will also find new and useful information to help them run a production-grade Jenkins instance, following industry best practices.

What this book covers

Chapter 1, Jenkins Infrastructure with TLS/SSL and Reverse Proxy, introduces Jenkins and discusses its strengths, along with a little bit of history and important keywords. The chapter describes the architecture of the Jenkins infrastructure that we will be building in the coming chapters, one for Jenkins on AWS and another for Jenkins inside a corporate firewall. It discusses the architecture of the controllers, reverse proxy, agents, and the Docker cloud by listing the required virtual machines, operating system, and software packages we’ll use, the ports that need to be opened, and other required components. The chapter continues to discuss frequently asked questions for the AWS infrastructure, such as EC2 instance types and sizes, Regions and Availability Zones, routing rules, and Elastic IPs. Then, the chapter discusses the TLS/SSL certificate choices and goes through the steps for using Let’s Encrypt in detail to create a free certificate. Finally, the chapter discusses the importance of storage backend choices. It discusses the different options by benchmarking performance and going through the pros and cons of the popular storage backend solutions.
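As a taste of the TLS termination discussion, a reverse proxy terminating TLS in front of the controller might look like the following NGINX sketch. The domain and certificate paths here are placeholders (a Let’s Encrypt-style layout), not the book’s exact configuration:

```nginx
server {
    listen 443 ssl;
    server_name jenkins.example.com;   # hypothetical domain

    # Hypothetical certificate paths in a Let's Encrypt layout
    ssl_certificate     /etc/letsencrypt/live/jenkins.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jenkins.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;   # Jenkins controller behind the proxy
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The forwarded headers let Jenkins reconstruct its external HTTPS URL even though the proxy talks to it over plain HTTP.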

Chapter 2, Jenkins with Docker on HTTPS on AWS and inside a Corporate Firewall, goes through the entire journey of setting up the Jenkins controller, reverse proxy for HTTPS connections, agents, and the Docker cloud. It shows a way to create a directory on the host machine and mount it in a Docker container running Jenkins, so that the state is preserved across container restarts. It also shows three different ways of terminating the TLS to provide HTTPS connections. Once Jenkins is running on HTTPS, the chapter goes through the basic configuration options for login methods, pipeline default speed, user permissions, and other useful default settings. It continues to show the steps for attaching agents and creating and attaching a Docker cloud, so that we end up with production-grade Jenkins.
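The UID/GID-matching idea for the bind mount can be sketched as a small custom image. The UID/GID value 1001 below is a hypothetical example and should match the host user that owns the bind-mounted directory:

```dockerfile
# Sketch of a custom controller image; 1001 is a placeholder UID/GID
# that must match the host user owning the bind-mounted jenkins_home.
FROM jenkins/jenkins:lts
USER root
RUN groupmod -g 1001 jenkins \
 && usermod -u 1001 -g 1001 jenkins \
 && chown -R jenkins:jenkins /var/jenkins_home
USER jenkins
```

The container would then be started with something like `-v /home/jenkins/jenkins_home:/var/jenkins_home`, so files written by the container are owned by the matching host user.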

Chapter 3, GitOps-Driven CI Pipeline with GitHub, shows the steps for creating premerge CI pipelines that are triggered from a GitHub pull request activity. It first creates four example users, then assigns them various permissions for the two example projects, adder and subtractor, to demonstrate the Jenkins permission model. It then goes through the steps for creating the CI pipelines in detail, demonstrating and discussing each step as we progress. It shows the two different ways to configure a CI pipeline, one for AWS Jenkins using push hooks and another for firewalled Jenkins using the GitHub Pull Request Builder plugin. It finishes by showing the optional steps to allow the CI pipeline to build an arbitrary branch, along with the steps to require a successful build for merging a pull request.
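A premerge CI pipeline of the kind this chapter builds could start from a minimal declarative Jenkinsfile like this sketch; the agent label, script names, and results path are placeholders rather than the book’s exact files:

```groovy
// Minimal declarative pipeline sketch; stage contents are placeholders.
pipeline {
    agent { label 'docker' }            // hypothetical agent label
    stages {
        stage('Build') {
            steps { sh './build.sh' }   // hypothetical build script
        }
        stage('Unit test') {
            steps { sh './test.sh' }    // hypothetical test script
        }
    }
    post {
        always { junit 'results/**/*.xml' }   // publish test results
    }
}
```

The `junit` step in the `post` block is what feeds the test-result and trend displays mentioned above.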

Chapter 4, GitOps-Driven CD Pipeline with Docker Hub and More Jenkinsfile Features, shows the steps for creating postmerge CD pipelines that are triggered from a GitHub pull request merge activity. Along the way, it discusses various Jenkinsfile techniques such as running external scripts, passing variables across steps, several ways of using Docker-outside-of-Docker (DooD), using bare-metal agents, using credentials, and interacting with GitHub and Docker Hub. Similar to Chapter 3, GitOps-Driven CI Pipeline with GitHub, it goes through the detailed steps for creating CD pipelines for both AWS Jenkins using push hooks and firewalled Jenkins using polling.
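The Semantic Versioning step mentioned above amounts to incrementing one of the MAJOR.MINOR.PATCH components. A minimal shell sketch of a patch bump (the helper name is illustrative, not the book’s actual script):

```shell
# bump_patch: increment the PATCH component of a MAJOR.MINOR.PATCH version.
# Illustrative helper; the book's actual versioning logic may differ.
bump_patch() {
  local version="$1" major minor patch
  IFS=. read -r major minor patch <<< "$version"
  echo "${major}.${minor}.$((patch + 1))"
}

bump_patch 1.4.2   # prints 1.4.3
```

The resulting version string would typically be used both as a Git tag and a Docker image tag so the two stay in sync.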

Chapter 5, Headfirst AWS for Jenkins, shows detailed instructions on using AWS. In the earlier chapters, we have skipped the details on most of the AWS operations in order to keep the focus on Jenkins, and in this chapter, we discuss them in full detail so that new users can follow the steps click by click while referring to the numerous screenshots. It starts by discussing the basics of logging into AWS, then continues to the steps for creating an SSH key pair, managing security groups, creating EC2 instances with Elastic IPs, using Let’s Encrypt to generate TLS/SSL certificates, creating and configuring Elastic Load Balancers (ELBs), using AWS Certificate Manager to generate TLS/SSL certificates, setting up routing rules, and finally configuring Route 53 to point the Jenkins URL to the controller.

Chapter 6, Jenkins Configuration as Code (JCasC), discusses JCasC in detail by creating a whole new Jenkins instance using a configuration file we generate throughout the chapter. It starts by installing the JCasC plugin and discussing the limitations and the boundaries of what JCasC can manage. It then continues to read the configuration details of the Jenkins we set up in Chapter 1, Jenkins Infrastructure with TLS/SSL and Reverse Proxy, through Chapter 4, GitOps-Driven CD Pipeline with Docker Hub and More Jenkinsfile Features. It discusses each section of the configuration, and builds a new JCasC configuration file based on the entries from the existing Jenkins. Once the configuration file is built for the controller, agent, and Docker cloud, it creates a new Jenkins instance using the configuration file. It revisits each configuration item and discusses how well (or not) it was restored. Finally, it shows an optional way to use Groovy scripting to work around some of the issues found during the restoration.
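A JCasC configuration file of the kind built in this chapter is plain YAML. The fragment below is a hypothetical minimal example, not the chapter’s generated file; user IDs, the URL, and the environment variable name are placeholders:

```yaml
# Minimal JCasC sketch; all values are placeholders.
jenkins:
  systemMessage: "Configured by JCasC"
  numExecutors: 0
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # injected via an environment variable
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
unclassified:
  location:
    url: "https://jenkins.example.com/"   # hypothetical Jenkins URL
```

Secrets such as the admin password are referenced as variables rather than codified, which is one of the limitations the chapter discusses.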

Chapter 7, Backup and Restore and Disaster Recovery, discusses the backup strategies for different scenarios, and goes over the exact steps to set up an automated backup system. It first discusses the pros and cons of a disk snapshot backup and a file-level backup. Then it looks at the content of $JENKINS_HOME and identifies the files and folders that need to be backed up at a high frequency, as opposed to the ones that need to be backed up only once a day. Once we have determined which files to back up, the chapter goes through ThinBackup plugin configurations. It first provides an off-site backup solution using NFS and Docker volume mount, then goes into the specifics of configuring the ThinBackup plugin to generate the backup files effectively. Once backup files are generated, the chapter goes through a disaster scenario where we restore a pipeline that a user mistakenly deleted. It shows various ways to identify the correct backup snapshot to restore, then goes into deep discussions on how to restore a backup effectively. In addition to restoring the mistakenly deleted pipeline, it teaches the fundamental mechanism for backup and restore by demonstrating a way to restore a pipeline that didn’t exist. Finally, it goes through an infrastructure failure disaster scenario and provides a recovery playbook that you can follow.
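The file-level backup idea can be sketched as a small shell helper that archives a Jenkins home directory while skipping rebuildable data. The paths and exclusions here are illustrative; the chapter itself configures the ThinBackup plugin for scheduled backups:

```shell
# backup_jenkins_home: archive a Jenkins home directory, skipping
# data that can be rebuilt. Exclusion list is illustrative only.
backup_jenkins_home() {
  local src="$1" dest="$2"
  tar --exclude='./workspace' --exclude='./caches' --exclude='./war' \
      -czf "${dest}/jenkins-home-$(date +%F).tar.gz" -C "$src" .
}
```

A cron entry or periodic job could then call, for example, `backup_jenkins_home /var/jenkins_home /mnt/backup` (hypothetical paths) and ship the archive off the disk.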

Chapter 8, Upgrading the Jenkins Controller, Agents, and Plugins, discusses the upgrade strategies for both small and large Jenkins instances. It first discusses the pitfalls of upgrading plugins, and provides various ways of upgrading the plugins and controller effectively. In addition to the upgrade process, the chapter goes through an SRE runbook for an upgrade scenario where you are taught when and how to communicate with users about the upgrade. The runbook covers not only the success path but also the failure scenario and discusses the restore and rollback strategies.

Chapter 9, Reducing Bottlenecks, teaches you various ways to optimize your Jenkins, such as picking the right EC2 instance size, reducing the Jenkins memory footprint, not using periodic triggers in favor of webhook triggers, tracking the AWS costs, optimizing GitHub Pull Request Builder options, and removing the weather icon from the home page. It continues on to discuss the various Groovy scripts that terminate long-running pipelines, release stale locks, and clean up logs. It also discusses the best practices for writing Jenkins pipeline code, such as reducing the use of the echo step and using NonCPS code for faster execution. It then talks about reducing the agent startup time by baking the plugin archives into the EC2 AMI, and then finally discusses the ways to manage various logs effectively.
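A periodic maintenance job of the kind described might run Script Console Groovy along these lines. This is a sketch, not the chapter’s exact script: the four-hour threshold is arbitrary, and it should be tried on a non-production instance first:

```groovy
// Sketch: abort builds that have been running longer than a threshold.
// Run from a periodic maintenance job or the Script Console.
import hudson.model.Job
import hudson.model.Run

long threshold = 4 * 60 * 60 * 1000   // 4 hours in milliseconds (arbitrary)

Jenkins.instance.getAllItems(Job.class).each { job ->
    job.builds.findAll { Run run ->
        run.isBuilding() &&
        (System.currentTimeMillis() - run.getStartTimeInMillis()) > threshold
    }.each { run ->
        println "Stopping ${run.fullDisplayName}"
        run.executor?.interrupt()   // request that the build be aborted
    }
}
```

Similar loops can release stale lockable resources or prune old build logs, which is what makes a single periodic job a useful catch-all for instance hygiene.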

Chapter 10, Shared Libraries, starts by discussing the directory structure and the content of a shared library. Afterward, it creates an example shared library that uses many common features, then explains the differences between providing the shared library as a global shared library versus a folder-level shared library. Once the shared library is available to be used, the chapter teaches you several different methods of loading it and discusses the use case for each method. Afterward, the chapter goes through a hands-on example of creating shared library functions that use the Slack messenger app to provide standardized messaging wrappers. Finally, the chapter dives deeper into a more advanced use case of creating custom domain-specific languages (DSLs) using shared libraries.
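The directory structure in question is fixed by Jenkins: `vars/` holds global variables that become pipeline steps, `src/` holds Groovy classes, and `resources/` holds data files. A hypothetical global variable might look like this:

```groovy
// vars/sayHello.groovy -- a hypothetical global variable. In a shared
// library, each file under vars/ becomes a pipeline step named after
// the file, and its call() method is what the step invokes.
def call(String name = 'world') {
    echo "Hello, ${name}!"
}
```

A pipeline could then load the library statically with `@Library('my-shared-lib') _` (the library name is a placeholder) and simply call `sayHello('Jenkins')` as a step.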

Chapter 11, Script Security, starts with an explanation of the role of an administrator versus a non-administrator in Jenkins. It continues by explaining the concept of the Groovy sandbox, and discusses running outside and inside the sandbox. It teaches the dangers of running pipelines outside of the sandbox and provides a use case of using a global shared library to wrap dangerous method calls. It then continues to explain the Jenkins permission model by discussing running inside the sandbox and teaches you how to use method signature approvals effectively. The chapter takes a deep dive into explaining the approve assuming permissions check button, and explains the SYSTEM user and the dangers of the default Jenkins permission model. Finally, it discusses an alternate Jenkins design that doesn’t rely on the Script Security plugin’s protection.

To get the most out of this book

We will be using Git, Docker, systemd, OpenSSL, NGINX, and other tools on Linux, so you need basic familiarity with the Linux command line. For AWS Jenkins, you need an AWS account where you can create and manage EC2 instances, ELBs, AWS Certificate Manager certificates, and Route 53 entries. For the Jenkins inside a corporate firewall, you need three virtual machines and optionally access to a company Public Key Infrastructure (PKI) where you can generate TLS/SSL certificates. You also need a GitHub and a Docker Hub account.

Software/hardware covered in the book

Windows, macOS, Linux, or any other operating system that you can use to SSH into a Linux machine

Ubuntu 20.04 for the virtual machines and EC2 instances

Docker 18 or higher, installed on the Ubuntu 20.04 hosts

Git and OpenSSL, which are preinstalled in Ubuntu 20.04

Jenkins 2.263.1-LTS or higher

Jenkinsfiles and shared library code are written in the Groovy programming language, which is very similar to Java. You can follow along without prior experience with Groovy, but familiarity with it will help you understand the shared libraries chapter more easily.

If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Jenkins-Administrators-Guide. In case there’s an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781838824327_ColorImages.pdf

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, folder names, filenames, dummy URLs, and user input. Here is an example: “For example, builds for a pipeline that specifies agent { label 'ubuntu2004-agent' } would run only on ubuntu2004-agent, even if you didn’t label the agent with its own name.”

A block of code is set as follows:

$ ssh ubuntu@52.53.150.203
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

$ sudo usermod -aG docker $USER
$ exit
logout

Screen text: Indicates words that you see onscreen. Here is an example: “Click Save and Finish to continue (we will change this soon), then click Start using Jenkins.”

Italics: Indicates an important word or phrase. Here is an example: “Most importantly, you are responsible for the restoration in the event of a disaster.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Part 1

Securing Jenkins on AWS and inside a Corporate Firewall for GitOps-Driven CI/CD with GitHub and Docker Hub

In this section, you will learn about the Jenkins infrastructure architecture, the application architecture, and the project architecture. You will learn how to set up Jenkins securely from scratch with static agents, as well as dynamic agents from the Docker cloud. You will create pipelines and connect them to GitHub for CI builds, and to Docker Hub for CD builds.

This part of the book comprises the following chapters:

- Chapter 1, Jenkins Infrastructure with TLS/SSL and Reverse Proxy
- Chapter 2, Jenkins with Docker on HTTPS on AWS and inside a Corporate Firewall
- Chapter 3, GitOps-Driven CI Pipeline with GitHub
- Chapter 4, GitOps-Driven CD Pipeline with Docker Hub and More Jenkinsfile Features
- Chapter 5, Headfirst AWS for Jenkins

1

Jenkins Infrastructure with TLS/SSL and Reverse Proxy

In this chapter, we will learn about the foundational components of Jenkins: the controller, agents, cloud, domain name, TLS/SSL certificates, and reverse proxy. First, we will learn where each component fits into the architecture, and then prepare the VMs and TLS/SSL certificates. Finally, we will learn the importance of choosing the right storage medium for the Jenkins controller and discuss the pros and cons of some of the popular storage options. By the end of this chapter, we will understand the Jenkins architecture and have the necessary components ready so that we can put them together in the next chapter.

In this chapter, we're going to cover the following main topics:

- Why Jenkins?
- Searching for answers online with Jenkins keywords
- Understanding the Jenkins architecture
- AWS: FAQs, routing rules, EC2 instances, and EIPs
- Installing Docker on our VMs
- Acquiring domain names and TLS/SSL certificates
- Storage concerns (very important!)

Technical requirements

You need a domain name (for example, jenkins.example.com) for your Jenkins instance and one or both of the following:

- An AWS account with permission to create three EC2 instances, create an Application ELB, create certificates via AWS Certificate Manager, create AWS access keys via IAM, and modify domain records in Route 53.
- Three VMs running Ubuntu 20.04, access to domain records for your domain, and optionally a company public key infrastructure (PKI).

The files for this chapter are available in this book’s GitHub repository at https://github.com/PacktPublishing/Jenkins-Administrators-Guide/blob/main/ch1.

Why Jenkins?

A Continuous Integration (CI) build runs when a pull request (PR) is created or updated so that a developer can build and test the proposed change before merging the change. In the 2020s, a CI system is a normal fact of life for any software development, and it is difficult to imagine developing a software product without it.

But finding a good CI solution remains difficult, particularly when the validation process is more complex than a simple use case of building a Go binary and creating a Docker image with it. For example, if we need to build phone firmware and flash it onto one of three physical test phones as a part of the validation, we would need a CI system that can handle physical connections to the hardware, as well as manage resource locks to make sure that the validation does not disrupt other validations that are already in progress.

The use cases are even more complex during the Continuous Delivery or Continuous Deployment (CD) process, where the end product of a build is either stored in an archive (such as Docker registry or Artifactory) or deployed to a production system. The CD builds' more frequent interactions with external systems require credential management and environment preparation for a deployment. For example, it's not uncommon for a production system to be in an isolated network inaccessible from the corporate network, and a jumpbox session must be prepared before an update can be deployed. It's also common that the deployments are handled in a different set of pipelines that are not tied to the code changes in the Git repository, which means that we would need a CD solution made up of free-standing pipelines that are detached from any PR activities.

Jenkins, known for its supreme flexibility, can handle such complex use cases easily. Managing a physical hardware connection, handling resource locks, managing credentials, and handling session data across multiple stages are all built into Jenkins or can be made available easily through one of 1,500+ plugins1.

In addition to the rich feature set, Jenkins supports real programming languages such as Java and Groovy to specify the build steps. This allows us to write real code to express our build logic using variables, conditionals, loops, exceptions, and sometimes even classes and methods, rather than being bound by a domain-specific language (DSL) that is often limiting.

For those of us who prefer a more structured solution, Jenkins also supports a DSL to provide a uniform development experience. The most typical use cases are covered by the plugins that provide a wrapper to the common code, created by the dedicated user base who continues to contribute to the open source platform. CloudBees, which does an excellent job maintaining the project, also contributes to the open source plugin ecosystem. The vast number of available plugins indicates that developing a plugin is easy if we need to create a specific solution for our business use case.

Finally, a big advantage of Jenkins is that it's free. The Jenkins source code uses the MIT License2, which is one of the most permissive open source licenses. We can scale Jenkins vertically by having one very powerful shared instance, or we can develop a "Jenkins vending machine" infrastructure, which creates a new Jenkins instance for each team, all without paying any licensing fees. We can even embed Jenkins in a commercial product and sell the product.

Let's learn how to use Jenkins, the most flexible and powerful CI system.

Searching for answers online with Jenkins keywords

Let's start with the most important aspect of software engineering: searching for answers online. Jenkins has a development history of over a decade and many ideas and keywords have come and gone. Let's go over the keywords so that we can search for them online effectively.

The Jenkins project was initially released in 2005 under the name Hudson. After an ownership dispute about the name with Oracle Corporation, in 2011, it was officially renamed and released as Jenkins. Hudson rarely comes up during normal use, so casual Jenkins users will not come across the term. However, a large part of the code is under the hudson Java package (https://github.com/jenkinsci/jenkins/tree/master/core/src/main/java/hudson), so an admin or an author of a plugin should understand that Hudson is a precursor of Jenkins.

The Jenkins architecture has a controller that acts as a central server and an agent that runs build steps. The controller and agent were originally called master and slave until they were renamed for Jenkins 2.03. This is a recent change, so you'll still see references to master throughout Jenkins. For example, the first agent is named master and it can't be changed (https://jenkins.example.com/computer/(master)/). In addition, as of 2021, there are more search results for jenkins master than jenkins controller, so debugging a controller issue may require searching with the term master. There is also a node, which means a computer. Both the controller and agent are nodes, but sometimes, an agent is mistakenly called a node.

The two most popular agent types are the SSH agent and inbound agent. SSH agents are typically used when the controller can reach out to the agent, whereas the inbound agents are typically used when the controller cannot reach the agent and the agent must initiate the connection back to the controller. The inbound agent was originally called JNLP slave (https://hub.docker.com/r/jenkins/jnlp-slave), and we may find many references to it as we search online for help.

In Jenkins, a configuration is created to perform a task (for example, build software), and this configuration is called a project. In the past, a project was called a job. Even though the term job had been deprecated, it is still widely used, which leads us to the next term...

Job DSL is one of the first plugins that was created for codifying a Jenkins project. DSL is a fancy term for custom programming language, so Job DSL means a custom programming language for a Jenkins project. It is used to define and create a Jenkins project through code rather than through a GUI. Job DSL is not Pipeline DSL. They are entirely different solutions, and an answer to a question about one of them will not apply to the other.

Pipeline DSL is the custom programming language for a Jenkins Pipeline, or simply Pipeline with a capital P (https://www.jenkins.io/doc/book/pipeline/#overview). Pipelines are enabled through the Pipeline plugin (https://plugins.jenkins.io/workflow-aggregator/), which used to be called Workflow, a term you may come across if you're creating a plugin.

There are two flavors of Pipeline DSL syntax: Scripted Pipeline and Declarative Pipeline (https://www.jenkins.io/doc/book/pipeline/syntax/#compare). Both are Pipeline DSLs that are written into a text file named Jenkinsfile, and Jenkins can process either flavor of the syntax. Scripted Pipeline syntax was created first, so older posts on online forums will likely be in Scripted Pipeline syntax, while Declarative Pipeline syntax will typically be found on newer posts. The two syntaxes are not directly interchangeable, but an answer to a problem for one syntax flavor can usually be converted into the other. The underlying language of both is Groovy, a fully featured programming language independent of Jenkins.
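To make the comparison concrete, here is a minimal, hypothetical example of each flavor doing the same thing. The stage name and echoed message are illustrative only; each snippet would live in its own Jenkinsfile:

```groovy
// Declarative Pipeline: a rigid structure of pipeline/agent/stages blocks
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}

// The equivalent Scripted Pipeline: plain Groovy with node/stage blocks
node {
    stage('Build') {
        echo 'Building...'
    }
}
```

Notice that Declarative syntax constrains where blocks may appear, while Scripted syntax is ordinary Groovy code, which is why older, more free-form examples online tend to be Scripted.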

Finally, the verb for executing a project is build. For example, we build a pipeline to execute its steps. Some pipelines are generic tasks that don't really build software, so many people say run a pipeline or even run a build for a pipeline instead. However, regardless of what the pipeline does, the button for executing a pipeline is captioned Build Now. Most people use the terms project, job, and pipeline interchangeably, so building a project, running a pipeline, and running a build for a job all mean the same thing.

With that, you should have all the keywords for the web searches in case you have questions that this book doesn't answer. Next, let's understand the Jenkins architecture.

Understanding the Jenkins architecture

Before we set up Jenkins, let's go over the blueprint of what we will build.

Controller

As we mentioned previously, the controller is the central server for Jenkins. We'll set up the controller as follows:

- There are two Jenkins instances – one on AWS and the other in a corporate firewalled network. AWS Jenkins is built with an open source project in mind – builds need to run on the internet, where all contributors can see them. Firewalled Jenkins is for a typical corporate setting where all development and testing happens inside the corporate firewall. You can create one that fits your use case or create both to see which fits your needs better, even though, ultimately, you'll need only one of the two. The settings for the two are not always interchangeable, so be sure to pick the correct sections for each type as you read ahead.
- Both Jenkins controllers use a VM running the latest Ubuntu LTS, which is 20.04 at the time of writing.
- The Jenkins controller runs as a Docker container listening on port 8080 for HTTP and port 50000 for inbound agents.
- The Jenkins controller container bind mounts a directory on the host to store jenkins_home.
- The AWS controller's hostname is aws-controller and it runs as the ubuntu user. Therefore, the commands that are run on it will start with ubuntu@aws-controller:<path>$.
- The firewalled controller's hostname is firewalled-controller and it runs as the robot_acct user. Therefore, the commands that are run on it will start with robot_acct@firewalled-controller:<path>$.
- The commands starting with controller:<path>$ indicate the commands that can run on either controller.
- Finally, the AWS controller has an Elastic IP (EIP) attached to it. The EIP provides a static IP that doesn't change, and the inbound agent will connect to the EIP's address.
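As a rough sketch of the controller container described above (the actual commands and options used in this book appear in the next chapter, and the host directory path here is only an example), the container publishes both ports and bind mounts jenkins_home so that state survives restarts:

```shell
# Sketch only: publish the web UI (8080) and inbound agent (50000) ports,
# and bind mount a host directory as jenkins_home
docker run --detach --restart on-failure \
    --publish 8080:8080 --publish 50000:50000 \
    --volume ~/jenkins_home:/var/jenkins_home \
    jenkins/jenkins:lts
```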

This is what it looks like:

Figure 1.1 – Architecture of the VMs and containers for Jenkins controllers

With that, we have learned about the architectures of the AWS controller and firewalled controller. Now, let's move on and cover some more components.

Domain name, TLS/HTTPS, load balancer, and reverse proxy

The endpoint for a production-grade web service should be on HTTPS with a domain name rather than on HTTP with an IP. This requires that we have domain names, TLS certificates, a load balancer, and a reverse proxy. We'll set them up as follows:

- jenkins-aws.lvin.ca points to the AWS Jenkins controller.
- jenkins-firewalled.lvin.ca points to the firewalled Jenkins controller.
- HTTPS is provided through a load balancer, reverse proxy, or directly on Jenkins.
- The AWS Jenkins controller uses an Elastic Load Balancer (ELB) for TLS termination.
- The firewalled Jenkins controller uses an NGINX reverse proxy or the Jenkins controller itself for TLS termination.
- The load balancer and reverse proxy receive traffic on HTTP port 80 and redirect it to HTTPS port 443 for secure communication.
- The load balancer and reverse proxy receive traffic on HTTPS port 443, terminate the TLS, and proxy it to HTTP port 8080, which is where the Jenkins controller listens.
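For the firewalled instance, the redirect-and-terminate behavior described above can be sketched as an NGINX configuration like the following. This is a simplified illustration: the certificate paths are placeholders, and the configuration used later in the book may differ:

```nginx
# Redirect all plain HTTP traffic to HTTPS
server {
    listen 80;
    server_name jenkins-firewalled.lvin.ca;
    return 301 https://$host$request_uri;
}

# Terminate TLS and proxy to the Jenkins controller on port 8080
server {
    listen 443 ssl;
    server_name jenkins-firewalled.lvin.ca;
    ssl_certificate     /etc/nginx/certs/jenkins.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/jenkins.key;  # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```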

Here's how it's arranged:

Figure 1.2 – Architecture of the DNS and reverse proxy for Jenkins controllers

With that, we have learned about the network architecture of DNS, TLS, the load balancer, and the reverse proxy. Let's continue and look at agents.

Agents

If a controller is where the pipelines are managed, then agents are where the actual builds run. We'll set up our agents as follows:

- There are two nodes for agents – one on AWS and another inside a corporate firewall.
- The agent nodes also run on VMs running Ubuntu 20.04.
- Each agent node connects to both Jenkins instances (yes, it's possible to connect a node to multiple Jenkins controllers).
- The AWS agent has an EIP attached to it. The EIP provides a static IP that doesn't change, and the firewalled controller will connect to the EIP's address.
- The firewalled agent connects to the AWS Jenkins controller on port 50000 as an inbound agent. For all other agents, the controller initiates the SSH connection on port 22 to configure the agents as SSH agents.
- The AWS agent's hostname is aws-agent, and it runs as the ubuntu user. Therefore, the commands that are run on it will start with ubuntu@aws-agent:<path>$.
- The firewalled agent's hostname is firewalled-agent, and it runs as the robot_acct user. Therefore, the commands that are run on it will start with robot_acct@firewalled-agent:<path>$.
- The commands starting with agent:<path>$ indicate the commands that can run on either agent.
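To illustrate the inbound connection direction, one simple way to launch an inbound agent is with the official jenkins/inbound-agent image, passing the controller URL plus the secret and node name that Jenkins generates when the node is created. The URL, secret placeholder, and node name below are illustrative:

```shell
# Sketch only: the agent dials out to the controller, which accepts
# inbound (JNLP) connections on port 50000
docker run --init jenkins/inbound-agent \
    -url https://jenkins-aws.lvin.ca/ \
    <secret-from-jenkins> firewalled-agent
```

The book's firewalled agent is not necessarily containerized; this merely shows which side initiates the connection.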

Here's what it looks like:

Figure 1.3 – Architecture of SSH and inbound Jenkins agents

With that, we have learned about the architectures of two possible agent connection types that can cross a corporate firewall. We have just one component remaining: Docker cloud.

Docker cloud

Docker cloud is used to dynamically generate an agent using Docker containers. There needs to be a Docker host where the containers will run, and this is how we will set it up:

- There are two Docker hosts – one on AWS and another inside a corporate firewall.
- The Docker hosts also run on VMs running Ubuntu 20.04.
- The Docker hosts are not Jenkins agents. Each provides a TCP endpoint for ephemeral Docker agents to be dynamically generated.
- A controller communicates with a Docker host on TCP port 2376, which is secured with an X.509 certificate. We will follow the steps in the official document4.
- The AWS Docker host's hostname is aws-docker-host, and it runs as the ubuntu user. Therefore, the commands that are run on it will start with ubuntu@aws-docker-host:<path>$.
- The firewalled Docker host's hostname is firewalled-docker-host, and it runs as the robot_acct user. Therefore, the commands that are run on it will start with robot_acct@firewalled-docker-host:<path>$.
- The commands starting with docker-host:<path>$ indicate the commands that can run on either Docker host.
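Once a Docker host is configured this way, the TLS-protected endpoint can be checked from another machine with the standard Docker client TLS flags. The certificate filenames and the hostname below are placeholders for wherever you keep the client certificates:

```shell
# Sketch only: connect to the remote Docker daemon over TLS-protected TCP
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://aws-docker-host.example.com:2376 version
```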

Here's what it looks like:

Figure 1.4 – Architecture of the Docker cloud host

With that, we've learned how a Docker host is configured for a Docker cloud and how a controller connects to the Docker host. Now, let's take a step back and look at it in its entirety.

Bringing it all together

There are six machines all running Ubuntu 20.04. The NGINX reverse proxy runs on the same machine running the firewalled Jenkins controller:

- AWS Jenkins controller (ubuntu@aws-controller)
- AWS Jenkins agent (ubuntu@aws-agent)
- AWS Docker host (ubuntu@aws-docker-host)
- Firewalled Jenkins controller (robot_acct@firewalled-controller)
- Firewalled Jenkins agent (robot_acct@firewalled-agent)
- Firewalled Docker host (robot_acct@firewalled-docker-host)

Here's how it all stacks up:

Figure 1.5 – Overview of the complete Jenkins architecture

That is the complete picture of what we will build. Let's continue and get our hands dirty by preparing the VMs and the TLS certificates.

AWS: FAQs, routing rules, EC2 instances, and EIPs

AWS is used heavily in this book, but it is such a vast ecosystem that we can't sufficiently discuss all the details without taking the focus away from Jenkins. Rather than trying to guess what level of detail we should provide, we have dedicated a separate chapter to discussing AWS in depth. Chapter 5, Headfirst AWS for Jenkins, features step-by-step instructions with plenty of screenshots, best practices you should follow, and more. The rest of this book will still cover the AWS topics at a high level, and you can turn to Chapter 5, Headfirst AWS for Jenkins, for a deeper dive.

Now, let's cover some common pitfalls that everyone should watch out for.

EC2 instance types and sizes

You can start with an EC2 instance as small as t2.micro – I used t2.micro for the AWS Jenkins build for this book and it worked just fine. For a production controller, you can start at the larger end of the T2 family and then switch to a more powerful C5 type if you need to. The agent can be a general-purpose T2 type since it serves various pipelines.

Next, let's look at where to put them.

Regions and Availability Zones

Putting all the VMs in the same Availability Zone yields the best performance, but it's not strictly necessary. It would be a good idea to put them in the same region, but it's not an invalid setup to use an agent in a different region if the agent needs to be geographically closer to the resources that are used during a build. For example, if your HQ is in the US but the lab equipment that's used for testing is in India, the agent should be in India to minimize any latency-related issues during the build. Beware that in some cases, AWS charges extra for transferring data across regions. For example, copying data from an S3 bucket to an EC2 instance within the same region is free of charge, whereas accessing the S3 bucket from a different region incurs costs5. If there is a large amount of data transfer, it may make sense to plan the agent's locations based on the data transfer cost.

Next, let's check out the routing rules.

Routing rules

VPC routing is a very complex topic that will be discussed more fully in Chapter 5, Headfirst AWS for Jenkins. For now, the most important rules are as follows:

- The VMs and ELB can talk to one another.
- Anyone can reach the ELB on port 80 for HTTP and port 443 for HTTPS.
- Only the inbound agent can reach the controller on port 50000.
- Only we can reach the VMs on port 22 for SSH.
- Only we can reach the VMs on port 8080 to test Jenkins without going through the ELB.

The easiest way is to have three security groups – one for internal connections, another for the VMs, and the last for the ELB.

On the EC2 Dashboard, click Security Groups under Network & Security. Find the security group named default – this is your default security group. All resources (such as EC2 instances or ELBs) with this security group attached can talk to one another. For the internal connections, attach this security group to all your VMs and ELBs. Let's create the other two security groups.

First, create a new security group named jenkins-vm that accepts traffic to ports 22, 8080, and 50000. All three ports should accept traffic from just My IP. This way, only we can SSH to the hosts or connect to port 8080 to debug. This configuration assumes that the inbound agent's IP is the same as the IP of your laptop. If the inbound agent's IP is not from the same network as your laptop, then change the source IP for port 50000 to match the inbound agent's IP. All the VMs we create should have this security group attached in addition to the default security group, as discussed.

Inbound rules should look like this (the source CIDR will be different for yours):

Figure 1.6 – Security group inbound rules for jenkins-vm
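If you prefer the command line, an equivalent group and its rules can be created with the AWS CLI along the following lines. The VPC ID, security group ID, and source CIDR are placeholders; the book itself walks through the console with screenshots:

```shell
# Sketch only: create the jenkins-vm group, then open 22/8080/50000 to one IP
aws ec2 create-security-group --group-name jenkins-vm \
    --description "Jenkins VM access" --vpc-id vpc-0123456789abcdef0

for port in 22 8080 50000; do
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port "$port" --cidr 203.0.113.10/32
done
```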

Next, create a new security group named jenkins-elb that accepts traffic on ports 80 and 443. Both ports should accept traffic from Anywhere to allow anyone on the internet to access Jenkins through HTTP and HTTPS. HTTP will be redirected to HTTPS, as we'll see in the next chapter. The ELB we create should have this security group attached in addition to the default security group. Inbound rules should look like this:

Figure 1.7 – Security group inbound rules for jenkins-elb

Finally, let's create the EC2 instances and the EIPs.

EC2 instances and EIPs

Create three EC2 instances to the following specifications. If you're unsure of the steps, check out Chapter 5, Headfirst AWS for Jenkins, for a detailed guide:

- Amazon Machine Image (AMI): Ubuntu Server 20.04 LTS (HVM), SSD Volume Type with the 64-bit (x86) architecture.
- Instance Type: t2.micro. Once you get the hang of running Jenkins, you will be able to create a larger one for your production server.
- Instance Details / Auto-assign Public IP: Disable. We will use an EIP instead.
- Storage: For a test instance, 8 GiB is fine. For a production instance, increase it to 100 GiB. Keep the type as the gp2 type because io1 and io2 are very expensive. If the performance becomes a problem, check out the tips and tricks from Chapter 9, Reducing Bottlenecks, for the solutions.
- Tags: Set Name as Jenkins Controller, Jenkins Agent, or Jenkins Docker cloud host for each of the three hosts:

Figure 1.8 – Name tag for Jenkins Controller

Security Group: This part is important. Click Select an existing security group, then check both the default and jenkins-vm security groups as shown here:

Figure 1.9 – Selecting both the default and jenkins-vm security groups

Finally, create an EIP and attach it to the instance. The instance details page should show its public IP matching its EIP:

Figure 1.10 – EIP is attached to Jenkins Controller EC2 instance

Once the three EC2 instances are ready, let's continue to install Docker.

Installing Docker on our VMs

Docker is a fundamental tool in modern software engineering. It provides a convenient way of establishing a preconfigured isolated environment that is defined as text in a Dockerfile.

Docker is used for everything in our Jenkins setup, so we need to install Docker on all our VMs. Follow the installation steps in Docker's official documentation (https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository).
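At the time of writing, the repository-based installation on Ubuntu follows roughly this shape; the exact commands change over time, so always prefer the ones on the linked documentation page:

```shell
# Sketch of Docker's documented repository-based install for Ubuntu
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg |
    sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/etc/apt/keyrings/docker.gpg] \
    https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" |
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```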

Once the installation is complete, be sure to add the user to the docker group, log out, and log back in. 52.53.150.203 is the IP of one of my VMs. You should use your VM's IP instead:

$ sudo usermod -aG docker $USER
$ exit
logout
Connection to 52.53.150.203 closed.
$ ssh ubuntu@52.53.150.203
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

With Docker installed and the user added to the docker group, the VMs are ready to be used for Jenkins. The first task in setting up Jenkins is acquiring the TLS certificates. Let's go through that together.

Acquiring domain names and TLS/SSL certificates

A production-grade web service should use a domain name and HTTPS, even if it's an internal tool. Let's examine their role in our architecture.

Domain names

Two domain names are needed for the two Jenkins instances. If you are using a subdomain of your company's domain (for example, jenkins.companyname.com), be sure that you can modify the A record, CNAME, and TXT record for the domain name. A new .com domain name can be purchased from AWS for around $12. For the AWS Jenkins instance, the DNS configuration is simpler if the domain is managed through Route 53. In our setup, we will be using jenkins-aws.lvin.ca and jenkins-firewalled.lvin.ca.

TLS/SSL certificates

TLS (also commonly referred to as SSL, which is TLS's predecessor technology) enables HTTPS, which allows secure communication. A TLS certificate can be obtained in several different ways:

- AWS Certificate Manager provides free public certificates to be used by AWS resources. This is useful if your Jenkins instance is on AWS, but the free certificates cannot be exported to be used in more advanced ways. For example, the certificate can be used on ELB, but cannot be exported for your own NGINX reverse proxy running on an EC2 instance.
- In a corporate setting, sometimes, there is a PKI at pki.companyname.com where you can generate TLS certificates for the domain names that the company owns. These are often internal certificates that are signed by the company's own certificate authority (CA), which are only accepted by the machines where the company's CA is pre-installed. This is useful if your Jenkins instance is behind a corporate firewall and will only be accessed by the company's equipment.
- Let's Encrypt provides free public certificates. This is useful when the Jenkins instance is not running on AWS and your company doesn't provide a PKI. The certificates, however, are valid only for 90 days and require additional configuration to auto-renew.
- Commercial vendors such as Comodo or RapidSSL sell public certificates. There are resellers who sell the same certificates for a fraction of the original cost, so search online for deals.
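While you decide between these options, a self-signed certificate is handy for local experiments with a reverse proxy. Browsers will warn about it, so it is strictly a stand-in until a real certificate is in place; the filenames and domain below are placeholders:

```shell
# Generate a throwaway self-signed certificate and private key for testing
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout jenkins-test.key -out jenkins-test.crt \
    -subj "/CN=jenkins-firewalled.lvin.ca"

# Inspect the subject and expiry of the result
openssl x509 -in jenkins-test.crt -noout -subject -enddate
```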

Try the various methods and get the TLS certificate for your Jenkins URLs.

Let's Encrypt

Let's Encrypt certificates can be generated and renewed using Certbot. Since the certificates are used in the controller, the certificates should be generated on the controller host.

The certificates expire in 90 days, and the same commands can be run again to regenerate the certificates with a renewed expiry date. Generation is rate limited, so minimize the number of requests to Let's Encrypt by waiting about 80 days before regenerating the certificates, and not sooner. Creating new certificates doesn't invalidate the existing certificates.

Prepare the work directories, like so:

robot_acct@firewalled-controller:~$ mkdir -p ~/letsencrypt/{certs,logs,work}

When we request Let's Encrypt to generate a certificate for a domain, Let's Encrypt asks us to verify that we own the domain. We can verify it either by manually modifying the TXT record on the domain or by letting Certbot automatically modify it on Amazon Route 53 for us.

Manual verification

A manual verification