The DevOps 2.0 Toolkit

Viktor Farcic

Description

Automating the Continuous Deployment Pipeline with Containerized Microservices

About This Book

  • First principles of DevOps, Ansible, Docker, Kubernetes, and microservices
  • Architect your software in a better and more efficient way with microservices packed as immutable containers
  • Practical guide describing an extremely modern and advanced DevOps toolchain that can be improved continuously

Who This Book Is For

If you are an intermediate-level developer who wants to master the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools, this is the book for you. Familiarity with the basics of DevOps and Continuous Deployment will be useful.

What You Will Learn

  • Get to grips with the fundamentals of DevOps
  • Architect your software in a better and more efficient way with the help of microservices
  • Use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and more
  • Implement fast, reliable and continuous deployments with zero-downtime and ability to roll-back
  • Learn about centralized logging and monitoring of your cluster
  • Design self-healing systems capable of recovery from both hardware and software failures

In Detail

Building a complete modern DevOps toolchain requires not only mastering the whole microservices development and deployment lifecycle, but also the latest and greatest practices and tools. Viktor Farcic argues from first principles how to build a DevOps toolchain. This book shows you how to chain together Docker, Kubernetes, Ansible, Ubuntu, and other tools to build the complete DevOps toolkit.

Style and approach

This book follows a unique, hands-on approach, familiarizing you with the DevOps 2.0 toolkit in a very practical manner. Although there will be a lot of theory, you won't be able to complete this book by reading it on the metro on your way to work. You'll need to be in front of your computer and get your hands dirty.




Table of Contents

The DevOps 2.0 Toolkit
Credits
About the Author
www.PacktPub.com
eBooks, discount offers, and more
Why subscribe?
Preface
Overview
Audience
1. The DevOps Ideal
Continuous Integration, Delivery, and Deployment
Architecture
Deployments
Orchestration
The Light at the End of the Deployment Pipeline
2. The Implementation Breakthrough – Continuous Deployment, Microservices, and Containers
Continuous Integration
Pushing to the Code repository
Static analysis
Pre-Deployment testing
Packaging and Deployment to the Test environment
Post-Deployment testing
Continuous Delivery and Deployment
Microservices
Containers
The Three Musketeers – Synergy of Continuous Deployment, Microservices, and Containers
3. System Architecture
Monolithic Applications
Services Split Horizontally
Microservices
Monolithic Applications and Microservices Compared
Operational and Deployment Complexity
Remote Process Calls
Scaling
Innovation
Size
Deployment, Rollback, and Fault Isolation
Commitment Term
Deployment Strategies
Mutable Monster Server
Immutable Server and Reverse Proxy
Immutable Microservices
Microservices Best Practices
Containers
Proxy Microservices or API Gateway
Reverse Proxy
Minimalist Approach
Configuration Management
Cross-Functional Teams
API Versioning
Final Thoughts
4. Setting Up the Development Environment with Vagrant and Docker
Combining Microservice Architecture and Container Technology
Vagrant and Docker
Development Environment Setup
Vagrant
Docker
Development Environment Usage
5. Implementation of the Deployment Pipeline – Initial Stages
Spinning Up the Continuous Deployment Virtual Machine
Deployment Pipeline Steps
Checking Out the Code
Running Pre-Deployment Tests, Compiling, and Packaging the Code
Building Docker Containers
Running Containers
Pushing Containers to the Registry
The Checklist
6. Configuration Management in the Docker World
CFEngine
Puppet
Chef
Ansible
Final Thoughts
Configuring the Production Environment
Setting Up the Ansible Playbook
7. Implementation of the Deployment Pipeline – Intermediate Stages
Deploying Containers to the Production Server
Docker UI
The Checklist
8. Service Discovery – The Key to Distributed Services
Service Registry
Service Registration
Self-Registration
Registration Service
Service Discovery
Self-Discovery
Proxy Service
Service Discovery Tools
Manual Configuration
Zookeeper
etcd
Setting Up etcd
Setting Up Registrator
Setting Up confd
Combining etcd, Registrator, and confd
Consul
Setting Up Consul
Setting Up Registrator
Setting Up Consul Template
Consul Health Checks, Web UI, and Data Centers
Combining Consul, Registrator, Template, Health Checks, and Web UI
Service Discovery Tools Compared
9. Proxy Services
Reverse Proxy Service
How can Proxy Service help our project?
nginx
Setting Up nginx
Living without a Proxy
Manually Configuring nginx
Automatically Configuring nginx
HAProxy
Manually Configuring HAProxy
Automatically Configuring HAProxy
Proxy Tools Compared
10. Implementation of the Deployment Pipeline – The Late Stages
Starting the Containers
Integrating the Service
Running Post-Deployment Tests
Pushing the Tests Container to the Registry
The Checklist
11. Automating Implementation of the Deployment Pipeline
Deployment Pipeline Steps
The Playbook and the Role
Pre-Deployment tasks
Deployment tasks
Post-Deployment tasks
Running the Automated Deployment Pipeline
12. Continuous Integration, Delivery and Deployment Tools
CI/CD Tools Compared
The Short History of CI/CD Tools
Jenkins
Setting Up Jenkins
Setting Up Jenkins with Ansible
Running Jenkins Jobs
Setting Up Jenkins Workflow Jobs
Setting Up Jenkins Multibranch Workflow and Jenkinsfile
Final Thoughts
13. Blue-Green Deployment
The blue-green deployment process
Manually running the blue-green deployment
Deploying the blue release
Integrating the blue release
Deploying the green release
Integrating the green release
Removing the blue release
Discovering which release to deploy and rolling back
Automating the blue-green deployment with Jenkins workflow
Blue-green deployment role
Running the blue-green deployment
14. Clustering and Scaling Services
Scalability
Axis scaling
X-Axis scaling
Y-Axis scaling
Z-Axis scaling
Clustering
Docker Clustering Tools Compared – Kubernetes versus Docker Swarm versus Mesos
Kubernetes
Docker Swarm
Apache Mesos
Setting It Up
Running Containers
The Choice
Docker Swarm walkthrough
Setting Up Docker Swarm
Deploying with Docker Swarm
Deploying with Docker Swarm without Links
Deploying with Docker Swarm and Docker Networking
Scaling Services with Docker Swarm
Scheduling Containers Depending on Reserved CPUs and Memory
Automating Deployment with Docker Swarm and Ansible
Examining the Swarm Deployment Playbook
Running the Swarm Jenkins Workflow
The Second Run of the Swarm Deployment Playbook
Cleaning Up
15. Self-Healing Systems
Self-Healing Levels and Types
Self-Healing on the Application Level
Self-Healing on the System Level
Time-To-Live
Pinging
Self-Healing on the Hardware Level
Reactive healing
Preventive healing
Self-Healing Architecture
Self-Healing with Docker, Consul Watches, and Jenkins
Setting Up the Environments
Setting Up Consul Health Checks and Watches for Monitoring Hardware
Automatically Setting Up Consul Health Checks and Watches for Monitoring Hardware
Setting Up Consul Health Checks and Watches for Monitoring Services
Preventive Healing through Scheduled Scaling and Descaling
Reactive Healing with Docker Restart Policies
Combining On-Premise with Cloud Nodes
Self-Healing Summary (So Far)
16. Centralized Logging and Monitoring
The Need for Centralized Logging
Sending Log Entries to ElasticSearch
Parsing Log Entries
Sending Log Entries to a Central LogStash Instance
Sending Docker Log Entries to a Central LogStash Instance
Self-Healing Based on Software Data
Logging Hardware Status
Self-Healing Based on Hardware Data
Final Thoughts
17. Farewell
A. Docker Flow
The Background
The Standard Setup
The Problems
Deploying Without Downtime
Scaling Containers using Relative Numbers
Proxy Reconfiguration after the New Release Is Tested
Solving The Problems
Docker Flow Walkthrough
Setting it up
Reconfiguring Proxy after Deployment
Deploying a New Release without Downtime
Scaling the service
Testing Deployments to Production
Index

The DevOps 2.0 Toolkit

The DevOps 2.0 Toolkit

Copyright © 2016 Viktor Farcic

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: August 2016

Production reference: 1240816

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-78528-919-4

www.packtpub.com

Credits

Author

Viktor Farcic

Acquisition Editor

Frank Pohlmann

Technical Editor

Danish Shaikh

Indexer

Mariammal Chettiyar

Graphics

Disha Haria

Production Coordinator

Arvindkumar Gupta

Cover Work

Arvindkumar Gupta

About the Author

Viktor Farcic is a Senior Consultant at CloudBees. He has coded using a plethora of languages, starting with Pascal (yes, he is old), Basic (before it got the Visual prefix), ASP (before it got the .Net suffix), C, C++, Perl, Python, ASP.Net, Visual Basic, C#, JavaScript, and so on. He never worked with Fortran. His current favorites are Scala and JavaScript, even though he spends most of his office hours with Java.

His big passions are Microservices, Continuous Deployment and Test-Driven Development (TDD).

He often speaks at community gatherings and conferences.

He is the author of Test-Driven Java Development, published by Packt Publishing.

www.PacktPub.com

eBooks, discount offers, and more

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

  • Fully searchable across every book published by Packt
  • Copy and paste, print, and bookmark content
  • On demand and accessible via a web browser

Preface

I started my career as a developer. During those early days, all I knew (and thought I should know) was how to write code. I believed that a great software designer is a person who is proficient in writing code and that the path to mastery of the craft was to know everything about a single programming language of choice. Later on, that changed, and I started taking an interest in different programming languages. I switched from Pascal to Basic and then ASP. When Java and, later on, .Net came into existence, I learned the benefits of object-oriented programming. Python, Perl, Bash, HTML, JavaScript, Scala. Each programming language brought something new and taught me how to think differently and how to pick the right tool for the task at hand. With each new language I learned, I felt like I was closer to being an expert. All I wanted was to become a senior programmer. That desire changed with time. I learned that if I was to do my job well, I had to become a software craftsman. I had to learn much more than how to type code. Testing became my obsession for some time, and now I consider it an integral part of development. Except in very special cases, each line of code I write is done with test-driven development (TDD). It has become an indispensable part of my tool-belt. I also learned that I had to be close to the customer and work side by side with them while defining what should be done. All that and many other things led me to software architecture. Understanding the big picture and trying to fit different pieces into one big system was a challenge I learned to like.

Throughout all the years I've been working in the software industry, there was no single tool, framework, or practice that I admired more than continuous integration (CI) and, later on, continuous delivery (CD). The real meaning of that statement hides behind the scope of what CI/CD encompasses. In the beginning, I thought that CI/CD meant that I knew Jenkins and was able to write scripts. As time passed, I got more and more involved and learned that CI/CD relates to almost every aspect of software development. That knowledge came at a cost.

I failed (more than once) to create a successful CI pipeline with the applications I worked with at the time. Even though others considered the result a success, now I know that it was a failure because the approach I took was wrong. CI/CD cannot be done without making architectural decisions. The same can be said for tests, configurations, environments, failover, and so on. To create a successful implementation of CI/CD, we need to make a lot of changes that, at first glance, do not seem to be directly related. We need to apply some patterns and practices from the very beginning. We have to think about architecture, testing, coupling, packaging, fault tolerance, and many other things. CI/CD requires us to influence almost every aspect of software development. That diversity is what made me fall in love with it. By practicing CI/CD, we are influencing and improving almost every aspect of the software development life cycle.

To be truly proficient with CI/CD, we need to be much more than experts in operations. The DevOps movement was a significant improvement that combined traditional operations with the advantages that development could bring. I think that is not enough. We need to know and influence architecture, testing, development, operations, and even customer negotiations if we want to gain all the benefits that CI/CD can bring. Even the name DevOps, as the driving force behind CI/CD, is not suitable, since it's not only about development and operations but about everything related to software development. It should also include architects, testers, and even managers. DevOps was a vast improvement over traditional operations because it combined them with development. The movement understood that manually running operations is not an option given current business demands and that there is no automation without development. I think that the time has come to redefine DevOps by extending its scope. Since the name DevOpsArchTestManageAndEverythingElse is too cumbersome to remember and close to impossible to pronounce, I opt for DevOps 2.0. It's the next generation that should drop the heavy do-it-all products for smaller tools designed to do very specific tasks. It's the switch that should go back to the beginning and make sure not only that operations are automated but that the whole system is designed in a way that it can be automated, fast, scalable, fault tolerant, with zero downtime, easy to monitor, and so on. We cannot accomplish this by simply automating manual procedures and employing a single do-it-all tool. We need to go much deeper than that and start refactoring the whole system, both on the technological and the procedural level.

Overview

This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We'll go through many practices and, even more, tools.

Finally, while there will be a lot of theory, this is a hands-on book. You won't be able to complete it by reading it on the metro on your way to work. You'll have to read this book while in front of a computer, getting your hands dirty. Eventually, you might get stuck and need help. Or you might want to write a review or comment on the book's content. Please post your thoughts on the The DevOps 2.0 Toolkit channel on Disqus. If you prefer one-on-one discussion, feel free to send me an email at <[email protected]>, or contact me on Hangouts, and I'll do my best to help you out.

Audience

This book is for professionals interested in the full microservices lifecycle combined with continuous deployment and containers. Due to the very broad scope, the target audience could be architects who want to know how to design their systems around microservices. It could be DevOps practitioners wanting to know how to apply modern configuration management practices and continuously deploy applications packed in containers. It is for developers who would like to take the process back into their own hands, as well as for managers who would like to gain a better understanding of the process used to deliver software from beginning to end. We'll speak about scaling and monitoring systems. We'll even work on the design (and implementation) of self-healing systems capable of recovering from failures (be they of a hardware or software nature). We'll deploy our applications continuously and directly to production, without any downtime and with the ability to roll back at any time.

This book is for everyone wanting to know more about the software development lifecycle, starting from requirements and design, through development and testing, all the way to the deployment and post-deployment phases. We'll create processes that take into account best practices developed by and for some of the biggest companies.

Chapter 1. The DevOps Ideal

Working on small greenfield projects is great. The last one I was involved with was during the summer of 2015 and, even though it had its share of problems, it was a real pleasure. Working with a small and relatively new set of products allowed us to choose the technologies, practices, and frameworks we liked. Shall we use microservices? Yes, why not. Shall we try Polymer and GoLang? Sure! Not having baggage that holds you down is a wonderful feeling. A wrong decision would set us back a week, but it would not endanger years of work someone else had done before us. Simply put, there was no legacy system to think about and be afraid of.

Most of my career was not like that. I had the opportunity, or the curse, to work on big inherited systems. I worked for companies that existed long before I joined them and, for better or worse, already had their systems in place. I had to balance the need for innovation and improvement with the obvious requirement that the existing business must continue operating uninterrupted. During all those years, I was continuously trying to discover new ways to improve those systems. It pains me to admit, but many of those attempts were failures.

We'll explore those failures in order to better understand the motivations that led to the advancements we'll discuss throughout this book.

Continuous Integration, Delivery, and Deployment

Discovering CI and, later on, CD, was one of the crucial points in my career. It all made perfect sense. The integration phase back in those days could last anywhere from days to weeks or even months. It was the period we all dreaded. After months of work performed by different teams working on different services or applications, the first day of the integration phase was the definition of hell on earth. If I didn't know better, I'd say that Dante was a developer and wrote Inferno during the integration phase.

On the dreaded first day of the integration phase, we would all come to the office with grim faces. Only whispers could be heard while the integration engineer announced that the whole system was set up and the game could begin. He would turn it on and, sometimes, the result would be a blank screen. Months of work in isolation would prove, one more time, to be a disaster. Services and applications could not be integrated, and the long process of fixing problems would begin. In some cases, we would need to redo weeks of work. Requirements defined in advance were, as always, subject to different interpretations, and those differences were nowhere more noticeable than in the integration phase.

Then eXtreme Programming (XP) practices came into existence and, with them, continuous integration (CI). The idea that integration should be done continuously sounds obvious today. Duh! Of course, you should not wait until the last moment to integrate! Back then, in the waterfall era, such a thing was not as obvious as it is today. We implemented a continuous integration pipeline and started checking out every commit, running static analysis, unit and functional tests, packaging, deploying, and running integration tests. If any of those phases failed, we would abandon what we were doing and make fixing the problem detected by the pipeline our priority. The pipeline itself was fast. Minutes after someone made a commit to the repository, we would get a notification if something failed. Later on, continuous delivery (CD) started to gain ground, and we would have confidence that every commit that passed the whole pipeline could be deployed to production. We could do even better and not only attest that each build was production ready, but apply continuous deployment and deploy every build without waiting for (manual) confirmation from anyone. And the best part of all that was that everything was fully automated.
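
To make those stages concrete, here is a minimal sketch of such a pipeline expressed as a shell script. The repository URL, image name, and Maven goals are illustrative assumptions rather than the exact project or commands used later in this book.

    #!/usr/bin/env bash
    # A rough sketch of the CI stages described above (names and commands are assumed).
    set -e                                      # abandon the run as soon as any stage fails

    git clone https://github.com/example/service.git   # check out the latest commit
    cd service

    mvn checkstyle:check                        # static analysis
    mvn test                                    # unit and functional (pre-deployment) tests
    mvn package                                 # package the artifact

    docker build -t example/service:latest .    # pack the release as a container image
    docker run -d --name service-test example/service:latest   # deploy it to a test environment
    mvn verify -Pintegration                    # post-deployment (integration) tests

    docker push example/service:latest          # publish the image; with continuous deployment,
                                                # the same image would proceed to production

In practice, a CI tool such as Jenkins would run these stages on every commit and notify the team the moment any of them fails.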

It was a dream come true. Literally! It was a dream. It wasn't something we managed to turn into reality. Why was that? We made mistakes. We thought that CI/CD was a task for the operations department (today we'd call them DevOps). We thought that we could create a process that wraps around applications and services. We thought that CI tools and frameworks were ready. We thought that architecture, testing, business negotiations, and other tasks were someone else's job. We were wrong. I was wrong.

Today I know that successful CI/CD means that no stone can be left unturned. We need to influence everything: from architecture, through testing, development, and operations, all the way to management and business expectations. But let us go back again. What went wrong in those failures of mine?

Architecture

Trying to fit a monolithic application developed by many people throughout the years, without tests, with tight coupling and outdated technology, into a modern, automated deployment pipeline is like an attempt to make an eighty-year-old lady look young again. We can improve her looks, but the best we can do is make her look a bit less old, not young. Some systems are, simply put, too old to be worth the modernization effort. I tried it, many times, and the result was never as expected. Sometimes, the effort of making it young again is not cost effective. On the other hand, I could not go to the client of, let's say, a bank, and say we're going to rewrite your whole system. The risks of rewriting everything are too big and, be that as it may, due to its tight coupling, age, and outdated technology, changing only parts of it is not worth the effort. The commonly taken option was to start building the new system and, in parallel, maintain the old one until everything was done. That was always a disaster. It can take years to finish such a project, and we all know what happens with things planned for such a long term. That's not even the waterfall approach. That's like standing at the bottom of Niagara Falls wondering why you get wet. Even doing trivial things like updating the JDK was quite a feat. And those were the cases when I would consider myself lucky. What would you do with, for example, a codebase done in Fortran or Cobol?

Then I heard about microservices. It was like music to my ears. The idea that we could build many small independent services that can be maintained by small teams, have codebases that can be understood in no time, change the framework, programming language, or database without affecting the rest of the system, and be deployed independently from the rest of the system was too good to be true. We could, finally, start taking parts of the monolithic application out without putting the whole system at (significant) risk. It sounded too good to be true. And it was. Benefits came with downsides. Deploying and maintaining a vast number of services turned out to be a heavy burden. We had to compromise and start standardizing services (killing innovation), we created shared libraries (coupling again), we were deploying them in groups (slowing everything down), and so on. In other words, we had to give up the very benefits microservices were supposed to bring. And let's not even speak of configurations and the mess they created inside servers. Those were the times I try to forget. We had enough problems like that with monoliths. Microservices only multiplied them. It was a failure. However, I was not yet ready to give up. Call me a masochist.

I had to face problems one at a time, and one of the crucial ones was deployments.

Deployments

You know the process. Assemble some artifacts (JAR, WAR, DLL, or whatever the result of your programming language is) and deploy them to a server that is already polluted with... I cannot even finish the sentence because, in many cases, we did not even know what was on the servers. With enough time, any server maintained manually becomes full of things. Libraries, executables, configurations, gremlins, and trolls. It would start to develop its own personality. Old and grumpy, fast but unreliable, demanding, and so on. The only thing all the servers had in common was that they were all different, and no one could be sure that software tested in, let's say, a pre-production environment would behave the same when deployed to production. It was a lottery. You might get lucky, but most likely you won't. Hope dies last.

You might, rightfully, wonder why we didn't use virtual machines in those days. Well, there are two answers to that question, and they depend on the definition of those days. One answer is that in those days we didn't have virtual machines, or they were so new that management was too scared to approve their usage. The other answer is that later on we did use VMs, and that was a real improvement. We could copy the production environment and use it as, let's say, a testing environment. Except that there was still a lot of work needed to update configurations, networking, and so on.