Containerization with LXC - Konstantin Ivanov


In recent years, containers have gained wide adoption by businesses running a variety of application loads. This became possible largely due to the advent of kernel namespaces and better resource management with control groups (cgroups). Linux containers (LXC) are a direct implementation of those kernel features, providing operating-system-level virtualization without the overhead of a hypervisor layer.
This book starts by introducing the foundational concepts behind the implementation of LXC, then moves into the practical aspects of installing and configuring LXC containers. Moving on, you will explore container networking, security, and backups. You will also learn how to deploy LXC with technologies such as OpenStack and Vagrant. By the end of the book, you will have a solid grasp of how LXC is implemented and how to run production applications in a highly available and scalable way.




Table of Contents

Containerization with LXC
Credits
About the Author
About the Reviewer
www.PacktPub.com
Why subscribe?
Customer Feedback
Dedication
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
1. Introduction to Linux Containers
The OS kernel and its early limitations
The case for Linux containers
Linux namespaces – the foundation of LXC
Mount namespaces
UTS namespaces
IPC namespaces
PID namespaces
User namespaces
Network namespaces
Resource management with cgroups
Limiting I/O throughput
Limiting memory usage
The cpu and cpuset subsystems
The cgroup freezer subsystem
Using userspace tools to manage cgroups and persist changes
Managing resources with systemd
Summary
2. Installing and Running LXC on Linux Systems
Installing LXC
Installing LXC on Ubuntu with apt
Installing LXC on Ubuntu from source
Installing LXC on CentOS with yum
Installing LXC on CentOS from source
LXC directory installation layout
Building and manipulating LXC containers
Building our first container
Making custom containers with debootstrap on Ubuntu
Making custom containers with yum on CentOS
Summary
3. Command-Line Operations Using Native and Libvirt Tools
Using the LVM backing store
Creating LXC containers using the LVM backing store
Creating container snapshots on the LVM backing store
Creating block devices using truncate, dd, and losetup
Using the Btrfs backing store
Creating LXC containers using the Btrfs backing store
Creating container snapshots on the Btrfs backing store
Using the ZFS backing store
Creating LXC containers using the ZFS backing store
Creating container snapshots on the ZFS backing store
Autostarting LXC containers
LXC container hooks
Attaching directories from the host OS and exploring the running filesystem of a container
Freezing a running container
Limiting container resource usage
Building and running LXC containers with libvirt
Installing libvirt from packages on Debian and CentOS
Installing libvirt from source
Defining LXC containers with libvirt
Starting and connecting to LXC containers with libvirt
Attaching block devices to running containers with libvirt
Networking with libvirt LXC
Generating config from an existing LXC container with libvirt
Stopping and removing LXC containers with libvirt
Summary
4. LXC Code Integration with Python
LXC Python bindings
Installing the LXC Python bindings and preparing the development environment on Ubuntu and CentOS
Building our first container with Python
Gathering container information with Python
Starting containers, applying changes, and listing configuration options with Python
Changing container state with Python
Stopping containers with Python
Cloning containers with Python
Destroying containers with Python and cleaning up the virtual environment
Libvirt Python bindings
Installing the libvirt Python development packages
Building LXC containers with libvirt Python
Starting containers and running basic operations with libvirt Python
Collecting container information with libvirt Python
Stopping and deleting LXC containers with libvirt Python
Vagrant and LXC
Configuring Vagrant LXC
Putting it all together – building a simple RESTful API to LXC with Python
API calls to build and configure LXC containers
Cleaning up using the API calls
Summary
5. Networking in LXC with the Linux Bridge and Open vSwitch
Software bridging in Linux
The Linux bridge
The Linux bridge and the LXC package on Ubuntu
The Linux bridge and the LXC package on CentOS
Using dnsmasq service to obtain an IP address in the container
Statically assigning IP addresses in the LXC container
Overview of LXC network configuration options
Manually manipulating the Linux bridge
Open vSwitch
Connecting LXC to the host network
Configuring LXC using none network mode
Configuring LXC using empty network mode
Configuring LXC using veth mode
Configuring LXC using phys mode
Configuring LXC using vlan mode
Configuring LXC using macvlan mode
Summary
6. Clustering and Horizontal Scaling with LXC
Scaling applications with LXC
Scaling Apache in minimal root filesystem with libvirt LXC
Creating the minimal root filesystem for the containers
Defining the Apache libvirt container
Starting the Apache libvirt container
Scaling Apache with libvirt LXC and HAProxy
Scaling Apache with a full LXC root filesystem and OVS GRE tunnels
Configuring the load-balancer host
Creating the load-balancer container
Building the GRE tunnels
Configuring the Apache nodes
Installing Apache and HAProxy, and testing connectivity
Scaling the Apache service
Summary
7. Monitoring and Backups in a Containerized World
Backing up and migrating LXC
Creating LXC backup using tar and rsync
Restoring from the archived backup
Creating container backup using lxc-copy
Migrating LXC containers on an iSCSI target
Setting up the iSCSI target
Setting up the iSCSI initiator
Logging in to the iSCSI target using the presented block device as rootfs for LXC
Building the iSCSI container
Restoring the iSCSI container
LXC active backup with replicated GlusterFS storage
Creating the shared storage
Building the GlusterFS LXC container
Restoring the GlusterFS container
Monitoring and alerting on LXC metrics
Gathering container metrics
Using lxc-monitor to track container state
Using lxc-top to obtain CPU and memory utilization
Using lxc-info to gather container information
Leveraging cgroups to collect memory metrics
Using cgroups to collect CPU statistics
Collecting network metrics
Simple container monitoring and alerting with Monit
Container monitoring and alert triggers with Sensu
Monitoring LXC containers with Sensu agent and server
Monitoring LXC containers using standalone Sensu checks
Simple autoscaling pattern with LXC, Jenkins, and Sensu
Summary
8. Using LXC with OpenStack
Deploying OpenStack with LXC support on Ubuntu
Preparing the host
Installing the database service
Installing the message queue service
Installing the caching service
Installing and configuring the identity service
Installing and configuring the image service
Installing and configuring the compute service
Installing and configuring the networking service
Defining the LXC instance flavor, generating a key pair, and creating security groups
Creating the networks
Provisioning LXC container with OpenStack
Summary
A. LXC Alternatives to Docker and OpenVZ
Building containers with OpenVZ
Building containers with Docker
Running unprivileged LXC containers
Summary

Containerization with LXC

Containerization with LXC

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: February 2017

Production reference: 1220217

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham 

B3 2PB, UK.

ISBN 978-1-78588-894-6

www.packtpub.com

Credits

Author

Konstantin Ivanov

Copy Editor

Tom Jacob

Reviewer

Jay Payne

Project Coordinator

Kinjal Bari

Commissioning Editor

Kartikey Pandey

Proofreader

Safis Editing

Acquisition Editor

Mansi Sanghavi

Indexer

Mariammal Chettiyar

Content Development Editor

Radhika Atitkar

Graphics

Kirk D'Penha

Technical Editors

Devesh Chugh

Bhagyashree Rai

Production Coordinator

Aparna Bhagat

About the Author

Konstantin Ivanov is a Linux systems engineer, an open source developer, and a technology blogger who has been designing, configuring, deploying, and administering large-scale, highly available Linux environments for more than 15 years.

His interests include large distributed systems and task automation, along with solving technical challenges involving multiple technology stacks.

Konstantin received two MS degrees in Computer Science from universities in Bulgaria and the United States, specializing in system and network security and software engineering.

In his spare time, he loves writing technology blogs and spending time with his two boys. He can be reached on LinkedIn at https://www.linkedin.com/in/konstantinivanov or on his blog at http://www.linux-admins.net/.

About the Reviewer

Jay Payne has been a Database Administrator 5 at Rackspace for over 10 years, working on the design, development, implementation, and operation of storage systems.

Previously, Jay worked on billing and support systems for hosting companies. For the last 20 years, he has primarily focused on the data life cycle, from database architecture and administration to operations, reporting, disaster recovery, and compliance. He has domain experience in the hosting, finance, billing, and customer support industries.

www.PacktPub.com

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Customer Feedback

Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial process. To help us improve, please leave us an honest review on this book's Amazon page at https://www.amazon.com/dp/1785888943.

If you'd like to join our team of regular reviewers, you can e-mail us at [email protected]. We award our regular reviewers with free eBooks and videos in exchange for their valuable feedback. Help us be relentless in improving our products!

Dedication

I dedicate this book to my uncle Radoslav, who took me to his work at a computer manufacturing facility when I was 8 years old and sparked a lifelong passion for technology and science, and to my parents, Anton and Darinka, who sold the family car to buy me my first computer.

Preface

Not too long ago, we used to deploy applications on a single server, scaling up by adding more hardware resources—we called it "the monolith approach." Achieving high availability was a matter of adding more single-purpose servers/monoliths behind load balancers, more often than not ending up with a cluster of underutilized systems. Writing and deploying applications also followed this monolithic approach—the software was usually a large binary that provided most, if not all, of the functionality. We either had to compile it from source and use some kind of installer, or package it and ship it to a repository.

With the advent of virtual machines and containers, we got away from the server monolith, fully utilizing the available compute resources by running our applications in isolated, resource-confined instances. Scaling applications up or down became a matter of adding more virtual machines or containers across a fleet of servers, then figuring out a way to deploy them automatically. We also broke down the single binary application into microservices that communicate with each other through a message bus/queue, taking full advantage of the low overhead that containers provide. Deploying the full application stack is now just a matter of bundling the services into their own containers, creating a single, fully isolated, dependency-complete work unit that is ready to deploy. Using continuous integration patterns and tools such as Jenkins allowed us to automate the build and deploy process even further.

This book is about LXC containers and how to run your applications inside them. Unlike other container solutions, such as Docker, LXC is designed to run an entire Linux system, not just a single process, though the latter is also possible. Even though an LXC container can hold an entire Linux filesystem, the underlying host kernel is shared, with no hypervisor layer needed.

This book takes a direct and practical approach to LXC. You will learn how to install, configure, and operate LXC containers, along with multiple examples explaining how to run highly scalable and highly available applications inside LXC. You will use monitoring and deployment applications and other third-party tools. You will also learn how to write your own tools that extend the functionality provided by LXC and its various libraries. Finally, you will see a complete OpenStack deployment that adds the intelligence needed to manage a fleet of compute resources and easily deploy your applications inside LXC containers.

What this book covers

Chapter 1, Introduction to Linux Containers, provides an in-depth exploration of the history of containers in the Linux kernel, along with some fundamental terminology. After going through the basics, you will have a detailed view of how kernel namespaces and control groups (cgroups) are implemented and will be able to experiment with some C system calls.

Chapter 2, Installing and Running LXC on Linux Systems, covers everything that is needed to install, configure, and run LXC on Ubuntu and Red Hat systems. You will learn what packages and tools are required along with different ways of configuring LXC. By the end of this chapter, you will have a Linux system with running LXC containers.

Chapter 3, Command-Line Operations Using Native and Libvirt Tools, is all about running and operating LXC on the command line. The chapter will cover various tools from a list of packages and demonstrate different ways of interacting with your containerized application. The focus will be on the functionality that libvirt and the native LXC libraries provide in controlling the full life cycle of an LXC container.

Chapter 4, LXC Code Integration with Python, will show examples of how to write tools and automate LXC provisioning and management using Python libraries. You will also learn how to create a development environment using Vagrant and LXC.

Chapter 5, Networking in LXC with the Linux Bridge and Open vSwitch, will be a deep dive into networking in the containerized world—connecting LXC to the Linux bridge, using direct connect, NAT, and various other methods. It will also demonstrate more advanced techniques of traffic management using Open vSwitch.

Chapter 6, Clustering and Horizontal Scaling with LXC, builds upon the knowledge presented in earlier chapters to create a cluster of Apache containers and demonstrate how to connect them using GRE tunnels with Open vSwitch. The chapter also presents examples of running single-process applications inside minimal root filesystem containers.

Chapter 7, Monitoring and Backups in a Containerized World, is about backing up your LXC application containers and deploying monitoring solutions to alert and trigger actions. We are going to see examples of using Sensu and Monit for monitoring, and iSCSI and GlusterFS for creating hot and cold backups.

Chapter 8, Using LXC with OpenStack, demonstrates how to provision LXC containers with OpenStack. It begins by introducing the various components that make up OpenStack and then shows how to use the LXC nova driver to automatically provision LXC containers among a pool of compute resources.

Appendix, LXC Alternatives to Docker and OpenVZ, ends the book by demonstrating how other popular container solutions, such as Docker and OpenVZ, came to be and the similarities and differences between them. It also explores practical examples of installing, configuring, and running them alongside LXC.

What you need for this book

A beginner-level knowledge of Linux and the command line should be enough to follow along and run the examples. Some Python and C knowledge is required to fully understand and experiment with the code snippets, though the book is not about software development and you can skip Chapter 4, LXC Code Integration with Python, altogether if you are not interested.

In terms of hardware and software requirements, most examples in the book have been tested in virtual machines utilizing various cloud providers such as Amazon AWS and Rackspace Cloud. We recommend using the latest version of Ubuntu, given Canonical's involvement with the LXC project, though we provide examples with CentOS whenever the installation/operation methods diverge.

Who this book is for

This book is for anyone who is curious about Linux containers, from Linux administrators who are looking for in-depth understanding of how LXC works, to software developers who need a quick and easy way to prototype code in an isolated environment without the overhead of a full hypervisor. A DevOps engineer is most likely the best job title for those who want to read the book from cover to cover.

Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Manually building the root filesystem and configuration files using tools such as debootstrap and yum."

A block of code is set as follows:

#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sched.h>

static int childFunc(void *arg)
{
    printf("UID inside the namespace is %ld\n", (long) geteuid());
    printf("GID inside the namespace is %ld\n", (long) getegid());
    return 0;
}

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sched.h>

static int childFunc(void *arg)
{
    printf("UID inside the namespace is %ld\n", (long) geteuid());
    printf("GID inside the namespace is %ld\n", (long) getegid());
    return 0;
}

Any command-line input or output is written as follows:

root@ubuntu:~# lsb_release -dc
Description:    Ubuntu 14.04.5 LTS
Codename:       trusty
root@ubuntu:~#

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Navigate to Networking support | Networking options | 802.1d Ethernet Bridging and select either Y to compile the bridging functionality in the kernel, or M to compile it as a module."

Note

Warnings or important notes appear in a box like this.

Tip

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book-what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the book's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box.
5. Select the book for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this book from.
7. Click on Code Download.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Containerization-with-LXC. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Downloading the color images of this book

We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/ContainerizationwithLXC_ColorImages.pdf.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at [email protected] with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.

Questions

If you have a problem with any aspect of this book, you can contact us at [email protected], and we will do our best to address the problem.

Chapter 1. Introduction to Linux Containers

Nowadays, deploying applications inside some sort of a Linux container is a widely adopted practice, primarily due to the evolution of the tooling and the ease of use it presents. Even though Linux containers, or operating-system-level virtualization, in one form or another, have been around for more than a decade, it took some time for the technology to mature and enter mainstream operation. One of the reasons for this is that hypervisor-based technologies such as KVM and Xen were able to solve most of the limitations of the Linux kernel during that period, and their overhead was not considered an issue. However, with the advent of kernel namespaces and control groups (cgroups), the notion of lightweight virtualization became possible through the use of containers.

In this chapter, I'll cover the following topics:

Evolution of the OS kernel and its early limitations
Differences between containers and platform virtualization
Concepts and terminology related to namespaces and cgroups
An example use of process resource isolation and management with network namespaces and cgroups

The OS kernel and its early limitations

The current state of Linux containers is a direct result of the problems that early OS designers were trying to solve – managing memory, I/O, and process scheduling in the most efficient way.

In the past, only a single process could be scheduled for work, wasting precious CPU cycles if it blocked on an I/O operation. The solution to this problem was to develop better CPU schedulers, so that more work could be allocated in a fair way for maximum CPU utilization. Even though modern schedulers, such as the Completely Fair Scheduler (CFS) in Linux, do a great job of allocating fair amounts of time to each process, there is still a strong case for being able to give higher or lower priority to a process and its subprocesses. Traditionally, this can be accomplished with the nice() system call or real-time scheduling policies; however, there are limits to the level of granularity and control that can be achieved.
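To make that traditional mechanism concrete, the following minimal C sketch (written for this explanation, not taken from the book's code bundle) lowers the calling process' priority with nice() and reads the result back with getpriority(). The setting applies to this single process and to children that inherit it, but there is no aggregate control over a whole group of processes:

#define _GNU_SOURCE
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    /* Raise our nice value by 10, that is, lower our scheduling priority. */
    errno = 0;
    int new_nice = nice(10);
    if (new_nice == -1 && errno != 0) {
        perror("nice");
        return 1;
    }
    printf("New nice value: %d\n", new_nice);

    /* getpriority() reports the same value for the current process. */
    printf("getpriority() reports: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}

Running this as a regular user should print 10 for both calls; contrast this per-process knob with the per-group CPU controls that cgroups provide, covered later in this chapter.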

Similarly, before the advent of virtual memory, multiple processes would allocate memory from a shared pool of physical memory. Virtual memory provided some form of per-process memory isolation, in the sense that processes got their own address space, and extended the available memory by means of swap, but there still wasn't a good way of limiting how much memory each process and its children could use.
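The classic tool for capping memory is setrlimit(); the hedged sketch below (illustrative only, not from the book's examples) caps the address space of the current process and shows a larger allocation failing. The limit is inherited by child processes, but it applies to each process individually rather than to the tree as a whole, which is exactly the gap cgroups later filled:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the address space of this process (and of children that
       inherit the limit) at 256 MB. */
    struct rlimit lim = {
        .rlim_cur = 256 * 1024 * 1024,
        .rlim_max = 256 * 1024 * 1024
    };

    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* A 512 MB allocation now exceeds the limit and should fail. */
    void *p = malloc(512 * 1024 * 1024);
    printf("512 MB allocation %s\n", p ? "succeeded" : "failed, as expected");
    free(p);
    return 0;
}

Note that every child could still allocate up to 256 MB on its own; the limit is per process, not an aggregate over the whole process tree.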

To further complicate matters, running different workloads on the same physical server usually had a negative impact on all running services. A memory leak or a kernel panic in one application could bring the entire operating system down. For example, a web server that is mostly memory bound and a database service that is I/O heavy became problematic when run together. In an effort to avoid such scenarios, system administrators would separate the various applications across a pool of servers, leaving some machines underutilized, especially at certain times during the day when there was not much work to be done. This is a problem similar to a single running process blocked on an I/O operation wasting CPU and memory resources.

The solution to these problems is the use of hypervisor-based virtualization, containers, or a combination of both.

The case for Linux containers

The hypervisor, the part of the operating system responsible for managing the life cycle of virtual machines, has been around since the early days of mainframe machines in the late 1960s. Most modern virtualization implementations, such as Xen and KVM, can trace their origins back to that era. The main reason for the wide adoption of these virtualization technologies around 2005 was the need to better control and utilize the ever-growing clusters of compute resources. The inherent security of having an extra layer between the virtual machine and the host OS was a good selling point for the security-minded, though, as with any other newly adopted technology, there were security incidents.

Nevertheless, the adoption of full virtualization and paravirtualization significantly improved the way servers are utilized and applications provisioned. In fact, virtualization technologies such as KVM and Xen are still widely used today, especially in multitenant clouds and cloud platforms such as OpenStack.

Hypervisors provide the following benefits, in the context of the problems outlined earlier:

Ability to run different operating systems on the same physical server
More granular control over resource allocation
Process isolation – a kernel panic on the virtual machine will not affect the host OS
Separate network stack and the ability to control traffic per virtual machine
Reduced capital and operating costs through simplified data center management and better utilization of the available server resources

Arguably, the main argument against using any sort of virtualization technology today is the inherent overhead of running multiple kernels on the same host. It would be much better, in terms of complexity, if the host OS could provide this level of isolation without the need for hardware extensions in the CPU, the use of emulation software such as QEMU, or even kernel modules such as KVM. Running an entire operating system in a virtual machine just to achieve a level of confinement for a single web server is not the most efficient allocation of resources.

Over the last decade, various improvements were made to the Linux kernel to allow for similar functionality with less overhead – most notably kernel namespaces and cgroups. One of the first notable technologies to leverage those changes was LXC, available since kernel 2.6.24, around the 2008 time frame. Even though LXC is not the oldest container technology, it helped fuel the container revolution we see today.
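To give a feel for what those kernel changes expose, here is a small, self-contained C sketch (an illustration under stated assumptions, not code from the book) that uses clone() with the CLONE_NEWUTS flag to give a child process its own hostname, independent of the host. It typically needs root privileges (or CAP_SYS_ADMIN) to run:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)
static char child_stack[STACK_SIZE];

static int child_func(void *arg)
{
    /* This only affects the new UTS namespace, not the host. */
    sethostname("container", strlen("container"));

    char name[64];
    gethostname(name, sizeof(name));
    printf("Hostname inside the namespace: %s\n", name);
    return 0;
}

int main(void)
{
    /* Start the child in its own UTS namespace; the stack grows down,
       so we pass the top of the buffer. */
    pid_t pid = clone(child_func, child_stack + STACK_SIZE,
                      CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);

    char name[64];
    gethostname(name, sizeof(name));
    printf("Hostname on the host:          %s\n", name);
    return 0;
}

The UTS namespace is only one of the namespaces LXC combines; the mount, PID, IPC, user, and network namespaces covered in the following sections follow the same pattern of giving a process its own private copy of a global resource.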

The main benefits of using LXC include:

Lower overhead and complexity than running a hypervisor
Smaller footprint per container
Start times in the millisecond range
Native kernel support

It is worth mentioning that containers are not inherently as secure as virtual machines, which have a hypervisor between them and the host OS. However, in recent years great progress has been made to narrow that gap using Mandatory Access Control (MAC) technologies such as SELinux and AppArmor, kernel capabilities, and cgroups, as demonstrated in later chapters.