Description

Over the last few years, Docker has been the gold standard for building and distributing container applications. Amazon Web Services (AWS) is a leader in public cloud computing, and was the first to offer a managed container platform in the form of the Elastic Container Service (ECS).
Docker on Amazon Web Services starts with the basics of containers, Docker, and AWS, before teaching you how to install Docker on your local machine and establish access to your AWS account. You'll then dig deeper into ECS, a native container management platform provided by AWS that simplifies management and operation of your Docker clusters and applications at no additional cost. Once you have got to grips with the basics, you'll solve key operational challenges, including secrets management and auto-scaling your infrastructure and applications. You'll explore alternative strategies for deploying and running your Docker applications on AWS, including Fargate and ECS Service Discovery, Elastic Beanstalk, Docker Swarm, and Elastic Kubernetes Service (EKS). In addition to this, there will be a strong focus on adopting an Infrastructure as Code (IaC) approach using AWS CloudFormation.
By the end of this book, you'll not only understand how to run Docker on AWS, but also be able to build real-world, secure, and scalable container platforms in the cloud.




Docker on Amazon Web Services

Build, deploy, and manage your container applications at scale

Justin Menga

BIRMINGHAM - MUMBAI

Docker on Amazon Web Services

Copyright © 2018 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Commissioning Editor: Gebin George
Acquisition Editor: Rohit Rajkumar
Content Development Editor: Nithin George Varghese
Technical Editor: Mohit Hassija
Copy Editor: Safis Editing
Project Coordinator: Drashti Panchal
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Graphics: Tom Scaria
Production Coordinator: Shantanu Zagade

First published: August 2018

Production reference: 1280818

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.

ISBN 978-1-78862-650-7

www.packtpub.com

For Simba and Chandy

mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

PacktPub.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Justin Menga is a full-stack technologist with over 20 years' experience working with organizations to build large-scale applications and platforms, with a focus on end-to-end application architecture, the cloud, continuous delivery, and infrastructure automation. Justin started his career as an infrastructure and network engineer/architect, working with many large enterprise and service provider customers. In the past few years, Justin has switched his focus to building applications and full-service platforms, working with a wide array of technologies, yet still maintaining and applying his prior infrastructure and network expertise to containers and public clouds. He has programmed in Objective-C, C#, ASP.NET, JavaScript, Scala, Python, Java, and Go, and has a keen interest in continuous delivery, Docker, and automation tools that speed the path from development to production.

I would like to thank my family: Tania, Chloe, Jayden, Fluffy, Minky, Simba (RIP), and Chandy (RIP) - who all have persevered through the countless hours and sleepless nights of burning the midnight oil to accumulate the knowledge and experience required to complete such a book.

About the reviewer

Rickard von Essen works as a continuous delivery and cloud consultant at Diabol. He helps companies deliver faster, improve continuously, and worry less. In his spare time, he helps maintain Packer and contributes to numerous other FOSS projects. He has been tinkering with Linux and BSD since the late 1990s, and has been hacking since the Amiga era. He lives with his wife and two children in Stockholm, Sweden, and he has a Master of Computer Science and Engineering from Linköping University.

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Docker on Amazon Web Services

Dedication

Packt Upsell

Why subscribe?

PacktPub.com

Contributors

About the author

About the reviewer

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Code in Action

Conventions used

Get in touch

Reviews

Container and Docker Fundamentals

Technical requirements

Introduction to containers and Docker

Why containers are revolutionary

Docker architecture

Running Docker in AWS

Setting up a local Docker environment

Setting up a macOS environment

Installing other tools

Setting up a Windows 10 environment

Installing the Windows subsystem for Linux

Installing Docker in the Windows subsystem for Linux

Installing other tools in the Windows subsystem for Linux

Setting up a Linux environment

Installing the sample application

Forking the sample application

Running the sample application locally

Installing application dependencies

Running database migrations

Running the local development web server

Testing the sample application locally

Summary

Questions

Further reading

Building Applications Using Docker

Technical requirements

Testing and building the application using Docker

Creating a test stage

Installing system and build dependencies

Installing application dependencies

Copying application source and running tests

Configuring the release stage

Installing system dependencies

Creating an application user

Copying and installing application source code and dependencies

Building and running the release image

Testing and building the application using Docker Compose

Adding a database service using Docker Compose

Running database migrations

Generating static web content

Creating acceptance tests

Automating the workflow

Automating the test stage

Automating the release stage

Refining the workflow

Cleaning up the Docker environment

Using dynamic port mapping

Adding a version target

Testing the end-to-end workflow

Summary

Questions

Further reading

Getting Started with AWS

Technical requirements

Setting up an AWS account

Installing Google Authenticator

Logging in as the root account

Creating IAM users, groups, and roles

Creating IAM roles

Creating an Administrators group

Creating a Users group

Creating an IAM user

Logging in as an IAM user

Enabling MFA for an IAM user

Assuming an IAM role

Creating an EC2 Key Pair

Using the AWS CLI

Installing the AWS CLI

Creating an AWS access key

Configuring the AWS CLI

Configuring the AWS CLI to assume a role

Configuring the AWS CLI to use a named profile

Introduction to AWS CloudFormation

Defining a CloudFormation template

Deploying a CloudFormation stack

Updating a CloudFormation Stack

Deleting a CloudFormation stack

Summary

Questions

Further reading

Introduction to ECS

Technical requirements

ECS architecture

Creating an ECS cluster

Understanding ECS container instances

Joining an ECS cluster

Granting access to join an ECS cluster

Managing ECS container instances

Connecting to ECS container instances

Inspecting the local Docker environment

Inspecting the ECS agent

Verifying the ECS agent

ECS container instance logs

Creating an ECS task definition

Creating an ECS service

Deploying ECS services

Running ECS tasks

Using the ECS CLI

Deleting the Test Cluster

Summary

Questions

Further information

Publishing Docker Images Using ECR

Technical requirements

Understanding ECR

Creating ECR repositories

Creating ECR repositories using the AWS Console

Creating ECR repositories using the AWS CLI

Creating ECR repositories using AWS CloudFormation

Logging into ECR

Publishing Docker images to ECR

Publishing Docker images using the Docker CLI

Publishing Docker images using Docker Compose

Automating the publish workflow

Automating login and logout

Automating the publishing of Docker images

Pulling Docker images from ECR

ECS container instance access to ECR from the same account

ECS container instance access to ECR from a different account

Configuring ECR resource policies using the AWS Console

Configuring ECR resource policies using the AWS CLI

Configuring ECR resource policies using AWS CloudFormation

Configuring IAM policies in remote accounts

AWS service access to ECR

Configuring lifecycle policies

Configuring lifecycle policies using the AWS Console

Configuring lifecycle policies using the AWS CLI

Configuring lifecycle policies using AWS CloudFormation

Summary

Questions

Further reading

Building Custom ECS Container Instances

Technical requirements

Designing a custom Amazon Machine Image

Building a custom AMI using Packer

Installing Packer

Creating a Packer template

Packer template structure

Configuring a builder

Configuring variables

Configuring provisioners

Configuring post-processors

Building a machine image

Generating dynamic session credentials

Automating generation of dynamic session credentials

Building the image

Building custom ECS container instance images using Packer

Defining a custom storage configuration

Adding EBS volumes

Formatting and mounting volumes

Installing additional packages and configuring system settings

Installing additional packages

Configuring system settings

Configuring timezone settings

Modifying default cloud-init behavior

Configuring a cleanup script

Creating a first-run script

Configuring ECS cluster membership

Configuring HTTP proxy support

Configuring the CloudWatch logs agent

Starting required services

Performing required health checks

Testing your custom ECS container instance image

Summary

Questions

Further reading

Creating ECS Clusters

Technical requirements

Deployment overview

Defining an ECS cluster

Configuring an EC2 Auto Scaling group

Creating an EC2 Auto Scaling group

Configuring CloudFormation Input Parameters

Defining an EC2 Auto Scaling launch configuration

Configuring CloudFormation Init Metadata

Configuring Auto Scaling group creation policies

Configuring EC2 instance profiles

Configuring EC2 security groups

Deploying and testing an ECS cluster

Summary

Questions

Further reading

Deploying Applications Using ECS

Technical requirements

Creating an application database using RDS

Configuring supporting RDS resources

Deploying RDS resources using CloudFormation

Configuring Application Load Balancers

Application Load Balancer architecture

Configuring an Application Load Balancer

Creating an Application Load Balancer

Configuring Application Load Balancer security groups

Creating a listener

Creating a target group

Deploying an Application Load Balancer using CloudFormation

Creating ECS task definitions

Configuring ECS task definition families

Configuring ECS task definition volumes

Configuring ECS task definition containers

Deploying ECS task definitions using CloudFormation

Deploying ECS services

Deploying an ECS service using CloudFormation

ECS rolling deployments

Executing a rolling deployment

Creating a CloudFormation custom resource

Understanding CloudFormation custom resources

Creating a custom resource Lambda function

Understanding the custom resource function code

Understanding the custom resource Lambda function resources

Creating custom resources

Deploying custom resources

Verifying the application

Summary

Questions

Further reading

Managing Secrets

Technical requirements

Creating KMS keys

Encrypting and decrypting data using KMS

Creating secrets using the AWS Secrets Manager

Creating secrets using the AWS console

Creating secrets using the AWS CLI

Retrieving secrets using the AWS CLI

Updating secrets using the AWS CLI

Deleting and restoring secrets using the AWS CLI

Injecting secrets at container startup

Creating an entrypoint script

Adding an entrypoint script to a Dockerfile

Provisioning secrets using CloudFormation

Configuring ECS task definitions to use secrets

Exposing secrets to other resources

Creating a Secrets Manager Lambda function

Creating a secrets custom resource

Deploying secrets to AWS

Summary

Questions

Further reading

Isolating Network Access

Technical requirements

Understanding ECS task networking

Docker bridge networking

ECS task networking

Configuring a NAT gateway

Configuring private subnets and route tables

Configuring NAT gateways

Configuring routing for your private subnets

Configuring ECS task networking

Configuring ECS task definitions for task networking

Configuring ECS services for task networking

Configuring supporting resources for task networking

Deploying and testing ECS task networking

Summary

Questions

Further reading

Managing ECS Infrastructure Life Cycle

Technical requirements

Understanding ECS life cycle management

EC2 Auto Scaling life cycle hooks

ECS container instance draining

ECS life cycle management solution

Building a new ECS container instance AMI

Configuring EC2 Auto Scaling rolling updates

Creating EC2 Auto Scaling life cycle hooks

Creating a Lambda function for consuming life cycle hooks

Configuring permissions for the life cycle hook Lambda function

Deploying and testing Auto Scaling life cycle hooks

Summary

Questions

Further reading

ECS Auto Scaling

Technical requirements

Understanding ECS cluster resources

CPU resources

Memory resources

Network resources

Calculating the ECS cluster capacity

Calculating the container capacity

Deciding when to scale out

Calculating the idle host capacity

Idle host capacity example

Implementing an ECS Auto Scaling solution

Configuring CloudWatch events for ECS

Programming the Lambda function that calculates the cluster capacity

Adding IAM permissions for calculating the cluster capacity

Testing cluster-capacity calculations

Publishing custom CloudWatch metrics

Creating CloudWatch alarms for cluster-capacity management

Creating EC2 Auto Scaling policies

Testing ECS cluster-capacity management

Testing scale out

Testing scale in

Configuring the AWS application Auto Scaling service

Configuring CloudWatch alarms

Defining an Auto Scaling target

Creating an Auto Scaling IAM role

Configuring scale-out and scale-in policies

Deploying application Auto Scaling

Summary

Questions

Further reading

Continuously Delivering ECS Applications

Technical requirements

Introducing CodePipeline and CodeBuild

Creating a custom CodeBuild container

Defining a custom CodeBuild container

Creating a repository for the custom CodeBuild container

Adding CodeBuild support to your application repository

Creating a continuous integration pipeline using CodePipeline

Creating a CodePipeline pipeline using the AWS console

Creating a continuous delivery pipeline using CodePipeline

Publishing version information in your source repository

Adding CodePipeline support to the deployment repository

Creating an IAM role for CloudFormation deployments

Adding a deployment repository to CodePipeline

Adding an output artifact to the build stage

Adding a deployment stage to the pipeline

Continuously delivering to production using CodePipeline

Adding a new environment configuration file to your deployment repository

Adding a create change set action to the pipeline

Adding a manual approval action to the pipeline

Adding a deploy change set action to the pipeline

Deploying to production

Summary

Questions

Further reading

Fargate and ECS Service Discovery

Technical requirements

When to use Fargate?

Adding support for AWS X-Ray to applications

Creating an X-Ray daemon Docker image

Configuring ECS service discovery resources

Configuring a service discovery namespace

Configuring a service discovery service

Configuring an ECS task definition for Fargate

Configuring IAM roles for Fargate

Configuring an ECS service for Fargate

Deploying and testing the X-Ray daemon

Configuring the todobackend stack for X-Ray support

Testing the X-Ray service

Summary

Questions

Further reading

Elastic Beanstalk

Technical requirements

Introduction to Elastic Beanstalk

Elastic Beanstalk concepts

Creating an Elastic Beanstalk application

Creating a Dockerrun.aws.json file

Creating an Elastic Beanstalk application using the AWS console

Configuring the EC2 instance profile

Configuring Elastic Beanstalk applications using the CLI

Managing Elastic Beanstalk EC2 instances

Customizing Elastic Beanstalk applications

Resolving Docker volume permissions issues

Configuring database settings

Running database migrations

Summary

Questions

Further reading

Docker Swarm in AWS

Technical requirements

Docker Swarm introduction

Docker Swarm versus Kubernetes

Installing Docker for AWS

Resources created by the Docker for AWS CloudFormation stack

Accessing the Swarm cluster

Setting up local access to Docker Swarm

Configuring SSH agent forwarding

Configuring SSH tunneling

Deploying applications to Docker Swarm

Docker services

Docker stacks

Deploying the sample application to Docker Swarm

Integrating Docker Swarm with the Elastic Container Registry

Defining a stack

Creating shared storage for hosting static content

Creating a collectstatic service

Creating persistent storage for storing the application database

Relocating an EBS volume

Secrets management using Docker secrets

Configuring applications to consume secrets

Running database migrations

Summary

Questions

Further reading

Elastic Kubernetes Service

Technical requirements

Introduction to Kubernetes

Kubernetes versus Docker Swarm

Kubernetes architecture

Getting started with Kubernetes

Creating a pod

Creating a deployment

Creating a service

Exposing a service

Adding volumes to your pods

Adding init containers to your pods

Adding a database service

Creating persistent storage

Creating a database service

Creating and consuming secrets

Consuming secrets for the database service

Consuming secrets for the application

Running jobs

Creating an EKS cluster

Installing client components

Creating cluster resources

Configuring kubectl for EKS

Creating worker nodes

Joining worker nodes to your EKS cluster

Deploying the Kubernetes dashboard

Deploying the sample application to EKS

Configuring support for persistent volumes using AWS EBS

Configuring support for AWS Elastic Load Balancers

Deploying the sample application

Creating secrets

Deploying the database service

Deploying the application service

Tearing down the sample application

Summary

Questions

Further reading

Assessments

Chapter 1, Container and Docker Fundamentals

Chapter 2, Building Applications Using Docker

Chapter 3, Getting Started with AWS

Chapter 4, Introduction to ECS

Chapter 5, Publishing Docker Images Using ECR

Chapter 6, Building Custom ECS Container Instances

Chapter 7, Creating ECS Clusters

Chapter 8, Deploying Applications Using ECS

Chapter 9, Managing Secrets

Chapter 10, Isolating Network Access

Chapter 11, Managing the ECS Infrastructure Life Cycle

Chapter 12, ECS Auto Scaling

Chapter 13, Continuously Delivering ECS Applications

Chapter 14, Fargate and ECS Service Discovery

Chapter 15, Elastic Beanstalk

Chapter 16, Docker Swarm in AWS

Chapter 17, Elastic Kubernetes Service

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Welcome to Docker on Amazon Web Services!  I'm very excited to have written this book and to share how to leverage the wonderful technologies that the Docker and Amazon Web Services (AWS) ecosystems provide to build truly world-class solutions for deploying and operating your applications in production.

Docker has become the modern standard for building, packaging, publishing, and operating applications, leveraging the power of containers to increase the speed of application delivery, increase security, and reduce costs.  This book will show you how to supercharge your process of building Docker applications, using the best practices of continuous delivery to provide a fully automated, consistent, reliable, and portable workflow for testing, building, and publishing your Docker applications. In my view, this is a fundamental prerequisite before you even consider deploying your application to the cloud, and the first few chapters will focus on establishing a local Docker environment and creating a local continuous delivery workflow for a sample application that we will be using throughout the book.

AWS is the world's leading public cloud provider, and provides a rich set of solutions for managing and operating your Docker applications. This book will cover all of the major services that AWS provides to support Docker and containers, including the Elastic Container Service (ECS), Fargate, Elastic Beanstalk, and Elastic Kubernetes Service (EKS), and will also discuss how you can leverage the Docker for AWS solution provided by Docker, Inc. to deploy Docker Swarm clusters.

Running a complete application environment in AWS comprises much more than your container platform, and this book will also describe best practices for managing access to your AWS account and leveraging other AWS services to support the requirements of your applications. For example, you will learn how to set up AWS Application Load Balancers to publish highly available, load-balanced endpoints for your application, create AWS Relational Database Service (RDS) instances to provide a managed application database, and integrate your applications with the AWS Secrets Manager to provide a secure secrets management solution. You will also create a complete continuous delivery pipeline using the AWS CodePipeline, CodeBuild, and CloudFormation services that will automatically test, build, and publish Docker images for any new changes to your application, and then automatically deploy them into development and production environments.

You will build all of this supporting infrastructure using the AWS CloudFormation service, which provides powerful infrastructure-as-code templates that allow you to define all of the AWS services and resources I have mentioned in a single manifest that you can deploy to AWS with a single click of a button.
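As a taste of what this looks like, the following is a minimal, illustrative CloudFormation template; the logical resource names and property values shown here are placeholders only, and not the actual templates you will build later in this book:

AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative example - an ECS cluster and an ECR repository defined as code
Resources:
  # Placeholder resources - later chapters define far richer stacks
  ApplicationCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: example-cluster
  ApplicationRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: example/my-application

Deploying this single manifest via the AWS CLI or console creates both resources together, and deleting the stack removes them again.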

I'm sure by now you are just as excited as I am to learn about all of these wonderful technologies, and I'm sure by the end of this book, you will have developed the expert knowledge and skills required to be able to deploy and manage your Docker applications, using the latest cutting-edge techniques and best practices.

Who this book is for

Docker on Amazon Web Services is for anybody who wants to build, deploy, and operate applications using the power of containers, Docker, and AWS.

Readers ideally should have a basic understanding of Docker and containers, and have worked with AWS or another cloud provider, although no previous experience with containers or AWS is required, as this book takes a step-by-step approach and explains key concepts as you progress. An understanding of how to use the Linux command line, Git, and basic Python scripting knowledge will be useful, but is not required.

See the To get the most out of this book section for a complete list of the recommended prerequisite skills.

What this book covers

Chapter 1, Container and Docker Fundamentals, will provide a brief introduction to Docker and containers, and provide an overview of the various services and options available in AWS to run your Docker applications. You will set up your local environment, installing Docker, Docker Compose, and various other tools that are required to complete the examples in each chapter. Finally, you will download the sample application and learn how to test, build, and run the application locally, so that you have a good understanding of how the application works and specific tasks you need to perform to get the application up and running.

Chapter 2, Building Applications Using Docker, will describe how to build a fully automated Docker-based workflow for testing, building, packaging, and publishing your applications as production-ready Docker release images, using Docker, Docker Compose, and other tools. This will establish the foundation of a portable continuous delivery workflow that you can consistently execute across multiple machines without having to install application-specific dependencies in each local environment. 

Chapter 3, Getting Started with AWS, will describe how to create a free AWS account and start using a variety of free-tier services that allow you to get familiar with the wide array of AWS services on offer. You will learn how to establish best practice administrative and user access patterns to your account, configuring multi-factor authentication (MFA) for enhanced security and installing the AWS command-line interface, which can be used for a wide variety of operational and automation use cases. You will also be introduced to CloudFormation, a free management service provided by AWS that you will use throughout this book, which allows you to deploy complex environments with a single click of a button using a powerful and expressive infrastructure-as-code template format.

Chapter 4, Introduction to ECS, will get you up and running with the Elastic Container Service (ECS), which is the flagship service for running your Docker applications in AWS. You will learn about the architecture of ECS, create your first ECS cluster, define your container configurations using ECS task definitions, and then deploy a Docker application as an ECS service.  Finally, you will be briefly introduced to the ECS command-line interface (CLI), which allows you to interact with local Docker Compose files and automatically deploy Docker Compose resources to AWS using ECS.

Chapter 5, Publishing Docker Images Using ECR, will teach you how to establish a private Docker registry using the Elastic Container Registry (ECR), authenticate to your registry using IAM credentials, and then publish Docker images to private repositories within your registry. You will also learn how to share your Docker images with other accounts and AWS services, and how to configure life cycle policies to automatically clean up orphaned images, ensuring you only pay for active and current images.

Chapter 6, Building Custom ECS Container Instances, will show you how to use a popular open source tool called Packer to build and publish custom Amazon Machine Images (AMIs) for the EC2 instances (ECS container instances) that run your container workloads in ECS clusters. You will install a set of helper scripts that enable your instances to integrate with CloudFormation and download custom provisioning actions at instance creation time, allowing you to dynamically configure the ECS cluster your instances will join, configure the CloudWatch logs groups your instances should publish logging information to, and finally, signal back to CloudFormation that provisioning has succeeded or failed. 

Chapter 7, Creating ECS Clusters, will teach you how to build ECS clusters based upon EC2 auto-scaling groups that leverage the features of the custom AMI you created in the previous chapter. You will define your EC2 auto-scaling group, ECS cluster, and other supporting resources using CloudFormation, and configure CloudFormation Init metadata to perform custom runtime configuration and provisioning of the ECS container instances that make up your ECS cluster.

Chapter 8, Deploying Applications Using ECS, will expand the environment created in the previous chapter, adding supporting resources such as Relational Database Service (RDS) instances and AWS Application Load Balancers (ALBs) to your CloudFormation template. You will then define an ECS task definition and ECS service for the sample application, and learn how ECS can perform automated rolling deployments and updates for your applications. To orchestrate required deployment tasks such as running database migrations, you will extend CloudFormation and write your own Lambda function to create an ECS task runner custom resource, providing the powerful capability to run any provisioning action that can be executed as an ECS task.

Chapter 9, Managing Secrets, will introduce the AWS Secrets Manager, which is a fully managed service that stores secret data in an encrypted format that can be easily and securely accessed by authorized parties such as your users, AWS resources, and applications. You will interact with Secrets Manager using the AWS CLI, creating secrets for sensitive credentials such as database passwords, and then learn how to use an entrypoint script for your containers that injects secret values as internal environment variables at container startup before handing off to the main application. Finally, you will create a CloudFormation custom resource that exposes secrets to other AWS services that do not support AWS Secrets Manager, such as providing an administrative password for Relational Database Service (RDS) instances.

Chapter 10, Isolating Network Access, describes how to use the awsvpc networking mode in your ECS task definitions to isolate network access and separate ECS control plane communications from your container and application communications. This will allow you to adopt best practice security patterns such as deploying your containers on private networks, and implement solutions for providing internet access, including the AWS VPC NAT Gateway service.  

Chapter 11, Managing the ECS Infrastructure Life Cycle, will provide you with an understanding of operational challenges when running ECS clusters, which includes taking your ECS container instances out of service, whether it be to scale in your auto-scaling groups or to replace your ECS container instances with a new Amazon machine image. You will learn how to leverage EC2 auto-scaling life cycle hooks to invoke an AWS Lambda function whenever an ECS container instance is about to be terminated, which allows you to perform graceful shutdown operations such as draining active containers to other instances in the cluster, before signaling EC2 auto-scaling to proceed with instance termination.

Chapter 12, ECS Auto Scaling, will describe how ECS clusters manage resources such as CPU, memory, and network ports, and how this affects the capacity of your clusters. If you want to be able to dynamically auto-scale your clusters, you need to dynamically monitor ECS cluster capacity, and scale out or scale in the cluster at capacity thresholds that ensure you will meet the service level expectations of your organization or use case. You will implement a solution that calculates ECS cluster capacity whenever an ECS container instance state change event is generated via the AWS CloudWatch Events service, publishes capacity metrics to CloudWatch, and dynamically scales your cluster up or down using CloudWatch alarms. With a dynamic cluster capacity solution in place, you will then be able to configure the AWS application auto-scaling service to dynamically adjust the number of instances of your services based upon appropriate metrics, such as CPU utilization or active connections, without needing to concern yourself with the effect on underlying cluster capacity.

Chapter 13,  Continuously Delivering ECS Applications, will establish a continuous delivery pipeline using the AWS CodePipeline service that integrates with GitHub to detect changes to your application source code and infrastructure deployment scripts, use the AWS CodeBuild service to run unit tests, build application artifacts and publish a Docker image using the sample application Docker workflow, and continuously deploy your application changes to AWS using the CloudFormation templates you have used so far in this book.

The pipeline will automatically deploy your changes into an AWS development environment where you can test them, and then create a change set and manual approval action for deployment into production, providing you with a rapid and repeatable path to production for all of your application's new features and bug fixes.

Chapter 14, Fargate and ECS Service Discovery, will introduce AWS Fargate, which provides a solution that fully manages both the ECS service control plane and ECS clusters that you traditionally have to manage using the regular ECS service. You will deploy the AWS X-Ray daemon as an ECS service using Fargate, and configure ECS service discovery to dynamically publish your service endpoints using DNS and Route 53. This will allow you to add support for X-Ray tracing to your sample application, which can be used to trace incoming HTTP requests to your application and monitor AWS service calls, database calls, and other types of calls that are made to service each incoming request.

Chapter 15, Elastic Beanstalk, will provide an overview of the popular Platform-as-a-Service (PaaS) offering, which includes support for Docker applications. You will learn how to create an Elastic Beanstalk multi-container Docker application, establish an environment that consists of a managed EC2 instance, an RDS database instance, and an Application Load Balancer (ALB), and then extend the environment using various techniques to support the requirements of your Docker applications, such as volume mounts and running single-shot tasks per application deployment.

Chapter 16, Docker Swarm in AWS, will focus on how to run Docker Swarm clusters in AWS, using the Docker for AWS blueprint provided for Docker Swarm community edition. This blueprint provides you with a CloudFormation template that establishes a pre-configured Docker Swarm cluster in AWS within minutes, and features integrations with key AWS services such as the Elastic Load Balancing (ELB), Elastic File System (EFS) and Elastic Block Store (EBS) services. You will define a stack using Docker Compose, which configures a multi-service environment expressed in the familiar Docker Compose specification format, and learn how to configure key Docker Swarm resources such as services, volumes, and Docker secrets. You will learn how to create shared Docker volumes that are backed by EFS, relocatable Docker volumes backed by EBS that Docker Swarm will automatically reattach to new containers redeployed after a node failure, and publish an external service endpoint for your application using an ELB that is automatically created and managed for you by Docker Swarm.  

Chapter 17, Elastic Kubernetes Service, introduces the newest container management platform offering from AWS, which is based on the popular open source Kubernetes platform. You will first set up Kubernetes in your local Docker Desktop environment, which includes native support for Kubernetes with the Docker 18.06 CE release, and learn how to create a complete environment for your Docker applications using a number of Kubernetes resources, including pods, deployments, services, secrets, persistent volumes, and jobs. You will then establish an EKS cluster in AWS, create an EC2 auto-scaling group that connects worker nodes to your cluster, and ensure your local Kubernetes client can authenticate and connect to the EKS control plane, after which you will deploy the Kubernetes dashboard to provide a comprehensive management interface for your cluster.  Finally, you will define a default storage class that uses the Elastic Block Store (EBS) service for persistent volumes and then deploy your Docker applications to AWS, leveraging the same Kubernetes definitions you created earlier for your local environment, providing you with a powerful solution to quickly deploy Docker applications locally for development purposes, and then deploy straight to production using EKS.

To get the most out of this book

A basic, working knowledge of Docker - if you haven't used Docker before, you should learn about the basic concepts of Docker at https://docs.docker.com/engine/docker-overview/ and then step through Parts 1 (https://docs.docker.com/get-started/) and 2 (https://docs.docker.com/get-started/part2) of the Docker Get Started tutorial. For a more comprehensive understanding of Docker, check out the Learn Docker - Fundamentals of Docker 18.x book from Packt Publishing.

A basic, working knowledge of Git - if you haven't used Git before, you should run through the Beginner and Getting Started tutorials at https://www.atlassian.com/git/tutorials. For a more comprehensive understanding of Git, check out the Git Essentials book from Packt Publishing.

Familiarity with AWS - if you haven't used AWS before, running through the Launch a Linux Virtual Machine tutorial at https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/ will provide a useful introduction.

Familiarity with the Linux/Unix command line - if you haven't used the Linux/Unix command line before, I recommend running through a basic tutorial such as https://maker.pro/linux/tutorial/basic-linux-commands-for-beginners, using the Linux Virtual Machine you created when you went through the Launch a Linux Virtual Machine tutorial.

Basic understanding of Python - the sample application for this book is written in Python, and some of the examples in later chapters include basic Python scripts. If you have not worked with Python before, you may want to read through the first few lessons at https://docs.python.org/3/tutorial/.

Download the example code files

You can download the example code files for this book from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

Log in or register at www.packtpub.com.

Select the SUPPORT tab.

Click on Code Downloads & Errata.

Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Docker-on-Amazon-Web-Services. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/DockeronAmazonWebServices_ColorImages.pdf

Code in Action

Visit the following link to check out videos of the code being run: http://bit.ly/2Noqdpn

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Note that the gist includes a placeholder called PASTE_ACCOUNT_NUMBER within the policy document, so you will need to replace this with your actual AWS account ID."

A block of code is set as follows:

AWSTemplateFormatVersion: "2010-09-09"
Description: Cloud9 Management Station
Parameters:
  EC2InstanceType:
    Type: String
    Description: EC2 instance type
    Default: t2.micro
  SubnetId:
    Type: AWS::EC2::Subnet::Id
    Description: Target subnet for instance

Any command-line input or output is written as follows:

> aws configure
AWS Access Key ID [None]:

Bold: Indicates a new term, an important word, or words that you see onscreen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "To create the admin role, select Services | IAM from the AWS console, select Roles from the left-hand menu, and click on the Create role button."

Warnings or important notes appear like this.
Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book title in the subject of your message. If you have questions about any aspect of this book, please email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.

Container and Docker Fundamentals

Docker and Amazon Web Services are two of the hottest and most popular technologies available today. Docker is the leading container platform on the planet, while Amazon Web Services is the number one public cloud provider. Organizations both large and small are adopting containers en masse, and the public cloud is no longer the playground of start-ups, with large enterprises migrating to the cloud in droves. The good news is that this book will give you practical, real-world insights and knowledge of how to use Docker and AWS together to help you test, build, publish, and deploy your applications faster and more efficiently than ever before.

In this chapter, we will briefly discuss the history of Docker, why Docker is so revolutionary, and the high-level architecture of Docker. We will describe the various services that support running Docker in AWS, and discuss why you might choose one service over another based upon the requirements of your organization.

We will then focus on getting your local environment up and running with Docker, and install the various software prerequisites required to run the sample application for this book. The sample application is a simple web application written in Python that stores data in a MySQL database, and this book will use it to help you solve real-world challenges such as testing, building, and publishing Docker images, as well as deploying and running Docker applications on a variety of container management platforms in AWS. Before you can package the sample application as a Docker image, you need to understand the application's external dependencies and the key tasks that are required to test, build, deploy, and run the application. You will learn how to install application dependencies, run unit tests, start the application locally, and orchestrate key operational tasks such as establishing the initial database schema and tables required for the sample application to run.

The following topics will be covered in this chapter:

Introduction to containers and Docker

Why containers are revolutionary

Docker architecture

Running Docker in AWS

Setting up a local Docker environment

Installing the sample application

Technical requirements

The following lists the technical requirements to complete this chapter:

A computer environment that meets the minimum specifications as defined in the software and hardware list  

The following GitHub URL contains the code samples used in this chapter: https://github.com/docker-in-aws/docker-in-aws/tree/master/ch1.

Check out the following video to see the Code in Action: http://bit.ly/2PEKlVQ

Introduction to containers and Docker

In recent times, containers have become a lingua franca in the technology world, and it's difficult to imagine that, just a few years ago, only a small portion of the technology community had even heard of containers.

To trace the origins of containers, you need to rewind way back to 1979, when Unix V7 introduced the chroot system call.  The chroot system call provided the ability to change the root directory of a running process to a different location in the file system, and was the first mechanism available to provide some form of process isolation. chroot was added to the Berkeley Software Distribution (BSD) in 1982 (this is an ancestor of the modern macOS operating system), and not much more happened in terms of containerization and isolation for a number of years, until a feature called FreeBSD Jails was released in 2000, which provided separate environments called "jails" that could each be assigned their own IP address and communicate independently on the network.

Later, in 2004, Solaris launched the first public beta of Solaris Containers (which eventually became known as Solaris Zones), which provided system resource separation by creating zones. This was a technology I remember using back in 2007 to help overcome a lack of expensive physical Sun SPARC infrastructure and run multiple versions of an application on a single SPARC server.

In the mid 2000s, a lot more progress in the march toward containers occurred, with Open Virtuozzo (OpenVZ) being released in 2005, which patched the Linux kernel to provide operating system-level virtualization and isolation. In 2006, Google launched a feature called process containers (which was eventually renamed to control groups, or cgroups) that provided the ability to restrict CPU, memory, network, and disk usage for a set of processes. In 2008, a feature called Linux namespaces, which provided the ability to isolate different types of resources from each other, was combined with cgroups to create Linux Containers (LXC), forming the initial foundation of modern containers as we know them today.

In 2010, as cloud computing was starting to gain popularity, a number of Platform-as-a-Service (PaaS) start-ups appeared, which provided fully managed runtime environments for specific application frameworks such as Java Tomcat or Ruby on Rails. One start-up called dotCloud was quite different, in that it was the first "polyglot" PaaS provider, meaning that you could run virtually any application environment you wanted using their service. The technology underpinning this was Linux Containers, and dotCloud added a number of proprietary features to provide a fully managed container platform for their customers. By 2013, the PaaS market had well and truly entered the Gartner hype cycle (https://en.wikipedia.org/wiki/Hype_cycle) trough of disillusionment, and dotCloud was on the brink of financial collapse. One of the co-founders of the company, Solomon Hykes, pitched an idea to the board to open source their container management technology, sensing that there was huge potential. The board disagreed; however, Solomon and his technical team proceeded regardless, and the rest, as they say, is history.

After being announced to the world as a new open source container management platform in 2013, Docker quickly rose in prominence, becoming the darling of the open source world and vendor community alike, and is likely one of the fastest-growing technologies in history. By the end of 2014, during which time Docker 1.0 was released, over 100 million Docker containers had been downloaded – fast forward to March 2018, and that number sat at 37 billion downloads. At the end of 2017, container usage amongst Fortune 100 companies sat at 71%, indicating that Docker and containers have become universally accepted by start-ups and enterprises alike. Today, if you are building modern, distributed applications based upon microservice architectures, chances are that your technology stack will be underpinned by Docker and containers.

Why containers are revolutionary

The brief and successful history of containers speaks for itself, which leads to the question, why are containers so popular?  The following provides some of the more important answers to this question:

Lightweight: Containers are often compared to virtual machines, and in this context, containers are much more lightweight than virtual machines. A container can start up an isolated and secure runtime environment for your application in seconds, compared with the handful of minutes a typical virtual machine takes to start. Container images are also much smaller than their virtual machine counterparts.

Speed: Containers are fast - they can be downloaded and started within seconds, and within a few minutes you can test, build, and publish your Docker image for immediate download. This allows organizations to innovate faster, which is critical in today's increasingly competitive landscape.

Portable: Docker makes it easier than ever to run your applications on your local machine, in your data center, and in the public cloud. Because Docker images are complete runtime environments for your application, complete with operating system dependencies and third-party packages, your container hosts don't require any special prior setup or configuration specific to each individual application - all of these specific dependencies and requirements are self-contained within the Docker image, making comments like "But it worked on my machine!" relics of the past.

Security: There has been a lot of debate about the security of containers, but in my opinion, if implemented correctly, containers actually offer greater security than non-container alternative approaches. The main reason for this is that containers express security context very well - applying security controls at the container level typically represents the right level of context for those controls. A lot of these security controls are provided by "default" - for example, namespaces are inherently a security mechanism in that they provide isolation. A more explicit example is that you can apply SELinux or AppArmor profiles on a per-container basis, making it very easy to define different profiles depending on the specific security requirements of each container.

Automation: Organizations are adopting software delivery practices such as continuous delivery, where automation is a fundamental requirement. Docker natively supports automation - at its core, a Dockerfile is an automation specification of sorts that allows the Docker client to automatically build your containers, and other Docker tools such as Docker Compose allow you to express connected multi-container environments that you can automatically create and tear down in seconds.

Docker architecture

As discussed in the preface of this book, I assume that you have at least a basic working knowledge of Docker. If you are new to Docker, then I recommend that you supplement your learning in this chapter by reading the Docker overview at https://docs.docker.com/engine/docker-overview/, and running through some of the Docker tutorials at https://docs.docker.com/get-started/.

The Docker architecture includes several core components, as follows:

Docker Engine: This provides several server code components for running your container workloads, including an API server for communications with Docker clients, and the Docker daemon that provides the core runtime of Docker. The daemon is responsible for the complete life cycle of your containers and other resources, and also ships with built-in clustering support to allow you to build clusters or swarms of your Docker Engines.

Docker client: This provides a client for building Docker images, running Docker containers, and managing other resources such as Docker volumes and Docker networks. The Docker client is the primary tool you will work with when using Docker, and interacts with both the Docker Engine and Docker registry components.

Docker registry: This is responsible for storing and distributing Docker images for your application. Docker supports both public and private registries, and the ability to package and distribute your applications via a Docker registry is one of the major reasons for Docker's success. In this book, you will download third-party images from Docker Hub, and you will store your own application images in the private AWS registry service called Elastic Container Registry (ECR).

Docker Swarm: A swarm is a collection of Docker Engines that form a self-managing and self-healing cluster, allowing you to horizontally scale your container workloads and provide resiliency in the event of Docker Engine failures. A Docker Swarm cluster includes a number of master nodes that form the cluster control plane, and a number of worker nodes that actually run your container workloads.

When you work with the preceding components, you interact with a number of different types of objects in the Docker architecture:

Images: An image is built using a Dockerfile, which includes a number of instructions on how to build the runtime environment for your containers. The result of executing each of these build instructions is stored as a set of layers and is distributed as a downloadable and installable image, and the Docker Engine reads the instructions in each layer to construct a runtime environment for all containers based on a given image.

Containers: A container is the runtime manifestation of a Docker image. Under the hood, a container is comprised of a collection of Linux namespaces, control groups, and storage that collectively create an isolated runtime environment from which you can run a given application process.

Volumes

: By default, the underlying storage mechanism for containers is based upon the union file system, which allows a virtual file system to be constructed from the various layers in a Docker image. This approach is very efficient in that layers can be shared between multiple containers; however, it carries a performance penalty and does not support persistence. Docker volumes provide access to a dedicated, pluggable storage medium that your containers can use for I/O-intensive applications and to persist data (see the sketch after this list).

Networks

: By default, Docker containers each operate in their own network namespace, which provides isolation between containers. However, containers must still be given network connectivity to other containers and to the outside world. Docker supports a variety of network plugins that provide connectivity between containers, and this connectivity can even extend across Docker Swarm clusters.

Services

: A service provides an abstraction that allows you to scale your applications by spinning up multiple containers or replicas of your service that can be load balanced across multiple Docker Engines in a Docker Swarm cluster.
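The short sketch below shows these objects from the Docker client. All of the names are hypothetical, and the docker service commands assume the Engine is already running in swarm mode (for example, after docker swarm init):

# Volumes: create a named volume and mount it into a container to persist data
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=password mysql:5.7

# Networks: create a user-defined network so containers can reach each other by name
docker network create app-network
docker network connect app-network db
docker run -d --name web --network app-network nginx

# Services: run an image as a replicated service, load balanced across the swarm
docker service create --name frontend --replicas 3 --publish 80:80 nginx
docker service scale frontend=5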

Running Docker in AWS

Along with Docker, the other major technology platform we will target in this book is AWS.   

AWS is the world's leading public cloud provider, and as such offers a variety of ways to run your Docker applications:

Elastic Container Service (ECS)

: In 2014, AWS launched ECS, which was the first dedicated public cloud offering that supported Docker. ECS provides a hybrid managed service of sorts, where ECS is responsible for orchestrating and deploying your container applications (that is, it acts as the control plane of a container management platform), and you are responsible for providing the Docker Engines (referred to as ECS container instances) that your containers will actually run on. ECS is free to use (you only pay for the ECS container instances that run your containers), and removes much of the complexity of managing container orchestration and ensuring your applications are always up and running; however, it does require you to manage the EC2 infrastructure that runs your ECS container instances. ECS is considered Amazon's flagship Docker service, and as such will be the primary service that we will focus on in this book.

Fargate

: Fargate was launched in late 2017 and provides a fully managed container platform that manages both the ECS control plane and ECS container instances for you. With Fargate, your container applications are deployed onto shared ECS container instance infrastructure that AWS manages and that you have no visibility of, allowing you to focus on building, testing, and deploying your container applications without having to worry about any underlying infrastructure (a brief CLI sketch appears at the end of this list). Fargate is a fairly new service that, at the time of writing this book, has limited regional availability and some constraints that mean it is not suitable for all use cases. We will cover the Fargate service in Chapter 14, Fargate and ECS Service Discovery.

Elastic Kubernetes Service (EKS)

: EKS launched in June 2018 and supports the popular open source Kubernetes container management platform. EKS is similar to ECS in that it is a hybrid managed service, where Amazon provides fully managed Kubernetes master nodes (the Kubernetes control plane) and you provide Kubernetes worker nodes in the form of EC2 Auto Scaling groups that run your container workloads. Unlike ECS, EKS is not free, and at the time of writing this book costs USD 0.20 per hour, plus any EC2 infrastructure costs associated with your worker nodes. Given the ever-growing popularity of Kubernetes as a cloud/infrastructure-agnostic container management platform, along with its open source community, EKS is sure to become very popular, and we will provide an introduction to Kubernetes and EKS in Chapter 17, Elastic Kubernetes Service.

Elastic Beanstalk

: Elastic Beanstalk is a popular Platform as a Service (PaaS) offering from AWS that provides a complete and fully managed environment targeting popular programming languages and application frameworks such as Java, Python, Ruby, and Node.js. Elastic Beanstalk also supports Docker applications, allowing you to support a wide variety of applications written in the programming language of your choice. You will learn how to deploy a multi-container Docker application in Chapter 15, Elastic Beanstalk.

Docker Swarm in AWS

: Docker Swarm is the native container management and clustering platform built into Docker, which leverages the native Docker and Docker Compose tool chain to manage and deploy your container applications. At the time of writing this book, AWS does not provide a managed offering for Docker Swarm; however, Docker provides a CloudFormation template (CloudFormation is a free Infrastructure as Code automation and management service provided by AWS) that allows you to quickly deploy a Docker Swarm cluster in AWS that integrates with native AWS offerings, including the Elastic Load Balancing (ELB) and Elastic Block Store (EBS) services. We will cover all of this and more in the chapter Docker Swarm in AWS.

CodeBuild

: AWS CodeBuild is a fully managed build service that supports continuous delivery use cases by providing a container-based build agent that you can use to test, build, and deploy your applications without having to manage any of the infrastructure traditionally associated with continuous delivery systems. CodeBuild uses Docker as its container platform for spinning up build agents on demand, and you will be introduced to CodeBuild, along with other continuous delivery tools such as CodePipeline, in the chapter Continuously Delivering ECS Applications.

Batch

: AWS Batch provides a fully managed service, based upon ECS, that allows you to run container-based batch workloads without needing to worry about managing or maintaining any supporting infrastructure. We will not be covering AWS Batch in this book; however, you can learn more about this service at https://aws.amazon.com/batch/.
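To give a feel for the ECS and Fargate options described above, the following sketch creates a cluster and launches a task onto Fargate from the AWS CLI. The cluster name, task definition, and subnet ID are placeholder values, and the task definition is assumed to have already been registered with the awsvpc network mode; later chapters use CloudFormation rather than the raw CLI for this kind of work:

# Create an ECS cluster (with the Fargate launch type there are no container
# instances for you to manage)
aws ecs create-cluster --cluster-name my-cluster

# Launch a previously registered task definition onto Fargate
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-task:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345678],assignPublicIp=ENABLED}"

With the EC2 launch type, you would instead ensure that the cluster has registered ECS container instances before running the task.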

With such a wide variety of options to run your Docker applications on AWS, it is important to be able to choose the right solution based upon the requirements of your organization or specific use cases.

If you are a small to medium-sized organization that wants to get up and running quickly with Docker on AWS, and you don't want to manage any supporting infrastructure, then Fargate or Elastic Beanstalk are the options you may prefer. Fargate supports native integration with key AWS services, and is a building-block component that doesn't dictate how you build, deploy, or operate your applications. At the time of writing this book, however, Fargate is not available in all regions, is expensive compared to other solutions, and has some limitations, such as not supporting persistent storage. Elastic Beanstalk provides a comprehensive end-to-end solution for managing your Docker applications, providing a variety of integrations out of the box, and includes operational tooling to manage the complete life cycle of your applications. However, Elastic Beanstalk does require you to buy into a very opinionated framework and methodology of how to build, deploy, and run your applications, and can be difficult to customize to meet your needs.