Implementing AWS: Design, Build, and Manage your Infrastructure - Yohan Wadia - E-Book


Yohan Wadia

Description

Work through exciting recipes to administer your AWS cloud

Key Features

  • Build secure environments using AWS components and services
  • Explore core AWS features with real-world applications and best practices
  • Design and build Lambda functions using real-world examples

Book Description

With this Learning Path, you’ll explore techniques to easily manage applications on the AWS cloud.

You’ll begin with an introduction to serverless computing, its advantages, and the fundamentals of AWS. The following chapters will guide you through managing multiple accounts by setting up consolidated billing, and through enhancing your application delivery skills with the latest AWS services, such as CodeCommit, CodeDeploy, and CodePipeline, to provide continuous delivery and deployment, while also securing and monitoring your environment's workflow. It’ll also add to your understanding of the services AWS Lambda provides to developers. To refine your skills further, it demonstrates how to design, write, test, monitor, and troubleshoot Lambda functions.

By the end of this Learning Path, you’ll be able to create a highly secure, fault-tolerant, and scalable environment for your applications.

This Learning Path includes content from the following Packt products:

  • AWS Administration: The Definitive Guide, Second Edition by Yohan Wadia
  • AWS Administration Cookbook by Rowan Udell, Lucas Chan
  • Mastering AWS Lambda by Yohan Wadia, Udita Gupta

What you will learn

  • Explore the benefits of serverless computing and applications
  • Deploy apps with AWS Elastic Beanstalk and Amazon Elastic File System
  • Secure environments with AWS CloudTrail, AWS Config, and AWS Shield
  • Run big data analytics with Amazon EMR and Amazon Redshift
  • Back up and safeguard data using AWS Data Pipeline
  • Create monitoring and alerting dashboards using CloudWatch
  • Effectively monitor and troubleshoot serverless applications with AWS
  • Design serverless apps via AWS Lambda, DynamoDB, and API Gateway

Who this book is for

This Learning Path is specifically designed for IT system and network administrators, AWS architects, and DevOps engineers who want to effectively implement AWS in their organization and easily manage daily activities. Familiarity with Linux, web services, cloud computing platforms, virtualization, networking, and other administration-related tasks will assist in understanding the concepts in the book. Prior hands-on experience with AWS core services such as EC2, IAM, and S3, and with programming languages such as Node.js, Java, and C#, will also prove beneficial.


Page count: 674

Publication year: 2019




Implementing AWS: Design, Build, and Manage your Infrastructure

 

 

 

 

 

Leverage AWS features to build highly secure, fault-tolerant, and scalable cloud environments

 

 

 

 

 

 

Yohan Wadia
Rowan Udell
Lucas Chan
Udita Gupta

 

 

 

 

 

BIRMINGHAM - MUMBAI

Implementing AWS: Design, Build, and Manage your Infrastructure

 

Copyright © 2019 Packt Publishing

 

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: January 2019

Production reference: 1290119

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-78883-577-0

www.packtpub.com

 
mapt.io

Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Mapt is fully searchable

Copy and paste, print, and bookmark content

Packt.com

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the authors

Yohan Wadia is a client-focused evangelist and technologist with more than 8 years of experience in the cloud industry. He focuses on helping customers succeed with cloud adoption. As a technical consultant, he guides customers toward pragmatic solutions that leverage cloud computing through Amazon Web Services, Windows Azure, or Google Cloud Platform and that make practical and business sense.

 

 

Rowan Udell has been working in development and operations for 15 years. He has held a variety of positions, such as SRE, front-end developer, back-end developer, consultant, technical lead, and team leader. His travels have seen him work in start-ups and enterprises in the finance, education, and web industries in Australia and Canada. He currently works as a senior consultant with Versent, an AWS advanced partner in Sydney. He specializes in serverless applications and architectures on AWS and contributes actively to the Serverless Framework community.

 

 

Lucas Chan has been working in the field of technology since 1995 as a developer, systems admin, DevOps engineer, and a variety of other roles. He is currently a senior consultant and engineer at Versent and a technical director at Stax. He's been running production workloads on AWS for over 10 years. He's also a member of the APAC AWS Warriors program and holds all five of the available AWS certifications.

 

 

Udita Gupta is an AWS Certified Solutions Architect and an experienced cloud engineer with a passion for developing customized solutions, especially on the Amazon Web Services Cloud platform. She loves developing and exploring new technologies and designing reusable components and solutions around them. She particularly likes using the serverless paradigm, along with other upcoming technologies such as IoT and AI. A highly animated creature and an avid reader, Udita likes to spend her time reading all kinds of books, with a particular interest in Sheryl Sandberg and Khaled Hosseini.

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Implementing AWS: Design, Build, and Manage your Infrastructure

About Packt

Why subscribe?

Packt.com

Contributors

About the authors

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Conventions used

Get in touch

Reviews

What's New in AWS?

Improvements in existing services

Elastic Compute Cloud

Availability of FPGAs and GPUs

Simple Storage Service

Virtual Private Cloud

CloudWatch

Elastic Load Balancer

Introduction of newer services

Managing EC2 with Systems Manager

Introducing EC2 Systems Manager

Getting started with the SSM agent

Configuring IAM Roles and policies for SSM

Installing the SSM agent

Configuring the SSM agent to stream logs to CloudWatch

Introducing Run Command

Working with State Manager

Simplifying instance maintenance using System Manager Automation

Working with automation documents

Patching instances using automation

Triggering automation using CloudWatch schedules and events

Managing instance patches using patch baseline and compliance

Getting started with Inventory Management

Introducing Elastic Beanstalk and Elastic File System

Introducing Amazon Elastic Beanstalk

Concepts and terminologies

Getting started with Elastic Beanstalk

Creating the Dev environment

Working with the Elastic Beanstalk CLI

Understanding the environment dashboard

Cloning environments

Configuring the production environment

Introducing Amazon Elastic File System

How does it work?

Creating an Elastic File System

Extending EFS to Elastic Beanstalk

Securing Workloads Using AWS WAF

Introducing AWS Web Application Firewall

Concepts and terminologies

Getting started with WAF

Creating the web ACL

Creating the conditions

Creating rules

Assigning a WAF Web ACL to CloudFront distributions

Working with SQL injection and cross-site scripting conditions

Automating WAF Web ACL deployments using CloudFormation

Monitoring WAF using CloudWatch

Introduction to AWS Shield

Governing Your Environments Using AWS CloudTrail and AWS Config

Introducing AWS CloudTrail

Working with AWS CloudTrail

Creating your first CloudTrail Trail

Viewing and filtering captured CloudTrail Logs and Events

Modifying a CloudTrail Trail using the AWS CLI

Monitoring CloudTrail Logs using CloudWatch

Creating custom metric filters and alarms for monitoring CloudTrail Logs

Automating deployment of CloudWatch alarms for AWS CloudTrail

Analyzing CloudTrail Logs using Amazon Elasticsearch

Introducing AWS Config

Concepts and terminologies

Getting started with AWS Config

Creating custom config rules

Tips and best practices

Access Control Using AWS IAM and AWS Organizations

What's new with AWS IAM

Using the visual editor to create IAM policies

Testing IAM policies using the IAM Policy Simulator

Introducing AWS Organizations

Getting started with AWS Organizations

Transforming Application Development Using the AWS Code Suite

Understanding the AWS Code Suite

Getting Started with AWS CodeCommit

Working with branches, commits, and triggers

Introducing AWS CodeDeploy

Concepts and terminologies

Installing and configuring the CodeDeploy agent

Setting up the AppSpec file

Creating a CodeDeploy application and deployment group

Introducing AWS CodePipeline

Creating your own continuous delivery pipeline

Putting it all together

Powering Analytics Using Amazon EMR and Amazon Redshift

Understanding the AWS analytics suite of services

Introducing Amazon EMR

Concepts and terminologies

Getting started with Amazon EMR

Connecting to your EMR cluster

Running a job on the cluster

Monitoring EMR clusters

Introducing Amazon Redshift

Getting started with Amazon Redshift

Connecting to your Redshift cluster

Working with Redshift databases and tables

Orchestrating Data using AWS Data Pipeline

Introducing AWS Data Pipeline

Getting started with AWS Data Pipeline

Working with data pipeline definition files

Executing remote commands using AWS Data Pipeline

Backing up data using AWS Data Pipeline

Managing AWS Accounts

Introduction

Setting up a master account

How to do it...

How it works...

There's more...

Multi-factor authentication

Using the CLI

See also

Creating a member account

Getting ready

How to do it...

How it works...

There's more...

Accessing the member account

Service control policies

Root credentials

Deleting accounts

See also

Inviting an account

Getting ready

How to do it...

How it works...

There's more...

Removing accounts

Consolidated billing

See also

Managing your accounts

Getting ready

How to do it...

Getting the root ID for your organization

Creating an OU

Getting the ID of an OU

Adding an account to an OU

Removing an account from an OU

Deleting an OU

How it works...

There's more...

See also

Adding a service control policy

Getting ready

How to do it...

How it works...

There's more...

Using AWS Compute

Introduction

Creating a key pair

Getting ready

How to do it...

How it works...

Launching an instance

Getting ready

How to do it...

How it works...

There's more...

See also

Attaching storage

Getting ready

How to do it...

How it works...

Securely accessing private instances

Getting ready

How to do it...

Configuration

How it works...

There's more...

Auto scaling an application server

Getting ready

How to do it...

How it works...

Scaling policies

Alarms

Creating machine images

Getting ready

How to do it...

How it works...

Template

Validate the template

Build the AMI

There's more...

Debugging

Orphaned resources

Deregistering AMIs

Other platforms

Creating security groups

Getting ready

How to do it...

There's more...

Differences from traditional firewalls

Creating a load balancer

How to do it...

How it works...

There's more...

HTTPS/SSL

Path-based routing

Management Tools

Introduction

Auditing your AWS account

How to do it...

How it works...

There's more...

Recommendations with Trusted Advisor

How to do it...

How it works...

There's more...

Creating e-mail alarms

How to do it...

How it works...

There's more...

Existing topics

Other subscriptions

Publishing custom metrics in CloudWatch

Getting ready

How to do it...

How it works...

There's more...

Cron

Auto scaling

Backfilling

Creating monitoring dashboards

Getting ready

How to do it...

There's more...

Widget types

Creating a budget

Getting ready

How to do it...

How it works...

Feeding log files into CloudWatch logs

Getting ready

How to do it...

How it works...

There's more...

Database Services

Introduction

Creating a database with automatic failover

Getting ready

How to do it...

How it works...

There's more...

Creating a NAT gateway

Getting ready

How to do it...

How it works...

Creating a database read-replica

Getting ready

How to do it...

How it works...

There's more...

Promoting a read-replica to master

Getting ready

How to do it...

Creating a one-time database backup

Getting ready

How to do it...

Restoring a database from a snapshot

Getting ready

How to do it...

There's more...

Migrating a database

Getting ready

How to do it...

How it works...

There's more...

Database engines

Ongoing replication

Multi-AZ

Calculating DynamoDB performance

Getting ready

How to do it...

How it works...

There's more...

Burst capacity

Metrics

Eventually consistent reads

Introducing AWS Lambda

What is serverless computing?

Pros and cons of serverless computing

Introducing AWS Lambda

How it works

Getting started with AWS Lambda

Using the AWS Management Console

Using the CLI

Writing Lambda Functions

The Lambda programming model

Handler

The context object

Logging

Exceptions and error handling

Versioning and aliases

Environment variables

Packaging and deploying

APEX

Claudia.js

Testing Lambda Functions

The need for testing Lambda functions

Manually testing your functions with the AWS Management Console

Testing functions with Mocha and Chai

Testing functions using the npm modules

Testing with a simple serverless test harness

Event-Driven Model

Introducing event-driven architectures

Understanding events and AWS Lambda

Lambda architecture patterns

Exploring Lambda and event mapping

Mapping Lambda with S3

Mapping Lambda with DynamoDB

Mapping Lambda with SNS

Mapping Lambda with CloudWatch events

Mapping Lambda with Kinesis

Creating the Kinesis Stream

Setting up the log streaming

Packaging and uploading the function

Extending AWS Lambda with External Services

Introducing Webhooks

Integrating GitHub with AWS Lambda

Integrating Slack with AWS Lambda

Invoking Lambda using an external application

Build and Deploy Serverless Applications with AWS Lambda

Introducing SAM

Writing SAM templates

AWS::Serverless::Function

AWS::Serverless::Api

AWS::Serverless::SimpleTable

Building serverless applications with SAM

Introducing AWS step functions

Under the hood

Getting started with step functions

Building distributed applications with step functions

Monitoring and Troubleshooting AWS Lambda

Monitoring Lambda functions using CloudWatch

Introducing AWS X-Ray

Monitoring Lambda functions using Datadog

Logging your functions with Loggly

AWS Lambda - Use Cases

Infrastructure management

Scheduled startup and shutdown of instances

Periodic snapshots of EBS volumes using Lambda

Enabling governance using EC2 tags and Lambda

Data transformation

Next Steps with AWS Lambda

Processing content at the edge with Lambda@Edge

Building next generation chatbots with Lambda and Lex

Processing data at the edge with Greengrass and Lambda

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

AWS is one of the biggest market leaders for cloud computing. With this Learning Path, you'll explore techniques to easily manage applications on the AWS cloud.

You'll begin with an introduction to serverless computing, its advantages, and the fundamentals of AWS. The following chapters will guide you through managing multiple accounts by setting up consolidated billing. You'll learn to set up reliable and fast hosting for static websites, share data between running instances, and back up your data for compliance. The Learning Path holds much promise when it comes to enhancing your application delivery skills, with the latest AWS services, such as CodeCommit, CodeDeploy, and CodePipeline, providing continuous delivery and deployment, while also securing and monitoring your environment's workflow. It'll also add to your understanding of the services AWS Lambda provides to developers. To refine your skills further, it demonstrates how to design, write, test, monitor, and troubleshoot Lambda functions.

By the end of this Learning Path, you'll be able to create a highly secure, fault-tolerant, and scalable environment for your applications.

This Learning Path includes content from the following Packt products:

AWS Administration: The Definitive Guide, Second Edition by Yohan Wadia

AWS Administration Cookbook by Rowan Udell, Lucas Chan

Mastering AWS Lambda by Yohan Wadia, Udita Gupta

Who this book is for

If you are an IT professional or a system architect who wants to improve infrastructure using AWS, then this course is for you. It is also for programmers who are new to AWS and want to build highly efficient, scalable applications.

What this book covers

Chapter 1, What's New in AWS?, contains a brief introduction to some of the key enhancements and announcements made to the existing line of AWS services and products.

Chapter 2, Managing EC2 with Systems Manager, provides a brief introduction to using EC2 Systems Manager to manage your fleet of EC2 instances. It also takes an in-depth look at how to work with the SSM agent and Run Command, as well as other Systems Manager features, such as automation, patching, and inventory management.

Chapter 3, Introducing Elastic Beanstalk and Elastic File System, explains how to leverage both Elastic Beanstalk and the Elastic File Systems services to build and scale out web applications and deploy them with absolute ease.

Chapter 4, Securing Workloads Using AWS WAF, discusses some of the key aspects that you can leverage to provide added security for your web applications using AWS WAF and AWS Shield. The chapter also provides some keen insights into how you can protect your web applications against commonly occurring attacks such as cross-site scripting and SQL injections.

Chapter 5, Governing Your Environments Using AWS CloudTrail and AWS Config, introduces you to the concepts and benefits of leveraging AWS CloudTrail and AWS Config. The chapter covers in-depth scenarios with which you can standardize governance and security for your AWS environments.

Chapter 6, Access Control Using AWS IAM and AWS Organizations, takes a look at some of the latest enhancements made to the AWS IAM service. It also walks you through how you can manage your AWS accounts with better efficiency and control using the AWS Organizations service.

Chapter 7, Transforming Application Development Using the AWS Code Suite, takes an in-depth look at how you can leverage CodeCommit, CodeDeploy, and CodePipeline to design and build complete CI/CD pipelines for your applications.

Chapter 8, Powering Analytics Using Amazon EMR and Amazon Redshift, provides practical knowledge and a hands-on approach to processing and running large-scale analytics and data warehousing in the AWS Cloud.

Chapter 9, Orchestrating Data Using AWS Data Pipeline, covers how you can effectively orchestrate the movement of data from one AWS service to another using simple, reusable pipeline definitions.

Chapter 10, Managing AWS Accounts, covers everything you need to know to manage your accounts and get started with AWS organizations.

Chapter 11, Using AWS Compute, dives deep into how to run VMs (EC2 instances) on AWS, how to auto scale them, and how to create and manage load balancers.

Chapter 12, Management Tools, provides an overview of how to audit your account and monitor your infrastructure.

Chapter 13, Database Services, shows you how to create, manage, and scale databases on the AWS platform.

Chapter 14, Introducing AWS Lambda, covers the introductory concepts and general benefits of serverless computing, along with an in-depth look at AWS Lambda. The chapter also walks you through your first steps with AWS Lambda, including deploying your first functions using the AWS Management Console and the AWS CLI.

Chapter 15, Writing Lambda Functions, covers the fundamentals of writing and composing your Lambda functions. The chapter introduces you to concepts such as versioning, aliases, and variables, along with an easy-to-follow code sample.
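As a taste of what that chapter builds toward, here is a minimal sketch of a Lambda function in Python. The event field and greeting logic are hypothetical, but the `handler(event, context)` signature is the one Lambda's Python runtime expects:

```python
import json

def handler(event, context):
    """A minimal Lambda handler: reads a field from the event payload
    and returns an API Gateway-style response.

    `event` carries the invocation payload; `context` exposes runtime
    metadata (function name, remaining execution time, and so on).
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; when deployed, Lambda supplies
# both arguments itself.
print(handler({"name": "Packt"}, None))
```

Invoking the handler locally like this, with a hand-built event, is also the simplest form of the testing approach Chapter 16 expands on.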

Chapter 16, Testing Lambda Functions, discusses the overall importance of testing your function for code defects and bugs. It also introduces you to some out-of-the-box testing frameworks in the form of Mocha and Chai, and summarizes it all by demonstrating how you can test your functions locally before actual deployments to Lambda.

Chapter 17, Event-Driven Model, introduces the concept of the event-based system and how it actually works. The chapter also provides a deep dive into how Lambda's event-based model works with the help of event mappings and a few easy-to-replicate, real-world use cases.

Chapter 18, Extending AWS Lambda with External Services, discusses the concept and importance of Webhooks and how they can be leveraged to connect your serverless functions with any third-party services. The chapter also provides a few real-world use cases, where Lambda functions are integrated with other services such as Teamwork, GitHub, and Slack.

Chapter 19, Build and Deploy Serverless Applications with AWS Lambda, provides you with a hands-on approach to building scalable serverless applications using AWS SAM and Step Functions, with a few handy deployment examples.

Chapter 20, Monitoring and Troubleshooting AWS Lambda, covers how you can leverage AWS CloudWatch and X-Ray to monitor your serverless applications. The chapter also introduces other third-party tools, such as Datadog and Loggly, for effectively logging and monitoring your functions.
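To make the CloudWatch side concrete, the sketch below assembles the kind of custom-metric payload that boto3's `put_metric_data` call accepts. The `Custom/Lambda` namespace and `FunctionErrors` metric name are hypothetical placeholders, and the actual publish call is left commented out since it requires AWS credentials:

```python
def lambda_error_metric(function_name, error_count):
    """Build a payload of the shape cloudwatch.put_metric_data() expects."""
    return {
        "Namespace": "Custom/Lambda",           # hypothetical namespace
        "MetricData": [
            {
                "MetricName": "FunctionErrors", # hypothetical metric name
                "Dimensions": [
                    {"Name": "FunctionName", "Value": function_name},
                ],
                "Value": float(error_count),
                "Unit": "Count",
            }
        ],
    }

payload = lambda_error_metric("image-resizer", 3)

# To publish for real (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(**payload)
```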

Chapter 21, AWS Lambda - Use Cases, provides a comprehensive set of real-world serverless use cases with some easy-to-follow code examples and snippets.

Chapter 22, Next Steps with AWS Lambda, summarizes the next phase in the evolution of serverless applications and discusses how new and improved enhancements in Lambda are expected to come about in the near future.

To get the most out of this book

To start using this book, you will need the following software installed on your local desktop:

An SSH client such as PuTTY, a key generator such as PuTTYgen, and a file transfer tool such as WinSCP

Any modern web browser, preferably Mozilla Firefox.

You'll need at least one AWS account with full administrative access.

You'll also need a text editor to edit YAML/JSON CloudFormation templates, and the AWS CLI tools, which are supported on common operating systems (macOS/Linux/Windows).
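As one illustration of the kind of template you'll be editing, the following sketch builds a minimal, hypothetical CloudFormation template (a single S3 bucket) in Python and prints it as JSON, ready to save to a file and check with the AWS CLI:

```python
import json

# A minimal CloudFormation template expressed as a Python dict; the
# resource's logical name ("ExampleBucket") is a hypothetical placeholder.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: a single S3 bucket.",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
        },
    },
}

template_body = json.dumps(template, indent=2)
print(template_body)

# Saved as bucket.json, this could be checked with the AWS CLI:
#   aws cloudformation validate-template --template-body file://bucket.json
```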

Download the example code files

 

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.

2. Select the SUPPORT tab.

3. Click on Code Downloads & Errata.

4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Implementing-AWS-Design-Build-and-Manage-your-Infrastructure. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

What's New in AWS?

Having spent many years in the IT industry, you get to see a lot of new technologies, products, and platforms start to evolve, gradually mature, and eventually be replaced by something faster and better! I guess, in some ways, this concept applies to this book as well.

I still remember when I first started exploring AWS way back in 2009. Those were the early days for the likes of EC2 and CloudFront, which were still adding new features, while SimpleDB and VPC were just starting to take shape; the thing that really amazes me is how far the platform has come today! With more than 50 different solutions and service offerings, ranging from big data analytics to serverless computing, data warehousing and ETL solutions, digital workspaces, and code development services, AWS has got it all, which is one of the reasons why I have always been a huge fan of it! It's not only about revenue and the number of customers, but how well you adapt and evolve to changing times and demands.

So here we are, back at it again! A new book with a lot of new things to learn and explore! But before we begin with the deep dives into some really interesting and powerful services, let's take this time to traverse a little way back in time and understand what has been happening in AWS over this past year, and how the services that we explored in the first edition are shaping up today!

In this chapter, we will be covering the following topics:

Improvements in existing AWS services.

A brief introduction to newer AWS services and what they are used for.

Improvements in existing services

There have been quite a few improvements in the services that were covered back in the first edition of AWS Administration - The Definitive Guide. In this section, we will highlight a few of these essential improvements and understand their uses. To start off, let's look at some of the key enhancements made in EC2 over the past year or two.

Elastic Compute Cloud

Elastic Compute Cloud (EC2) is by far one of the oldest running services in AWS, and yet it still continues to evolve and add new features as the years progress. Some of the notable feature improvements and additions are mentioned here:

Introduction of the t2.xlarge and t2.2xlarge instances: The t2 instances are a special type, as they offer low-cost burstable compute that is ideal for running general-purpose applications that don't require the use of the CPU all the time, such as web servers, application servers, LOB applications, and development environments, to name a few. The t2.xlarge and t2.2xlarge instance types provide 4 vCPUs with 16 GB of memory and 8 vCPUs with 32 GB of memory, respectively.

Introduction of the I3 instance family: Although EC2 provides a comprehensive set of instance families, there was a growing demand for a specialized storage-optimized instance family that was ideal for running workloads such as relational or NoSQL databases, analytical workloads, data warehousing, Elasticsearch applications, and so on. Enter I3 instances! I3 instances use non-volatile memory express (NVMe) based SSDs that are suited to providing extremely optimized, high I/O operations. The maximum resource capacity provided is up to 64 vCPUs with 488 GB of memory and 15.2 TB of locally attached SSD storage.

This is not an exhaustive list in any way. If you would like to know more about the changes brought about in AWS, check this out, at https://aws.amazon.com/about-aws/whats-new/2016/.

Availability of FPGAs and GPUs

One of the key use cases for customers adopting the public cloud has been the availability of high-end processing units that are required to run HPC applications. One such new instance type added last year was the F1 instance, which comes equipped with field programmable gate arrays (FPGAs) that you can program to create custom hardware accelerations for your applications. Another awesome feature to be added to the EC2 instance family was the introduction of the Elastic GPUs concept. This allows you to easily provide graphics acceleration support to your applications at significantly lower costs but with greater performance levels. Elastic GPUs are ideal if you need a small amount of GPU for graphics acceleration, or have applications that could benefit from some GPU, but also require high amounts of compute, memory, or storage.

Simple Storage Service

Similar to EC2, Simple Storage Service (S3) has had its own share of new features and support added to it. Some of these are explained here:

S3 Object Tagging: S3 Object Tagging is like any other tagging mechanism provided by AWS, and is commonly used for managing and controlling access to your S3 resources. The tags are simple key-value pairs that you can use for creating and associating IAM policies for your S3 resources, setting up S3 life cycle policies, and managing transitions of objects between various storage classes.

S3 Inventory: S3 Inventory is a special feature provided with the sole purpose of cataloging your objects and providing that catalog as a usable CSV file for further analysis and inventorying. Using S3 Inventory, you can extract a list of all the objects present in your bucket, along with their metadata, on a daily or weekly basis.

S3 Analytics: A lot of work and effort has been put into making S3 more than just another infinitely scalable store. S3 Analytics provides end users with a means of analyzing storage access patterns and defining the right storage class based on these analytical results. You can enable this feature by simply setting a storage class analysis policy on an object, a prefix, or the entire bucket. Once enabled, the policy monitors the storage access patterns and provides daily visualizations of your storage usage in the AWS Management Console. You can even export the results to an S3 bucket to analyze them using the business intelligence tool of your choice, such as Amazon QuickSight.

S3 CloudWatch metrics: It has been a long time coming, but it is finally here! You can now leverage 13 new CloudWatch metrics specifically designed to work with your S3 buckets and objects. You can receive one-minute CloudWatch metrics, set CloudWatch alarms, and access CloudWatch dashboards to view the real-time operations and performance of your S3 resources, such as the total bytes downloaded, the number of 4xx HTTP response counts, and so on.

Brand new dashboard: Although the dashboards and structures of the AWS Management Console change from time to time, it is the new S3 dashboard that I'm really fond of. The object tagging and storage analysis policy features are now provided using the new S3 dashboard, along with other impressive and long-awaited features, such as searching for buckets using keywords and the ability to copy bucket properties from an existing bucket while creating new buckets, as depicted in the following screenshot:

Amazon S3 transfer acceleration: This feature allows you to move large workloads across geographies into S3 at considerably faster speeds. It leverages Amazon CloudFront edge locations in conjunction with S3 to enable significantly faster data uploads, without any firewall rules to manage or upfront fees to pay.
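Once acceleration is enabled on a bucket, uploads simply target the bucket's accelerate endpoint instead of the regular regional endpoint. The sketch below illustrates the endpoint naming; the bucket name is a placeholder, and the boto3 opt-in is shown only as a commented hint:

```python
# Hedged sketch: S3 Transfer Acceleration is used by pointing uploads at the
# bucket's accelerate endpoint rather than the standard regional endpoint.
# The bucket name below is a placeholder.
bucket = "my-example-bucket"

standard_endpoint = f"{bucket}.s3.amazonaws.com"
accelerate_endpoint = f"{bucket}.s3-accelerate.amazonaws.com"

# With boto3 you would opt in via client config (not executed here):
# from botocore.config import Config
# import boto3
# s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
print(accelerate_endpoint)  # my-example-bucket.s3-accelerate.amazonaws.com
```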

Virtual Private Cloud

Similar to other services, Virtual Private Cloud (VPC) has seen quite a few functionalities added to it over the past years; a few important ones are highlighted here:

Support for IPv6: With the exponential growth of the IT industry as well as the internet, it was only a matter of time before VPC started supporting IPv6 too. Today, IPv6 is available across all AWS regions, and it even works with services such as EC2 and S3. Enabling IPv6 for your applications and instances is an extremely easy process: all you need to do is enable the IPv6 CIDR block option, as depicted in the VPC creation wizard:

Each IPv6 enabled VPC comes with its own /56 address prefix, whereas the individual subnets created in this VPC support a /64 CIDR block.
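The arithmetic behind those prefix sizes can be checked with Python's standard ipaddress module; the IPv6 prefix below is a made-up example, not one assigned by AWS:

```python
import ipaddress

# Hypothetical /56 prefix of the kind AWS assigns to an IPv6-enabled VPC.
vpc_block = ipaddress.ip_network("2600:1f16:abcd:de00::/56")

# Each subnet in the VPC uses a /64, so a /56 yields 2^(64-56) = 256 of them.
subnets = list(vpc_block.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[0])     # 2600:1f16:abcd:de00::/64
```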

DNS resolution for VPC peering: With DNS resolution enabled for your VPC peering, you can now resolve public DNS hostnames to private IP addresses when they are queried from any of your peered VPCs. This simplifies the DNS setup for your VPCs and enables the seamless extension of your network environments into the cloud.

VPC endpoints for DynamoDB: Yet another amazing feature added to VPCs this year is support for endpoints for your DynamoDB tables. Why is this so important all of a sudden? Well, for starters, you don't require internet gateways or NAT instances attached to your VPCs if you are leveraging endpoints for DynamoDB. This essentially saves costs and keeps the traffic between your application and the DB local to the AWS internal network, unlike previously, when the traffic from your app would have to traverse the internet in order to reach your DynamoDB instance. Secondly, endpoints for DynamoDB virtually eliminate the need to maintain complex firewall rules to secure your VPC. And thirdly, and most importantly, it's free!
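As a hedged sketch, these are the kinds of parameters you would pass to EC2's CreateVpcEndpoint API (via boto3) to add a DynamoDB gateway endpoint; the VPC and route table IDs below are placeholders, and the actual call is left commented out:

```python
# Hedged sketch: a DynamoDB gateway endpoint attaches to a VPC and one or
# more route tables. The resource IDs here are made up for illustration.
params = {
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "RouteTableIds": ["rtb-0123456789abcdef0"],
}

# With real credentials you would then run (not executed here):
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**params)
print(params["ServiceName"])
```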

CloudWatch

CloudWatch has undergone a lot of new and exciting changes and feature additions compared to what it originally provided as a service a few years back. Here's a quick look at some of its latest announcements:

CloudWatch events: One of the most anticipated and useful features added to CloudWatch is CloudWatch events! Events are a way for you to respond to changes in your AWS environment in near real time. This is made possible with the use of event rules that you configure, along with a corresponding set of actionable steps that must be performed when a particular event is triggered; for example, invoking a simple backup or clean-up script when an instance is powered off at the end of the day. You can, alternatively, schedule your event rules to be triggered at a particular time of the day, week, month, or even year! Now that's really awesome!
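To sketch the scheduling side, these are the parameters one might pass to CloudWatch Events' PutRule API via boto3 to fire a rule every evening; the rule name is hypothetical and the API call itself is left commented out:

```python
# Hedged sketch: a CloudWatch Events rule scheduled with a cron expression.
# CloudWatch cron fields are: minutes hours day-of-month month day-of-week year.
params = {
    "Name": "nightly-cleanup",                   # hypothetical rule name
    "ScheduleExpression": "cron(0 18 * * ? *)",  # every day at 18:00 UTC
    "State": "ENABLED",
}

# With real credentials you would then run (not executed here):
# import boto3
# boto3.client("events").put_rule(**params)
print(params["ScheduleExpression"])
```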

High-resolution custom metrics: We have all felt the need to monitor our applications and resources running on AWS in near real time; however, with the smallest supported metric resolution previously being one minute, this was always going to be a challenge. But not now! With the introduction of high-resolution custom metrics, you can now monitor your applications down to a 1-second resolution! The best part of all this is that there is no real difference between the configuration or use of a standard alarm and that of a high-resolution one. Both alarms can perform exactly the same functions; however, the latter reacts much faster than the former.
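The high-resolution behavior is selected per data point via the StorageResolution field of PutMetricData. A hedged boto3 sketch (namespace and metric name are placeholders; the call itself is commented out):

```python
# Hedged sketch: publishing a 1-second (high-resolution) custom metric.
# StorageResolution=1 marks the data point as high resolution; 60 is standard.
metric = {
    "Namespace": "MyApp",                # hypothetical namespace
    "MetricData": [{
        "MetricName": "RequestLatency",  # hypothetical metric name
        "Value": 42.0,
        "Unit": "Milliseconds",
        "StorageResolution": 1,
    }],
}

# With real credentials you would then run (not executed here):
# import boto3
# boto3.client("cloudwatch").put_metric_data(**metric)
print(metric["MetricData"][0]["StorageResolution"])  # 1
```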

CloudWatch dashboard widgets: A lot of users have had trouble adopting CloudWatch as their centralized monitoring solution due to its inability to create custom dashboards. But all that has now changed, as CloudWatch today supports the creation of highly customizable dashboards based on your application's needs. It also supports out-of-the-box widgets in the form of the number widget, which provides a view of the latest data point of the monitored metric, such as the number of EC2 instances being monitored, or the stacked graph, which provides a handy visualization of individual metrics and their impact in totality.

Elastic Load Balancer

One of the most significant and useful additions to ELB over the past year has been the introduction of the Application Load Balancer. Unlike its predecessor, the ELB, the Application Load Balancer is a strict Layer 7 (application) load balancer designed to support content-based routing and applications that run on containers as well. The ALB is also designed to provide additional visibility of the health of the target EC2 instances as well as the containers. Ideally, such ALBs would be used to dynamically balance loads across a fleet of containers running scalable web and mobile applications.

This is just the tip of the iceberg compared to the vast plethora of services and functionality that AWS has added to its services in just a span of one year! Let's quickly glance through the various services that we will be covering in this book.

Introduction of newer services

We will be exploring and learning things a bit differently by exploring a lot of the services and functionalities that work in conjunction with the core services:

EC2 Systems Manager: EC2 Systems Manager is a service that basically provides a lot of add-on features for managing your compute infrastructure. Each compute entity that's managed by EC2 Systems Manager is called a managed instance, and this can be either an EC2 instance or an on-premises machine! EC2 Systems Manager provides out-of-the-box capabilities to create and baseline patches for operating systems, automate the creation of AMIs, run configuration scripts, and much more!

Elastic Beanstalk: Beanstalk is a powerful yet simple service designed for developers to easily deploy and scale their web applications. At the moment, Beanstalk supports web applications developed using Java, .NET, PHP, Node.js, Python, Ruby, and Go. Developers simply design and upload their code to Beanstalk, which automatically takes care of the application's load balancing, auto-scaling, monitoring, and so on. At the time of writing, Elastic Beanstalk supports the deployment of your apps using either Docker containers or directly over EC2 instances, and the best part of using this service is that it's completely free! You only need to pay for the underlying AWS resources that you consume.

Elastic File System: The simplest way to describe Elastic File System, or EFS, is as an NFS share on steroids! EFS provides simple and highly scalable file storage as a service, designed to be used with your EC2 instances. You can have multiple EC2 instances attach themselves to a single EFS mount point, which can provide a common data store for your applications and workloads.
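Attaching an instance to EFS boils down to an NFSv4.1 mount against the file system's DNS name. The sketch below builds that mount command; the file system ID and region are placeholders:

```python
# Hedged sketch: an EFS mount target resolves to <fs-id>.efs.<region>.amazonaws.com,
# and instances attach to it with a standard NFSv4.1 mount. IDs are placeholders.
fs_id = "fs-12345678"
region = "us-east-1"
mount_target = f"{fs_id}.efs.{region}.amazonaws.com"

# The same command can be run on every instance that needs the shared store:
mount_cmd = f"sudo mount -t nfs4 -o nfsvers=4.1 {mount_target}:/ /mnt/efs"
print(mount_cmd)
```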

WAF and Shield: In this book, we will be exploring quite a few security and compliance services that provide an additional layer of security on top of your standard VPC. Two such services we will learn about are WAF and Shield. WAF, or Web Application Firewall, is designed to safeguard your applications against web exploits that could maliciously impact their availability and security. Using WAF, you can create custom rules that safeguard your web applications against common attack patterns, such as SQL injection, cross-site scripting, and so on.

Similar to WAF, Shield is also a managed service that provides security against DDoS attacks that target your website or web application:

CloudTrail and Config: CloudTrail is yet another service that we will learn about in the coming chapters. It is designed to log and monitor your AWS account and infrastructure activities. This service comes in really handy when you need to govern your AWS accounts against compliance requirements, audits, and standards, and take the necessary remedial actions. Config, on the other hand, provides a very similar set of features; however, it specializes in assessing and auditing the configurations of your AWS resources. The two services are used in tandem to provide compliance and governance, which helps with operational analysis, troubleshooting issues, and meeting security demands.

Cognito: Cognito is an awesome service that simplifies the building and creation of sign-up pages for your web and even mobile applications. You also get options to integrate social identity providers, such as Facebook, Twitter, and Amazon, as well as SAML-based identity solutions.

CodeCommit, CodeBuild, and CodeDeploy: AWS provides a really rich set of tools and services for developers, which are designed to deliver software rapidly and securely. At the core of this are three services that we will be learning about and exploring in this book, namely CodeCommit, CodeBuild, and CodeDeploy. As the names suggest, these services provide you with the ability to securely store and version control your application's source code, as well as to automatically build, test, and deploy your application to AWS or your on-premises environment.

SQS and SNS: SQS, or Simple Queue Service, is a fully managed queuing service provided by AWS, designed to decouple your microservices-based or distributed applications. You can use SQS to send, store, and receive messages between different applications at high volumes, without any infrastructure to manage.
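To make the decoupling concrete, here is a hedged boto3 sketch of the producer and consumer halves of an SQS workflow; the queue URL is a placeholder and the API calls are left commented out:

```python
# Hedged sketch: the send and receive halves of an SQS-decoupled workflow.
# The queue URL below is a placeholder, not a real queue.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"

send_params = {
    "QueueUrl": queue_url,
    "MessageBody": '{"order_id": 42, "status": "placed"}',
}
receive_params = {
    "QueueUrl": queue_url,
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,   # long polling reduces empty responses
}

# With real credentials (not executed here):
# import boto3
# sqs = boto3.client("sqs")
# sqs.send_message(**send_params)
# messages = sqs.receive_message(**receive_params)
print(receive_params["MaxNumberOfMessages"])  # 10
```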

SNS, or Simple Notification Service, is used primarily as a pub/sub messaging service or as a notification service. You can additionally use SNS to trigger custom events for other AWS services, such as EC2, S3, and CloudWatch.

EMR: Elastic MapReduce is managed Hadoop as a Service, providing a clustered platform on EC2 instances for running the Apache Hadoop and Apache Spark frameworks. EMR is highly useful for crunching massive amounts of data, as well as for transforming and moving large quantities of data from one AWS data source to another. EMR also provides a lot of flexibility and scalability for your workloads, with the ability to resize your cluster depending on the amount of data being processed at a given point in time. It is also designed to integrate effortlessly with other AWS services, such as S3 for storing the data, CloudWatch for monitoring your cluster, CloudTrail for auditing the requests made to your cluster, and so on.

Redshift: Redshift is a petabyte-scale, managed data warehousing service in the cloud. Similar to its counterpart, EMR, Redshift also works on the concept of clustered EC2 instances, on which you upload large datasets and run your analytical queries.

Data Pipeline: Data Pipeline is a managed service that provides end users with the ability to process and move datasets from one AWS service to another, as well as from on-premises datastores into AWS storage services such as RDS, S3, DynamoDB, and even EMR! You can schedule data migration jobs, track dependencies and errors, and even write and create preconditions and activities that define what actions Data Pipeline has to take against the data, such as running it through an EMR cluster, performing a SQL query over it, and so on.

IoT and Greengrass: AWS IoT and Greengrass are two really amazing services that are designed to collect and aggregate various device sensor data and stream that data into the AWS cloud for processing and analysis. AWS IoT provides a scalable and secure platform with which you can connect billions of sensor devices to the cloud or other AWS services, and leverage the same for gathering, processing, and analyzing data without having to worry about the underlying infrastructure or scalability needs. Greengrass is an extension of the AWS IoT platform and essentially provides a mechanism that allows you to run and manage executions of data pre-processing jobs directly on the sensor devices.

Managing EC2 with Systems Manager

EC2 instances have long been a core service provided by AWS and EC2 still continues to evolve with newer sets of features and instance types added every year. One such really awesome service added during AWS re:Invent 2016 was the EC2 Systems Manager!

In this chapter, we will be learning a lot about the EC2 Systems Manager and its associated sub-services; namely:

Run Command: A service that allows you to execute commands directly on an EC2 Systems Manager-enabled EC2 instance

State Manager: Allows you to specify a desired state for an EC2 Systems Manager-enabled EC2 instance

Patch management: Provides administrators with the ability to manage the deployment of patches over EC2 instances

Automations: Allows administrators to automate the deployment of certain tasks

Inventory: A service that collects and manages a list of software inventory from your managed EC2 instances

Sound exciting? Then what are we waiting for? Let's get started!

Introducing EC2 Systems Manager

As the name suggests, EC2 Systems Manager is a management service that provides administrators and end users with the ability to perform a rich set of tasks over their EC2 instance fleet, such as periodically patching instances with a predefined set of baseline patches, tracking an instance's configurational state and ensuring that it stays compliant with a state template, running scripts and commands over the entire fleet with a single utility, and much, much more! EC2 Systems Manager is also specifically designed to help administrators manage hybrid computing environments, all from the comfort and ease of the EC2 Systems Manager dashboard. This makes it super efficient and cost effective, as it doesn't require a specialized set of software or third-party services, which can cost a fortune, to manage your hybrid environments!

But how does AWS achieve all of this in the first place? Well, it all begins with the concept of managed instances. A managed instance is a special EC2 instance that is governed and managed by the EC2 Systems Manager service. Each managed instance contains a Systems Manager (SSM) agent that is responsible for communicating and configuring the instance state back to the Systems Manager utility. Windows Server 2003–2012 R2 AMIs automatically have the SSM agent installed. For Linux instances, however, the SSM agent is not installed by default. Let's quickly look at how to install this agent and set up our first Dev instance in AWS as a managed instance.

Getting started with the SSM agent

In this section, we are going to install and configure an SSM agent on a new Linux instance, which we shall call our Dev instance, and then verify that it is working by streaming the agent's log files to Amazon CloudWatch Logs. So let's get busy!

Configuring IAM Roles and policies for SSM

First, we need to create and configure IAM Roles for our EC2 Systems Manager to process and execute commands over our EC2 instances. You can either use the Systems Manager's managed policies or alternatively create your own custom roles with specific permissions. For this part, we will be creating a custom role and policy.

To get started, we first create a custom IAM policy for Systems Manager managed instances:

Log in to your AWS account and select the IAM option from the main dashboard, or alternatively, open the IAM console at https://console.aws.amazon.com/iam/.

Next, from the navigation pane, select Policies. This will bring up a list of existing policies currently provided and supported by AWS out of the box.

Type SSM in the Policy Filter to view the list of policies currently provided for SSM.

Select the AmazonEC2RoleforSSM policy and copy its contents to form a new policy document. Here is a snippet of the policy document for your reference:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeAssociation",
        ..... SSM actions list
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2messages:AcknowledgeMessage",
        "ec2messages:DeleteMessage",
        "ec2messages:FailMessage",
        "ec2messages:GetEndpoint",
        "ec2messages:GetMessages",
        "ec2messages:SendReply"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstanceStatus"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ds:CreateComputer",
        "ds:DescribeDirectories"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        ..... CloudWatch Log actions
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::amazon-ssm-packages-*"
    }
  ]
}

Once the policy is copied, go back to the Policies dashboard and click on the Create policy option. In the Create policy wizard, select the Create Your Own Policy option.

Provide a suitable Policy Name and paste the copied contents of the AmazonEC2RoleforSSM policy into the Policy Document section. You can now tweak the policy as per your requirements, but once completed, remember to select the Validate Policy option to ensure the policy is semantically correct.

Once completed, select Create Policy to complete the process.

With this step completed, you now have a custom IAM policy for System Manager managed instances.

The next important policy that we need to create is the custom IAM user policy for our Systems Manager. This policy will essentially scope out which particular user can view the System Manager documents as well as perform actions on the selected managed instances using the System Manager's APIs:

Once again, log in to your AWS IAM dashboard and select the Policies option, as performed in the earlier steps.

Type SSM again in the Policy Filter and select the AmazonSSMFullAccess policy. Copy its contents and create a custom SSM access policy by pasting the following snippet in the new policy's Policy Document section:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ds:CreateComputer",
        "ds:DescribeDirectories",
        "ec2:DescribeInstanceStatus",
        "logs:*",
        "ssm:*",
        "ec2messages:*"
      ],
      "Resource": "*"
    }
  ]
}

Remember to validate the policy before completing the creation process. You should now have two custom policies, as shown in the following screenshot:

With the policies created, we now simply create a new instance profile role, attach the full access policy to the new role, and finally verify the trust relationship between Systems Manager and the newly created role:

To create a new role, from the IAM management dashboard, select the Roles option from the navigation pane.

In the Create Role wizard, select the EC2 option from the AWS service role type, as shown in the following screenshot. Next, select the EC2 option as the use case for this activity and click on the Next: Permissions button to continue:

In the Attach permissions policy page, filter and select the ssm-managedInstances policy that we created at the beginning of this exercise. Click on Review once done.

Finally, provide a suitable Role name in the Review page and click on Create role to complete the procedure!

With the role in place, we now need to verify that the IAM policy for your instance profile role includes ssm.amazonaws.com as a trusted entity:

To verify this, select the newly created role from the IAM Roles page and click on the Trust relationships tab.

Here, choose the Edit Trust Relationship option and paste the following snippet in the policy editor, as shown. Remember to add both EC2 and SSM as the trusted services, not just one of them:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ssm.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

With the new trust policy in place, click on Update Trust Policy to complete the process. Congratulations!
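If you want to double-check a trust policy before pasting it into the console, a few lines of Python can confirm that both services are listed as principals (a purely local sanity check, not an AWS API call):

```python
import json

# The trust policy from the step above, embedded as a string for local checking.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {"Service": ["ec2.amazonaws.com", "ssm.amazonaws.com"]},
      "Action": "sts:AssumeRole"
    }
  ]
}
""")

services = set()
for statement in policy["Statement"]:
    svc = statement.get("Principal", {}).get("Service", [])
    services.update([svc] if isinstance(svc, str) else svc)

# Both EC2 and SSM must be trusted for managed instances to work.
print(sorted(services))  # ['ec2.amazonaws.com', 'ssm.amazonaws.com']
```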

You are almost done with configuring the Systems Manager! A final step remains, where we need to attach the second policy that we created (SSM full access) to one of our IAM users. In this case, I've attached the policy to one of my existing users in my AWS environment, however, you can always create a completely new user dedicated to the Systems Manager and assign it the SSM access policy as well.

With the policies out of the way, we can now proceed with the installation and configuration of the SSM agent on our simple Dev instance.

Installing the SSM agent

As discussed at the beginning of the chapter, the Systems Manager or the SSM agent is a vital piece of software that needs to be installed and configured on your EC2 instances in order for Systems Manager to manage it. At the time of writing, SSM agent is supported on the following sets of operating systems:

Windows:

Windows Server 2003 (including R2)

Windows Server 2008 (including R2)

Windows Server 2012 (including R2)

Windows Server 2016

Linux (64-bit and 32-bit):

Amazon Linux 2014.09, 2014.03 or later

Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS

Red Hat Enterprise Linux (RHEL) 6.5 or later

CentOS 6.3 or later

Linux (64-bit only):

Amazon Linux 2015.09, 2015.03 or later

Red Hat Enterprise Linux 7.x or later

CentOS 7.1 or later

SUSE Linux Enterprise Server 12 or higher

To install the agent on a brand new instance, such as the one we will create shortly, you simply need to ensure that the instance is provided with the necessary SSM IAM role that we created in the previous section, as well as to provide the following code snippet in the User data section of your instance's configuration:

#!/bin/bash
cd /tmp
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
sudo dpkg -i amazon-ssm-agent.deb
sudo start amazon-ssm-agent

The user data script varies from OS to OS. In my case, the script is intended to run on an Ubuntu Server 14.04 LTS (HVM) instance. You can check your SSM agent install script at http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html#sysman-install-startup-linux.

Once the instance is up and running, SSH into the instance and verify whether your SSM agent is up and running or not using the following command. Remember, the following command will also vary based on the operating system that you select at launch time:

# sudo status amazon-ssm-agent

You should see the agent running, as shown in the following screenshot:

You can, optionally, even install the agent on an existing running EC2 instance by completing the following set of commands.

For an instance running on the Ubuntu 16.04 LTS operating system, we first create a temporary directory to house the SSM agent installer:

# mkdir /tmp/ssm

Next, change into that directory and download the operating-system-specific SSM agent installer using the wget utility:

# cd /tmp/ssm
# wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb

Finally, execute the installer using the following command:

# sudo dpkg -i amazon-ssm-agent.deb

You can additionally verify the agent's execution by tailing either of these log files as well:

# sudo tail -f /var/log/amazon/ssm/amazon-ssm-agent.log
# sudo tail -f /var/log/amazon/ssm/errors.log

Configuring the SSM agent to stream logs to CloudWatch

This is a particularly useful option provided by the SSM agent, especially when you don't want to log in to each and every instance to troubleshoot issues. Integrating the SSM agent's logs with CloudWatch enables you to have all your logs captured and analyzed in one central location, which not only ends up saving a lot of time, but also brings additional benefits, such as the ability to configure alarms, view various metrics using the CloudWatch dashboard, and retain the logs for a much longer duration.

But before we get to configuring the agent, we first need to create a separate log group within CloudWatch that will stream the agent logs from individual instances here:

To do so, from the AWS Management Console, select the CloudWatch option, or alternatively, open your CloudWatch dashboard at https://console.aws.amazon.com/cloudwatch/.

Next, select the Logs option from the navigation pane. Here, click on Create log group and provide a suitable name for your log group, as shown in the following screenshot:

Once completed, SSH back into your Dev instance and run the following command:

# sudo cp /etc/amazon/ssm/seelog.xml.template /etc/amazon/ssm/seelog.xml

Next, using your favorite editor, open the newly copied file and paste the following content into it. Remember to swap out the <CLOUDWATCH_LOG_GROUP_NAME> field with the name of your own log group:

# sudo vi /etc/amazon/ssm/seelog.xml

<seelog minlevel="info" critmsgcount="500" maxinterval="100000000"
        mininterval="2000000" type="adaptive">
  <exceptions>
    <exception minlevel="error" filepattern="test*"/>
  </exceptions>
  <outputs formatid="fmtinfo">
    <console formatid="fmtinfo"/>
    <rollingfile type="size" maxrolls="5" maxsize="30000000"
                 filename="{{LOCALAPPDATA}}\Amazon\SSM\Logs\amazon-ssm-agent.log"/>
    <filter formatid="fmterror" levels="error,critical">
      <rollingfile type="size" maxrolls="5" maxsize="10000000"
                   filename="{{LOCALAPPDATA}}\Amazon\SSM\Logs\errors.log"/>
    </filter>
    <custom name="cloudwatch_receiver" formatid="fmtdebug"
            data-log-group="<CLOUDWATCH_LOG_GROUP_NAME>"/>
  </outputs>
</seelog>

With the changes made, save and exit the editor. Now have a look at your newly created log group using the CloudWatch dashboard; you should see your SSM agent's error logs, if any, displayed there for easy troubleshooting.

With this step completed, we have now successfully installed and configured our EC2 instance as a Managed Instance in Systems Manager. To verify whether your instance has indeed been added, select the Managed Instance option provided under the Systems Manager Shared Resources section from the navigation pane of your EC2 dashboard; you should see your instance listed, as shown here:

In the next section, we will deep dive into the various features provided as a part of the Systems Manager, starting off with one of the most widely used: Run Command!

Introducing Run Command

Run Command is an awesome feature of Systems Manager that basically allows you to execute remote commands over your managed fleet of EC2 instances. You can perform a vast variety of automated administrative tasks, such as installing software or patching your operating systems, executing shell commands, managing local groups and users, and much more! But that's not all! The best part of using this feature is that it allows you to have a seamless experience when executing scripts, even over your on-premises Windows and Linux operating systems, whether they are running on VMware ESXi, Microsoft Hyper-V, or any other platform. And the cost of all this? Well, it's absolutely free! You only pay for the EC2 instances and other AWS resources that you create, and nothing more!
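To give a feel for how a Run Command invocation is shaped, here is a hedged boto3 sketch using SSM's SendCommand API with the AWS-RunShellScript document; the instance ID is a placeholder and the API call itself is left commented out:

```python
# Hedged sketch: parameters for SSM's SendCommand API, targeting a managed
# instance with the AWS-RunShellScript document. The instance ID is made up.
params = {
    "InstanceIds": ["i-0123456789abcdef0"],
    "DocumentName": "AWS-RunShellScript",
    "Parameters": {"commands": ["uptime", "df -h /"]},
}

# With real credentials you would then run (not executed here):
# import boto3
# response = boto3.client("ssm").send_command(**params)
# command_id = response["Command"]["CommandId"]
print(params["DocumentName"])  # AWS-RunShellScript
```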

Here's a brief list of a few commonly predefined commands provided by Run Command along with a short description:

AWS-RunShellScript: Executes shell scripts on remote Linux instances

AWS-UpdateSSMAgent: Used to update the Amazon SSM agent

AWS-JoinDirectoryServiceDomain: Used to join an instance to an AWS Directory

AWS-RunPowerShellScript