Create dynamic cloud-based websites with Amazon Web Services and this friendly guide! As the largest cloud computing platform in the world, Amazon Web Services (AWS) provides one of the most popular web services options available. This easy-to-understand guide is the perfect introduction to the Amazon Web Services platform and all it can do for you. You'll learn about the Amazon Web Services tool set; how different web services (including S3, Amazon EC2, Amazon Flexible Payments, and Glacier) work; and how you can implement AWS in your organization.
* Explains how to use Amazon Web Services to store objects, take payments, manage large quantities of data, send e-mails, deploy push notifications, and more from your website
* Details how AWS can reduce costs, improve efficiency, increase productivity, and cut down on expensive hardware investments - and administrative headaches - in your organization
* Includes practical examples and helpful step-by-step lists to help you experiment with different AWS features and create a robust website that meets your needs
Amazon Web Services For Dummies is exactly what you need to get your head in the cloud with Amazon Web Services!
Page count: 578
Year of publication: 2013
Cover
Title Page
Copyright
Table of Contents
Introduction
About This Book
Using This Book
Foolish Assumptions
Icons Used in This Book
Beyond the Book
Part I:
Getting Started with AWS
Chapter 1: Amazon Web Services Philosophy and Design
Cloud Computing Defined
Understanding the Amazon Business Philosophy
The AWS Infrastructure
The AWS Ecosystem
Counting Up the Network Effects Benefit
AWS versus Other Cloud Providers
Getting Ready for the 21st Century
Chapter 2: Introducing the AWS API
APIs: Understanding the Basics
Benefiting from Web Services
An Overview of the AWS API
AWS API Security
Chapter 3: Introducing the AWS Management Console
Setting Up Your Amazon Web Services Account
Accessing Your First AWS Service
Loading Data into S3 Buckets
S3 URL Naming Conventions
Last Words on the AWS Management Console
Part II:
Diving into AWS Offerings
Chapter 4: Setting Up AWS Storage
Differentiating the Amazon Storage Options
Storing Items in the Simple Storage Service (S3) Bucket
Managing Volumes of Information with Elastic Block Storage (EBS)
Managing Archive Material with the Glacier Storage Service
Scaling Key-Value Data with DynamoDB
Selecting an AWS Storage Service
Chapter 5: Stretching Out with Elastic Compute Cloud
Introducing EC2
Seeing EC2’s Unique Nature
Working with an EC2 Example
Chapter 6: AWS Networking
Brushing Up on Networking Basics
AWS Network IP Addressing
AWS IP Address Mapping
AWS Direct Connect
High-Performance AWS Networking
AWS Elastic IP Addresses
AWS Instance Metadata
Instance IP Address Communication
Chapter 7: AWS Security
Clouds Can Have Boundaries, Too
The Deperimeterization of Security
AWS Security Groups
Using Security Groups to Partition Applications
Security Group Best Practices
AWS Virtual Private Cloud (VPC)
AWS Application Security
Chapter 8: Additional Core AWS Services
Understanding the Other AWS Services
CloudFront
Relational Database Service (RDS)
ElastiCache
Integrating Additional AWS Services into Your Application
Choosing the Right Additional AWS Service Integration Approach
Dealing with AWS Lock-in
Part III:
Using AWS
Chapter 9: AWS Platform Services
Searching with CloudSearch
Managing Video Conversions with Elastic Transcoder
Simple Queue Service
Simple Notification Service
Simple E-Mail Service
Simple Workflow Service
Dealing with Big Data with the Help of Elastic MapReduce
Redshift
Chapter 10: AWS Management Services
Managing Your AWS Applications
Which AWS Management Service Should I Use?
Chapter 11: Managing AWS Costs
AWS Costs — It’s Complicated
Taking Advantage of Cost and Utilization Tracking
Managing Your AWS Costs
Chapter 12: Bringing It All Together: An AWS Application
Putting the Pieces Together
Improving Application Robustness with Geographical Redundancy
Part IV:
The Part of Tens
Chapter 13: Ten Reasons to Use Amazon Web Services
AWS Provides IT Agility
AWS Provides Business Agility
AWS Offers a Rich Services Ecosystem
AWS Simplifies IT Operations
AWS Spans the Globe
AWS Is the Leading Cloud-Computing Service Provider
AWS Enables Innovation
AWS Is Cost Effective
AWS Aligns Your Organization with the Future of Technology
AWS Is Good for Your Career
Chapter 14: Ten Design Principles for Cloud Applications
Everything Fails All the Time
Redundancy Protects Against Resource Failure
Geographic Distribution Protects Against Infrastructure Failure
Monitoring Prevents Problems
Utilization Review Prevents Waste
Application Management Automates Administration
Security Design Prevents Breaches and Data Loss
Encryption Ensures Privacy
Tier-Based Design Increases Efficiency
Good Application Architecture Prevents Technical Debt
About the Author
Dedication
Acknowledgments
Publisher’s Acknowledgments
Cheat Sheet
Connect with Dummies
End User License Agreement
Chapter 1: Amazon Web Services Philosophy and Design
Figure 1-1: Counting S3 objects over the years.
Chapter 2: Introducing the AWS API
Figure 2-1: The AWS interface tools.
Chapter 3: Introducing the AWS Management Console
Figure 3-1: The main AWS landing page.
Figure 3-2: The initial account-creation page.
Figure 3-3: Creating your login credentials.
Figure 3-4: The contact information and customer agreement page.
Figure 3-5: Payment information.
Figure 3-6: Verifying your identity using the telephone.
Figure 3-7: Identity verification complete.
Figure 3-8: The AWS Management Console landing page.
Figure 3-9: The S3 home page.
Figure 3-10: Name your bucket.
Figure 3-11: The S3 management page, with your first bucket now listed.
Figure 3-12: The Upload Files dialog box.
Figure 3-13: Your bucket now shows the file you just uploaded.
Figure 3-14: Adding permissions to an S3 object.
Figure 3-15: A picture of our cat Star, snoozing in a chair, straight from S3.
Chapter 4: Setting Up AWS Storage
Figure 4-1: The AWS main landing page.
Figure 4-2: The EBS volume page.
Figure 4-3: The Create Volume Wizard.
Figure 4-4: The created volume.
Figure 4-5: The Management Console home page, with Glacier highlighted.
Figure 4-6: Creating the Glacier vault.
Figure 4-7: The Glacier Vault Creation Wizard.
Figure 4-8: The Glacier Vault, at the ready.
Figure 4-9: The AWS Management Console landing page.
Figure 4-10: An invitation to create the first DynamoDB.
Figure 4-11: Panel 1 in the DynamoDB Create Wizard.
Figure 4-12: Defining the DynamoDB read and write units.
Figure 4-13: Creating the DynamoDB table.
Figure 4-14: The DynamoDB table is ready.
Chapter 5: Stretching Out with Elastic Compute Cloud
Figure 5-1: The EC2 Amazon Machine Image panel.
Figure 5-2: The EC2 dashboard.
Figure 5-3: The AMI selection screen.
Figure 5-4: The Request Instances Wizard details screen.
Figure 5-5: The advanced instance options.
Figure 5-6: The EBS volume screen.
Figure 5-7: The Tag screen.
Figure 5-8: The Key Pair screen.
Figure 5-9: The Security Groups screen.
Figure 5-10: The Summary screen.
Figure 5-11: The Conclusion screen.
Figure 5-12: The EC2 instance page.
Figure 5-13: The Terminate option.
Chapter 6: AWS Networking
Figure 6-1: The public DNS and private IP address for a single instance.
Figure 6-2: AWS IP addresses and network traffic.
Figure 6-3: Intraregional and interregional AWS traffic.
Figure 6-4: Instance activity flow in a configuration management mechanism application.
Chapter 7: AWS Security
Figure 7-1: The AWS computing environment architecture.
Figure 7-2: Setting security group rules.
Figure 7-3: The port range field.
Figure 7-4: Using security groups to partition applications.
Figure 7-5: The AWS virtual private cloud.
Figure 7-6: A more complex VPC configuration.
Chapter 10: AWS Management Services
Figure 10-1: Enabling CloudWatch for an EC2 instance.
Figure 10-2: Creating a CloudWatch alarm.
Figure 10-3: EC2 instance CloudWatch alarms.
Figure 10-4: The AWS Management Console CloudWatch dashboard.
Figure 10-5: Creating the Tomcat environment.
Figure 10-6: The Elastic Beanstalk operating environment.
Figure 10-7: Elastic Beanstalk application monitoring.
Figure 10-8: The Elastic Beanstalk application.
Figure 10-9: The CloudFormation main page.
Figure 10-10: The CloudFormation template selection panel.
Figure 10-11: Setting the CloudFormation template parameters.
Figure 10-12: The stack’s Summary panel.
Figure 10-13: The Resources tab in the Stack Resources panel.
Figure 10-14: The Running Stack landing page.
Figure 10-15: A template snippet.
Figure 10-16: The OpsWorks landing page.
Figure 10-17: The stack configuration page.
Figure 10-18: Configuring the stack itself.
Figure 10-19: Creating a stack layer.
Figure 10-20: Creating a stack instance.
Figure 10-21: The stack instance as it’s running.
Figure 10-22: Adding the application code.
Figure 10-23: Deploying the application code.
Figure 10-24: A successful application deployment.
Figure 10-25: All systems go.
Chapter 11: Managing AWS Costs
Figure 11-1: Comparing the costs of the AWS pricing models.
Chapter 12: Bringing It All Together: An AWS Application
Figure 12-1: Searching for a Bitnami WordPress AMI.
Figure 12-2: The 64-bit 3.2.1-5 Ubuntu WordPress AMI.
Figure 12-3: The AWS Launch Wizard.
Figure 12-4: Setting advanced instance options — Kernel, User Data, and IAM Roles.
Figure 12-5: Where to configure additional EBS volumes.
Figure 12-6: Entering tags, if you so desire.
Figure 12-7: Creating an SSH key pair.
Figure 12-8: Specifying your security group.
Figure 12-9: Your EC2 instance is launching.
Figure 12-10: The running EC2 instance.
Figure 12-11: The initial landing page in the Bitnami WordPress application.
Figure 12-12: The WordPress landing page.
Figure 12-13: The WordPress login panel.
Figure 12-14: The WordPress administrative interface.
Figure 12-15: The ssh connection command.
Figure 12-16: The Bitnami Instance terminal splash screen.
Figure 12-17: The WordPress configuration file database section.
Figure 12-18: The MySQL database dump command.
Figure 12-19: The main RDS page.
Figure 12-20: The first panel of the RDS wizard.
Figure 12-21: The DB Instance Details panel in the wizard.
Figure 12-22: Additional configuration information in the RDS wizard.
Figure 12-23: The RDS wizard Management Options screen.
Figure 12-24: The RDS Wizard Confirmation screen.
Figure 12-25: The final panel in the RDS wizard.
Figure 12-26: Your RDS DB instance is available.
Figure 12-27: RDS security groups.
Figure 12-28: Creating the RDS database instance WordPress database.
Figure 12-29: Modifying the WordPress configuration file.
Figure 12-30: The vertically partitioned WordPress application.
Figure 12-31: Starting the AMI creation process.
Figure 12-32: Configuring your AMI.
Figure 12-33: The AMI creation confirmation panel.
Figure 12-34: The new, private AMI.
Figure 12-35: The RDS Instance Actions options.
Figure 12-36: The RDS Read Replica panel in the wizard.
Figure 12-37: RDS Read Replica up and running.
Figure 12-38: Multiple WordPress instances.
Figure 12-39: The Elastic Load Balancer main page.
Figure 12-40: Naming your Elastic Load Balancer.
Figure 12-41: Setting the health-check criteria.
Figure 12-42: Manually adding instances to your Elastic Load Balancer.
Figure 12-43: The wizard’s Summary panel.
Figure 12-44: The operational Elastic Load Balancer.
Figure 12-45: The Bitnami landing page.
Chapter 1: Amazon Web Services Philosophy and Design
Table 1-1 Total AWS Servers
Chapter 5: Stretching Out with Elastic Compute Cloud
Table 5-1 Size Range of AWS Instance Resources
Chapter 11: Managing AWS Costs
Table 11-1 AWS Annual Expenditure Survey Pool
Table 11-2 Distribution of Expenditure by AWS Service
Table 11-3 Use of the EC2 Pricing Model
Bernard Golden
Amazon Web Services™ For Dummies®
Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com
Copyright © 2013 by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. Amazon Web Services is a trademark of Amazon Technologies, Inc. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit www.wiley.com/techsupport.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2013942773
ISBN 978-1-118-57183-5 (pbk); ISBN 978-1-118-65198-8 (ebk); ISBN 978-1-118-65226-8 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
This is a great resource for anyone considering the jump into cloud computing. Golden accurately explores the roster of AWS services while clearly illustrating ways for developers to make applications easier to build and manage. He manages to address both business requirements and technical content in a way that will appeal to almost any audience.
— Jeff Barr, Sr. Technology Evangelist, Amazon Web Services
One of the challenges Bitnami users face is understanding the breadth and power of AWS. Amazon Web Services For Dummies helps our users build a great foundation of AWS skills. Anyone who is new to AWS and wants to be successful should start with this book.
— Erica Brescia, COO and co-founder of Bitnami
Netflix is all-in on AWS. We believe it is the richest, most scalable, most innovative cloud platform in the industry. Building AWS skills is critical for careers today — and Amazon Web Services For Dummies is the best resource I know of to learn AWS from the ground up. Buy this book to learn what your future will look like.
— Adrian Cockcroft, Netflix Cloud Architect
This book is designed with one purpose in mind: to make it easy for you, the reader, to understand and begin using Amazon Web Services (AWS) — an emerging technology platform that is profoundly disrupting the technology industry and enabling hundreds of thousands of individuals, businesses, and nonprofit organizations to gain easy access to on-demand computing resources.
In a sense, this book is an extension of my earlier book Virtualization For Dummies (Wiley Publishing), which has a chapter describing “The Future of Virtualization.” In my research to identify which direction virtualization would take, I came across Amazon Web Services, a then-new offering that was referred to by Amazon employees as Infrastructure as a Service. To indicate how briefly this new type of computing has been available: the term cloud computing was still more than a year away when Virtualization For Dummies was published.
As I spoke to Amazon representatives about the company’s new offering, I experienced the same reaction I had when first exposed to open source software — a visceral response that made me ask out loud: “If this service is available to users, who will stick with the old way of doing things?”
Nothing in the subsequent years has changed my mind — in fact, that experience strengthens my conviction that cloud computing in general, and Amazon Web Services in particular, will transform the way applications are designed and built. I’ve worked with people from many companies who have resigned themselves to the length of the usual IT resource provisioning process — taking six weeks or more to obtain a virtual machine. When I demonstrate the ability of AWS to provision an instance (Amazon’s term for a virtual machine) in ten minutes or less, these people regard what they’re seeing with disbelief, staggered that the conventional (lengthy) provisioning process isn’t somehow set in stone.
Amazon continues to challenge the incumbent community of technology vendors, releasing new services and cutting prices at an unrelenting pace. I fully expect that a decade from now, AWS will be one of the top two or three global technology vendors, and that a number of today’s giants will be gone, driven out of business, or into forced mergers by their inability to compete on Amazon’s terms.
But (there’s always a but, isn’t there?) how to get started is a challenge that many people face when they consider using AWS. AWS documentation is quite thorough, but you won’t find a general guide there that helps beginners start from scratch and develop new skills.
For this reason, I proposed this book to the publisher. I’ve heard from many people who are excited about using AWS but frustrated about how to learn about and use AWS. The Powers That Be at Wiley and I agreed that an introductory book about AWS that helps newbies begin using it productively would be extremely useful — and so we set to work to create the book that you now hold in your hands. I hope that you’ll find it a useful and helpful roadmap for your AWS journey.
This book contains a mix of text, URLs, and terminal commands that you can execute. Please note these stylistic tidbits:
Text that you type just as it appears in the book is in bold. The exception is when you’re working through a step list: Because each step is bold, the text to type is not in bold.
Web addresses and programming code appear in monofont type. If you’re reading a digital version of this book on a device connected to the Internet, you can click the web address to visit that website, such as this one: www.dummies.com.
This book is designed to address a range of readers. Part I is an overview of AWS and an introduction to how the service works. It’s appropriate for executives, project managers, and IT managers wanting to gain a basic understanding of the service so that they have a context for the benefits their organization can realize by using AWS. No particular technical background is assumed or necessary in Part I.
If you plan to work with AWS in a hands-on manner, Parts II and III provide a comprehensive review of all AWS offerings. I devote a full chapter to the use of the AWS technology, with a set of exercises that begin with a simple example and progressively build into a more complex application that leverages a number of AWS products. A technical background is necessary to comprehend Parts II and III; however, none of the information or exercises is particularly difficult from a technology perspective.
The Tip icon marks tips (duh!) and shortcuts that you can use to make using Amazon Web Services easier.
Remember icons mark information that’s especially important to know. To siphon off the most important information in each chapter, just skim these icons.
The Technical Stuff icon marks information of a highly technical nature that you can normally skip over.
The Warning icon tells you to watch out! It marks important information that may save you headaches.
The technology industry continues to invent and evolve rapidly — and that goes double for cloud computing. It’s important to have up-to-the-minute information on important new technology trends, and we’re committed to providing new information as AWS evolves over time.
Here are three places you can look for information and help outside of this book:
Cheat Sheet: You can find the Cheat Sheet for this book at www.dummies.com/cheatsheet/amazonwebservices. It describes the family of AWS services and provides guidelines for using them. Given how complex AWS is turning out to be, a general set of recommendations is useful indeed!
Dummies.com online articles: Be sure to check out www.dummies.com/extras/amazonwebservices for additional online content dealing with AWS. Not everything I wanted to say could fit within the pages of this book, so I parceled out some content for the World Wide Web.
Updates: Amazon Web Services continues to evolve rapidly. Amazon rolls out new services extremely quickly. I'll post updates about new AWS services at www.bernardgolden.com. Look there to learn the latest about AWS.
Unlike a novel, which requires you to begin at the beginning and carry on methodically throughout the book, Amazon Web Services For Dummies is designed to support what I like to call “random access” — if you hear about a particular AWS product and want to find out more, well, dig right in to that section of the book. If you want to understand the phenomenon of AWS, read the first part and then pick and choose among other areas that seem intriguing. This book supports your learning pattern and imposes no “official” reading approach. Dive in anywhere that makes sense to you.
Visit www.dummies.com for great Dummies content online.
See how Amazon designed Amazon Web Services from the beginning to be extremely scalable, modular in design, and highly robust.
Find out how AWS reflects Amazon’s unique approach to operating its business.
Get an introduction to AWS, its business and technology underpinnings, and even get a small taste of hands-on use.
Figuring out the cloud
Watching Amazon grow from retailer to the world’s first cloud provider
Understanding the foundation of Amazon Web Services
Introducing the Amazon Web Services ecosystem
Seeing how the network effect helps you
Comparing Amazon Web Services to other cloud computing providers
You may be forgiven if you’re puzzled about how Amazon, which started out as an online bookstore, has become the leading cloud computing provider. This chapter solves that mystery by discussing the circumstances that led Amazon into the cloud computing services arena and why Amazon Web Services, far from being an oddly different offering from a retailer, is a logical outgrowth of Amazon’s business.
This chapter also compares Amazon’s cloud offering to other competitors in the market and explains how its approach differs. As part of this comparison, I present some statistics on the size and growth of Amazon’s offering, while describing why it’s difficult to get a handle on its exact size.
The chapter concludes with a brief discussion about the Amazon Web Services ecosystem and why it is far richer than what Amazon itself provides — and why it offers more value for users of Amazon’s cloud service.
But before I reveal all the answers to the Amazon mystery, I answer an even more fundamental question: What is all this cloud computing stuff, anyway?
I believe that skill is built on a foundation of knowledge. Anyone who wants to work with Amazon Web Services (AWS, from now on) should have a firm understanding of cloud computing — what it is and what it provides.
As a general overview, cloud computing refers to the delivery of computing services from a remote location over a network. The National Institute of Standards and Technology (NIST), a U.S. government agency, has a definition of cloud computing that is generally considered the gold standard. Rather than trying to create my own definition, I always defer to NIST’s definition. The following information is drawn directly from it.
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This cloud model is composed of five essential characteristics:
On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, automatically as needed without requiring human interaction with each service provider.
Broad network access: Capabilities are available over the network and accessed via standard mechanisms that promote use by heterogeneous thin or thick client platforms (such as mobile phones, tablets, laptops, and workstations).
Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There’s a sense of so-called location independence, in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (by country, state, or data center, for example). Examples of resources are storage, processing, memory, and network bandwidth.
Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at a level of abstraction that’s appropriate to the type of service (storage, processing, bandwidth, or active user accounts, for example). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Cloud computing is commonly characterized as providing three types of functionality, referred to as IaaS, PaaS, and SaaS, where aaS is shorthand for “as a service” and service implies that the functionality isn’t local to the user but rather originates elsewhere (in a remote location accessed via a network). The letters I, P, and S in the acronyms refer to different types of functionality, as the following list makes clear:
Infrastructure as a Service (IaaS): Offers users the basic building blocks of computing: processing, network connectivity, and storage. (Of course, you also need other capabilities in order to fully support IaaS functionality — such as user accounts, usage tracking, and security.) You would use an IaaS cloud provider if you want to build an application from scratch and need access to fairly low-level functionality within the operating system.
Platform as a Service (PaaS): Instead of offering low-level functions within the operating system, offers higher-level programming frameworks that a developer interacts with to obtain computing services. For example, rather than open a file and write a collection of bits to it, in a PaaS environment the developer simply calls a function and then provides the function with the collection of bits. The PaaS framework then handles the grunt work, such as opening a file, writing the bits to it, and ensuring that the bits have been successfully received by the file system. The PaaS framework provider takes care of backing up the data and managing the collection of backups, for example, thus relieving the user of having to complete further burdensome administrative tasks.
Software as a Service (SaaS): Has clambered to an even higher rung on the evolutionary ladder than PaaS. With SaaS, all application functionality is delivered over a network in a pretty package. The user need do nothing more than use the application; the SaaS provider deals with the hassle associated with creating and operating the application, segregating user data, providing security for each user as well as the overall SaaS environment, and handling a myriad of other details.
As with every model, this division into I, P, and S provides a certain explanatory leverage and seeks to make neat and clean an element that in real life can be rather complicated. In the case of IPS, the model is presented as though the types are cleanly defined, though they no longer are. Many cloud providers offer services of more than one type. Amazon, in particular, has begun to provide many platform-like services as it has built out its offerings, and has even ventured into a few full-blown application services that you’d associate with SaaS. You could say that Amazon provides all three types of cloud computing.
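The file-writing contrast in the PaaS description above can be sketched in a few lines of Python. Note that platform_save and platform_store are made-up stand-ins for a PaaS framework call, not any real AWS API:

```python
platform_store = {}  # toy stand-in for a PaaS framework's storage service


def platform_save(name: str, data: bytes) -> None:
    """One call: the 'framework' handles the open/write/verify grunt work."""
    platform_store[name] = data


payload = b"a collection of bits"

# IaaS-style: the developer drives the low-level steps (open, write) directly.
with open("report.bin", "wb") as f:
    f.write(payload)

# PaaS-style: hand the bits to a single framework call and walk away.
platform_save("report.bin", payload)
```

The difference is who owns the mechanics: under IaaS the developer performs each low-level step, while under PaaS the framework absorbs those steps behind one call.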
If you find the mix of I, P, and S in the preceding section confusing, wait ’til you hear about the whole private-versus-public cloud computing distinction. Note the sequence of events:
as the first cloud computing provider, offers
public cloud computing
— anyone can use it.
IT organizations, when contemplating this new Amazon Web Services creature, asked why they couldn’t create and offer a service like AWS to their own users, hosted in their own data centers. This on-premises version became known as private cloud computing.
Noting the trend, several hosting providers thought they could offer their IT customers a segregated part of their data centers and let customers build clouds there. This concept can also be considered private cloud computing because it’s dedicated to one user. On the other hand, because the data to and from this private cloud runs over a shared network, is the cloud truly private?
Finally, after one bright bulb noted that companies may not choose only public or private, the term hybrid was coined to refer to companies using both private and public cloud environments.
As you go further on your journey in the cloud, you’ll likely witness vociferous discussions devoted to which of these particular cloud environments is the better option. My own position is that no matter where you stand on the private/public/hybrid issue, public cloud computing will undoubtedly become a significant part of every company’s IT environment. Moreover, Amazon will almost certainly be the largest provider of public cloud computing, so it makes sense to plan for a future that includes AWS. (Reading this book is part of that planning effort, so you get a gold star for already being well on your way!)
If you want to drill down further into cloud computing definitions, check out NIST's full description at http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf. The U.S. federal government has been an early adopter of, and hard charger in, cloud adoption, and NIST has been assigned to create this (excellent) government-wide cloud computing resource.
Amazon Web Services was officially revealed to the world on March 13, 2006. On that day, AWS offered the Simple Storage Service, its first service. (As you may imagine, Simple Storage Service was soon shortened to S3.) The idea behind S3 was simple: It offered object storage over the web, a setup where anyone could put an object — essentially, any bunch of bytes — into S3. Those bytes may comprise a digital photo or a file backup or a software package or a video or audio recording or a spreadsheet file or — well, you get the idea.
S3 was relatively limited when it first started out. Though objects could, admittedly, be written to or read from anywhere, they could be stored in only one region: the United States. Moreover, objects could be no larger than 5 gigabytes — not tiny by any means, but certainly smaller than many files that people may want to store in S3. The actions available for objects were also quite limited: You could write, read, and delete them, and that was it.
Over its first six years, S3 grew in all dimensions. The service is now offered throughout the world in a number of different regions. Objects can now be as large as 5 terabytes. S3 now also offers many more capabilities regarding objects. An object can now have a termination date, for example: You can set a date and time after which an object is no longer available for access. (This capability may be useful if you want to make a video available for viewing for only a certain period, such as the next two weeks.) S3 can now also be used to host websites — in other words, individual pages can be stored as objects, and your domain name (say, www.example.com) can point to S3, which serves up the pages.
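To make the object model concrete, here’s a toy, in-memory sketch of S3’s three original operations — write, read, and delete — plus the later expiration (termination date) capability. This is illustration only, not AWS code; the class and the keys are invented for the example.

```python
import time

class ToyObjectStore:
    """A toy, in-memory sketch of S3's object model (not AWS code)."""

    def __init__(self):
        self._objects = {}  # key -> (bytes, optional expiry timestamp)

    def put(self, key, data, expires_at=None):
        """Store any bunch of bytes under a key."""
        self._objects[key] = (bytes(data), expires_at)

    def get(self, key):
        """Read the bytes back -- unless the object has expired."""
        data, expires_at = self._objects[key]
        if expires_at is not None and time.time() >= expires_at:
            raise KeyError(f"{key} has passed its termination date")
        return data

    def delete(self, key):
        del self._objects[key]

store = ToyObjectStore()
store.put("photos/cat.jpg", b"...image bytes...")
print(store.get("photos/cat.jpg"))

# Make a video viewable for two weeks only, then unavailable:
store.put("videos/launch.mp4", b"...", expires_at=time.time() + 14 * 24 * 3600)
```

The real S3 does the same conceptual job — key in, bytes out — but across data centers, with the expiration handled by lifecycle rules on the service side.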
S3 did not remain the lone AWS service for long. Just a few months after it was launched, Amazon began offering Simple Queue Service (SQS), which provides a way to pass messages between different programs. SQS can accept or deliver messages within the AWS environment or outside the environment to other programs (your web browser, for example) and can be used to build highly scalable distributed applications.
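The message-queue pattern that SQS provides is easy to sketch with Python’s standard library: one program drops a message in a queue, and another picks it up whenever it’s ready, so the two never have to talk to each other directly. (This toy queue works only within a single process; SQS does the same job between programs on different machines, across networks.)

```python
from queue import Queue

# One side drops a message in the queue; the other retrieves it later.
work_queue = Queue()

# Producer side: a web front end hands off a task...
work_queue.put({"task": "resize-image", "key": "photos/cat.jpg"})

# ...and the consumer side, a worker program, retrieves it when ready.
message = work_queue.get()
print("worker received:", message)
```

Because producers and consumers are decoupled this way, you can add more workers to drain a busy queue — which is exactly what makes the pattern so useful for scalable distributed applications.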
Later in 2006 came Elastic Compute Cloud (known affectionately as EC2). As the AWS computing service, EC2 offers computing capacity on demand, with immediate availability and no set commitment to length of use.
Don’t worry if this description of AWS seems overwhelming at first — in the rest of this book, you can find out all about the various pieces of AWS, how they work, and how you can use them to address your computing requirements. This chapter provides a framework in which to understand the genesis of AWS, with details to follow. The important thing for you to understand is how AWS got started, how big a change it represents in the way computing is done, and why it’s important to your future.
The overall pattern of AWS has been to steadily add services and then quickly improve each one over time. AWS is now composed of more than 25 different services, many offered with different capabilities via different configurations or formats. This rich set of services can be mixed and matched to create interesting and unique applications, limited only by your imagination or needs.
So, from one simple service (S3) to more than 25 in just over six years, and throughout the world — and growing and improving all the time! You’re probably impressed by how fast all of this has happened. You’re not alone. Within the industry, Amazon is regarded with a mixture of awe and envy because of how rapidly it delivers new AWS functionality. If you’re interested, you can keep up with changes to AWS via its What’s New web page on the AWS site, at
http://aws.amazon.com/about-aws/whats-new
This torrid pace of improvement is great news for you because it means that AWS continually presents new things you can do — things you probably couldn’t do in the past because the AWS functionality would be too difficult to implement or too expensive to afford even if you could implement it.
Amazon is the pioneer of cloud computing and, because you’d have to have been living under a rock not to have heard about “the cloud,” being the pioneer in this area is a big deal. The obvious question is this: If AWS is the big dog in the market and if cloud computing is the hottest thing since sliced bread, how big are we talking about?
That’s an interesting question because Amazon reveals little about the extent of its business. Rather than break out AWS revenues, the company lumps them into an Other category in its financial reports.
Nevertheless, we have some clues to its size, based on information from the company itself and on informed speculation by industry pundits.
Amazon itself provides a proxy for the growth of the AWS service. Every so often, it announces how many objects are stored in the S3 service. Take a peek at Figure 1-1, which shows how the number of objects stored in S3 has increased at an enormous pace, jumping from 2.9 billion at the end of 2006 to over 2 trillion objects by the end of the second quarter of 2012. Given that pace of growth, it’s obvious that the business of AWS is booming.
Other estimates of the size of the AWS service exist as well. A very clever consultant named Huan Liu examined AWS IP addresses and projected the total number of server racks held by AWS, based on an estimate of how many servers reside in a rack. Table 1-1 breaks down the numbers by region.
Figure 1-1: Counting S3 objects over the years.
Table 1-1                   Total AWS Servers

AWS Region                  Number of Server Racks    Number of Servers
US East                     5,030                     321,920
US West (Oregon)            41                        2,624
US West (California)        630                       40,320
EU West (Ireland)           814                       52,096
AP Northeast (Japan)        314                       20,096
AP Southeast (Singapore)    246                       15,744
SA East (Brazil)            25                        1,600
Total                       7,100                     454,400
That's a lot of servers. (To see the original document outlining Liu's estimates, along with his methodology, go to http://huanliu.wordpress.com/2012/03/13/amazon-data-center-size). If you consider that each server can support a number of virtual machines (the number would vary, of course, according to the size of the virtual machines), AWS could support several million running virtual machines.
Amazon publishes a list of public IP addresses; as of May 2013, there are over four million available in AWS. This number is not inconsistent with Liu's estimated number of physical servers; it's also a convenient place to look to track how much AWS is growing. If you're interested, you can look at the AWS numbers at https://forums.aws.amazon.com/ann.jspa?annID=1701.
If you’re not familiar with the term virtual machines, don’t worry: I describe AWS technology in depth in Chapter 4. For an even more detailed discussion of virtual machines and virtualization proper, check out Virtualization For Dummies, by yours truly (published by John Wiley & Sons, Inc.).
Though Amazon doesn’t announce how many dollars AWS pulls in, that hasn’t stopped others from making their own estimates of the size of AWS business — and their estimates make it clear that AWS is a very large business indeed.
Early in 2012, several analysts from Morgan Stanley analyzed the AWS business and judged that the service pulled in $1.19 billion in 2011. (You gotta love the precision that these pundits come up with, eh?) Other analysts from JP Morgan Chase and UBS have calculated that AWS will achieve 2015 revenues of around $2.5 billion.
The bottom line: AWS is big and getting bigger (and better) every day. It really is no exaggeration to say that AWS represents a revolution in computing. People are doing amazing things with it, and this book shows you how you can take advantage of it.
If what Amazon is doing with AWS represents a revolution, as I describe in the previous section, how is the company bringing it about? In other words, how is it delivering this amazing service? Throughout this book, I go into the specifics of how the service operates, but for now I outline the general approach that Amazon has taken in building AWS.
First and foremost, Amazon has approached the job in a unique fashion, befitting a company that changed the face of retail. Amazon specializes in a low-margin approach to business, and it carries that perspective into AWS. Unlike almost every other player in the cloud computing market, Amazon has focused on creating a low-margin, highly efficient offering, and that offering starts with the way Amazon has built out its infrastructure.
Unlike most of its competitors, Amazon builds its hardware infrastructure from commodity components. Commodity, in this case, refers to using equipment from lesser-known manufacturers who charge less than their brand-name competitors. For components for which commodity offerings aren’t available, Amazon (known as a ferocious negotiator) gets rock-bottom prices.
On the hardware side of the AWS offering, Amazon’s approach is clear: Buy equipment as cheaply as possible. But wait, you may say, won’t the commodity approach result in a less reliable infrastructure? After all, the brand-name hardware providers assert that one benefit of paying premium prices is that you get higher-quality gear. Well . . . yes and no. It may be true that premium-priced equipment (traditionally called enterprise equipment because of the assumption that large enterprises require more reliability and are willing to pay extra to obtain it) is more reliable in an apples-to-apples comparison. That is, an enterprise-grade server lasts longer and suffers fewer outages than its commodity-class counterpart.
The issue, from Amazon’s perspective, is how much more reliable the enterprise gear is than the commodity version, and how much that improved reliability is worth. In other words, it needs to know the cost-benefit ratio of enterprise-versus-commodity.
Making this evaluation more challenging is a fundamental fact: At the scale on which Amazon operates (remember that it has nearly half a million servers running in its AWS service), equipment — no matter who provides it — is breaking all the time.
If you’re a cloud provider with an infrastructure the size of Amazon’s, you have to assume, for every type of hardware you use, an endless round of crashed disk drives, fried motherboards, packet-dropping network switches, and on and on.
Therefore, even if you buy the highest-quality, most expensive gear available, you’ll still end up (if you’re fortunate enough to grow into a very large cloud computing provider like, say, Amazon) with an unreliable infrastructure. Put another way, at a very large scale, even highly reliable individual components add up to an unreliable overall infrastructure: With so many components in play, some are always failing, as rare as the failure of any specific piece of equipment may be.
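You can check this reasoning with a bit of arithmetic. Assume — and this figure is purely hypothetical — that any given server has a 99.9 percent chance of surviving a year without failing. At the fleet size Liu estimates, hundreds of servers still fail every year, and a failure-free year is, for all practical purposes, impossible:

```python
# A quick sanity check on the "at scale, something is always broken" claim.
# The 99.9 percent per-server reliability figure is a made-up assumption;
# the fleet size is rounded from Liu's estimate in Table 1-1.
per_server_reliability = 0.999   # assumed chance a server survives the year
fleet_size = 450_000             # approximate number of AWS servers

expected_failures = fleet_size * (1 - per_server_reliability)
p_failure_free_year = per_server_reliability ** fleet_size

print(f"Expected server failures per year: {expected_failures:.0f}")
print(f"Chance of a failure-free year: {p_failure_free_year:.3g}")
```

Even with those generous assumptions, you’d expect roughly 450 dead servers a year — which is why Amazon designs for failure rather than trying to buy its way out of it.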
The scale at which Amazon operates affects other aspects of its hardware infrastructure as well. Besides components such as servers, networks, and storage, data centers also have power supplies, cooling, generators, and backup batteries. Depending on the specific component, Amazon may have to use custom-designed equipment to operate at the scale required.
Think of AWS hardware infrastructure this way: If you had to design and operate data centers to deal with massive scale and in a way that aligns with a corporate mandate to operate inexpensively, you’d probably end up with a solution much like Amazon’s. You’d use commodity computing equipment whenever possible, jawbone prices down when you couldn’t obtain commodity offerings, and custom-design equipment to manage your unusually large-scale operation.
For more detail on Amazon's data center approach, check out James Hamilton's blog at http://perspectives.mvdirona.com. (He's one of Amazon's premier data center architects.) The blog includes links to videos of his extremely interesting and educational presentations on how Amazon has approached its hardware environment.
Because of Amazon’s low-margin, highly scaled requirements, you’d probably expect it to have a unique approach to the cloud computing software infrastructure running on top of its hardware environment, right?
You’d be correct.
Amazon has created a unique, highly specialized software environment in order to provide its cloud computing services. I stress the word unique because, at first glance, people often find AWS different and confusing — it is unlike any other computing environment they’ve previously encountered.
After users understand how AWS operates, however, they generally find that its design makes sense and that it’s appropriate for what it delivers — and, more important, for how people use the service.
Though Amazon has an unusual approach to its hardware environment, it’s in the software infrastructure that its uniqueness truly stands out. Let me give you a quick overview of its features. The software infrastructure is
Based on virtualization:
Virtualization — a technology that abstracts software components from dependence on their underlying hardware — lies at the heart of AWS. Being able to create virtual machines, start them, terminate them, and restart them quickly makes the AWS service possible.
As you might expect, Amazon has approached virtualization in a unique fashion. Naturally, it wanted a low-cost way to use virtualization, so it chose the open source Xen Hypervisor as its software foundation. Then it made significant changes to the “vanilla” Xen product so that it could fulfill the requirements of AWS.
The result is that Amazon leverages virtualization, but the virtualization solution it came up with is extended in ways that support vast scale and a plethora of services built atop it.
Operated as a service:
I know what you’re going to say: “Of course it’s operated as a service — that’s why it’s called Amazon Web Services!”
That’s true, but Amazon had to create a tremendous software infrastructure in order to be able to offer its computing capability as a service.
For example, Amazon had to create a way for users to operate their AWS resources from a distance and with no requirement for local hands-on interaction. And it had to segregate a user’s resources from everyone else’s resources in a way that ensures security, because no one wants other users to be able to see, access, or change his resources.
Amazon had to provide a set of interfaces — an Application Programming Interface (API) — to allow users to manage every aspect of AWS. (I cover the AWS API in Chapter 5.)
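The core idea behind securing such an API can be sketched in a few lines: the caller proves its identity by computing a keyed hash (an HMAC) of the request, using a secret key that only it and the service provider know. The real AWS signing process involves several more steps, and the key and request string below are invented for illustration.

```python
import hashlib
import hmac

# Bare-bones sketch of signed API requests: hashing the request with a
# shared secret shows who sent it and that it wasn't tampered with.
# (Key and request are made up; real AWS signing has more steps.)
secret_key = b"my-secret-key-EXAMPLE"
request = "GET\ns3.amazonaws.com\n/example-bucket/photos/cat.jpg"

signature = hmac.new(secret_key, request.encode(), hashlib.sha256).hexdigest()
print("signature:", signature)
```

The service recomputes the same hash on its end; if the two signatures match, the request is accepted — no password ever travels over the wire.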
Designed for flexibility:
Amazon designed AWS to address users like itself — users that need rich computing services available at a moment’s notice to support their application needs and constantly changing business conditions.
In other words, just as Amazon can’t predict what its computing requirements will be in a year or two, neither can the market for which Amazon built AWS.
In that situation, it makes sense to implement few constraints on the service. Consequently, rather than offer a tightly integrated set of services that provides only a few ways to use them, Amazon provides a highly granular set of services that can be “mixed and matched” by the user to create an application that meets its exact needs.
By designing the service in a highly flexible fashion, Amazon enables its customers to be creative, thereby supporting innovation. Throughout the book, I’ll offer examples of some of the interesting things companies are doing with AWS.
Not only are the computing services themselves highly flexible, the conditions of use of AWS are flexible as well. You need nothing more to get started than an e-mail address and a credit card.
Highly resilient:
If you took the message from earlier in the chapter about the inherent unreliability of hardware to heart, you now recognize that there is no way to implement resiliency via hardware. The obvious alternative is with software, and that is the path Amazon has chosen.
Amazon makes AWS highly resilient by implementing resource redundancy — essentially using multiple copies of a resource to ensure that failure of a single resource does not cause the service to fail. For example, if you were to store just one copy of each of your objects within its S3 service, any given object may sometimes be unavailable because the disk drive on which it resides has broken down. Instead, AWS keeps multiple copies of an object, ensuring that even if one — or two! — copies become unavailable because of hardware failure, users can still access the object, thereby improving S3 reliability and durability.
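The arithmetic behind redundancy is straightforward. Assuming — hypothetically — that any single disk is unavailable 1 percent of the time and that copies fail independently, one copy of an object is unreachable 1 percent of the time, but three copies are all unreachable only about once in a million:

```python
# Why replication works: if copies fail independently, the chance that
# every copy is down at once shrinks geometrically with each copy added.
# The 1 percent figure is a made-up assumption for illustration.
p_disk_down = 0.01  # assumed chance that any one disk is unavailable

for copies in (1, 2, 3):
    p_object_unavailable = p_disk_down ** copies
    print(f"{copies} copies: unavailable with probability {p_object_unavailable:g}")
```

This is how unreliable hardware, multiplied, yields a reliable service — resiliency implemented in software rather than bought in hardware.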
In summary, Amazon has implemented a rich software infrastructure to allow users access to large quantities of computing resources at rock-bottom prices. And if you take another look at Figure 1-1 (the one outlining the number of objects stored in S3), you’d have to draw the conclusion that a large number of users are increasingly benefiting from AWS.
Thus far, I haven’t delved too deeply into the various pieces of the AWS puzzle, but it should be clear (if you’re reading this chapter from start to finish) that Amazon offers a number of services to its users. However, AWS hosts a far richer set of services than only the ones it provides. In fact, users can find nearly everything they need within the confines of AWS to create almost any application they may want to implement. These services are available via the AWS ecosystem — the offerings of Amazon partners and third parties that host their offerings on AWS.
So, in addition to the 25+ services AWS itself offers, users can find services that
Offer preconfigured virtual machines with software components already installed and configured, to enable quick use
Manipulate images
Transmit or stream video
Integrate applications with one another
Monitor application performance
Ensure application security
Operate billing and subscriptions
Manage healthcare claims
Offer real estate for sale
Analyze genomic data
Host websites
Provide customer support
And really, this list barely scratches the surface of what’s available within AWS. In a way, AWS is a modern-day bazaar, providing an incredibly rich set of computing capabilities from anyone who chooses to set up shop to anyone who chooses to purchase what’s being offered.
On closer inspection, you can see that the AWS ecosystem is made up of three distinct subsystems:
AWS computing services provided by Amazon:
As noted earlier, Amazon currently provides more than 25 AWS services and is launching more all the time. AWS provides a large range of cloud computing services — you’ll be introduced to many of them over the course of this book.
Computing services provided by third parties that operate on AWS:
These services tend to offer functionality that enables you to build applications of a type that AWS doesn’t strictly offer. For example, AWS offers some billing capability to enable users to build applications and charge people to use them, but the AWS service doesn’t support many billing use cases — user-specific discounts based on the size of the company, for example. Many companies (and even individuals) offer services complementary to AWS that then allow users to build richer applications more quickly. (If you carry out the AWS exercises I set out for you later in this book, you’ll use one such service offered by Bitnami.)
Complete applications offered by third parties that run on AWS:
You can use these services, often referred to as SaaS (Software as a Service), over a network without having to install them on your own hardware. (Check out the “IaaS, PaaS, SaaS” section, earlier in this chapter, for more on SaaS.) Many, many companies host their applications on AWS, drawn to it for the same reasons that end users are drawn to it: low cost, easy access, and high scalability. An interesting trend within AWS is the increasing move by traditional software vendors to migrate their applications to AWS and provide them as SaaS offerings rather than as applications that users install from a CD or DVD on their own machines.
As you go forward with using AWS, be careful to recognize the differences between these three offerings within the AWS ecosystem, especially Amazon’s role (or lack thereof) in all three. Though third-party services or SaaS applications can be incredibly valuable to your computing efforts, Amazon, quite reasonably, offers no support or guarantee about their functionality or performance. It’s up to you to decide whether a given non-AWS service is fit for your needs.
Amazon, always working to make it ever easier to locate and integrate third-party services into your application, has created the Amazon Marketplace as your go-to place for finding AWS-enabled applications. Moreover, being part of the Marketplace implies an endorsement by AWS, which will make you more confident about using a Marketplace application. You can read more about the Marketplace at
https://aws.amazon.com/marketplace
The reason the AWS ecosystem has become the computing marketplace for all and sundry can be captured in the phrase network effect, which can be thought of as the value derived from a network because other network participants are part of the network. The classic case of a network effect is the telephone: The more people who use telephones, the more value there is to someone getting a telephone — because the larger the number of telephones being used, the easier it is to communicate with a large number of people. Conversely, if you’re the only person in town with a telephone, well, you’re going to be pretty lonely — and not very talkative! Said another way, for a service with network effects, the more people who use it, the more attractive it is to potential users, and the more value they receive when they use the service.
From the AWS perspective, the network effect means that, if you’re providing a new cloud-based service, it makes sense to offer it where lots of other cloud users are located — someplace like AWS, for example. This network effect benefits AWS greatly, simply because many people, when they start to think about doing something with cloud computing, naturally gravitate to AWS because it’s a brand name that they recognize.
However, with respect to AWS, there’s an even greater network effect than the fact that lots of people are using it: The technical aspects of AWS play a part as well.
When one service talks to another over the Internet, a certain amount of time passes while the communication between the services travels over the network — even at the speed of light, information traveling long distances takes a certain amount of time. Also, while information is traveling across the Internet, it’s constantly being shunted through routers to ensure that it’s being sent in the right direction. This combination of network distance and device interaction is called latency, a measure of how much delay is imposed by network traffic distance.
In concrete terms, if you use a web browser to access data from a website hosted within 50 miles of you, it will likely respond faster than if the same website were hosted 7,000 miles away.
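The numbers behind that intuition are easy to sketch. Light in optical fiber travels at roughly two-thirds of its speed in a vacuum — about 200 kilometers per millisecond — so distance alone sets a floor on response time, before any router delays are added:

```python
# A floor on network delay: propagation time alone, ignoring routers.
# Light in fiber covers roughly 200 km per millisecond (about two-thirds
# of its vacuum speed) -- a commonly used rule-of-thumb figure.
KM_PER_MILE = 1.609
FIBER_KM_PER_MS = 200.0

def one_way_delay_ms(miles):
    """Minimum one-way travel time for a signal over the given distance."""
    return miles * KM_PER_MILE / FIBER_KM_PER_MS

for miles in (50, 7_000):
    print(f"{miles:>5} miles: at least {one_way_delay_ms(miles):.2f} ms one-way")
```

Fifty miles costs well under a millisecond each way; 7,000 miles costs more than 50 milliseconds each way — and a real web page involves many such round trips, so the gap compounds quickly.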
To continue this concept, using a service that’s located nearby makes your application run faster — always a good thing. So if your service runs on AWS, you’d like any services you depend on to also run on AWS — because the latency affecting your application is much lower than if those services originated somewhere else.
Folks who build services tend to be smart, so they’ll notice that their potential customers like the idea of having services nearby. If you’re setting up a new service, you’ll be attracted to AWS because lots of other services are already located there. And if you’re considering using a cloud service, you’re likely to choose AWS because the number of services there will make it easier to build your application, from the perspective of service availability and low-latency performance.
The network effects associated with AWS give you a rich set of services to leverage as you create applications to run on Amazon’s cloud offering. They can work to reduce your workload and speed your application development and delivery by relieving you of much of the burden traditionally associated with integrating external software components and services into your application.
Here are some benefits of being able to leverage the network effects of the AWS ecosystem in your application:
The service is already up and running within AWS.
You don’t have to obtain the software, install it, configure it, test it, and then integrate it into your application. Because it’s already operational in the AWS environment, you can skip directly to the last step — perform the technical integration.
The services have a cloud-friendly licensing model.
Vendors have already figured out how to offer their software and charge for it in the AWS environment. Vendors often align with the AWS billing methodology, charging per hour of use or offering a subscription for monthly access. But one thing you don’t have to do is approach a vendor that has a large, upfront license fee and negotiate to operate in the AWS environment — it’s already taken care of.
Support is available for the service.
You don’t have to figure out why a software component you want to use doesn’t work properly in the AWS environment — the vendor takes responsibility for it. In the parlance of the world of support, you have, as the technology industry rather indelicately puts it, a throat to choke.
Performance improves.
Because the service operates in the same environment that your application runs in, it provides low latency and helps your application perform better.