Cloud Computing: Business Trends and Technologies provides a broad introduction to Cloud computing technologies and their applications to the IT and telecommunications businesses (i.e., Network Function Virtualization, NFV). The book is also intended to serve as a textbook for a graduate course on Cloud computing.
The book examines the business cases and then concentrates on the technologies necessary for supporting them. In the process, the book addresses the principles of – as well as the known problems with – the underlying technologies, such as virtualization, data communications, network and operations management, security and identity management. It introduces, through open-source case studies (based on OpenStack), an extensive illustration of lifecycle management.
The book also looks at the existing and emerging standards, demonstrating their respective relation to each topic.
Overall, this is an authoritative textbook on this emerging and still-developing discipline, which:
• Guides the reader from basic concepts, through current practices, to state-of-the-art applications.
• Considers the technical standards bodies involved in Cloud computing standardization.
• Is written by innovation experts in operating systems and data communications, each with over 20 years' experience in business, research, and teaching.
Igor Faynberg, Hui-Lan Lu, Dor Skuler
This edition first published 2016 © 2016 Alcatel-Lucent. All rights reserved.
Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
The advice and strategies contained herein may not be suitable for every situation. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.
Library of Congress Cataloging-in-Publication Data
Faynberg, Igor. Cloud computing : business trends and technologies / Igor Faynberg, Hui-Lan Lu, Dor Skuler, Alcatel-Lucent. pages cm Includes bibliographical references and index. ISBN 978-1-118-50121-4 (cloth)
1. Cloud computing. I. Lu, Hui-Lan. II. Skuler, Dor. III. Title. QA76.585.F38 2016 004.67′82--dc23
2015022953
A catalogue record for this book is available from the British Library.
ISBN: 9781118501214
About the Authors
Acknowledgments
1 Introduction
Notes
References
2 The Business of Cloud Computing
2.1 IT Industry Transformation through Virtualization and Cloud
2.2 The Business Model Around Cloud
2.3 Taking Cloud to the Network Operators
Notes
References
3 CPU Virtualization
3.1 Motivation and History
3.2 A Computer Architecture Primer
3.3 Virtualization and Hypervisors
Notes
References
4 Data Networks—The Nervous System of the Cloud
4.1 The OSI Reference Model
4.2 The Internet Protocol Suite
4.3 Quality of Service in IP Networks
4.4 WAN Virtualization Technologies
4.5 Software-Defined Network
4.6 Security of IP
Notes
References
5 Networking Appliances
5.1 Domain Name System
5.2 Firewalls
5.3 NAT Boxes
5.4 Load Balancers
Notes
References
6 Cloud Storage and the Structure of a Modern Data Center
6.1 Data Center Basics
6.2 Storage-Related Matters
Notes
References
7 Operations, Management, and Orchestration in the Cloud
7.1 Orchestration in the Enterprise
7.2 Network and Operations Management
7.3 Orchestration and Management in the Cloud
7.4 Identity and Access Management
Notes
References
Appendix: Selected Topics
A.1 The IETF Operations and Management Standards
A.2 Orchestration with TOSCA
A.3 The REST Architectural Style
A.4 Identity and Access Management Mechanisms
Notes
References
Index
EULA
Chapter 1
Figure 1.1
Dialectics in the development of Cloud Computing: (a) from mainframe to Cloud; (b) from IT data center to Private Cloud.
Figure 1.2
Essential characteristics of Cloud Computing. Source: NIST SP 800-145, p. 2.
Chapter 2
Figure 2.1
Investment in an application deployment—before and after.
Chapter 3
Figure 3.1
A computing environment before and after virtualization.
Figure 3.2
Simplified computer architecture.
Figure 3.3
A simplified CPU loop (first approximation).
Figure 3.4
The process stack and the procedure call.
Figure 3.5
Setting a breakpoint.
Figure 3.6
The second approximation of the CPU loop.
Figure 3.7
Go( ) and the interrupt service routines.
Figure 3.8
The process table.
Figure 3.9
The CPU mode state machine.
Figure 3.10
The modified CPU and the two process stacks.
Figure 3.11
Introducing the MMU.
Figure 3.12
Segmentation: The MMU translation processing.
Figure 3.13
Paging—establishing contiguous memory.
Figure 3.14
Storing pages on the disk to achieve the “infinite” memory illusion.
Figure 3.15
Page table and virtual address in-memory translation.
Figure 3.16
CPU loop—the final version.
Figure 3.17
System call processing: (a) the Service_A routine—user part; (b) the TRAP 1 service routine; (c) the Service_A routine—system part.
Figure 3.18
Graham's security rings (hardware support).
Figure 3.19
Optimization with non-disjoint security rings.
Figure 3.20
Hypervisor structure—after Popek and Goldberg. Data from [14].
Figure 3.21
General virtualization architecture.
Figure 3.22
Type-1 and Type-2 hypervisors.
Figure 3.23
Intel privilege level rings.
Figure 3.24
Direct Memory Access (DMA).
Figure 3.25
I/O MMU.
Figure 3.26
Virtual machine I/O support in Xen.
Figure 3.27
Xen network I/O optimization using shared memory.
Figure 3.28
The state transition diagram for the KVM modes.
Figure 3.29
NOVA architecture (simplified).
Chapter 4
Figure 4.1
Dual aspects of networking in Cloud Computing.
Figure 4.2
Private and virtual private networks.
Figure 4.3
The OSI reference model.
Figure 4.4
Requests
and
indications
as methods of the layer class.
Figure 4.5
Summary of the overall computational model.
Figure 4.6
Session multiplexing in the OSI transport layer.
Figure 4.7
The case for error correction at the link layer.
Figure 4.8
Broadcast media configurations.
Figure 4.9
The IPv4 packet header.
Figure 4.10
Jon Postel's map of the Internet in 1982. Source: http://commons.wikimedia.org/wiki/File%3AInternet_map_in_February_82.png. By Jon Postel [Public domain], via Wikimedia Commons.
Figure 4.11
CIDR aggregation.
Figure 4.12
“Subnetting” a Class B network.
Figure 4.13
The IPv6 basic packet header (after RFC 2460).
Figure 4.14
Routing protocol classification: (a) LAN, no routing needed; (b) routing within and among autonomous systems.
Figure 4.15
Autonomous systems and border gateways.
Figure 4.16
Transit and (settlement-free) peering relationships.
Figure 4.17
The Internet hourglass.
Figure 4.18
Multi-homing with SCTP.
Figure 4.19
Packet scheduling disciplines: (a) best effort; (b) fair queuing; (c) weighted fair queuing.
Figure 4.20
Traffic specification models: (a) leaky bucket; (b) token bucket.
Figure 4.21
The integrated services model (after RFC 1631).
Figure 4.22
The end-to-end worst-case delay D (after RFC 2212).
Figure 4.23
An example of the RSVP exchange.
Figure 4.24
Summary of the RSVP messages.
Figure 4.25
Traffic conditioning at the edges of DS domains. Source: Reprinted from [3] with permission of Alcatel-Lucent, USA, Inc.
Figure 4.26
An example of an AF specification.
Figure 4.27
The inside view of DS. Source: RFC 2475.
Figure 4.28
Routing and switching.
Figure 4.29
The location and structure of the MPLS label.
Figure 4.30
An example of label assignment to flows and LSPs.
Figure 4.31
Examples of the explicit route setup with (a) CR-LDP and (b) RSVP-TE.
Figure 4.32
Layer-1 VPN framework (after RFC 4847).
Figure 4.33
The VLAN concept: (a) physical configuration; (b) logical configuration.
Figure 4.34
Pseudo-wire emulation edge-to-edge network reference model (after Figure 2 of RFC 3985).
Figure 4.35
Label stacking in provider-supported VPN: (a) LSP in a single network; (b) LSP traversing a provider network.
Figure 4.36
The ForCES architecture (after RFC 3746).
Figure 4.37
The OpenFlow switch.
Figure 4.38
Relationship among the IPSec specifications (after RFC 6071).
Figure 4.39
IPSec scenarios.
Figure 4.40
IPsec in transport mode in IPv4.
Figure 4.41
IPsec in tunnel mode in IPv4.
Chapter 5
Figure 5.1
DNS components.
Figure 5.2
The domain name space tree.
Figure 5.3
A recursive name server.
Figure 5.4
The DNS query/response format. Source: RFC 1035.
Figure 5.5
The RR structure. Source: RFC 1035.
Figure 5.6
Circular dependencies.
Figure 5.7
A sample name resolution.
Figure 5.8
Root name systems. Source: Internet Assigned Number Authority, www.iana.org
Figure 5.9
Domain name internationalization components. Source: RFC 3490.
Figure 5.10
Examples of internationalized country code top domain names. Source: Internet Assigned Number Authority, www.iana.org
Figure 5.11
Firewalls: (a) a firewall between two networks; (b) a firewall protecting a single host.
Figure 5.12
Interconnecting networks with different security postures.
Figure 5.13
An application gateway.
Figure 5.14
Ingress and egress filtering: (a) interfaces; (b) split CPE.
Figure 5.15
Layer-3 VPN with firewalls.
Figure 5.16
A smurf attack.
Figure 5.17
A reflective DNS attack.
Figure 5.18
TCP connection establishment. Source: RFC 675.
Figure 5.19
Stateful firewall (an example of a TCP connection establishment).
Figure 5.20
Network zoning: (a) with a single firewall; (b) with two firewalls.
Figure 5.21
NAT in a nutshell.
Figure 5.22
Private and public addressing networks A and B.
Figure 5.23
A NAT box—outgoing traffic.
Figure 5.24
A NAT box—incoming traffic.
Figure 5.25
An unsolicited “response.”
Figure 5.26
Application-Level Gateway (ALG).
Figure 5.27
A rendez-vous relay.
Figure 5.28
Traversal using relays around NAT (TURN).
Figure 5.29
Learning the reflective address from a STUN server.
Figure 5.30
Different NATs for different paths.
Figure 5.31
Candidate transport addresses (after Figure 2 of RFC 5245).
Figure 5.32
ICE operation.
Figure 5.33
Carrier-grade (large-scale) NAT.
Figure 5.34
A load balancing example: choosing a call center with the 800 service.
Figure 5.35
A server farm.
Figure 5.36
Saving session state at the client (a cookie).
Figure 5.37
An example of an Nginx-based load-balanced web service.
Figure 5.38
Configuring the load balancer.
Figure 5.39
Load balancing with DNS.
Chapter 6
Figure 6.1
Traditional data center.
Figure 6.2
Next-generation data center.
Figure 6.3
SNIA shared storage model.
Figure 6.4
An example of direct-attached storage.
Figure 6.5
A schematic direct-attachment interface.
Figure 6.6
An example SCSI configuration.
Figure 6.7
SCSI addressing for an 8-bit data bus.
Figure 6.8
SCSI client–server model.
Figure 6.9
Comparison of different SCSI versions.
Figure 6.10
Organization of SCSI standards.
Figure 6.11
SCSI interlayer relationship.
Figure 6.12
An example of SAS configuration.
Figure 6.13
Serial attached SCSI architecture.
Figure 6.14
A network file system.
Figure 6.15
A hierarchical directory.
Figure 6.16
Structure of a magnetic disk drive.
Figure 6.17
Organization of a platter's surface.
Figure 6.18
File system abstraction.
Figure 6.19
A functional view of NFS.
Figure 6.20
An example remote file system.
Figure 6.21
Examples of remote file operations through NFS.
Figure 6.22
FC structure.
Figure 6.23
FC and line coding.
Figure 6.24
An example of fabric topology.
Figure 6.25
An example of the arbitrated loop.
Figure 6.26
A weighted-path network.
Figure 6.27
Examples of converged storage protocol options.
Figure 6.28
FCoE frame structure.
Figure 6.29
A conceptual FCoE architecture.
Figure 6.30
High-level FIP operations.
Figure 6.31
iSCSI conceptual model.
Figure 6.32
iSCSI names.
Figure 6.33
Format of the iSCSI protocol data unit.
Figure 6.34
A work flow for the write operation.
Figure 6.35
Object storage access control model.
Figure 6.36
File-level storage virtualization.
Figure 6.37
In-band storage virtualization.
Figure 6.38
Out-of-band storage virtualization.
Figure 6.39
A comparison of storage technologies.
Figure 6.40
The memory hierarchy.
Figure 6.41
Hypothetical state of a NAND flash memory.
Figure 6.42
A circle in consistent hashing.
Chapter 7
Figure 7.1
Service orchestration (after NIST SP 500-292).
Figure 7.2
Distributed object-oriented computing model.
Figure 7.3
An example of the three-tier enterprise model.
Figure 7.4
Flow-based computing examples.
Figure 7.5
Workflow as a directed graph of activities.
Figure 7.6
Path analysis.
Figure 7.7
The basic network management model.
Figure 7.8
The service life cycle.
Figure 7.9
Operations on a stack (an example).
Figure 7.10
The AWS CloudFormation template.
Figure 7.11
Mapping the OpenStack components into a physical architecture: an example.
Figure 7.12
A high-availability cluster.
Figure 7.13
The Heat computing architecture.
Figure 7.14
The Ceilometer computing architecture.
Figure 7.15
Interworking Heat and Ceilometer: an auto-scaling example.
Figure 7.16
Integrated orchestration architecture.
Figure 7.17
Networking with OpenStack nodes.
Figure 7.18
Relative administrative privilege.
Figure 7.19
Scope of identity and access management.
Figure 7.20
Credentials for user authentication.
Figure 7.21
Public-key-based authentication (a simplified view).
Figure 7.22
Conceptual illustration of the chain of trust.
Figure 7.23
Access control matrix.
Figure 7.24
Conceptual OAuth 1.0 workflow.
Figure 7.25
OAuth 2.0 conceptual workflow.
Figure 7.26
A simplified workflow for VM provisioning.
Figure 7.27
Additional steps for VM provisioning.
Figure 7.28
An example token (unsigned).
Figure 7.29
A simplified workflow for auto-scaling.
Figure 7.30
Additional steps for auto-scaling.
Figure 7.31
An example trust.
Appendix
Figure A.1
The tree of SMI ASN.1 object identifiers.
Figure A.2
SNMP entity (after RFC 3411).
Figure A.3
Policy control architecture (after RFC 2753).
Figure A.4
Policy control in an RSVP router (after RFC 2753).
Figure A.5
NETCONF architecture.
Figure A.6
NETCONF layers.
Figure A.7
(a) Invocation of the deck-the-halls RPC method; (b) reply with the positive result.
Figure A.8
Orchestration layering framework (courtesy of Sivan Barzilay).
Figure A.9
The structure of a TOSCA template.
Figure A.10
A topology template example.
Figure A.11
An example of translation of (a) the TOSCA template into (b) the corresponding HOT template (courtesy of Sivan Barzilay).
Figure A.12
URI examples.
Figure A.13
Caching with proxies (an example).
Figure A.14
Eliminating the transient state: (a) a service with a transient state; (b) the same service with a permanent state.
Figure A.15
Kerberos at work (simplified).
Figure A.16
Kerberos at work (improved).
Figure A.17
Access control lists.
Figure A.18
Capability lists.
Figure A.19
Flow of information in a Bell–LaPadula system.
Figure A.20
Altered information flow in a Bell–LaPadula system.
Figure A.21
SAML message flow for identity federation.
Figure A.22
OAuth 2.0 user authorization message flow.
Figure A.23
OpenID connect message flow.
Figure A.24
Policy control workflow.
This book was written while the authors worked in the CloudBand Business Unit at Alcatel-Lucent. CloudBand, founded by Dor Skuler, is a market-leading platform for Network Functions Virtualization (NFV).
Igor Faynberg, Adjunct Professor in the Computer Science Department of Stevens Institute of Technology, is a Bell Labs Fellow. At the time of writing this book, he was a senior architect in charge of NFV security, reporting to the Chief Technology Officer of CloudBand.
Prior to that, he held various staff and managerial positions in Bell Labs and Alcatel-Lucent business units. In his Bell Labs career, he has influenced the development of several software technologies—from mathematical programming to Intelligent Network and Internet/PSTN convergence to virtualization. He has contributed to and held various leadership positions in the Internet Engineering Task Force (IETF), International Telecommunication Union (ITU), and European Telecommunications Standards Institute (ETSI), where he presently serves as Chairman of the ETSI NFV Security working group. He has served on technical committees of several IEEE conferences, and he holds numerous patents for inventions related to the technologies he developed.
Igor has also co-authored two books and numerous refereed papers. He holds a Mathematics Diploma from Kharkov University, Ukraine, and MS and PhD degrees in Computer and Information Science from the University of Pennsylvania.
Hui-Lan Lu is a Bell Labs Fellow at Alcatel-Lucent, where she has conducted research and development in various areas, including mathematical programming, service creation, IP multimedia communication, quality of service in converged networks, and security.
She has also been involved in strategic standards efforts in the IETF, ITU, and ETSI. More recently, she has served as Rapporteur for the ETSI NFV case study of OpenStack security and Vice Chairman of ITU-T SG 13 (the lead study group on Cloud Computing and future networks).
Hui-Lan has co-authored a book on converged networks and services, and numerous refereed papers. She holds a PhD degree in physics from Yale University in New Haven and has over 40 patents.
Dor Skuler formerly served as Senior Vice President and General Manager of the CloudBand Business Unit at Alcatel-Lucent, which he founded. Prior to this role, Dor served as Vice President of Strategy and Head of Corporate Development for Alcatel-Lucent in its corporate headquarters in Paris. Previously Dor had held entrepreneurial roles such as General Manager of Mobile Security, a new venture in Alcatel-Lucent's Bell Labs and Enterprise Business Divisions.
Before joining Alcatel-Lucent, Dor served as Vice-President of Business Development and Marketing at Safend, an endpoint security company. Dor also founded and served as President of Zing Interactive Media, a venture-backed startup company in the field of mobile interactive media.
Dor holds a Master of Science in Marketing and an MBA in International Business. He was named to Global Telecom Business's “40 under 40” list in 2009, 2011, and 2013, and is often invited to speak at industry events and interviewed by the global press.
A book of this scope and size could not have been written without help from many people. We acknowledge much stimulation that came from early discussions with Markus Hofmann who headed the Bell Labs research effort in the Cloud. We have had incisive discussions on various topics of networking with Mark Clougherty, Vijay Gurbani, and Dimitri Stiliadis.
We have been much influenced and supported by David Amzallag (then CTO of CloudBand), particularly on the topic of operations and management.
Our first steps in addressing Cloud security were made together with Doug Varney, Jack Kozik, and Herbert Ristock (now with Genesys). We owe much of our understanding of the subject to our CloudBand colleagues—Ranny Haibi, Chris Deloddere, Mark Hooper, and Avi Vachnis. Sivan Barzilay has reviewed Chapter 7, to which she has contributed a figure; we also owe to her our understanding of TOSCA.
Peter Busschbach has reviewed Chapter 3 and provided insightful comments.
A significant impetus for this book came from teaching, and the book is intended to be an assigned text in a graduate course on Cloud Computing. Such a course, taught at the Stevens Institute of Technology, has been developed with much encouragement and help from Professor Daniel Duchamp (Director of the Department of Computer Science), and many useful suggestions from Professor Dominic Duggan. Important insight, reflected in the course and in the book, came from graduate students who had served over the years as teaching assistants: Bo Ye (2012); Wa Gao (2013); Xiaofang Yu (2014); and Saurabh Bagde and Harshil Bhatt (2015).
It is owing to meeting (and subsequent discussions with) Professor Ruby Lee of Princeton University that we have learned of her research on NoHype—an alternative to traditional virtualization that addresses some essential security problems.
The past two years of working in the European Telecommunications Standards Institute (ETSI) Network Function Virtualization (NFV) Industry Specification Group have contributed significantly to our understanding of the demands of the telecommunications industry. In particular, deep discussions of the direction of NFV with Don Clarke (Cable Labs), Diego Garcia Lopez (Telefonica), Uwe Michel (Deutsche Telekom) and Prodip Sen (formerly of Verizon and then HP) were invaluable in forming our perspective. Specifically on the subject of NFV security we owe much to all participants in the NFV Security group and particularly to Bob Briscoe (BT) and Bob Moskowitz (Verizon).
We got much insight into US standards development on this topic in our conversation with George W. Arnold, then Director of Standards at the National Institute of Standards and Technology (NIST).
It has been a great delight to work under the cheerful guidance of Ms Liz Wingett, our Project Editor at John Wiley & Sons. Her vigilant attention to every detail kept us on our feet, but the manuscript improved with every suggestion she made. As the manuscript was being prepared for production, Ms Audrey Koh, Production Editor at John Wiley & Sons, has achieved a feat truly worthy of the Fifth Labor of Hercules, going through the proofs and cleaning up the Augean Stables of stylistic (and, at times, even factual) inconsistencies.
To all these individuals we express our deepest gratitude.
If the seventeenth and early eighteenth centuries are the age of clocks, and the later eighteenth and the nineteenth centuries constitute the age of steam engines, the present time is the age of communication and control.
Norbert Wiener (from the 1948 edition of Cybernetics: or Control and Communication in the Animal and the Machine).
It is unfortunate that we don't remember the exact date of the extraordinary event that we are about to describe, except that it took place sometime in the Fall of 1994. Then Professor Noah Prywes of the University of Pennsylvania gave a memorable invited talk at Bell Labs, at which two authors1 of this book were present. The main point of the talk was a proposal that AT&T (of which Bell Labs was a part at the time) should go into the business of providing computing services—in addition to telecommunications services—to other companies by actually running these companies' data centers. “All they need is just to plug in their terminals so that they receive IT services as a utility. They would pay anything to get rid of the headaches and costs of operating their own machines, upgrading software, and what not.”
Professor Prywes, whom we will meet more than once in this book, well known in Bell Labs as a software visionary and more than that—the founder and CEO of a successful software company, Computer Command and Control—was suggesting something that appeared too extravagant even to the researchers. The core business of AT&T at that time was telecommunications services. The major enterprise customers of AT&T were buying the customer premises equipment (such as private branch exchange switches and machines that ran software in support of call centers). In other words, the enterprise was buying things to run on premises rather than outsourcing things to the network provider!
Most attendees saw the merit of the idea, but could not immediately relate it to their day-to-day work, or—more importantly—to the company's stated business plan. Furthermore, at that very moment the Bell Labs computing environment was migrating from the Unix programming environment hosted on mainframes and Sun workstations to Microsoft Office-powered personal computers. It is not that we, who “grew up” with the Unix operating system, liked the change, but we were told that this was the way the industry was going (and it was!) as far as office information technology was concerned. But if so, then the enterprise would be going in exactly the opposite way—by placing computing in the hands of each employee. Professor Prywes did not deny the pace of acceptance of personal computing; his argument was that there was much more to enterprises than what was occurring inside their individual workstations—payroll databases, for example.
There was a lively discussion, which quickly turned to the detail. Professor Prywes cited the achievements in virtualization and massive parallel-processing technologies, which were sufficient to enable his vision. These arguments were compelling, but ultimately the core business of AT&T was networking, and networking was centered on telecommunications services.
Still, telecommunications services were provided by software, and even the telephone switches were but peripheral devices controlled by computers. It was in the 1990s that virtual telecommunications networking services such as Software Defined Networks—not to be confused with the namesake development in data networking, which we will cover in Chapter 4—were emerging on the purely software and data communications platform called Intelligent Network. It is on the basis of the latter that Professor Prywes thought the computing services could be offered. In summary, the idea was to combine data communications with centralized powerful computing centers, all under the central command and control of a major telecommunications company. All of us in the audience were intrigued.
The idea of computing as a public utility was not new. It had been outlined by Douglas F. Parkhill in his 1966 book [1].
In the end, however, none of us could sell the idea to senior management. The times the telecommunications industry was going through in 1994 could best be characterized as “interesting,” and AT&T did not fare particularly well for a number of reasons.2 Even though Bell Labs was at the forefront of the development of all relevant technologies, recommending those to businesses was a different matter—especially where a proposal for a radical change of business model was made, and especially in turbulent times.
In about a year, AT&T announced its trivestiture. The two authors had moved, along with a large part of Bell Labs, into the equipment manufacturing company which became Lucent Technologies and, 10 years later, merged with Alcatel to form Alcatel-Lucent.
At about the same time, Amazon launched a service called Elastic Compute Cloud (EC2), which delivered pretty much what Professor Prywes had described to us. Here an enterprise user—located anywhere in the world—could create, for a charge, virtual machines in the “Cloud” (or, to be more precise, in one of the Amazon data centers) and deploy any software on these machines. But not only that, the machines were elastic: as the user's demand for computing power grew, so did the machine power—magically increasing to meet the demand—along with the appropriate cost; when the demand dropped so did the computing power delivered, and also the cost. Hence, the enterprise did not need to invest in purchasing and maintaining computers; it paid only for the computing power it received and could get as much of it as necessary!
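To make the pay-as-you-go idea concrete for today's reader, the short Python sketch below (our own illustration, not something discussed in this book) uses Amazon's boto3 SDK to obtain a virtual machine on demand and release it when it is no longer needed; the machine image identifier, instance type, and region are hypothetical placeholders.

import boto3

# Minimal sketch of elastic, pay-per-use capacity (illustrative values only).
ec2 = boto3.client("ec2", region_name="us-east-1")

# "Create a virtual machine in the Cloud": one small instance from a machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",          # the size can be re-chosen as demand changes
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; charges accrue only while it runs.")

# When demand drops, release the capacity, and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])

The enterprise's investment is thus reduced to the two calls above: there is no hardware to buy, and the cost tracks the capacity actually consumed.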
As a philosophical aside: one way to look at the computing development is through the prism of dialectics. As depicted in Figure 1.1(a), with mainframe-based computing as the thesis, the industry had moved to personal-workstation-based computing—the antithesis. But the spiral development—fostered by advances in data networking, distributed processing, and software automation—brought forth the Cloud as the synthesis, where the convenience of seemingly central on-demand computing is combined with the autonomy of a user's computing environment. Another spiral (described in detail in Chapter 2) is depicted in Figure 1.1(b), which demonstrates how the Public Cloud has become the antithesis to the thesis of traditional IT data centers, inviting the outsourcing of the development (via “Shadow IT” and Virtual Private Cloud). The synthesis is Private Cloud, in which the Cloud has moved computing back to the enterprise but in a very novel form.
Figure 1.1 Dialectics in the development of Cloud Computing: (a) from mainframe to Cloud; (b) from IT data center to Private Cloud.
At this point we are ready to introduce formal definitions, which have been agreed on universally and thus form a standard in themselves. The definitions have been developed at the National Institute of Standards and Technology (NIST) and published in [2]. To begin with, Cloud Computing is defined as a model “for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” This Cloud model is composed of five essential characteristics, three service models, and four deployment models.
The five essential characteristics are presented in Figure 1.2.
Figure 1.2 Essential characteristics of Cloud Computing. Source: NIST SP 800-145, p. 2.
The three service models, now well known, are Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). NIST defines them thus:
Software-as-a-Service (SaaS). The capability provided to the consumer is to use the provider's applications running on a Cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based e-mail), or a program interface. The consumer does not manage or control the underlying Cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform-as-a-Service (PaaS). The capability provided to the consumer is to deploy onto the Cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying Cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
Infrastructure-as-a-Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying Cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
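For illustration only, and looking ahead to the OpenStack case study of Chapter 7, the following Python sketch shows what provisioning such a fundamental computing resource looks like from the IaaS consumer's side, using the openstacksdk library; the cloud name, image, flavor, and network names are hypothetical placeholders of the kind found in a typical test installation.

import openstack

# Minimal IaaS provisioning sketch (assumed names; credentials come from clouds.yaml).
conn = openstack.connect(cloud="example-cloud")

image = conn.compute.find_image("cirros")      # guest operating system image
flavor = conn.compute.find_flavor("m1.small")  # CPU/RAM/disk profile
network = conn.network.find_network("private")

# The consumer controls the guest OS and what runs on it, but not the
# underlying hosts, hypervisors, or physical network.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Server {server.name} is {server.status}")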
Over time, other service models have appeared—more often than not in the marketing literature—but the authors of the well-known “Berkeley view of Cloud Computing” [3] chose to “eschew terminology such as ‘X as a service (XaaS),’ ” citing the difficulty of agreeing “even among ourselves what the precise differences among them might be,” that is, among the services for some values of X…
Finally, the four Cloud deployment models are defined by NIST as follows:
Private Cloud. The Cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
Community Cloud. The Cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
Public Cloud. The Cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the Cloud provider.
Hybrid Cloud. The Cloud infrastructure is a composition of two or more distinct Cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., Cloud bursting for load balancing between Clouds).
Cloud Computing is not a single technology. It is better described as a business development, whose realization has been enabled by several disciplines: computer architecture, operating systems, data communications, and network and operations management. As we will see, the latter discipline has been around for as long as networking, but the introduction of Cloud Computing has naturally fueled its growth in a new direction, once again validating the quote from Norbert Wiener's book that we chose as the epigraph to this book.
As Chapter 2 demonstrates, Cloud Computing has had a revolutionary effect on the information technology industry, reverberating through the telecommunications industry, which followed suit. Telecommunications providers demanded that vendors provide software only, rather than “the boxes.” There have been several relevant standardization efforts in the industry, and—perhaps more important—there have been open-source software packages for building Cloud environments.
Naturally, standardization was preceded by a significant effort in research and development. In 2011, an author3 of this book established the CloudBand product unit within Alcatel-Lucent, where, with the help of Bell Labs research, the telecommunications Cloud platform has been developed. It was in the context of CloudBand that we three authors met and the idea of this book was born.
We planned the book first of all as a textbook on Cloud Computing. Our experience in developing and teaching a graduate course on the subject at the Stevens Institute of Technology taught us that even the brightest and best-prepared students lacked sufficient knowledge of Central Processing Unit (CPU) virtualization (a subject that is rarely taught in the context of computer architecture or operating systems), as well as of a number of specific points in data communications. Network and operations management has rarely been part of the modern computer science curriculum.
In fact, the same knowledge gap seems to be ubiquitous in the industry, where engineers are forced to specialize, and we hope that this book will help fill the gap by providing an overarching multi-disciplinary foundation.
The rest of the book is structured as follows:
Chapter 2 is mainly about “what” rather than “how.” It provides definitions, describes business considerations—with a special case study of Network Function Virtualization—and otherwise provides a bird's eye view of Cloud Computing. The “how” is the subject of the chapters that follow.
Chapter 3 explains the tenets of CPU virtualization.
Chapter 4 is dedicated to networking—the nervous system of the Cloud.
Chapter 5 describes network appliances, the building blocks of Cloud data centers as well as private networks.
Chapter 6 describes the overall structure of the modern data center, along with its components.
Chapter 7 reviews operations and management in the Cloud and elucidates the concepts of orchestration and identity and access management, with the case study of OpenStack—a popular open-source Cloud project.
The Appendix delves into the detail of selected topics discussed earlier.
The references (which also form a bibliography on the respective subjects) are placed separately in individual chapters.
Having presented an outline of the book, we should note that there are three essential subjects that do not have a dedicated chapter. Instead, they are addressed in each chapter inasmuch as they concern that chapter's subject matter.
One such subject is security. Needless to say, this is the single most important matter that could make or break Cloud Computing. There are many aspects to security, and so we felt that we should address the aspects relevant to each chapter within the chapter itself.
Another subject that has no “central” coverage is standardization. Again, we introduce the relevant standards and open-source projects while discussing specific technical subjects. The third subject is history. It is well known in engineering that many existing technical solutions are not around because they are optimal, but because of their historical development. In teaching a discipline it is important to point these out, and we have tried our best to do so, again in the context of each technology that we address.
1. Igor Faynberg and Hui-Lan Lu, then members of the technical staff at Bell Labs Area 41 (Architecture Area).
2. For one thing, the regional Bell operating companies and other local exchange carriers started to compete with AT&T Communications in the services market, and so they loathed buying equipment from AT&T Network Systems—a manufacturing arm of AT&T.
3. Dor Skuler, at the time Alcatel-Lucent Vice President and General Manager of the CloudBand product unit.
Parkhill, D.F. (1966) Challenge of the Computer Utility. Addison-Wesley, Reading, MA.
Mell, P. and Grance, T. (2011) Special Publication 800-145: The NIST Definition of Cloud Computing. Recommendations of the National Institute of Standards and Technology. US Department of Commerce, Gaithersburg, MD, September, 2011.
Armbrust, M., Fox, A., Griffith, R., et al. (2009) Above the Clouds: A Berkeley view of Cloud Computing. Electrical Engineering and Computer Sciences Technical Report No. UCB/EECS-2009-28, University of California at Berkeley, Berkeley, CA, February, 2009.
In this chapter, we evaluate the business impact of Cloud Computing.
