THE ONE-STOP RESOURCE FOR ANY INDIVIDUAL OR ORGANIZATION CONSIDERING FOG COMPUTING

Fog and Fogonomics is a comprehensive and technology-centric resource that highlights the system model, architectures, building blocks, and IEEE standards for fog computing platforms and solutions. The "fog" is defined as the multiple interconnected layers of computing along the continuum from cloud to endpoints such as user devices and things, including racks or microcells in server closets, residential gateways, factory control systems, and more. The authors, noted experts on the topic, review business models and metrics that allow for the economic assessment of fog-based information and communication technology (ICT) resources, especially mobile resources. The book contains a wide range of templates and formulas for calculating quality-of-service values. Comprehensive in scope, it covers topics including fog computing technologies and reference architecture, fog-related standards and markets, fog-enabled applications and services, fog economics (fogonomics), and strategy.

This important resource:

* Offers a comprehensive text on fog computing
* Discusses pricing, service level agreements, service delivery, and consumption of fog computing
* Examines how fog has the potential to change the information and communication technology industry in the next decade
* Describes how fog enables new business models, strategies, and competitive differentiation, as with ecosystems of connected and smart digital products and services
* Includes case studies featuring integration of fog computing, communication, and networking systems

Written for product and systems engineers and designers, as well as for faculty and students, Fog and Fogonomics is an essential book that explores the technological and economic issues associated with fog computing.
Cover
List of Contributors
Preface
1 Fog Computing and Fogonomics
2 Collaborative Mechanism for Hybrid Fog‐Cloud Scenarios
2.1 The Collaborative Scenario
2.2 Benefits and Applicability
2.3 The Challenges
2.4 Ongoing Efforts
2.5 Handling Data in Coordinated Scenarios
2.6 The Coming Future
Acknowledgments
References
3 Computation Offloading Game for Fog‐Cloud Scenario
3.1 Internet of Things
3.2 Fog Computing
3.3 A Computation Task Offloading Game for Hybrid Fog‐Cloud Computing
3.4 Conclusion
References
4 Pricing Tradeoffs for Data Analytics in Fog–Cloud Scenarios
4.1 Introduction: Economics and Fog Computing
4.2 Fog Pricing Today
4.3 Typical Fog Architectures
4.4 A Case Study: Distributed Data Processing
4.5 Future Research Directions
4.6 Conclusion
Acknowledgments
References
5 Quantitative and Qualitative Economic Benefits of Fog
5.1 Characteristics of Fog Computing Solutions
5.2 Strategic Value
5.3 Bandwidth, Latency, and Response Time
5.4 Capacity, Utilization, Cost, and Resource Allocation
5.5 Information Value and Service Quality
5.6 Sovereignty, Privacy, Security, Interoperability, and Management
5.7 Trade‐Offs
5.8 Conclusion
References
6 Incentive Schemes for User‐Provided Fog Infrastructure
6.1 Introduction
6.2 Technology and Economic Issues in UPIs
6.3 Incentive Mechanisms for Autonomous Mobile UPIs
6.4 Incentive Mechanisms for Provider‐assisted Mobile UPIs
6.5 Incentive Mechanisms for Large‐Scale Systems
6.6 Open Challenges in Mobile UPI Incentive Mechanisms
6.7 Conclusions
References
Notes
7 Fog‐Based Service Enablement Architecture
7.1 Introduction
7.2 Ongoing Effort on FogSEA
7.3 Early Results
References
Note
8 Software‐Defined Fog Orchestration for IoT Services
8.1 Introduction
8.2 Scenario and Application
8.3 Architecture: A Software‐Defined Perspective
8.4 Orchestration
8.5 Fog Simulation
8.6 Early Experience
8.7 Discussion
8.8 Conclusion
Acknowledgment
References
9 A Decentralized Adaptation System for QoS Optimization
9.1 Introduction
9.2 State of the Art
9.3 Fog Service Delivery Model and AdaptFog
9.4 Conclusion and Open Issues
References
Notes
10 Efficient Task Scheduling for Performance Optimization
10.1 Introduction
10.2 Individual Delay‐minimization Task Scheduling
10.3 Energy‐efficient Task Scheduling
10.4 Delay Energy Balanced Task Scheduling
10.5 Open Challenges in Task Scheduling
10.6 Conclusion
References
Notes
11 Noncooperative and Cooperative Computation Offloading
11.1 Introduction
11.2 Related Works
11.3 Noncooperative Computation Offloading
11.4 Cooperative Computation Offloading
11.5 Discussions
11.6 Conclusion
References
Notes
12 A Highly Available Storage System for Elastic Fog
12.1 Introduction
12.2 Design
12.3 Fault Tolerant Data Access and Share Placement
12.4 Implementation
12.5 Evaluation
12.6 Discussion and Open Questions
12.7 Related Work
12.8 Conclusion
Acknowledgments
References
13 Development of Wearable Services with Edge Devices
13.1 Introduction
13.2 Related Works
13.3 Problem Description
13.4 System Architecture
13.5 Methodology
13.6 Performance Evaluation
13.7 Discussion
13.8 Conclusion
References
14 Security and Privacy Issues and Solutions for Fog
14.1 Introduction
14.2 Security and Privacy Challenges Posed by Fog Computing
14.3 Existing Research on Security and Privacy Issues in Fog Computing
14.4 Open Questions and Research Challenges
14.5 Summary
References
Index
End User License Agreement
List of Tables
Chapter 2
Table 2.1 Resource continuity possibilities in a layered architecture (from [4])...
Table 2.2 Control architectures characteristics.
Chapter 4
Table 4.1 Fitted values.
Chapter 7
Table 7.1 Simulation configuration.
Chapter 8
Table 8.1 Comparison between web‐based application and fog‐enabled IoT applicatio...
Chapter 10
Table 10.1 Average run time.
Chapter 14
Table 14.1 Basic difference between edge computing technologies.
Table 14.2 Existing research in security and privacy for fog computing.
List of Figures
Chapter 2
Figure 2.1 Overall resources topology (from [8]).
Figure 2.2 Stack of resources as envisioned in the F2C model.
Figure 2.3 F2C hierarchical control architecture (from [10]).
Figure 2.4 Fog node proposal (from [7]). (a) Physical devices forming a fog no...
Figure 2.5 Control architectures. (a) Centralized. (b) Decentralized. (c) Dist...
Figure 2.6 The abstraction model filling in the resource continuum (from [13])...
Figure 2.7 Smart city example.
Figure 2.8 Hierarchical architecture with Agents and Leaders.
Figure 2.9 Aggregation strategy in mF2C.
Figure 2.10 Mobile edge system reference architecture.
Figure 2.11 Basic DLC model for an IoT scenario.
Chapter 3
Figure 3.1 A smart city with fog‐cloud computing. IoT users can use the comput...
Figure 3.2 An instance of IoT systems with IoT users, fog nodes, and the remot...
Figure 3.3 Best response strategy algorithm.
Figure 3.4 The average perceived QoE of IoT users at the NE. The IoT users per...
Figure 3.5 The average delay each task experiences for different roundtrip del...
Figure 3.6 The average delay each task experiences versus different number of ...
Figure 3.7 The number of beneficial users versus the number of fog nodes when ...
Figure 3.8 The number of beneficial users in the proposed computation offloadi...
Chapter 4
Figure 4.1 Illustration of devices on the cloud‐to‐things continuum. Each devi...
Figure 4.2 A fog computing testbed with local sensors and local computing gate...
Figure 4.3 Decomposing data analytics between the computing gateways and the c...
Figure 4.4 Wi‐Fi transmission latency versus the total number of entries sent ...
Figure 4.5 Normalized disutility function. (a) ...; ...
Figure 4.6 Sensitivity of the equilibrium with respect to ... and ...
Chapter 6
Figure 6.1 The architecture and possible operations of a UPI system. The opera...
Figure 6.2 Taxonomy of UPI models according to two criteria are as follows: (i...
Figure 6.3 (a) Impact of capacity diversity on the service performance: ... Mbps,...
Figure 6.4 (a) Independent‐standalone user performance (downloading scenario)....
Figure 6.5 (a) Operators' optimal pricing–reimbursing strategy, (b) (average) ...
Figure 6.6 (a) A set of nodes exchange resources on a dynamic fashion. The sys...
Chapter 7
Figure 7.1 FogSEA service providers and the semantic dependency network.
Figure 7.2 Supporting architecture for fog services providers.
Figure 7.3 Example microservices.
Figure 7.4 Example of a service specification.
Figure 7.5 Example for ontology slices (extracted from the example in Figure 7...
Figure 7.6 Service composition: (a) baseline and (b) backward (msg: message, r...
Figure 7.7 Traffic for service composition in varying (a) service densities an...
Figure 7.8 Response time for overlay construction in varying numbers of compos...
Figure 7.9 Traffic for overlay construction in varying numbers of microservice...
Chapter 8
Figure 8.1 An orchestration scenario for an e‐Health service. Different IoT ap...
Figure 8.2 E‐Health system workflow and containerized microservices in the wor...
Figure 8.3 Mapping between microservice candidate, containerized microservice ...
Figure 8.4 Fog orchestration architecture.
Figure 8.5 Orchestration within the life‐cycle management. Main functional ele...
Figure 8.6 The workflow of system simulation.
Figure 8.7 The architecture of simulation as a service.
Figure 8.8 A parallel GA solver to accelerate the handling of optimization iss...
Figure 8.9 Initial results demonstrate that the proposed approach can outperfo...
Figure 8.10 Initial results of GA‐Par in terms of both time (a) and quality (b...
Figure 8.11 Fog orchestrator with Docker Swarm.
Chapter 9
Figure 9.1 Fog service delivery model.
Figure 9.2 Fog computing architecture.
Figure 9.3 Reputation assessment life cycle.
Figure 9.4 System architecture.
Figure 9.5 An example of the SRON.
Figure 9.6 Evaporation function.
Figure 9.7 Long short‐term memory network.
Figure 9.8 Demonstration of QoS prediction in IoT. (a) invocation graph, (b) U...
Figure 9.9 Renegotiation process.
Chapter 10
Figure 10.1 A fog network with four task nodes, four helper nodes, and three b...
Figure 10.2 System‐wide average delay with different number of TNs.
Figure 10.3 Number of beneficial TNs with different number of TNs.
Figure 10.4 A fog network with a task node, ... helper nodes with sharable compu...
Figure 10.5 EE versus the number of helper nodes among the MEETS, equal‐time o...
Figure 10.6 A fog network with five FNs and one local fog controller. Boxes wi...
Figure 10.7 Energy consumption performance.
Figure 10.8 Service delay performance.
Figure 10.9 Delay jitter performance.
Chapter 11
Figure 11.1 An illustration of fog computing.
Figure 11.2 An illustration of D2D and fog offloaded task executions.
Figure 11.3 An illustration of bipartite matching–based task offloading, with ...
Figure 11.4 An illustration of the constructed three‐layer graph matching–base...
Chapter 12
Figure 12.1 The cloud‐to‐things continuum, from a data storage perspective.
Figure 12.2 Robust yet efficient share requests.
Figure 12.3 Snapshot of throughput while transmitting data pieces evenly into ...
Figure 12.4 Clustering example of 11 locations of AWS S3 using measurements of...
Figure 12.5 Implementation overview.
Figure 12.6 Fog agent process that encodes/transfers files from local HDD to f...
Figure 12.7 Clustering 11 nodes of AWS S3 from three different client location...
Figure 12.8 Storage latency measured for writing 1kB of data.
Figure 12.9 Traffic volume from cloud and edge storage locations.
Figure 12.10 Upload completion time.
Figure 12.11 Download time while changing file size.
Figure 12.12 Download completion time while changing t and n. (a) Campus. (b) ...
Figure 12.13 Download completion times for different (a) cloud selection algor...
Chapter 13
Figure 13.1 Building connection between wearable devices and the local‐hub.
Figure 13.2 The network architecture of fog computing.
Figure 13.3 The network architecture of virtual local‐hub.
Figure 13.4 The proposed system architecture.
Figure 13.5 The process of retrieving the location information.
Figure 13.6 The process of speech recognition.
Figure 13.7 The process of retrieving Google calendar information.
Figure 13.8 The execution environment of the fog node.
Figure 13.9 The mechanism of remote wearable services provision.
Figure 13.10 The prototype implementation of VLH.
Figure 13.11 The performance of speech recognition.
Figure 13.12 The CPU usage of speech recognition (end‐device).
Figure 13.13 The CPU usage of speech recognition (fog node).
Figure 13.14 The execution time of different applications (all applications).
Figure 13.15 The execution time of different applications (localization and re...
Figure 13.16 The CPU usage of different applications (end‐device).
Figure 13.17 The CPU usage of different applications (fog node).
Figure 13.18 The execution time of different applications in VLHR (all applica...
Figure 13.19 The execution time of different applications in VLHR (localizatio...
Figure 13.20 The CPU usage of different applications in VLHR (end‐device).
Figure 13.21 The CPU usage of different applications in VLHR (fog node).
Figure 13.22 Power consumption of speech recognition.
Chapter 14
Figure 14.1 (a) Google operates data centers for computation and backend stora...
Figure 14.2 Three‐tier fog computing architecture.
Figure 14.3 Illustration of hierarchical fog deployment models [24]: (a) with...
Figure 14.4 Malicious attacker steals the end user's private key and illegitim...
Figure 14.5 The end users roam randomly over the network. Besides, fog nodes a...
Figure 14.6 Fog computing consists of a massive number of fog nodes as infrast...
Edited by
Yang Yang
Shanghai Institute of Fog Computing Technology (SHIFT), ShanghaiTech University, Shanghai, China
Jianwei Huang
The Chinese University of Hong Kong, Shenzhen, China
Tao Zhang
National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA
Joe Weinman
XFORMA LLC, Flanders, NJ, USA
This edition first published 2020
© 2020 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Yang Yang, Jianwei Huang, Tao Zhang, and Joe Weinman to be identified as the authors of this work has been asserted in accordance with law.
Registered Office: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office: 111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication data applied for
ISBN: 9781119501091
Cover Design: Wiley
Cover Images: Cloud computing © Just_Super/Getty Images; Network and city background © Busakorn Pongparnit/Getty Images
To our families.
– Yang, Jianwei, Tao, and Joe
Mohammad Aazam
Carnegie Mellon University (CMU)
USA
Nanxi Chen
Chinese Academy of Sciences Bio‐vision Systems Laboratory
SIMIT, 865 Changning Road, 200050
Shanghai
China
Shu Chen
IBM Ireland
Watson Client Solution
Dublin
Ireland
Xu Chen
School of Data and Computer Science
Sun Yat‐sen University
Guangzhou
China
Mung Chiang
Department of Electrical and Computer Engineering
Purdue University
West Lafayette, IN
USA
Jaeyoon Chung
Myota Inc.
Malvern, PA
USA
Carnegie Mellon University
University of Colorado Boulder
Boulder, CO
USA
Siobhán Clarke
The University of Dublin
Distributed Systems Group, SCSS, Trinity College Dublin
College Green, Dublin 2
Dublin
Ireland
Abdelouahid Derhab
Center of Excellence in Information Assurance (CoEIA)
King Saud University
Saudi Arabia
Mohamed Amine Ferrag
LabSTIC Laboratory
Department of Computer Science
Guelma University
Guelma
Algeria
Lin Gao
Department of Electronic and Information Engineering
Harbin Institute of Technology
Shenzhen
China
Jordi Garcia
Advanced Network Architectures Lab (CRAAX)
Universitat Politècnica de Catalunya (UPC)
Vilanova i la Geltrú, Barcelona
Spain
Peter Garraghan
School of Computing and Communications
Lancaster University
Lancaster
UK
Maria Gorlatova
Department of Electrical Engineering
Princeton University
Princeton, NJ
USA
Sangtae Ha
Department of Computer Science
University of Colorado Boulder
Boulder, CO
USA
Jianwei Huang
School of Science and Engineering
The Chinese University of Hong Kong
Shenzhen
China
Carlee Joe‐Wong
Department of Electrical and Computer Engineering
Carnegie Mellon University (CMU)
Pittsburgh, PA
USA
Fan Li
The University of Dublin
Distributed Systems Group
SCSS
Trinity College Dublin
College Green
Dublin 2, Dublin
Ireland
Tao Lin
School of Computer and Communication Sciences
École Polytechnique Fédérale de Lausanne
Lausanne
Switzerland
Zening Liu
School of Information Science and Technology
ShanghaiTech University
Shanghai
China
George Iosifidis
School of Computer Science and Statistics
Trinity College Dublin, University of Dublin
Ireland
Yuan‐Yao Lou
Graduate Institute of Networking and Multimedia and Department of Computer Science and Information Engineering
National Taiwan University
Taipei City
Taiwan
Leandros Maglaras
School of Computer Science and Informatics
Cyber Technology Institute
De Montfort University, Leicester
UK
Eva Marín
Advanced Network Architectures Lab (CRAAX)
Universitat Politècnica de Catalunya (UPC)
Vilanova i la Geltrú, Barcelona
Spain
Xavi Masip
Advanced Network Architectures Lab (CRAAX)
Universitat Politècnica de Catalunya (UPC)
Vilanova i la Geltrú, Barcelona
Spain
David McKee
School of Computing
University of Leeds
Leeds
UK
Mithun Mukherjee
Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis
Guangdong University of Petrochemical Technology
Maoming
China
Ai‐Chun Pang
Graduate Institute of Networking and Multimedia and Department of Computer Science and Information Engineering
National Taiwan University
Taipei City
Taiwan
Yichen Ruan
Department of Electrical and Computer Engineering
Carnegie Mellon University (CMU)
Moffett Field, CA
USA
Sergi Sànchez
Advanced Network Architectures Lab (CRAAX)
Universitat Politècnica de Catalunya (UPC)
Vilanova i la Geltrú, Barcelona
Spain
Hamed Shah‐Mansouri
Department of Electrical and Computer Engineering
The University of British Columbia
Vancouver
Canada
Yuan‐Yao Shih
Department of Communications Engineering
National Chung Cheng University
Taipei City
Taiwan
Leandros Tassiulas
Department of Electrical Engineering, and Institute for Network Science
Yale University
New Haven, CT
USA
Kunlun Wang
School of Information Science and Technology
ShanghaiTech University
Shanghai
China
Joe Weinman
XFORMA LLC
Flanders, NJ
USA
Zhenyu Wen
School of Computing
Newcastle University
Newcastle upon Tyne
UK
Gary White
Distributed Systems Group, SCSS, Trinity College Dublin
The University of Dublin
College Green
Dublin 2, Dublin
Ireland
Vincent W.S. Wong
Department of Electrical and Computer Engineering
The University of British Columbia
Vancouver
Canada
Jie Xu
School of Computing
University of Leeds
UK
Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC)
Beihang University
Beijing
China
Renyu Yang
School of Computing
University of Leeds
UK
Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC)
Beihang University
Beijing
China
Yang Yang
Shanghai Institute of Fog Computing Technology (SHIFT)
ShanghaiTech University
Shanghai
China
Tao Zhang
National Institute of Standards and Technology (NIST)
Gaithersburg, MD
USA
Shuang Zhao
Shanghai Institute of Microsystem and Information Technology (SIMIT)
Chinese Academy of Sciences
China
Liang Zheng
Department of Electrical Engineering
Princeton University
Princeton, NJ
USA
Zhi Zhou
School of Data and Computer Science
Sun Yat‐sen University
Guangzhou
China
In the eternal dance driven by the evolution of technology and its applications, computing infrastructure has evolved through numerous waves, from the mainframe, to the minicomputer, to the personal computer, client‐server, the smartphone, the cloud, and the edge. Whereas the cloud typically is viewed as pooled, centralized resources and the edge comprises the distributed resources that connect to endpoint devices and things, the fog, which is the latest wave, spans the cloud to device continuum.
To understand the fog, it helps to first understand the cloud. Cloud computing has a variety of definitions, ranging from those of standards bodies, to axiomatic and theoretical frameworks, to various vendor and analyst marketing and positioning statements. It typically is viewed as processing, storage, network, platform, software, and services resources that are available to multiple customers and various workload types. These resources are available "for rent" under a variety of pricing models, such as by the hour, by the minute, by the transaction, by the user, and so forth. Further variations include freemium models, discounts for advance reservation and purchase or for sustained flat use, and dynamic pricing. While some analysts define the cloud as having these resources accessed over the (public) Internet, there is no reason that other networking technologies cannot be used as well, ranging from cellular wireless radio access networks to interconnection facilities to dense wavelength division multiplexing and a variety of other public and private networks.
In any event, the reality of the cloud is that the major cloud providers have each built dozens of large hyper‐scale facilities packed with thousands, or even hundreds of thousands, of servers, whose capacity and services are accessible on demand and with pay‐per‐use charging by a wide variety of customers. This "short‐term rental" consumption and business model exists in many industries beyond cloud computing, e.g. overnight stays in hotels for a per‐night fee; car rentals at a daily rate; airline, train, and bus tickets for each trip; dining at restaurants and cafés. It even exists in places that we do not normally consider: a bank loan is a means of renting capital by the day or month, where the pay‐per‐use fee is called the interest rate.
Cloud computing use is still growing at astronomical rates, due to the many advantages that it offers. Clouds gain their strength in large part through their consolidation into large masses of resources. This enables cost‐effective dynamic allocation of resources to customers on demand and with a pay‐per‐use charging model. Large hotels can offer rooms for rent at attractive rates because when one convention leaves, another one begins checking in, and the remaining breakage is rented out to other people. Rental car agencies have thousands of customers; when some are returning cars, others are driving them, and still others are arriving at the counters to begin their rentals. In addition to economies of scale, these demand smoothing effects through statistical multiplexing of multiple diverse customer workloads help generate a compelling customer value proposition. They enable elasticity for many workloads, and smoothing enables higher utilization than if the varying workloads were partitioned into smaller silos. Higher utilization reduces wasted resources, lowering the unit cost of each resource.
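This smoothing claim is easy to check numerically. The sketch below is our illustration, not the book's: each pooled workload draws an independent Gaussian demand (the mean and standard deviation are invented), capacity is provisioned at two standard deviations above the aggregate mean, and utilization is the mean demand divided by that provisioned peak.

```python
import random
import statistics

# Illustrative simulation of statistical multiplexing (assumed parameters):
# pool n independent workloads, provision peak capacity at mean + 2*sd of the
# aggregate, and report utilization = aggregate mean / provisioned capacity.
def pooled_utilization(n_workloads, timesteps=2000, mean=100.0, sd=30.0, seed=42):
    rng = random.Random(seed)
    totals = [sum(max(0.0, rng.gauss(mean, sd)) for _ in range(n_workloads))
              for _ in range(timesteps)]
    mu, sigma = statistics.fmean(totals), statistics.stdev(totals)
    return mu / (mu + 2 * sigma)

for n in (1, 10, 100, 1000):
    print(f"{n:>4} pooled workloads -> utilization {pooled_utilization(n):.1%}")
# The aggregate's sd grows like sqrt(n) while its mean grows like n, so
# independent fluctuations cancel and utilization climbs toward 100%.
```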
However, this main advantage of the cloud – consolidated resources – is also its main weakness. Hyper‐scale size and centralized pooled resources mean that computing and storage are located far from their actual use in factories, automobiles, smartphones, wearables, irrigation sensors, and the like. Moreover, in stark contrast to the days when computers were housed in temples and only acolytes could tend to them, computing has become pervasive, ubiquitous, low power, and cheap. Rather than the alleged prognostication from decades ago that there was a world market for “maybe five computers,” there are tens of billions of intelligent devices distributed in the physical world. It is clear that sooner or later, we will have hundreds of billions – or even a trillion – smart, connected, digital devices. It is an easy calculation to make. There are seven billion people in the world, so it only takes 15 devices per person, on average, to reach 100 billion globally. In the developed world, it is not unusual for an individual to have 4 or 5 video surveillance cameras, a few smart speakers, a laptop, a desktop, a tablet, a smartphone, some smart TVs, a fitness tracker, and a few Wi‐Fi lightbulbs or outlets. To this basic observation one can add three main insights.
First, the global economy is developing even as the price of technology is plummeting, suggesting that every individual will be able to own multiple such devices.
Second, ever more devices are becoming smart and connected. For example, the smart voice‐activated microwave has been introduced by Amazon; soon it will be virtually impossible to buy an object that is not smart and connected.
Third, these calculations often undercount the number of devices out there, because in addition to consumer devices owned by an individual or household, there will be additional tens or hundreds of billions of devices such as manufacturing robots, traffic lights, retail point‐of‐sale systems, hospital wheelchair tracking systems, and autonomous delivery vehicles. A trillion connected devices can be reached if every individual has sixty or seventy devices – not unlikely once you start adding in light bulbs and outlets – and nonconsumer devices make up the other half‐trillion.
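Both headline counts follow from back-of-the-envelope arithmetic:

$$7 \times 10^{9}\ \text{people} \times 15\ \tfrac{\text{devices}}{\text{person}} \approx 1.05 \times 10^{11} \approx 100\ \text{billion devices},$$

$$7 \times 10^{9} \times 70 \approx 5 \times 10^{11}\ \text{consumer devices} \;+\; 5 \times 10^{11}\ \text{nonconsumer devices} \approx 10^{12}.$$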
These massive numbers of resource‐limited devices, with various functionalities and capabilities, will, once deployed and connected, constitute the future Internet of Things (IoT) and enable different intelligent applications and services, such as environment monitoring, autonomous driving, city management, and medicine and health care. Moreover, emerging wireless capabilities, as embodied in 5G, reduce latency from tens of milliseconds to single digits. Fully taking advantage of these capabilities requires processing and storage resources in proximity to the device. There is absolutely no way that the optimal system architecture in such a situation would be to interconnect all these devices across a dumb wide area network to a remote consolidated facility, i.e. the cloud. Instead, multiple layers of processing and storage are needed to bring order, collaboration, intelligence, and solutions out of what otherwise would be a random chaos of devices.
This is the fog.
A number of synonyms and related concepts with nuanced differences exist, such as edge computing, mobile edge computing, osmotic computing, pervasive computing, ubiquitous computing, mini‐clouds, cloudlets, and so on.
And, various bodies have proposed various definitions. The OpenFog Consortium defines fog computing as “a system‐level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.” The US National Institute of Standards and Technology similarly defines it as a “horizontal, physical or virtual resource paradigm that resides between smart end‐devices and traditional cloud or data centers. This paradigm supports vertically‐isolated, latency‐sensitive applications by providing ubiquitous, scalable, layered, federated, and distributed computing, storage, and network connectivity.”
In other words, the fog is simply multiple interconnected layers of computing along the continuum from cloud to endpoints such as user devices and things. This may include racks or microcells in server closets, residential gateways, factory control systems, and the like.
Whereas clouds are hyper‐scale, fog nodes may be intermediate in size, or even miniature. Whereas clouds rely on multiple customers and workloads, fog nodes may be dedicated to one customer, and even one use. Whereas clouds have state‐of‐the‐art power distribution architectures, including multiple grids with diverse access, generators and/or fuel cells, or hydrothermal energy, fog nodes may be powered by batteries or even energy scavenging. Whereas clouds use advanced thermal management strategies, including hot‐cold aisles, water cooling, and airflow simulation and optimization, fog nodes may be cooled by the ambient environment. Whereas clouds are built in walled data centers, fog nodes may be in homes, factories, agricultural fields, or vineyards. Whereas clouds have fixed street addresses, fog nodes may be mobile. Whereas clouds are engineered for uptime and five‐nines connectivity, fog nodes may be only intermittently powered, available, within a coverage area, or functional. Whereas clouds are offered by a specific vendor, fog solutions are inherently heterogeneous ecosystems.
Perhaps this is why fog is likely to have an impact across many domains – the economy, technology, standards, market disruption, society and culture, and innovation – on par with cloud computing's impact.
Of course, just as the cloud's advantages are also its weaknesses, fog's advantages can be its weaknesses. The strength of mobility can lead to intermittent connectivity, which increases the challenges of reliable message passing. Low latency to endpoints means high latency to massive databases, which may reside in the cloud. Small footprints can mean an inability to process massive compute jobs. Heterogeneity can create robustness by largely eliminating systemic failures due to design flaws; it can also create a nightmare for monitoring, management, and root cause analysis. This book will document, explore, and quantify many of these challenges and identify and propose solutions and promising directions for future research.
We, the editors, sincerely hope that this collection of insights from the world's leading fog experts and researchers helps you in your journey to the fog.
Shanghai, China, 27 May 2019
Yang Yang, Jianwei Huang, Tao Zhang, Joe Weinman
Yang Yang1, Jianwei Huang2, Tao Zhang3, and Joe Weinman4
1Shanghai Institute of Fog Computing Technology (SHIFT), ShanghaiTech University, Shanghai, China
2School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
3National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA
4XFORMA LLC, Flanders, NJ, USA
As a new computing paradigm, fog computing serves as the bridge that connects centralized clouds and distributed edges of the network, and plays a crucial role in managing and coordinating multitier computing resources at the cloud, in the network, at the edge, and on the things (devices). In other words, fog computing provides a new architecture that spans the cloud‐to‐things continuum, effectively pooling dispersed computing resources at global, regional, local, and device levels to quickly meet various service requirements. Together with the edge, fog computing ensures timely data processing, situation analysis, and decision‐making at locations close to where the data are generated and should be used. Together with the cloud, fog computing supports more intelligent applications and sophisticated services in different industrial verticals and scenarios, such as cross‐domain data analysis, pattern recognition, and behavior prediction. Some infrastructure challenges and constraints in communication bandwidth, network connectivity, and service latency can be successfully addressed by fog computing, since it makes computing resources in any network more accessible, flexible, efficient, and cost‐effective. There is no doubt that fog computing will not only empower end users by enabling intelligent services in their neighborhoods but also, more importantly, deliver a broad variety of benefits to businesses, consumers, governments, and societies. This book aims at providing a state‐of‐the‐art review and analysis of the key opportunities and challenges of fog computing in different application scenarios and business models.
The following three chapters address different technical and economic issues in collaborative fog and cloud scenarios. Specifically, Chapter 2 introduces the hybrid fog–cloud scenario that combines the whole set of resources from the edge up to the cloud, describing the challenges that need to be addressed to enable realistic management solutions and reviewing current efforts. The authors propose an architectural solution called Fog‐to‐Cloud (F2C) as a candidate to efficiently manage the set of resources in the IoT‐fog‐cloud stack. This architectural solution is conceptually supported by a service‐ and technology‐agnostic software solution, which is discussed thoroughly in the chapter in comparison to other existing initiatives. The proposed F2C architecture has two key advantages: (i) it is open and secure by design, easily adoptable by any system environment through distinct software suites, and (ii) it has an inherent collaborative model that enables multiple users to optimize resource utilization and service execution. Finally, the authors analyze the main challenges in building a stable, scalable, and optimized solution, from both the resource and service perspectives, with special attention to how data must be managed.
In Chapter 3, the authors give an overview of fog computing and highlight the challenges due to the tremendous growth of various Internet of Things (IoT) systems and applications in recent years. They propose a mechanism to efficiently allocate the computing resources in the cloud and fog to different IoT users, in order to maximize their quality of experience (QoE), i.e. achieve lower energy consumption and computation delay. The competition among multiple users is modeled as a potential game to determine the computation offloading decisions. The existence of a pure Nash equilibrium (NE) is proven for this game, and it is shown that the equilibrium efficiency loss due to the strategic behavior of users is bounded. A best response strategy algorithm is then developed to obtain an NE of the computation offloading game. Numerical results reveal that the proposed mechanism significantly enhances the overall QoE; in particular, 18% more users can benefit from computing services than under the existing offloading mechanism. The results also demonstrate that the proposed mechanism is a promising way to enable low‐latency computing services for delay‐sensitive IoT applications.
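To make the best-response idea concrete, here is a minimal sketch of best-response dynamics for a congestion-style offloading game. The cost model, function names, and numbers are our invention for illustration; the chapter's actual game, with its QoE, energy, and delay terms, is richer.

```python
LOCAL, FOG = 0, 1

def costs(i, decisions, local_cost, fog_base, congestion):
    # Fog delay grows with the number of users who offload, so choices interact.
    n_other = sum(1 for j, d in enumerate(decisions) if d == FOG and j != i)
    fog_cost = fog_base[i] + congestion * (n_other + 1)  # +1 counts user i itself
    return {LOCAL: local_cost[i], FOG: fog_cost}

def best_response_dynamics(local_cost, fog_base, congestion=1.0, max_rounds=100):
    decisions = [LOCAL] * len(local_cost)
    for _ in range(max_rounds):
        changed = False
        for i in range(len(decisions)):
            c = costs(i, decisions, local_cost, fog_base, congestion)
            best = min(c, key=c.get)
            if best != decisions[i]:
                decisions[i], changed = best, True
        if not changed:  # no user can unilaterally improve: a pure Nash equilibrium
            return decisions
    return decisions

# Four users with heterogeneous local and fog costs (numbers invented).
print(best_response_dynamics([9, 7, 8, 3], [2, 2, 4, 1]))  # -> [1, 1, 1, 0]
```

Because offloading cost rises with congestion, this toy game is a potential game, so the round-robin best-response loop terminates at a pure NE, mirroring the convergence argument sketched above.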
In Chapter 4, the authors examine the pricing and performance trade‐offs in data analytics. First, they introduce the different types of computing devices employed in fog and cloud scenarios, review the pricing techniques currently in use, and discuss their implications for performance criteria like accuracy and latency. Then, a data analytics case is studied on a testbed of temperature sensors, where the temperature readings can be analyzed either at local Raspberry Pis or on a cloud server. Local analysis reduces the communication overhead, as raw data are no longer sent to the cloud server, but it lengthens the computation time, as Raspberry Pis have less computing capacity than cloud servers. Thus, it is not immediately clear whether fog‐based or cloud‐based analysis leads to a lower overall completion time; indeed, a hybrid algorithm that can utilize both types of resources in parallel will likely minimize the completion time. However, the choice between a fog‐based, cloud‐based, or hybrid algorithm also induces different monetary costs (including both computation and data transmission costs) and may lead to different levels of accuracy, since the local analysis involves analyzing only subsets of data and later combining their results, due to the Raspberry Pis' limited computing capacity. The authors examine these trade‐offs for a simple linear regression scenario and show that there is a threshold number of samples above which a hybrid algorithm is preferred to the cloud‐based one.
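A toy model of that trade-off (all rates and delays are invented, and the accuracy and cost dimensions are omitted): local fog compute is slow but avoids the network, cloud compute is fast but pays a round trip plus upload, and a hybrid splits the n samples so both finish together.

```python
# Hypothetical parameters: 200 samples/s on the Pi; 2000 samples/s in the
# cloud, plus a 50 ms round trip and an 800 samples/s uplink.
def fog_time(n):   return n / 200.0
def cloud_time(n): return 0.05 + n / 2000.0 + n / 800.0
def hybrid_time(n, frac_fog):
    k = int(n * frac_fog)                       # samples analyzed locally
    return max(fog_time(k), cloud_time(n - k))  # fog and cloud run in parallel

for n in (50, 500, 5000):
    best = min((f / 100 for f in range(101)), key=lambda f: hybrid_time(n, f))
    print(f"n={n:>5}  fog={fog_time(n):7.2f}s  cloud={cloud_time(n):6.2f}s  "
          f"hybrid={hybrid_time(n, best):6.2f}s  (local share {best:.2f})")
```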
In Chapter 5, the authors outline a number of qualitative and quantitative arguments and frameworks to help rationally assess the economic benefits and trade‐offs between different approaches. For example, resource consolidation tends to increase latency to and from distributed edge and fog services. On the other hand, it tends to reduce latency to cloud‐based data and services. The statistics of independent, identically distributed workload demands can benefit from aggregation: multiple independent varying workloads tend to “cancel” each other out, leading to a precisely quantifiable smoothing effect that boosts utilization for a given resource level, which in turn reduces the weighted unit cost of resources. In short, there are many quantifiable characteristics of the fog, which can be evaluated in light of alternative architectures. Ultimately this illustrates that there is no “perfect” solution, as trade‐offs need to be quantified and assessed in light of specific application requirements.
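One standard way to make the "precisely quantifiable smoothing effect" concrete (our illustration, not the chapter's derivation): for $n$ independent, identically distributed demands $D_i$ with mean $\mu$ and standard deviation $\sigma$,

$$\mathrm{mean}\Big(\sum_{i=1}^{n} D_i\Big) = n\mu, \qquad \mathrm{sd}\Big(\sum_{i=1}^{n} D_i\Big) = \sqrt{n}\,\sigma, \qquad \mathrm{CV}_n = \frac{\sqrt{n}\,\sigma}{n\mu} = \frac{\sigma}{\mu\sqrt{n}},$$

so provisioning capacity at $k$ standard deviations above mean demand gives utilization

$$U_n = \frac{n\mu}{n\mu + k\sqrt{n}\,\sigma} = \frac{1}{1 + k\sigma/(\mu\sqrt{n})} \xrightarrow{\,n\to\infty\,} 1.$$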
In Chapter 6, the authors analyze the design challenges of incentive mechanisms for encouraging user engagement in user‐provided infrastructures (UPIs). Motivated by novel business models in network sharing solutions, they focus on mobile UPIs, where energy consumption and data usage costs are critical, while storage and computation resources are limited. Hence, these parameters have a large impact on users' decisions to request/offer resources from/to UPIs. This chapter reviews a set of incentive schemes that have been proposed for such UPIs, leveraging cooperative game theory, bargaining theory, and auctions. The authors shed light on the attained equilibria and study their efficiency and sensitivity to various system parameters. Furthermore, the impact of the network graph on the collaboration benefits in UPI systems is modeled and analyzed, and whether local user interactions achieve system‐wide efficient sharing equilibria is explored. Finally, key bottleneck issues are discussed in order to unleash the full potential of UPIs in fog computing.
In Chapter 7, the authors introduce a Fog‐based Service Enablement Architecture (FogSEA), which is a light‐weight, decentralized service enablement model. It supports fog service sharing at network edges by adopting a hierarchical management strategy and underpins cross‐domain IoT applications through a semantic‐based overlay network. They also propose the Semantic Data Dependency Overlay Network (SeDDON), which maintains semantic information about available microservices. SeDDON aims to reduce traffic cost and response time during service discovery. FogSEA produces less traffic and takes less time to return an execution result compared to the baseline approach. Generally, traffic increases as more microservices join the network. SeDDON creation requires fewer messages at varying connectivity densities and microservice numbers. The main reason is that SeDDON allows microservices to advertise their services only once, when they join the network, and only the microservice that detects the new node as a reverse‐dependence neighbor needs to reply.
In Chapter 8, the authors first discuss the new characteristics and open challenges of realizing fog orchestration for IoT services, before summarizing the fundamental requirements. Then, they propose a software‐defined orchestration architecture that decouples software‐based control policies from the dependencies and operations of heterogeneous hardware. This design can intelligently compose and orchestrate thousands of heterogeneous fog appliances. Specifically, a resource filtering‐based resource assignment mechanism is developed to optimize resource utilization and ensure fair resource sharing among multitenant IoT applications. Additionally, a component selection and placement mechanism is adopted for containerized IoT microservices to minimize latency, while accounting for network uncertainty and security and considering different application requirements and appliance capabilities. Finally, a fog simulation platform is presented to evaluate the aforementioned procedures by modeling the entities, their attributes, and their actions. Practical experience shows that the proposed parallelized orchestrator can reduce the execution time by 50% with at least 30% higher orchestration quality.
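The filtering step can be pictured with a small sketch (node fields, thresholds, and the fairness heuristic below are hypothetical, not the chapter's actual mechanism): infeasible appliances are filtered out first, and the least-overprovisioned feasible node is then selected so that large appliances remain free for more demanding tenants.

```python
# All node fields, thresholds, and the fairness heuristic are hypothetical.
nodes = [
    {"name": "gateway-1", "cpu": 2.0,  "mem": 1.0,   "latency_ms": 5},
    {"name": "microcell", "cpu": 8.0,  "mem": 16.0,  "latency_ms": 15},
    {"name": "cloud-dc",  "cpu": 64.0, "mem": 256.0, "latency_ms": 80},
]

def assign(service):
    # Step 1 (filtering): drop nodes that miss any hard requirement.
    feasible = [n for n in nodes
                if n["cpu"] >= service["cpu"]
                and n["mem"] >= service["mem"]
                and n["latency_ms"] <= service["max_latency_ms"]]
    # Step 2 (assignment): pick the least-overprovisioned feasible node,
    # leaving large appliances free for more demanding tenants.
    return min(feasible, key=lambda n: n["cpu"] - service["cpu"], default=None)

choice = assign({"cpu": 1.5, "mem": 0.5, "max_latency_ms": 20})
print(choice["name"] if choice else "no feasible node")  # -> gateway-1
```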
In Chapter 9, the authors focus on the problem of reliable Quality of Service (QoS)‐aware service choreography within a fog environment where service providers may be unreliable. A distributed QoS‐optimized adaptive system is proposed to help users select the best available service based on its reputation and to monitor the run‐time performance of the service against the predetermined Service Level Agreement (SLA). A service adaptation model is described to keep the system operating at the expected run‐time QoS when the SLA is violated. In addition, a performance validation mechanism is developed for the fog environment, which adopts a monitoring and negotiation component to enable the reputation system.
In Chapter 10, the authors consider a typical fog network consisting of multiple fog nodes (FNs), wherein some task nodes (TNs) have heavy computation tasks, while some helper nodes (HNs) have spare resources to share with their neighboring nodes. To minimize the delay of every task in such a fog network, a noncooperative game is formulated and investigated to model the competition among TNs for the communication resources and computation capabilities of HNs. Then, a comprehensive analytical model that considers circuit, computation, and offloading energy consumption is developed for accurately evaluating the overall energy efficiency. With this model, the trade‐off between performance gains and energy costs in collaborative task offloading is investigated. A novel delay energy balanced task scheduling (DEBTS) algorithm is proposed to minimize the overall energy consumption while reducing average service delay and delay jitter. Extensive simulation results show that DEBTS offers much better delay‐energy performance in task scheduling.
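In the spirit of (but not identical to) DEBTS, a scheduler can trade delay against energy with a single weight; all node parameters below are invented for illustration.

```python
# Invented node parameters: bandwidth (bits/s), CPU speed (cycles/s),
# transmit energy (J/bit), and compute energy billed to the task (J/cycle).
def schedule(tasks, nodes, w=0.5):
    """Place each (cycles, bits) task on the node minimizing
    w * delay + (1 - w) * energy; w steers the delay-energy balance."""
    placement = []
    for cycles, bits in tasks:
        def cost(n):
            delay = bits / n["bw_bps"] + cycles / n["cpu_hz"]      # transmit + compute
            energy = n["tx_j_per_bit"] * bits + n["j_per_cycle"] * cycles
            return w * delay + (1 - w) * energy
        best = min(nodes, key=cost)
        placement.append(best["name"])
        best["cpu_hz"] *= 0.9  # crude congestion proxy: a busy node looks slower
    return placement

nodes = [
    {"name": "local",  "bw_bps": 1e12, "cpu_hz": 1e9, "tx_j_per_bit": 0.0,  "j_per_cycle": 1e-9},
    {"name": "helper", "bw_bps": 1e7,  "cpu_hz": 4e9, "tx_j_per_bit": 1e-7, "j_per_cycle": 0.0},
]
print(schedule([(2e9, 1e6), (2e9, 1e6)], nodes, w=0.7))  # -> ['helper', 'helper']
```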
In Chapter 11, the authors explore both noncooperative and cooperative perspectives of resource sharing issues in multiuser fog networks. On one hand, for the noncooperative distributed computation offloading scenario, the authors develop a game theoretic mechanism with fast convergence property and good performance guarantee. On the other hand, for the cooperation‐based centralized computation offloading scenario, the authors devise a holistic dynamic scheduling framework for collaborative computation offloading, by taking into account a variety of system factors including resource heterogeneity and energy efficiency. Extensive performance evaluations demonstrate that the proposed competitive and cooperative computation offloading schemes can achieve superior performance gains over the existing approaches.
In Chapter 12, the authors design and implement an elastic fog storage solution that is fully client‐centric, allowing it to handle variable availability and possible untrustworthiness at different remote storage locations. Availability, security, and storage efficiency are ensured by employing data deduplication and erasure coding to guarantee a user's ability to access his or her files. Using the FUSE library, a prototype with proper POSIX interfaces is developed and implemented to study feasibility and practicality issues, such as reusing file statistics to avoid the metadata management overhead of a database system. The proposed method is evaluated using Amazon S3 as a cloud server and five edge/thing resources; the solution outperforms cloud‐only solutions and is robust to edge node failures, seamlessly integrating multiple types of resources to store data. Other fog‐based applications can take advantage of this service as a data storage platform.
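The recovery guarantee behind erasure coding can be illustrated with a toy single-parity (k+1, k) code in pure Python; the chapter's system uses a general t-of-n code plus deduplication, so this is only a sketch of the idea.

```python
# Toy sketch only: a single-parity (k+1, k) erasure code. Any one of the
# k+1 shares may be lost and the data can still be reconstructed.
def encode(data: bytes, k: int = 4):
    data = data.ljust(-(-len(data) // k) * k, b"\0")  # pad to a multiple of k
    size = len(data) // k
    shares = [bytearray(data[i * size:(i + 1) * size]) for i in range(k)]
    parity = bytearray(size)
    for s in shares:
        for i, b in enumerate(s):
            parity[i] ^= b
    return shares + [parity]

def decode(shares, lost_index):
    # XOR of all surviving shares (parity included) rebuilds the lost one.
    size = len(next(s for s in shares if s is not None))
    rebuilt = bytearray(size)
    for idx, s in enumerate(shares):
        if idx != lost_index:
            for i, b in enumerate(s):
                rebuilt[i] ^= b
    recovered = list(shares)
    recovered[lost_index] = rebuilt
    return b"".join(bytes(s) for s in recovered[:-1])  # drop parity share

shares = encode(b"fog storage demo")
shares[2] = None                                   # simulate an edge node failure
print(decode(shares, 2).rstrip(b"\0"))             # -> b'fog storage demo'
```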
In Chapter 13, the authors propose a system design of Virtual Local‐Hub (VLH) to effectively communicate with ubiquitous wearable devices, thus extending connection ranges and reducing response time. The proposed system deploys wearable services at edge devices and modifies the system behavior of wearable devices. Consequently, wearable devices can be served at the edge of the network without data traveling via the Internet. Most importantly, the system modifications on wearable devices are transparent to both users and application developers, so that the existing applications can fit into the system naturally without any modifications. Due to the limited computing capacity of edge devices, the execution environment needs to be light‐weight. Thus, the system enables remote sharing of common and native function modules on edge devices. By using off‐the‐shelf hardware, a testbed is developed to conduct extensive experiments. The results show that the execution time of wearable services can be reduced by up to 60% with a low system overhead.
In Chapter 14, the authors present an overview of the primary security and privacy issues in fog computing and survey the state‐of‐the‐art solutions that deal with the corresponding challenges. Then, they discuss major attacks in fog‐based IoT applications and provide a side‐by‐side comparison of the state‐of‐the‐art methods toward secure and privacy‐preserving fog‐based IoT applications. The chapter summarizes up‐to‐date research contributions and outlines future research directions that researchers can follow in order to address different security and privacy preservation challenges in fog computing.
We hope you enjoy reading this book on both technical and economic issues of fog computing. More importantly, we will be very happy if some chapters could inspire you to generate new ideas, solutions, and contributions to this exciting research area.
Xavi Masip, Eva Marín, Jordi Garcia, and Sergi Sànchez
Advanced Network Architectures Lab (CRAAX), Universitat Politècnica de Catalunya (UPC), Vilanova i la Geltrú, Barcelona, Spain
The collaborative scenario is introduced in Section 2.1, paying special attention to the architectural models as well as to the challenges posed when the fog–cloud collaborative model is enriched with new strategies such as innovative resource sharing. Section 2.2 then briefly introduces fog‐to‐cloud (F2C), one of the architectural contributions proposed for this hybrid scenario, showing its main benefits in different verticals and highlighting open questions that remain unsolved. Section 2.3 describes the main F2C challenges (which also apply to any F2C‐like architecture), split into three domains – research, industry, and business – to illustrate, at a broad level, what a successful deployment requires. Section 2.4 introduces ongoing work in well‐established fora, with the aim of providing readers with pointers to the main active efforts in the area. Certainly, as of today, this is a very active area, and many other relevant works are under way, as a quick pass over the recent programs of reputed conferences and high‐impact journals shows. However, the aim of this chapter is not to report all contributions addressing all foreseen challenges, but rather to point the reader to the most active repositories. Section 2.5 addresses the insights of data management in the Internet of Things (IoT)‐fog‐cloud scenario, emphasizing its challenges as well as the benefits an F2C‐like architecture may bring in addressing them. Finally, Section 2.6 summarizes what the near future is expected to bring and opens some additional questions for further discussion.
It is widely accepted that fog computing, as a recently coined computing paradigm, is driving – and will continue to drive – many opportunities in the business sector for developers and for service and infrastructure providers, as well as many research avenues for the scientific community. However, beyond the characteristics explicitly inherent to fog computing, all of which bring benefits leading to optimal service execution, a key contribution of a successful fog computing deployment lies in the novel opportunities created by its interaction with cloud computing. The central aim of this chapter is to roll out the insights into this novel collaborative scenario, emphasizing the expected benefits not only for users as service consumers but also for those within the whole community willing to actively participate in the new roles envisioned for this paradigm.
While cloud computing (and certainly the relevance and added value of its products) has been instrumental in facilitating a wider deployment of so‐called IoT services, and in empowering society toward wide utilization of Internet services in general, some specific aspects of its deployment highlight several noteworthy constraints. Indeed, although cloud computing is the major and widely adopted commodity [1] designed to address the ever‐increasing demand for computing and processing information, conceptually supported by its massive storage and huge processing capabilities, it presents well‐known limitations in meeting the demands of IoT services requiring low latency, which can neither be overlooked in near‐future IoT deployments nor easily addressed with current network transport technologies. Additionally, beyond the added delay, the long distance from the edge device – where data are collected and services are requested – to far‐away datacenters raises non‐negligible issues that notably impact key performance aspects. For example, the need to convey huge volumes of data from the edge up to the cloud significantly overloads the network with traffic that, if handled at the edge, would not need to be forwarded to the cloud. Equally important, the large gap between the edge and the cloud drives the need to consider specific security aspects that might also be avoided by staying local at the edge.
Fortunately, fog computing leverages the capabilities that devices located at the edge of the network, such as smart vehicles or 5G mobile phones, bring in to enable service execution closer to IoT users. Thus, the overall service response time – critical for real‐time services – may be substantially reduced, while simultaneously removing the need to forward traffic through the core network and closing some of the cloud's security gaps, all with a notable impact on energy consumption [2]. Nevertheless, as a novel technology, fog computing faces major issues which, if unaddressed, may hinder its real deployment and exploitation and limit its applicability. Two major fog characteristics must be considered: (i) fog storage and processing capabilities are limited compared to the cloud, and (ii) resource volatility, inherent to the mobility and energy constraints of fog devices, may cause undesired service disruptions. These challenges may hinder the adoption of fog computing by potential users, be they traditional datacenter operators, ISPs, or new actors such as smart city managers or smart service clients. For the sake of literature review, and given that fog computing is in its infancy, a large body of research contributions is being published in the fog arena, from specific solutions to particular problems to wide surveys highlighting the main fog research avenues; see for example [3].
It is also worth emphasizing that the aforementioned limitations of fog and cloud computing are here to stay. In other words, the specific nature of the IoT scenario and the envisioned IoT evolution are both driving toward a more dynamic scenario, where mobility becomes an intrinsic attribute of many devices and heterogeneity at many levels – hardware, network technologies, interfaces, programming models, etc. – is a key characteristic, with no foreseen boundary on that evolution. In fact, there is currently no foreseeable limit for IoT, either in the number of devices to be deployed or in the services to be offered. Many reports from well‐reputed consulting companies worldwide foresee impressive growth in all IoT‐related aspects, which not only shows the good health of these technologies and the extraordinary impact they will have on society at large, but also the demand for persistent endeavor from the research and industrial sectors.
Thus, it seems reasonable to conclude that although cloud computing, as a high‐performance computing paradigm, already has a solid footprint in the daily life of many users and services all over the world, a great potential is envisioned when putting together the benefits of cloud computing with the innovative scenario defined by fog computing. In fact, cloud computing and fog computing are not competing, but collaborating toward a novel hybrid scenario where service execution may benefit from the whole set of advantages brought by both.
Designed not to compete with but to complement cloud computing, fog computing paves the way for a novel, enriched scenario where service execution may benefit from resource continuity from the edge to the cloud. From a resources perspective, this combined scenario requires resource continuity when executing a service, whereby the selection of resources for service execution remains independent of their physical location. Table 2.1, extending the data cited in [4] and considering different computational layers from the edge up to the cloud, shows how the different layers can host different devices, along with the relevant features of each, including application examples. As can be seen, an appropriate resource categorization and selection is needed to help optimize service execution, while simultaneously alleviating combined problems of security, resource efficiency, network overloading, etc.
From a formal perspective, it is also worth devoting some effort to converging on a widely accepted set of terms so as to avoid misleading information. Aligned with this effort, the OpenFog Consortium (OFC) and the Edge Computing Consortium (ECC) are doing a great job of setting a preliminary glossary of terms that may align the whole community on the same wording (cf. [5,6], respectively). For example, although many contributions refer to edge computing and fog computing interchangeably, the OFC in [6] considers fog computing as "a superset of edge computing," defining edge computing as the scenario where "applications, data and processing are placed at the logical extremes of a network rather than centralizing them." Another interesting discussion focuses on what a fog node should be, including terms such as cloudlets or mini DCs, or even the need to consider virtual instances (cf. [7]).
Table 2.1 Resource continuity possibilities in a layered architecture (from [4]).
|                              | Fog: Edge devices           | Fog: Basic/aggregation nodes   | Fog: Intermediate nodes            | Cloud                          |
|------------------------------|-----------------------------|--------------------------------|------------------------------------|--------------------------------|
| Device                       | Sensor, actuator, wearables | Car, phone, computer           | Smart building, cluster of devices | Datacenter                     |
| Response time                | Milliseconds                | Subseconds, seconds            | Seconds, minutes                   | Minutes, weeks, days           |
| Application examples         | M2M communication, haptics  | Dependable services (e‐health) | Visualizations, simple analytics   | Big data analytics, statistics |
| How long IoT data are stored | Transient                   | Minutes, hours                 | Days, weeks                        | Months, years                  |
| Geographic coverage          | Device                      | Connected devices              | Area, cluster                      | Global                         |
All in all, the envisioned scenario can be seen as a collaborative IoT‐fog‐cloud context (also referred to as the IoT continuum), distributed into a set of layers, each gathering distinct devices (resources) according to their characteristics. Figure 2.1 shows an illustrative representation of the whole hybrid fog–cloud scenario, including the IoT devices at the bottom, the cloud datacenter at the top, and, in between, smart elements, i.e. fog nodes, with enough capacity to execute some data processing, forming the fog. These fog nodes are responsible for preliminary data filtering and processing, properly customized to the nodes' capacities and characteristics, limiting the role of far cloud datacenters to specific needs not covered by the set of fog nodes. Finally, as also shown in Figure 2.1, since all components must be connected, the network technologies deployed to guarantee such connectivity play a significant role as well. Certainly, the advent of new communication technologies, such as 5G or LoRa, endows the whole scenario with capacities not foreseen before, which may drive outstanding innovations impacting society's daily activities.
Figure 2.1 Overall resources topology (from [8]).
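A minimal sketch (our assumptions, not the chapter's design) of the preliminary filtering role just described for fog nodes: raw readings stay local, and only a small summary, plus any out-of-range anomalies, travels up to the cloud.

```python
# Hypothetical thresholds for a temperature feed (degrees Celsius).
def fog_filter(readings, low=-10.0, high=45.0):
    anomalies = [r for r in readings if not (low <= r <= high)]
    return {                      # only this small record travels to the cloud
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,
    }

print(fog_filter([21.5, 22.0, 22.4, 120.0, 21.9]))
# -> count 5, mean ~41.56, anomalies [120.0]
```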
The next subsections delve into the envisioned collaborative hybrid scenario, first introducing a zoomed‐out view of the F2C model, raising some interesting discussions on what a fog node should be and on strategies to deploy the envisioned F2C model, and later introducing a preliminary high‐level architecture best suited to the main scenario demands, including the key architectural blocks and the main concepts for a successful resource‐sharing strategy.
The collaborative scenario created by putting together fog and cloud resources may be graphically depicted in terms of several layers, as shown in Figure 2.1. From a service execution perspective, and aiming at using the resources best suited to individual service demands, the scenario in Figure 2.1 can also be mapped into a stack of resources, forming a hierarchical resources pyramid, as shown in Figure 2.2, filling the gap known as the IoT continuum.
A careful analysis of Figure 2.2 supports some conclusions. For example, it is clear that the higher a layer sits in the hierarchy, the greater the amount of resources, since datacenters at the cloud are endowed with the highest capacities. It is also apparent that the lower the layer, the larger the number of devices and the weaker the control over them, since the number of devices grows closer to the edge. The interesting discussion, however, lies in between. Indeed, Figure 2.2
