Fog for 5G and IoT

Description

The book examines how fog will change the information technology industry in the next decade. Fog distributes the services of computation, communication, control, and storage closer to the edge, the access network, and the users. As a computing and networking architecture, fog enables key applications in wireless 5G, the Internet of Things, and big data. The authors cover topics ranging from the fundamental tradeoffs of fog to its major applications. The chapters are designed to motivate a transition from current cloud architectures to the fog (Chapter 1) and to present the architectural components needed to support such a transition (Chapters 2-6). The rest of the book (Chapters 7-11) is dedicated to reviewing the various 5G and IoT applications that will benefit from fog networking. This volume is edited by pioneers in fog and includes contributions by active researchers in the field.

* Covers fog technologies and describes the interaction between fog and cloud
* Presents a view of fog and IoT (encompassing ubiquitous computing) that combines the perspectives of both industry and academia
* Discusses the architectural and design challenges in coordinating the interactions between M2M, D2D, and fog technologies
* "Fog for 5G and IoT" serves as an introduction to the evolving fog architecture, compiling work from different areas that collectively form this paradigm




Table of Contents

Cover

Title Page

CONTRIBUTORS

Introduction

I.1 SUMMARY OF CHAPTERS

I.2 ACKNOWLEDGMENTS

REFERENCES

PART I: Communication and Management of Fog

1 ParaDrop

1.1 INTRODUCTION

1.2 IMPLEMENTING SERVICES FOR THE PARADROP PLATFORM

1.3 DEVELOP SERVICES FOR PARADROP

REFERENCES

2 Mind Your Own Bandwidth

2.1 INTRODUCTION

2.2 RELATED WORK

2.3 CREDIT DISTRIBUTION AND OPTIMAL SPENDING

2.4 AN ONLINE BANDWIDTH ALLOCATION ALGORITHM

2.5 DESIGN AND IMPLEMENTATION

2.6 EXPERIMENTAL RESULTS

2.7 GATEWAY SHARING RESULTS

2.8 CONCLUDING REMARKS

ACKNOWLEDGMENTS

APPENDIX 2.A

REFERENCES

3 Socially‐Aware Cooperative D2D and D4D Communications toward Fog Networking

3.1 INTRODUCTION

3.2 RELATED WORK

3.3 SYSTEM MODEL

3.4 SOCIALLY‐AWARE COOPERATIVE D2D AND D4D COMMUNICATIONS TOWARD FOG NETWORKING

3.5 NETWORK ASSISTED RELAY SELECTION MECHANISM

3.6 SIMULATIONS

3.7 CONCLUSION

ACKNOWLEDGMENTS

REFERENCES

4 You Deserve Better Properties (From Your Smart Devices)

4.1 WHY WE NEED TO PROVIDE BETTER PROPERTIES

4.2 WHERE WE NEED TO PROVIDE BETTER PROPERTIES

4.3 WHAT PROPERTIES WE NEED TO PROVIDE AND HOW

4.4 CONCLUSIONS

ACKNOWLEDGMENT

REFERENCES

PART II: Storage and Computation in Fog

5 Distributed Caching for Enhancing Communications Efficiency

5.1 INTRODUCTION

5.2 FEMTOCACHING

5.3 USER‐CACHING

5.4 CONCLUSIONS AND OUTLOOK

REFERENCES

6 Wireless Video Fog

6.1 INTRODUCTION

6.2 RELATED WORK

6.3 SYSTEM OPERATION AND NETWORK MODEL

6.4 PROBLEM FORMULATION AND COMPLEXITY

6.5 VBCR: A DISTRIBUTED HEURISTIC FOR LIVE VIDEO WITH COOPERATIVE RECOVERY

6.6 ILLUSTRATIVE SIMULATION RESULTS

6.7 CONCLUDING REMARKS

REFERENCES

7 Elastic Mobile Device Clouds

7.1 INTRODUCTION

7.2 DESIGN SPACE WITH EXAMPLES

7.3 FEMTOCLOUD PERFORMANCE EVALUATION

7.4 SERENDIPITY PERFORMANCE EVALUATION

7.5 CHALLENGES

REFERENCES

PART III: Applications of Fog

8 The Role of Fog Computing in the Future of the Automobile

8.1 INTRODUCTION

8.2 CURRENT AUTOMOBILE ELECTRONIC ARCHITECTURES

8.3 FUTURE CHALLENGES OF AUTOMOTIVE E/E ARCHITECTURES AND SOLUTION STRATEGIES

8.4 FUTURE AUTOMOBILES AS FOG NODES ON WHEELS

8.5 DETERMINISTIC FOG NODES ON WHEELS THROUGH REAL‐TIME COMPUTING AND TIME‐TRIGGERED TECHNOLOGIES

8.6 CONCLUSION

REFERENCES

9 Geographic Addressing for Field Networks

9.1 INTRODUCTION

9.2 GEOGRAPHIC ADDRESSING

9.3 SAGP: WIRELESS GA IN THE FIELD

9.4 GEOROUTING: EXTENDING GA TO THE CLOUD

9.5 SGAF: A MULTI‐TIERED ARCHITECTURE FOR LARGE‐SCALE GA

9.6 THE AT&T LABS GEOCAST SYSTEM

9.7 TWO GA APPLICATIONS

9.8 CONCLUSIONS

REFERENCES

10 Distributed Online Learning and Stream Processing for a Smarter Planet

10.1 INTRODUCTION: SMARTER PLANET

10.2 ILLUSTRATIVE PROBLEM: TRANSPORTATION

10.3 STREAM PROCESSING CHARACTERISTICS

10.4 DISTRIBUTED STREAM PROCESSING SYSTEMS

10.5 DISTRIBUTED ONLINE LEARNING FRAMEWORKS

10.6 WHAT LIES AHEAD

ACKNOWLEDGMENT

REFERENCES

11 Securing the Internet of Things

11.1 INTRODUCTION

11.2 NEW IOT SECURITY CHALLENGES THAT NECESSITATE FUNDAMENTAL CHANGES TO THE EXISTING SECURITY PARADIGM

11.3 A NEW SECURITY PARADIGM FOR THE INTERNET OF THINGS

11.4 SUMMARY

ACKNOWLEDGMENT

REFERENCES

Index

WILEY SERIES ON INFORMATION AND COMMUNICATION TECHNOLOGY

End User License Agreement

List of Tables

Chapter 02

TABLE 2.1 Number of Devices at Each Gateway

Chapter 03

TABLE 3.1 The Preference Lists of Nodes Based on the Physical Graph and Social Graph in Figure 3.2

Chapter 05

TABLE 5.1 The Average User Throughput Comparison Between the Cluster‐Based and ITLinQ Delivery Schemes

Chapter 06

TABLE 6.1 Symbols Used in the Paper

TABLE 6.2 Baseline Parameters of the Simulation

Chapter 07

TABLE 7.1 A Summary of Different System Assumptions

TABLE 7.2 FemtoCloud Experimental Device’s Characteristics

TABLE 7.3 FemtoCloud Experiment Tasks Characteristics and Evaluation Parameters

TABLE 7.4 FemtoCloud Experiment Parameters

TABLE 7.5 Prototype Performance Measurements

Chapter 10

TABLE 10.1 Data Management Systems and Their Support for SPA Requirements

TABLE 10.2 Example of Streams Programming to Realize Congestion Prediction Application Flowgraph

Chapter 11

TABLE 11.1 Ways to Determine the Trustworthiness of Another Device

List of Illustrations

Chapter 0

Figure I.1 Fog architectures and applications supported by such architectures.

Chapter 01

Figure 1.1 The fully implemented ParaDrop platform on the Wi‐Fi home gateway, which shares its resources with two wireless devices including a security camera and environment sensor.

Figure 1.2 The dashed box shows the block diagram representation of a “chute” installed on a ParaDrop‐enabled access point. Each chute hosts a stand‐alone service and has its own network subnet.

Figure 1.3 An example Chute.struct file, which is used to specify the key configuration parameters of a chute that hosts a stand‐alone service. Parameters such as CPU, memory, disk requirements, and network configurations are specified as JSON key–value pairs. ParaDrop provides chute configuration templates to developers, which can be customized based on application requirements.

Figure 1.4 The primary Chute.struct component for the SecCam chute.

Figure 1.5 The Chute.files component lists the files required for the SecCam chute.

Figure 1.6 The Chute.resource component specifies the resource consumption limits for the SecCam chute.

Figure 1.7 The Chute.runtime component for the SecCam chute.

Figure 1.8 The Chute.traffic component allows users to access data within the SecCam chute.

Chapter 02

Figure 2.1 Hierarchical edge‐based bandwidth allocation.

Figure 2.2 System architecture. Dashed lines represent traffic flow, and solid lines represent rate and credit information.

Figure 2.3 Screenshots of the web interface. (a) Usage tracking and (b) traffic priorities and device/OS classification.

Figure 2.4 Receive buffer model.

Figure 2.5 Our rate limiting algorithm is (a) more accurate than tc and (b, c) more graceful than rate limiting. We average all results over 10 runs, 60 seconds each, and show 95% confidence intervals.

Figure 2.6 YouTube playback performance improves as α1/α2 increases and YouTube receives higher prioritization over wget.

Figure 2.7 With our credit sharing scheme, all gateways (a) achieve comparable cumulative rates by (b) actively saving and spending credits at different times.

Figure 2.8 With our credit sharing scheme, users achieve similar utility gains over equal sharing over 1 week.

Figure 2.9 Despite their uncertain future budgets, with our online algorithm gateways (a) achieve comparable cumulative rates by (b) saving and spending credits at different times over 1 week.

Chapter 03

Figure 3.1 An illustration of cooperative D2D and D4D communication for cooperative networking. In sub‐figure (a), device R serves as the relay for the D2D communication between devices S and D. In sub‐figure (b), device R serves as the relay for the cellular communication between device S and the base station. In both cases, the D2D communication between devices S and R is part of cooperative networking.

Figure 3.2 An illustration of the social trust model for cooperative D2D communications. In the physical domain, different devices have different feasible cooperation relationships subject to physical constraints. In the social domain, different devices have different assistance relationships based on social trust among the devices.

Figure 3.3 Illustrative smart grid IoT devices. (a) Our outlet device with a Texas Instruments microcontroller board and XBee radio module; (b) Raspberry Pi to emulate a smart meter or controller device.

Figure 3.4 The physical–social graph based on the physical graph and social graph in Figure 3.2. For example, there exists an edge between nodes 1 and 3 in the physical–social graph since they can serve as the feasible relay for each other and also have social trust toward each other.

Figure 3.5 An illustration of direct and indirect reciprocity.

Figure 3.6 The physical‐coalitional graph based on the physical graph and social graph in Figure 3.7. For example, there exists an edge between nodes 1 and 2 in the physical‐coalitional graph since they can serve as the feasible relay for each other and have no social trust toward each other.

Figure 3.7 An illustration of the resulting graphs at each iteration t of the core relay selection algorithm.

Figure 3.8 The reciprocal relay selection cycles identified by the core relay selection algorithm in Figure 3.7.

Figure 3.9 System throughput with the number of nodes and different social network density.

Figure 3.10 Average size of the reciprocal relay selection cycles in the social trust and social reciprocity‐based relay selection with different social network density.

Figure 3.11 System throughput of nodes and different distance threshold δ for relay detection.

Figure 3.12 The number of social links of the social graphs based on real trace Brightkite.

Figure 3.13 Average system throughput with different number of nodes.

Figure 3.14 Normalized energy efficiency with different number of nodes.

Figure 3.15 Average number of iterations of the NARS mechanism.

Figure 3.16 Average running time of the NARS mechanism.

Chapter 04

Figure 4.1 BlueSeal information flow permission screenshot.

Figure 4.2 AsyncTask code snippet.

Figure 4.3 AsyncTask flow.

Figure 4.4 Simplified Android architecture.

Figure 4.5 Android sensor architecture.

Figure 4.6 RTDroid architecture.

Figure 4.7 RTDroid sensor architecture.

Figure 4.8 Performance comparison between RTDroid and Android.

Figure 4.9 Storage API virtualization and its example extension.

Chapter 05

Figure 5.1 System model for femtocaching.

Figure 5.2 (a) Grid network with nodes (black circles) with minimum separation. (b) An example of single‐cell layout and the interference avoidance TDMA scheme. In this figure, each square represents a cluster. The grey squares represent the concurrent transmitting clusters. The highlighted circular area is the disk where the protocol model allows no other concurrent transmission. r is the worst‐case transmission range and Δ is the interference parameter. We assume a common r for all the transmitter–receiver pairs. In this particular example, the TDMA parameter is 9, which means that each cluster can be activated every nine transmission scheduling slot durations.

Figure 5.3 Comparison between the normalized theoretical result (solid lines) and normalized simulated result (dashed lines) in terms of the minimum throughput per user versus outage probability. The throughput is normalized by Cr so that it is independent of the link rate. We assume fixed system parameters and reuse factor. The parameter γr for the Zipf distribution varies from 0.1 to 0.6, shown from the rightmost to the leftmost curves. The theoretical curves show the plots of the dominating term in (5.14) divided by Cr.

Figure 5.4 A deterministic view of the optimality condition for treating interference as noise.

Figure 5.5 Comparison of the achievable spectral reuse of cached ITLinQ and interference avoidance.

Figure 5.6 Comparison of CDF of the achievable rates of users under the cluster‐based and ITLinQ schemes in the small library regime.

Figure 5.7 Comparison of CDF of the achievable rates of users under the cluster‐based and ITLinQ schemes in the large library regime.

Figure 5.8 Simulation results for the throughput–outage trade‐off for different schemes under the realistic indoor/outdoor propagation environment (for details, see Ref. [29]). For harmonic broadcasting with only the m′ most popular files, the solid, dash–dot, and dash lines correspond to different values of m′.

Figure 5.9 Potential spectral gain of blind index coding.

Figure 5.10 Illustration of the example of three users and three files, achieving 1/2 transmissions in terms of files. We divide each file into six packets (e.g., A is divided into A1, …, A6). User 1 requests A, user 2 requests B, and user 3 requests C. The cached packets are shown in the rectangles under each user. For the delivery phase, users 1, 2, and 3 each transmit the packets shown in the figure. The normalized number of transmissions is 1/2, which is also information theoretically optimal for this network [16].

Chapter 06

Figure 6.1 Illustration of wireless fog. The source node pulls the video stream from the streaming cloud and broadcasts the stream to other clients within the fog in a multihop manner.

Figure 6.2 A representation of a wireless streaming mesh, with nodes arranged according to their shortest distance to the source node.

Figure 6.3 Slot‐based operations. “pkt” stands for “packet”.

Figure 6.4 An example beacon packet format. The beacon contains one bitmap for video packet availability and a bitmap for each buffered NC packet. IDs of any downstream nodes are also appended in the UIX beacon exchange. For different sizes, bitmap lengths can be adjusted accordingly. R is a reserved 1‐bit field.

Figure 6.5 Flowchart of NC‐based cooperative recovery.

Figure 6.6 Flowchart of video packet forwarding.

Figure 6.7 Loss rate versus network size.

Figure 6.8 Network traffic versus network size.

Figure 6.9 Video PSNR versus network size.

Figure 6.10 Network traffic vs. link loss rate.

Figure 6.11 VBCR network traffic composition.

Figure 6.12 Number of parents for each node.

Figure 6.13 Histogram of the number of parents in VBCR.

Chapter 07

Figure 7.1 Mobile cluster stability spectrum.

Figure 7.2 Elastic mobile‐device clouds architecture.

Figure 7.3 Impact of device arrival rate and presence time. (a) Computational throughput, (b) network utilization, and (c) computational resource utilization.

Figure 7.4 Impact of cluster stability. (a) Computational throughput, (b) network utilization, and (c) computational resource utilization.

Figure 7.5 Impact of task characteristics. (a) Computational resource utilization and (b) network utilization.

Figure 7.6 Robustness to estimation errors. (a) Computational resource utilization and (b) network utilization.

Figure 7.7 A comparison of Serendipity’s performance benefits. The average job completion times with their 95% confidence intervals are plotted. We use two data traces, Haggle and RollerNet, to emulate the node contacts and three input sizes for each. (a) 10 tasks, (b) 100 tasks, and (c) 300 tasks.

Figure 7.8 The load distribution of Serendipity nodes when there are 100 tasks total, each of which takes 2 Mb input data. (a) RollerNet and (b) Haggle.

Figure 7.9 The impact of wireless bandwidth on the performance of Serendipity. The average job completion times are plotted when the bandwidth is 1, 5.5, 11, 24, and 54 Mb/s, respectively. (a) RollerNet and (b) Haggle.

Figure 7.10 The impact of node mobility on Serendipity. We generate the contact traces for 10 nodes in a fixed area. In (a) we set the node speed to be 5 m/s, while in (b) we use Levy Walk as the mobility model.

Figure 7.11 The impact of node numbers on the performance of Serendipity. We analyze the impact of both node number and node density by fixing the activity area and setting it proportional to the node numbers, respectively. (a) Fixed active area and (b) fixed node density.

Figure 7.12 Serendipity’s performance with multiple jobs executed simultaneously. The job arrival time follows a Poisson distribution with varying arrival rates. (a) RollerNet and (b) Haggle.

Figure 7.13 A job example where both PNP‐blocks B and C are disseminated to Serendipity nodes after A completes. Their task positions in the nodes’ task lists are shown below the DAG.

Figure 7.14 The importance of assigning priorities to PNP‐blocks.

Chapter 08

Figure 8.1 The future automobile as fog computing on wheels.

Figure 8.2 Traditional E/E automobile architecture.

Figure 8.3 Example of an automotive ECU, the TTA drive platform from TTTech (inner structure and with automotive‐grade housing).

Figure 8.4 The IoT virtuous information cycle.

Figure 8.5 Example of automotive network.

Figure 8.6 Example of communication schedule in time‐triggered communication.

Figure 8.7 Future automotive E/E architecture.

Figure 8.8 Classical system architecture versus virtualization.

Figure 8.9 Ethernet frame format.

Figure 8.10 (a) Example of network and (b) communication scenario of the IEEE 802.1Qbv “time‐aware shaper.”

Figure 8.11 Example of vehicle‐wide virtualization (VWV).

Chapter 09

Figure 9.1 Sender, geocast region, and forwarding zone.

Figure 9.2 Example of a geocast propagated via SAGP.

Figure 9.3 Pictorial proof idea showing why SAGP only uses O(lg n) transmissions per geocast in dense scenarios. The originator is somewhere to the left of the diagram; the intuition is that, on average, successive transmissions will occur at devices approximately halfway to the CGR.

Figure 9.4 Example SAGP geocast propagation, with arrows showing the useful relays.

Figure 9.5 Packet transmission in the georouter tier.

Figure 9.6 A notional example of a large‐scale GA system built by bridging together many individual GA tiers. Rectangles represent georouter tiers, while dashed ovals represent geocast tiers.

Figure 9.7 An example propagation across multiple tiers using bridging. Starting at A, the packet first traverses the geocast tier around A, then up through the georouter tier, and then back “down” into the geocast tier near the geocast region. Finally, it traverses that tier to reach the GR.

Figure 9.8 Schematic diagram of the AT&T Labs Geocast System.

Figure 9.9 Screen capture of the PSCommander smartphone application.

Figure 9.10 The FCOP problem illustrated. (a) The general monitoring problem and (b) the common operating picture special case.

Figure 9.11 Graph of bytes per second used by the FCOP algorithm in a dense scenario, divided by n lg n, versus number of devices n.

Figure 9.12 Typical iTron game, showing both real‐world and virtual‐world views.

Figure 9.13 Final screenshot of the championship iTron match played at the culmination of a multi‐week iTron teaching unit in a NJ high school Physical Education class. It was played at a larger scale, and players exploited different terrain types.

Chapter 10

Figure 10.1 Smarter transportation: individuals and city.

Figure 10.2 Distributed learning needed for real‐time signage update.

Figure 10.3 From logical operator flowgraphs to deployment.

Figure 10.4 Possible trade‐offs with parallelism and placement.

Figure 10.5 Flow of information toward learner 1 at time slot n for a binary tree network.

Figure 10.6 System model described in Section 10.5.2.

Figure 10.7 A comparison between the proposed algorithm and the combine‐then‐adapt (CTA) scheme in terms of information dissemination and weight update rule. Unlike diffusion, in the proposed approach the weight vectors do not need to be disseminated.

Figure 10.8 A generic speed sensor i must detect collisions in real time and inform the drivers (left). To achieve this goal, sensor i receives the observations from the other sensors, and the flow of information is represented by a directed graph (right).

Figure 10.9 Illustration of the considered notations.

Chapter 11

Figure 11.1 Fog computing and fog‐based security services to help protect resource‐constrained devices and systems.

Figure 11.2 Hierarchical crowd attestation.


WILEY SERIES ON INFORMATION AND COMMUNICATION TECHNOLOGY

Series Editors: T. Russell Hsing, Vincent K. N. Lau, and Mung Chiang

A complete list of the titles in this series appears at the end of this volume.

FOG FOR 5G AND IoT

 

Edited by

Mung Chiang

Arthur LeGrand Doty Professor of Electrical Engineering, Princeton University, Princeton, NJ, USA

Bharath Balasubramanian

Senior Inventive Scientist, AT&T Labs Research, Bedminster, NJ, USA

Flavio Bonomi

Founder and CEO, Nebbiolo Technologies, Milpitas, CA, USA

 

 

 

 

 

 

 

 

 

This edition first published 2017. © 2017 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Mung Chiang, Bharath Balasubramanian, and Flavio Bonomi to be identified as the authors of the editorial material in this work has been asserted in accordance with law.

Registered Office: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office: 111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

The advice and strategies contained herein may not be suitable for every situation. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.

Library of Congress Cataloging‐in‐Publication Data

Names: Chiang, Mung, editor. | Balasubramanian, Bharath, editor. | Bonomi, Flavio, editor.
Title: Fog for 5G and IoT / edited by Mung Chiang, Bharath Balasubramanian, Flavio Bonomi.
Description: Hoboken, NJ, USA : John Wiley & Sons Inc., 2017. | Includes bibliographical references and index.
Identifiers: LCCN 2016042091 | ISBN 9781119187134 (cloth) | ISBN 9781119187172 (epub) | ISBN 9781119187158 (epdf)
Subjects: LCSH: Electronic data processing–Distributed processing. | Distributed shared memory. | Storage area networks (Computer networks) | Mobile computing. | Internet of things. | Cloud computing.
Classification: LCC QA76.9.D5 F636 2017 | DDC 004.67/82–dc23
LC record available at https://lccn.loc.gov/2016042091

Cover image: Cultura/Seb Oliver/Getty Images. Cover design by Wiley.

CONTRIBUTORS

MOSTAFA AMMAR, School of Computer Science, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA

HELDER ANTUNES, Corporate Strategic Innovations Group, Cisco Systems, Inc., San Jose, CA, USA

A. SALMAN AVESTIMEHR, Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA

BHARATH BALASUBRAMANIAN, AT&T Labs Research, Bedminster, NJ, USA

SUMAN BANERJEE, Department of Computer Sciences, University of Wisconsin‐Madison, Madison, WI, USA

FLAVIO BONOMI, Nebbiolo Technologies, Inc., Milpitas, CA, USA

S.‐H. GARY CHAN, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

XU CHEN, School of ECEE, Arizona State University, Tempe, AZ, USA

MUNG CHIANG, EDGE Labs; Department of Electrical Engineering, Princeton University, Princeton, NJ, USA

SANGTAE HA, Department of Computer Science, University of Colorado at Boulder, Boulder, CO, USA

KARIM HABAK, School of Computer Science, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA

ROBERT J. HALL, AT&T Labs Research, Bedminster, NJ, USA

KHALED A. HARRAS, Computer Science Department, School of Computer Science, Carnegie Mellon University, Doha, Qatar

CARLEE JOE‐WONG, Electrical and Computer Engineering, Carnegie Mellon University, Silicon Valley, CA, USA

STEVEN Y. KO, University at Buffalo, The State University of New York, Buffalo, NY, USA

PENG LIU, Pennsylvania State University, State College, PA; Department of Computer Sciences, University of Wisconsin‐Madison, Madison, WI, USA

ZHENMING LIU, Department of Computer Science, College of William and Mary, Williamsburg, VA, USA

ZHI LIU, Global Information and Telecommunication Institute, Waseda University, Tokyo, Japan

SATYAJAYANT MISRA, Department of Computer Science, New Mexico State University, Las Cruces, NM, USA

ANDREAS F. MOLISCH, Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA

ASHISH PATRO, Department of Computer Sciences, University of Wisconsin‐Madison, Madison, WI, USA

STEFAN POLEDNA, TTTech Computertechnik AG, Wien, Austria

CONG SHI, School of Computer Science, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA; Square, Inc., San Francisco, CA, USA

WILFRIED STEINER, TTTech Computertechnik AG, Wien, Austria

DEEPAK S. TURAGA, IBM T. J. Watson Research Center, Yorktown, New York, NY, USA

MIHAELA VAN DER SCHAAR, Electrical Engineering Department, University of California at Los Angeles, Los Angeles, CA, USA

DALE WILLIS, Department of Computer Sciences, University of Wisconsin‐Madison, Madison, WI, USA

FELIX MING FAI WONG, Yelp Inc., San Francisco, CA, USA

ELLEN W. ZEGURA, School of Computer Science, College of Computing, Georgia Institute of Technology, Atlanta, GA, USA

BO ZHANG, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong

JUNSHAN ZHANG, School of ECEE, Arizona State University, Tempe, AZ, USA

TAO ZHANG, Corporate Strategic Innovation Group, Cisco Systems, Inc., San Jose, CA, USA

YI ZHENG, Corporate Strategic Innovation Group, Cisco Systems, Inc., San Jose, CA, USA

RAYMOND ZHENG, Corporate Strategic Innovation Group, Cisco Systems, Inc., San Jose, CA, USA

Introduction

BHARATH BALASUBRAMANIAN,1 MUNG CHIANG,2 and FLAVIO BONOMI3

1AT&T Labs Research, Bedminster, NJ, USA

2EDGE Labs, Princeton University, Princeton, NJ, USA

3Nebbiolo Technologies, Inc., Milpitas, CA, USA

The past 15 years have seen the rise of the cloud, along with a rapid increase in Internet backbone traffic and more sophisticated cellular core networks. There are three different types of clouds: (i) data centers, (ii) backbone IP networks, and (iii) cellular core networks, responsible for computation, storage, communication, and network management. Now the functions of these three types of clouds are descending to be among or near the end users, as the “fog.” Empowered by the latest chips, radios, and sensors, the edge devices today are capable of performing complex functions including computation, storage, sensing, and network management. In this book, we explore the evolving notion of the fog architecture that incorporates networking, computing, and storage.

Architecture is about the division of labor in modularization: who does what, at what timescale, and how to glue them back together. The division of labor between layers, between control plane and data plane, and between cloud and fog [1] in turn supports various application domains. We take the following as a working definition of the fog architecture: it is an architecture for the cloud‐to‐things (C2T) continuum that uses one or a collaborative multitude of end‐user clients or near‐user edge devices to carry out a substantial amount of storage, communication, and control, configuration, measurement, and management. Engineering artifacts that may use the fog architecture include 5G, home/personal networking, embedded AI, and the Internet of things (IoT) [2].

In Figure I.1, we highlight that fog can refer to an architecture for computing, storage, control, or communication networking and that, as a network architecture, it may support a variety of applications. We contrast the fog architecture with the current practice of the cloud along the following three dimensions:

Carry out a substantial amount of storage at or near the end user (rather than stored primarily in large‐scale data centers).

Carry out a substantial amount of communication at or near the end user (rather than all routed through the backbone network).

Carry out a substantial amount of computing and management, including network measurement, control, and configuration, at or near the end user (rather than controlled primarily by gateways such as those in the LTE core).

Figure I.1 Fog architectures and applications supported by such architectures.

Why would we be interested in the fog view now? There are four main reasons summarized as CEAL. Many examples in recent publications, across mobile and landline, and from physical layer beamforming to application layer edge analytics have started leveraging these advantages [3–8]:

Cognition: Awareness of Client‐Centric Objectives. Following the end‐to‐end principle, some of the applications can be best enabled by knowing the requirements on the clients. This is especially true when privacy and reliability cannot be entrusted to the cloud or when security is enhanced by shortening the extent over which communication is carried out.

Efficiency: Pooling of Local Resources. There are typically hundreds of gigabytes sitting idle on tablets, laptops, and set‐top boxes in a household every evening, across a table in a conference room, or among the passengers of a public transit system. Similarly, idle processing power, sensing ability, and wireless connectivity on the edge may be pooled within a fog network.

Agility: Rapid Innovation and Affordable Scaling. It is usually much faster and cheaper to experiment with client and edge devices. Rather than waiting for vendors of large boxes inside the network to adopt an innovation, in the fog world a small team may take advantage of smartphone APIs and SDKs, and the proliferation of mobile apps, to offer a networking service through its own API.

Latency: Real‐Time Processing and Cyber–Physical System Control. Edge data analytics, as well as the actions it enables through control loops, often has stringent time requirements and can only be carried out on the edge or the “things,” here and now. This is particularly essential for the Tactile Internet: the vision of millisecond reaction times on networks that enable virtual–reality‐type interfaces between humans and devices.

We further elaborate on the previous potential advantages of fog. Client and edge devices have increasing strength and capabilities. For instance, the original iPhone had a single‐core 412 MHz ARM processor with 128 MB RAM and 8 GB of storage. The iPhone 5S, on the other hand, carries a dual‐core 1.3 GHz Apple A7 processor with 1 GB RAM, 64 GB of storage, and enhanced GPU capabilities. Intel’s Atom mobile chips and Nvidia’s Tegra processors promise similar specifications. This increase in strength and capabilities enables complex functionality such as CPU/GPU‐intensive gaming, powerful location/context‐tracking sensors, and enhanced storage. Further, as suggested in [9], these interconnected edge devices will play a crucial role in orchestrating the IoT. Edge devices, including mobile phones and wearable devices, use a rich variety of sensors, including gyroscopes, accelerometers, and odometers, to monitor the environment around them. This enables the crucial notion of exploiting context: both personal context, in terms of location and physical/psychological characteristics, and communal context, in the sense of how devices interact with the other devices around them.

As the need for cloud‐based services increases, the amount of data traffic generated in the core networks is increasing at an alarming rate. Cisco predicts that cloud traffic will increase almost four to five times over the next 5 years [10] and that cloud IP traffic will account for nearly two‐thirds of all data center traffic by 2017. Can the fog alleviate some of this by satisfying application needs locally? For example, can part of cloud storage be moved closer to the user, with edge/client devices acting as micro‐data centers? Can videos be cached efficiently at the edge devices to reduce accesses to the cloud? Or, more broadly, can edge devices play an active role in orchestrating both data plane‐based cloud services and control plane‐based core network services?

Accesses to the cloud often span geographically distant entities, with round‐trip times (RTTs) of nearly 150–200 ms. Access latency is a crucial factor in the end‐user experience, with studies showing that a 20% decrease in RTT results in a 15% decrease in page load time [11]. A significant way to decrease the RTT for content access is to place as much of the content as physically close to the end user as possible. While decreasing latency is beneficial to all services, it may be a necessity for many services in the future. For example, services involving augmented reality applications may not tolerate latencies of more than 10–20 ms [12]. Hence, any computation/processing for these kinds of services needs to be performed locally. Fog services may play a significant part in addressing this challenge.
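
To make the latency arithmetic concrete, here is a minimal sketch (the numbers are the illustrative figures quoted above; the function and parameter names are hypothetical, not from any chapter) that checks whether a cloud‐hosted or a fog‐hosted backend fits within an application's latency budget:

    # Minimal latency-budget sketch. All numbers are illustrative values taken
    # from the discussion above (cloud RTT ~150-200 ms, AR tolerance ~10-20 ms);
    # the function and parameter names are hypothetical.

    def meets_budget(rtt_ms, processing_ms, budget_ms):
        """Return True if one round trip plus processing fits in the budget."""
        return rtt_ms + processing_ms <= budget_ms

    CLOUD_RTT_MS = 175.0   # geographically distant data center
    FOG_RTT_MS = 5.0       # gateway or nearby edge node
    PROCESSING_MS = 8.0    # per-request processing, same code either way

    for name, rtt in [("cloud", CLOUD_RTT_MS), ("fog", FOG_RTT_MS)]:
        for app, budget in [("web page interaction", 100.0), ("augmented reality", 20.0)]:
            ok = meets_budget(rtt, PROCESSING_MS, budget)
            print(f"{app} via {name}: {'meets' if ok else 'misses'} {budget} ms budget")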

The fog R&D will leverage past experience in sensor networks, peer‐to‐peer systems, and mobile ad hoc networks while incorporating the latest advances in devices, systems, and data science to reshape the “balance of power” in the ecosystem between powerful data centers and the edge devices. Toward that end, this book serves as the first introduction to the evolving fog architecture, compiling work traversing many different areas that fit into this paradigm.

In this book, we will encounter many use cases and applications that are in many ways not necessarily new and revolutionary and that have been conceived in the context of distributed computing, networking, and storage systems. Computing resources have always been distributed in homes, in factories, along roads and highways, in cities, and in their shopping centers. The field of pervasive or ubiquitous computing has been active for a long time. Networking has always deployed switches, routers, and middleboxes at the edge. Caching media and data at the edge has been fundamental to the evolution of Web services and video delivery.

As is typical of any emergent area of R&D, many of the themes in the fog architecture are not completely new and instead are evolved versions of accumulated transformations in the past decade or two:

Compared with peer‐to‐peer (P2P) networks in the mid‐2000s, fog is not just about content sharing (or data plane as a whole) but also network measurement, control and configuration, and service definition.

Compared with mobile ad hoc network (MANET) research a decade ago, we have much more powerful and diverse off‐the‐shelf edge devices and applications now, together with the structure/hierarchy that comes with cellular/broadband networks.

Compared with generic edge networking in the past, fog networking provides a new layer of meaning to the end‐to‐end principle: not only do edge devices optimize among themselves, but also they collectively measure and control the rest of the network.

Along with two other network architecture themes, ICN and SDN, each with a longer history, the fog is revisiting the foundation of how to think about and engineer networks, that is, how to optimize network functions: who does what and how to glue them back together:

Information‐Centric Networks. Redefine functions (to operate on digital objects rather than just bytes).

Software‐Defined Networks. Virtualize functions (through a centralized control plane).

Fog Networks. Relocate functions (closer to the end users along the C2T continuum).

While fog networks need not involve any virtualization or be information centric, one could also imagine an information‐centric, software‐defined fog network (since these three branches are not orthogonal).

With its adoption of the most modern concepts developed in the IT domain and, at the same time, its need to satisfy the requirements of the operational technology (OT) domains, such as time‐sensitive and deterministic behavior in networking, computing, and storage, sensor and actuator support and aggregation, and sometimes even safety support, the fog is a perfect conduit for the highly promising convergence of IT and OT in many key IoT verticals. In this perspective, the fog not only builds on and incorporates many of the traditional relevant technologies from sensor and ad hoc networks, ubiquitous computing, distributed storage, and so on but also manifests, in a timely manner, new and specific characteristics coming from the IT and OT convergence behind IoT.

As the cloud catalyzed, consolidated, and evolved a range of existing technologies and approaches, the fog is catalyzing, consolidating, and evolving a range of edge technologies and approaches in a creative and rich mix, at this special transition time into IoT. Complementing the swarm of endpoints and the cloud, the fog will enable the seamless deployment of distributed applications, responding to the needs of critical use cases in a broad array of verticals. For example, some of the early work on fog architecture and functionality was driven by specific applications in connected vehicle and transportation, smart grid, the support of distributed analytics, and the improvement of Web services and video delivery [9, 13, 14].

I.1 SUMMARY OF CHAPTERS

Following the above paragraphs, the chapters in this edited volume are divided into three broad sections. In the first four chapters, we describe work that presents techniques to enable communication and management of the devices in a fog network involving their interaction with the cloud, management of their bandwidth requirements, and prescriptions on how the edge devices can often work together to fulfill their requirements. The next natural step is to understand how to perform the two fundamental components of many applications on the edge: storage and computation. We focus on this aspect in the following three chapters. And finally, we focus on the applications that will be enabled on top of the fog infrastructure and the challenges in realizing them.

Communication and Management In the first chapter, the authors present a unique edge computing framework, called ParaDrop, that allows developers to leverage one of the most stable and persistent computing resources in the end‐customer premises: the gateway (e.g., the Wi‐Fi access point or home set‐top box). Based on a platform that allows the deployment of containers on these edge devices, the authors show how interesting applications such as security cameras and environment sensors can be deployed on these devices. While the first chapter focuses on an operating system‐agnostic, container‐based approach, the fourth chapter posits that the underlying operating system on these devices should also evolve to support fog computing and networking. In a broad analysis, the authors focus on four important aspects: why do these systems need to provide better properties to support the fog, where do they need to improve, what are the exact properties that need to be provided, and finally, how can they provide these better properties?

To enable rich communication in the fog, bandwidth needs have to be addressed. Following the philosophy of fog networking, why not leverage the power of edge devices to do so? In the second chapter, the authors present a home‐user‐based bandwidth management solution to cope with the growing demand for bandwidth, with a novel technique that puts more intelligence in both the home gateways and the end‐user devices. They show that, using a two‐level system, one level based on the gateways “buying bandwidth” from the ISPs within a fixed, incentive‐driven budget and the other based on end‐user prioritization of applications, much better utilization of network bandwidth can be achieved.
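
As a rough illustration of such a two‐level scheme (a hedged sketch of the general idea only; the credit model, prices, and names below are invented, not the chapter's actual algorithm), a gateway might spend part of a credit budget to buy capacity for the current period and then split that capacity across applications in proportion to user‐assigned priorities:

    # Hypothetical two-level bandwidth sketch: a gateway spends credits to buy
    # capacity from the ISP, then divides it among applications by user priority.
    # This illustrates the general idea only; it is not the chapter's algorithm.

    def buy_capacity(credits_available, price_per_mbps, desired_mbps):
        """Spend at most the available credits to buy up to desired_mbps."""
        affordable = credits_available / price_per_mbps
        bought = min(desired_mbps, affordable)
        return bought, credits_available - bought * price_per_mbps

    def split_by_priority(capacity_mbps, priorities):
        """Divide capacity among applications in proportion to their priorities."""
        total = sum(priorities.values())
        return {app: capacity_mbps * w / total for app, w in priorities.items()}

    capacity, credits_left = buy_capacity(credits_available=40.0,
                                          price_per_mbps=2.0, desired_mbps=15.0)
    shares = split_by_priority(capacity, {"video": 3, "web": 2, "bulk download": 1})
    print(capacity, credits_left, shares)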

The following chapter addresses this question from the point of view of peer‐to‐peer communication among devices. Its authors present a game theory‐based mechanism that end‐user devices such as tablets and cell phones can use to cooperate with one another and act as relays for each other’s network traffic, thereby boosting network capability. An important aspect of fog management and communication is that of addressing the potentially thousands, and maybe even millions, of fog–IoT devices.

In the final chapter, the author contends that traditional IP‐based addressing will not always work for field IoT devices operating in a fog environment and interacting with cloud servers or among themselves. This is primarily due to factors such as device mobility, spatial density of devices, and gaps in coverage. As an alternative, the chapter proposes a technique of geographic addressing, where communication protocols allow devices to specify destination devices based on their geographic location rather than their IP addresses.
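
To sketch what addressing by location rather than by IP might look like (a simplified illustration under invented assumptions; the circular‐region model and names are hypothetical, not the chapter's protocol), a device can decide whether to deliver a geocast packet by testing whether its own position lies inside the packet's destination region:

    # Hypothetical geographic-addressing sketch: a packet carries a destination
    # region instead of an IP address, and each device tests its own position
    # against that region. Simplified planar distance; illustration only.

    import math

    def in_region(device_xy, region_center_xy, region_radius_m):
        """True if the device lies inside the circular destination region."""
        dx = device_xy[0] - region_center_xy[0]
        dy = device_xy[1] - region_center_xy[1]
        return math.hypot(dx, dy) <= region_radius_m

    packet = {"payload": b"hazard ahead", "center": (120.0, 80.0), "radius_m": 50.0}
    my_position = (100.0, 95.0)

    if in_region(my_position, packet["center"], packet["radius_m"]):
        print("deliver to local application")   # we are an intended recipient
    else:
        print("consider relaying toward the region")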

Computation and Storage Following the first section of chapters on communication and management of fog devices, we move on to two important platform functions: storage and caching for video delivery in fog networks and techniques for fog computation. The first chapter in this section presents caching schemes for video on demand (VoD), especially to optimize the last wireless hop in video delivery. While most CDN‐based systems focus on caching at the edge of the network, the authors here focus on caching in edge devices such as Femto helper nodes (similar to Femto base stations) and the end‐user devices themselves.

The second chapter, on the other hand, shifts the focus from VoD to live streaming, a use case with very different requirements but similar potential uses of the fog paradigm. The authors discuss a technique through which the end‐user devices collaborate to deliver live streams to each other, operating as a wireless fog. They focus on a crucial problem in such systems—that of errors due to lossy wireless links—and present a store–recover–forward strategy for wireless multihop fog networks that combines traditional store and forward techniques with network coding.
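
The recovery idea can be illustrated with the simplest form of network coding (a self‐contained sketch, not the chapter's actual VBCR scheme): a neighbor that holds packets A and B broadcasts their XOR once, and any node that already has one of the two packets can recover the other.

    # Minimal XOR network-coding sketch: one coded transmission repairs different
    # losses at different receivers. Illustration only, not the chapter's scheme.

    def xor_bytes(a, b):
        """XOR two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    packet_a = b"frame-0001-data!"
    packet_b = b"frame-0002-data!"
    coded = xor_bytes(packet_a, packet_b)      # neighbor broadcasts A xor B once

    # Receiver 1 lost B but has A; receiver 2 lost A but has B.
    recovered_b = xor_bytes(coded, packet_a)
    recovered_a = xor_bytes(coded, packet_b)
    assert recovered_b == packet_b and recovered_a == packet_a
    print("one coded broadcast repaired both receivers")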

In the final chapter of this section, we move from storage to general‐purpose computation in fog. Similar to other chapters in this book, the authors posit that mobile devices have now become far more powerful and can hence perform several computations locally, with carefully planned fog architectures. They focus on two such designs: FemtoCloud, in which they discuss a general‐purpose computational platform for mobile devices, and Serendipity, in which they consider a more severe version of the same problem, where devices are highly mobile and tasks often need to be off‐loaded to one another.
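
As a rough sketch of the scheduling problem such platforms face (the model and names below are invented simplifications, not the FemtoCloud or Serendipity schedulers), a task might be assigned to whichever nearby device can finish it before that device is expected to leave the cluster:

    # Hypothetical greedy assignment of compute tasks to nearby mobile devices.
    # A task goes to the fastest device expected to stay long enough to finish it.
    # Simplified model for illustration; not the FemtoCloud/Serendipity scheduler.

    devices = [  # name, compute rate (work units/s), expected remaining presence (s)
        {"name": "phone-1", "rate": 2.0, "presence_s": 30.0, "busy_until": 0.0},
        {"name": "tablet-1", "rate": 5.0, "presence_s": 12.0, "busy_until": 0.0},
    ]
    tasks = [{"id": t, "work": w} for t, w in [("t1", 20.0), ("t2", 40.0), ("t3", 15.0)]]

    for task in tasks:
        best = None
        for dev in devices:
            finish = dev["busy_until"] + task["work"] / dev["rate"]
            if finish <= dev["presence_s"] and (best is None or finish < best[0]):
                best = (finish, dev)
        if best is None:
            print(task["id"], "-> offload to the cloud (no device can finish in time)")
        else:
            best[1]["busy_until"] = best[0]
            print(task["id"], "->", best[1]["name"], f"(done at {best[0]:.1f}s)")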

Applications Having set the foundation with the previous section on the platform requirements and innovations, we finally move on to applications built on the fog architecture. In the first chapter in this section, the authors provide a close look at the challenges facing the connected car, an IoT use case that is increasingly prominent these days. In particular, they focus on the electrical architecture that will enable this application and describe how fog computing, with its virtualization techniques and its unification of concerns such as security and management on one platform, will help alleviate these challenges.

In the following chapter, the authors provide a detailed analysis of distributed stream processing systems and online learning frameworks with a view to building what they term a smarter planet. In this vision, they envisage a world in which users are constantly gathering data from their surroundings, processing this data, performing meaningful analysis, and making decisions based on this analysis. The main challenge, however, is that given the potentially huge number of low‐power sensors and the mobility of the users, all this data analysis needs to be heavily distributed throughout its life cycle. The combination of potent distributed learning frameworks and fog computing, which will provide the platform capabilities for such frameworks, can bring forth the vision of the smarter planet.
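
As a toy illustration of what “online” learning at the edge can mean (a minimal sketch with invented data and step size, unrelated to the specific frameworks surveyed in the chapter), a device can update a small local model one observation at a time instead of shipping batches of raw data to the cloud:

    # Minimal online-learning sketch: a local linear model updated one sample at
    # a time via stochastic gradient descent. Data and step size are invented;
    # this is an illustration, not the chapter's distributed framework.

    weights = [0.0, 0.0]          # model: speed = w0 + w1 * road_occupancy
    STEP = 0.1

    def predict(x):
        return weights[0] + weights[1] * x

    def update(x, y):
        """One SGD step on the squared prediction error for sample (x, y)."""
        error = predict(x) - y
        weights[0] -= STEP * error
        weights[1] -= STEP * error * x

    stream = [(0.1, 55.0), (0.4, 30.0), (0.25, 42.0), (0.6, 18.0)]  # (occupancy, speed)
    for x, y in stream:
        update(x, y)              # model improves as each observation arrives
    print(weights)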

Finally, we end the book with a chapter on how fog computing can help address the crucial need for security in IoT devices. The authors start with the questions: what is so different about IoT security as opposed to standard enterprise security, and what needs to change? They then go on to answer these questions, identifying IoT concerns ranging from the incredibly large number of such devices to the need to keep them regularly updated with security information. Crucially, they focus on how the fog paradigm can help address many of these concerns by providing frameworks and platforms that alleviate the load on the IoT devices and perform functions such as endpoint authentication and security updates.
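
As a toy illustration of off‐loading one such function to a fog node (a hedged sketch with an invented pre‐shared‐key scheme; it is not the chapter's proposal), a gateway might verify message authenticity on behalf of a constrained sensor before forwarding its data upstream:

    # Hypothetical fog-assisted authentication: the gateway, not the constrained
    # sensor, checks a message tag before forwarding data upstream. Toy scheme
    # with a pre-shared key; illustration only, not the chapter's design.

    import hmac, hashlib

    PRESHARED_KEY = b"example-key-known-to-gateway-and-device"

    def tag(message: bytes) -> bytes:
        return hmac.new(PRESHARED_KEY, message, hashlib.sha256).digest()

    def gateway_accepts(message: bytes, received_tag: bytes) -> bool:
        """Fog node verifies the tag so upstream services see only vetted data."""
        return hmac.compare_digest(tag(message), received_tag)

    msg = b"temperature=21.5"
    print(gateway_accepts(msg, tag(msg)))                 # True: authentic
    print(gateway_accepts(b"temperature=99", tag(msg)))   # False: tampered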

The electronic supplemental content to support use of this book is available online at https://booksupport.wiley.com

I.2 ACKNOWLEDGMENTS

This book would not have been possible without help from numerous people, and we wish to sincerely thank all of them.

In particular, Dr. Jiasi Chen, Dr. Michael Wang, Dr. Christopher Brinton, Dr. Srinivas Narayana, Dr. Zhe Huang, and Dr. Zhenming Liu provided valuable feedback on the individual chapters of the book. The publisher, John Wiley and Sons, made a thorough effort to get the book curated and published. We are grateful for the support of the National Science Foundation under its fog research grants. Last but not least, the book will ultimately stand on its contents, and we are grateful to all the chapter authors for their technical contributions and never‐ending enthusiasm in writing this book.

REFERENCES

1. Mung Chiang, Steven H. Low, A. Robert Calderbank, and John C. Doyle. Layering as optimization decomposition: A mathematical theory of network architectures. Proceedings of the IEEE, 95(1):255–312, January 2007.

2. Mung Chiang and Tao Zhang. Fog and IoT: An overview of research opportunities. IEEE Internet of Things Journal, 3(6), December 2016.

3. Abhijnan Chakraborty, Vishnu Navda, Venkata N. Padmanabhan, and Ramachandran Ramjee. Coordinating cellular background transfers using LoadSense. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking (MobiCom ’13), pages 63–74, New York, NY, USA, 2013. ACM.

4. Ehsan Aryafar, Alireza Keshavarz‐Haddad, Michael Wang, and Mung Chiang. RAT selection games in HetNets. In Proceedings of IEEE INFOCOM 2013, pages 998–1006, Turin, Italy, April 14–19, 2013.

5. Luca Canzian and Mihaela van der Schaar. Real‐time stream mining: Online knowledge extraction using classifier networks. IEEE Network, 29(5):10–16, 2015.

6. Jae Yoon Chung, Carlee Joe‐Wong, Sangtae Ha, James Won‐Ki Hong, and Mung Chiang. CYRUS: Towards client‐defined cloud storage. In Proceedings of the 10th European Conference on Computer Systems (EuroSys ’15), pages 17:1–17:16, New York, NY, USA, 2015. ACM.

7. Felix Ming Fai Wong, Carlee Joe‐Wong, Sangtae Ha, Zhenming Liu, and Mung Chiang. Mind your own bandwidth: An edge solution to peak‐hour broadband congestion. CoRR, abs/1312.7844, 2013.

8. Yongjiu Du, Ehsan Aryafar, Joseph Camp, and Mung Chiang. iBeam: Intelligent client‐side multi‐user beamforming in wireless networks. In Proceedings of IEEE INFOCOM 2014, pages 817–825, Toronto, Canada, April 27–May 2, 2014.

9. Flavio Bonomi, Rodolfo Milito, Jiang Zhu, and Sateesh Addepalli. Fog computing and its role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (MCC ’12), pages 13–16, New York, NY, USA, 2012. ACM.

10. Cisco Global Cloud Index: Forecast and Methodology. http://www.intercomms.net/issue‐21/pdfs/articles/cisco.pdf (accessed September 12, 2016).

11. Latency: The New Web Performance Bottleneck. https://www.igvita.com/2012/07/19/latency‐the‐new‐web‐performance‐bottleneck/ (accessed September 12, 2016).

12. W. Pasman, Arjen van der Schaaf, R. L. Lagendijk, and Frederik W. Jansen. Low latency rendering and positioning for mobile augmented reality. In Proceedings of Vision, Modeling, and Visualization ’99, pages 309–315, 1999.

13. Flavio Bonomi. Cloud and fog computing: Trade‐offs and applications. In EON‐2011 Workshop, International Symposium on Computer Architecture (ISCA 2011), San Jose, CA, USA, June 4–8, 2011.

14. Xiaoqing Zhu, Douglas S. Chan, Hao Hu, Mythili S. Prabhu, Elango Ganesan, and Flavio Bonomi. Improving video performance with edge servers in the fog computing architecture. Intel Technology Journal, 19(1):202–224, 2015.

PART I: Communication and Management of Fog

1 ParaDrop: An Edge Computing Platform in Home Gateways

SUMAN BANERJEE,1 PENG LIU,1,2 ASHISH PATRO,1 and DALE WILLIS1

1Department of Computer Sciences, University of Wisconsin‐Madison, Madison, WI, USA

2Pennsylvania State University, State College, PA, USA

1.1 INTRODUCTION

The last decade has seen a rapid diversification of computing platforms, devices, and services. For example, desktops used to be the primary computing platform until the turn of the century. Since then, laptops and, more recently, handheld devices such as smartphones and tablets have been widely adopted. Wearable devices and the Internet of things (IoT) are the latest trends in this space. This has also led to widespread adoption of the “cloud” as a ubiquitous platform for supporting applications and services across these different devices.

Simultaneously, cloud computing platforms, such as Amazon EC2 and Google App Engine, have become a popular approach to provide ubiquitous access to services across different user devices. Third‐party developers have come to rely on cloud computing platforms to provide high‐quality services to their end users, since these platforms are reliable, always on, and robust. Netflix and Dropbox are examples of popular cloud‐based services. Cloud services require developers to host services, applications, and data on off‐site data centers. But, for application‐specific reasons, a growing number of high‐quality services require computational tasks to be colocated with the end user. For example, latency‐sensitive applications require the backend service to be located close to a user’s current location. Over the years, a number of research threads have proposed that a better end‐user experience is possible if the computation is performed close to the end user. This is typically referred to as “edge computing” and comes in various flavors including cyber foraging [1], cloudlets [2], and, more recently, fog computing [3].

This chapter presents a unique edge computing framework, called ParaDrop, which allows developers to leverage one of the last bastions of persistent computing resources in the end‐customer premises: the gateway (e.g., the Wi‐Fi access point (AP) or home set‐top box). Using this platform, which has been fully implemented on commodity gateways, developers can design virtually isolated compute containers to provide a persistent computational presence in the proximity of the end user. The compute containers retain user state and also move with the users as the latter change their points of attachment. We demonstrate the capabilities of this platform through useful third‐party applications that utilize the ParaDrop framework. The ParaDrop framework also allows for multitenancy through virtualization, dynamic installation through the developer API, and tight resource control through a managed policy design.

1.1.1 Enabling Multitenant Wireless Gateways and Applications through ParaDrop

A decade or two ago, the desktop computer was the only reliable computing platform within the home where third‐party applications could reliably and persistently run. However, diverse mobile devices, such as smartphones and tablets, have since displaced the desktop computer, and today persistent third‐party applications are often run on remote cloud‐based servers. While cloud‐based third‐party services have many advantages, the rise of edge computing concepts stems from the observation that many services can benefit from a persistent computing platform right in the end‐user premises.

With end‐user devices going mobile, there is one remaining device that provides all the capabilities developers require for their services, as well as the proximity expected from an edge computational framework. The gateway—which could be a home Wi‐Fi AP or a cable set‐top box provided by a network operator—is a platform that is continuously on and, due to its pervasiveness, is a primary entry point into the end‐user premises for such third‐party services.

We want to push computation onto the home gateways (e.g., Wi‐Fi APs and cable set‐top boxes) for the following reasons:

The home gateways can handle it—modern home gateways are much more powerful than they need to be for their networking workload. What is more, unless you are running a Web server out of the house, your gateway sits dormant the majority of the time (when no one is home using it).

Utilizing computational resources in the home gateway gives us a footprint within the home for devices that are starved for computational resources, namely, IoT devices. Using ParaDrop, developers can piggyback their IoT devices onto the AP without the need for cloud services OR a dedicated desktop!

Every household connected to the Internet by definition must contain an Internet gateway somewhere in the house. With these devices sitting around, we can use them to their full potential.

Pervasive Hardware: Our world is quickly moving toward households only having mobile devices (tablets and laptops) in the home that are not always on or always connected. Developers can no longer rely on pushing software into the home without also developing their own hardware too.

A Developer‐Centric Framework. In this chapter, we examine the requirements of services in order to build an edge computing platform that enables developers to provide services to the end user in place of a cloud computing platform. A focus on edge computation requires developers to think differently about their application development process; however, we believe there are many benefits to a distributed platform such as ParaDrop. The developer has remained our focus in the design and implementation of our platform. Thus, we have implemented ParaDrop to include a fully featured API for development, with a focus on a centrally managed framework. Through virtualization, ParaDrop gives each developer access to resources in such a way as to completely isolate all services on the gateway. A tightly controlled resource policy has been developed, which allows fair performance across all services.

1.1.2 ParaDrop Capabilities

ParaDrop takes advantage of the fact that the resources of the gateway are underutilized most of the time. Thus, each service, referred to as a chute (as in parachute), borrows CPU time, unused memory, and extra disk space from the gateway. This allows vendors an unexplored opportunity to provide added value to their services through the close‐proximity footprint of the gateway.
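
To give a flavor of how a chute's resource borrowing might be declared (a hypothetical configuration in the spirit of the Chute.struct JSON described later in this chapter; the field names and limits are illustrative, not ParaDrop's actual schema):

    # Hypothetical chute descriptor in the spirit of ParaDrop's Chute.struct:
    # JSON key-value pairs declaring what the service borrows from the gateway.
    # Field names and values are illustrative, not ParaDrop's actual schema.

    import json

    seccam_chute = {
        "name": "SecCam",
        "resources": {"cpu_share": 0.25, "memory_mb": 64, "disk_mb": 512},
        "network": {"subnet": "192.168.20.0/24", "wifi_ssid": "seccam-net"},
        "runtime": {"command": "/srv/seccam/start.sh", "restart": "on-failure"},
    }

    print(json.dumps(seccam_chute, indent=2))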

Figure 1.1 shows the ParaDrop system running on real hardware, the “Wi‐Fi home gateway,” along with two services that motivate our platform: “security camera” and “environment sensors.” ParaDrop has been implemented on a PC Engines ALIX 2D2 single‐board computer running OpenWrt “Barrier Breaker” on an AMD Geode 500 MHz processor with 256 MB of RAM. This low‐end hardware platform was chosen to showcase ParaDrop’s capabilities with existing gateway hardware.

Figure 1.1 The fully implemented ParaDrop platform on the Wi‐Fi home gateway, which shares its resources with two wireless devices including a security camera and environment sensor.

We have emulated two third‐party developers who have migrated their services to the ParaDrop platform to showcase the potential of ParaDrop. Each of these services contains a fully implemented set of applications to capture, process, store, and visualize the data from their wireless sensors within a virtually isolated environment. The first service is a wireless environmental sensor designed as part of the Emonix research platform [4], which we refer to as “EnvSense.” The second service is a wireless security camera based on a commercially available D‐Link DCS 931L webcam, which we call “SecCam.” Leveraging the ParaDrop platform, the two developer services allow us to motivate the following characteristics of ParaDrop:

Privacy. Many sensors and even webcams today rely on the cloud as the only storage mechanism for generated data. Leveraging the ParaDrop platform, the end user no longer must rely on cloud storage for the data generated by their private devices and instead can borrow disk space available in the gateway for such data.

Low Latency. Many simple processing tasks required by sensors are performed in the cloud today. By moving these simple processing tasks onto gateway hardware, one hop away from the sensor itself, a reliable low‐latency service can be implemented by the developer.

Proprietary Friendly. From a developer’s perspective, the cloud is the best option to deploy their proprietary software because it is under their complete control. Using ParaDrop, a developer can package up the same software binaries and deploy them within the gateway to execute in a virtualized environment, which is still under their complete control.

Local Networking Context. In the typical service implemented by a developer, the data is consumed only by the end user yet stored in the cloud. This requires data generated by a security camera in the home to travel out to a server somewhere in the Internet and, upon the end user’s request, travel back from this server into the end‐user device for viewing. Utilizing the ParaDrop platform, a developer can ensure that only data requested by the end user is transmitted through Internet paths to the end‐user device.

Internet Disconnectivity