The goal of this book is to describe new concepts for the next generation of the Internet. This architecture is based on virtual networking using Cloud and datacenter facilities. The main problems concern 1) the placement of virtual resources for opening a new network on the fly, and 2) the urbanization of virtual resources implemented on physical network equipment. This architecture deals with mechanisms capable of automatically controlling the placement of all virtual resources within the physical network.
In this book, we describe how to create and delete virtual networks on the fly. Indeed, the system is able to create any new network with any kind of resource (e.g. virtual switches, virtual routers, virtual LSRs, virtual optical paths, virtual firewalls, virtual SIP-based servers, virtual devices, virtual servers, virtual access points, and so on). We will show how this architecture is compatible with new advances in SDN (Software-Defined Networking), new high-speed transport protocols such as TRILL (Transparent Interconnection of Lots of Links) and LISP (Locator/Identifier Separation Protocol), NGN, IMS, new-generation Wi-Fi, and 4G/5G networks. Finally, we introduce the Cloud of security and the virtualization of secure elements (smartcards), which should definitively transform how the Internet is secured.
Page count: 301
Year of publication: 2015
Cover
Title
Copyright
Introduction
I.1. The first two revolutions
I.2. The third revolution
I.3. “Cloudification” of networks
I.4. Conclusion
1: Virtualization
1.1. Software networks
1.2. Hypervisors
1.3. Virtual devices
1.4. Conclusion
2: SDN (Software-Defined Networking)
2.1. The objective
2.2. The ONF architecture
2.3. NFV (Network Functions Virtualization)
2.4. OPNFV
2.5. Southbound interface
2.6. The controller
2.7. Northbound interface
2.8. Application layer
2.9. Urbanization
2.10. The NSX architecture
2.11. CISCO ACI (Application Centric Infrastructure)
2.12. OpenContrail and Juniper
2.13. Brocade
2.14. Alcatel-Lucent’s SDN architecture
2.15. Conclusion
3: Smart Edges
3.1. Placement of the controller
3.2. Virtual access points
3.3. Software LANs
3.4. Automation of the implementation of software networks
3.5. Intelligence in networks
3.6. Management of a complex environment
3.7. Multi-agent systems
3.8. Reactive agent systems
3.9. Active networks
3.10. Programmable networks
3.11. Autonomous networks
3.12. Autonomic networks
3.13. Situated view
3.14. Conclusion
4: New-generation Protocols
4.1. OpenFlow
4.2. VXLAN
4.3. NVGRE (Network Virtualization using Generic Routing Encapsulation)
4.4. MEF Ethernet
4.5. Carrier-Grade Ethernet
4.6. TRILL (Transparent Interconnection of Lots of Links)
4.7. LISP (Locator/Identifier Separation Protocol)
4.8. Conclusion
5: Mobile Cloud Networking and Mobility Control
5.1. Mobile Cloud Networking
5.2. Mobile Clouds
5.3. Mobility control
5.4. Mobility protocols
5.5. Mobility control
5.5.1. IP Mobile
5.5.2. Solutions for micromobility
5.6. Multihoming
5.7. Network-level multihoming
5.7.1. HIP (Host Identity Protocol)
5.7.2. SHIM6 (Level 3 Multihoming Shim Protocol for IPv6)
5.7.3. mCoA (Multiple Care-of-Addresses) in Mobile IPv6
5.8. Transport-level multihoming
5.8.1. SCTP (Stream Control Transmission Protocol)
5.8.2. CMT (Concurrent Multipath Transfer)
5.8.3. MPTCP (Multipath TCP)
5.9. Conclusion
6: Wi-Fi and 5G
6.1. 3GPP and IEEE
6.2. New-generation Wi-Fi
6.3. IEEE 802.11ac
6.4. IEEE 802.11ad
6.5. IEEE 802.11af
6.6. IEEE 802.11ah
6.7. Small cells
6.8. Femtocells
6.9. Hotspots
6.10. Microcells
6.11. Wi-Fi Passpoint
6.12. Backhaul networks
6.13. Software radio and radio virtual machine
6.14. 5G
6.15. C-RAN
6.16. The Internet of Things
6.17. Sensor networks
6.18. RFID
6.19. EPCglobal
6.20. Security of RFID
6.21. Mifare
6.22. NFC (Near-Field Communication)
6.23. Mobile keys
6.24. NFC contactless payment
6.25. HIP (Host Identity Protocol)
6.26. The Internet of Things in the medical domain
6.27. The Internet of Things in the home
6.28. Conclusion
7: Security
7.1. Secure element
7.2. Virtual secure elements
7.3. The TEE (Trusted Execution Environment)
7.4. TSM
7.5. Solution without a TSM
7.6. HCE
7.7. Securing solutions
7.8. Conclusion
8: Concretization and Morphware Networks
8.1. Accelerators
8.2. A reconfigurable microprocessor
8.3. Morphware networks
8.4. Conclusion
Conclusion
Bibliography
Index
End User License Agreement
Introduction
Figure I.1. Terminal connection by 2020
Figure I.2. The gap between technological progress and user demand
Figure I.3. Public Cloud services market and their annual growth rate
Figure I.4. Number of virtual machines per physical server
Figure I.5. The rise in power of Ethernet ports for datacenters
Figure I.6. The three main types of Cloud
Figure I.7. The different types of Clouds
Chapter 1. Virtualization
Figure 1.1. Virtualization of three routers
Figure 1.2. A virtualized machine
Figure 1.3. A set of software networks
Figure 1.4. Para-virtualization
Figure 1.5. Virtualization by emulation
Figure 1.6. Virtualization by execution zones
Chapter 2. SDN (Software-Defined Networking)
Figure 2.1. The three basic principles
Figure 2.2. The five domains necessary for the life of a company
Figure 2.3. Virtualization of the five domains
Figure 2.4. The pilot program
Figure 2.5. The ONF architecture
Figure 2.6. The SDN architecture
Figure 2.7. Example of Open Source developments
Figure 2.8. NFV (Network Functions Virtualization)
Figure 2.9. NFV machines
Figure 2.10. The signaling protocol OpenFlow
Figure 2.11. The control layer and its interfaces
Figure 2.12. The load-balancing protocol
Figure 2.13. The OpenStack system
Figure 2.14. The overall architecture of SDN solutions
Figure 2.15. The cost of a datacenter environment
Figure 2.16. The urbanization of a network environment
Figure 2.17. The NSX architecture
Figure 2.18. Detailed NSX architecture
Figure 2.19. The characteristics of Open vSwitch
Figure 2.20. The ACI architecture
Figure 2.21. Detailed ACI architecture
Figure 2.22. The OpenContrail platform, from Juniper
Figure 2.23. Brocade’s architecture
Figure 2.24. The SDN platform from Alcatel-Lucent
Figure 2.25. A view of SDN networks of tomorrow
Figure 2.26. The OpenFlow market, and more generally the SDN market
Chapter 3. Smart Edges
Figure 3.1. Scenarios for the placement of a controller
Figure 3.2. Centralized C-RAN
Figure 3.3. C-RAN with a distribution element
Figure 3.4. Cloudlet solution
Figure 3.5. A network with a femto-datacenter
Figure 3.6. Context of a femto-Cloud network for a “smart edge”
Figure 3.7. A femto-datacenter environment to create virtual LANs
Figure 3.8. Hierarchy of controls and datacenters
Figure 3.9. Self-piloting system
Figure 3.10. Operation of the blackboard
Figure 3.11. Operation of a multi-agent system
Figure 3.12. Problem-solving
Figure 3.13. Architecture of active networks
Figure 3.14. Definition of an autonomic network
Figure 3.15. The architecture of autonomic networks
Figure 3.16. One-hop situated view
Chapter 4. New-generation Protocols
Figure 4.1. SDN architecture
Figure 4.2. OpenFlow protocol
Figure 4.3. OpenFlow protocol in a network
Figure 4.4. Fields in the OpenFlow protocol
Figure 4.5. The different ONF standards pertaining to the OpenFlow protocol
Figure 4.6. OpenDaylight controller
Figure 4.7. VXLAN protocol
Figure 4.8. VXLAN encapsulation
Figure 4.9. NVGRE protocol
Figure 4.10. The different versions of Carrier-Grade Ethernet
Figure 4.11. TRILL protocol
Figure 4.12. LISP protocol
Chapter 5. Mobile Cloud Networking and Mobility Control
Figure 5.1. An architecture for Mobile Cloud Networking
Figure 5.2. An architecture for local Mobile Cloud Networking
Figure 5.3. A third Mobile Cloud Networking architecture
Figure 5.4. A fourth architecture for Mobile Cloud Networking
Figure 5.5. Example of a mobile Cloud
Figure 5.6. Two mobile Clouds
Figure 5.7. A large mobile Cloud
Figure 5.8. Properties of mobile device controllers
Figure 5.9. The two controller solutions
Figure 5.10. Access authorization cases for a controller
Figure 5.11. IP Mobile for IPv4
Figure 5.12. IP Mobile for IPv6
Figure 5.13. HIP architecture
Figure 5.14. Base exchange procedure for HIP
Figure 5.15. HIP base exchange with DNS
Figure 5.16. Example of matching in SHIM6
Figure 5.17. Mobile IPv6
Figure 5.18. Registration of several CoAs in Mobile IPv6
Figure 5.19. SCTP architecture
Figure 5.20. Structure of SCTP packet
Figure 5.21. Heartbeat mechanism
Figure 5.22. Architecture of LS-SCTP
Chapter 6. Wi-Fi and 5G
Figure 6.1. The different wireless solutions
Figure 6.2. The two major wireless solutions and their convergence
Figure 6.3. SDMA
Figure 6.4. Small cells and backhaul networks
Figure 6.5. Operation of a femtocell
Figure 6.6. Access to the HNB
Figure 6.7. A network of metrocells
Figure 6.8. The integration of 3G/4G/5G and Wi-Fi access
Figure 6.9. A backhaul network
Figure 6.10. The different advances made in the field of software radio
Figure 6.11. The different 5G access solutions
Figure 6.12. Virtualization of a Wi-Fi access point
Figure 6.13. Virtualization of 5G devices
Figure 6.14. Virtualization of an HNB
Figure 6.15. Virtualization of a mesh access network
Figure 6.16. Virtualization of a backhaul network
Figure 6.17. The fully-centralized Cloud-RAN architecture
Figure 6.18. The partially-distributed Cloud-RAN architecture
Figure 6.19. An RFID
Figure 6.20. Active RFID
Figure 6.21. Structure of GEN1 Electronic Product Code
Figure 6.22. The structure of GEN2 Electronic Product Code
Figure 6.23. The environment of a mobile key
Figure 6.24. A BAN (Body Area Network)
Chapter 7. Security
Figure 7.1. A Cloud of security
Figure 7.2. Hardware architecture of the smartcard
Figure 7.3. Authentication procedure using an EAP smartcard
Figure 7.4. Virtualization of smartcards
Figure 7.5. A Cloud of secure elements
Figure 7.6. The key for the Internet
Figure 7.7. The different security solutions
Figure 7.8. The relationships between the different participants with the TSM
Figure 7.9. The economic system of NFC and TSM
Figure 7.10. The security domains
Figure 7.11. Solution without a TSM
Figure 7.12. Securing by local secure elements
Figure 7.13. Securing using external secure elements before the Android 4.4
Figure 7.14. Securing using external secure elements with Android 4.4 and later
Figure 7.15. Securing by a Cloud of secure elements
Figure 7.16. Advantages of the external solution
Figure 7.17. Architecture of securing using external secure elements
Figure 7.18. Securing of virtual machines
Figure 7.19. Securing of an electronic payment
Chapter 8. Concretization and Morphware Networks
Figure 8.1. The process of concretization
Figure 8.2. A DSP
Figure 8.3. An example of an FPGA
Figure 8.4. An EDLP component
Figure 8.5. The different types of microprocessors
Figure 8.6. Reconfigurable element matrix
Figure 8.7. Reconfigurable microprocessor using fine- and coarse-grained elements
Figure 8.8. A hardware network
Figure 8.9. Software networks
Figure 8.10. Morphware network
Conclusion
Figure C.1. The fundamental elements of new generation networks (NGNs)
Chapter 6. Wi-Fi and 5G
Table 6.1. RFID transmission frequencies
Advanced Networks Set
coordinated by
Guy Pujolle
Volume 1
Guy Pujolle
First published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley &amp; Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2015
The rights of Guy Pujolle to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2015942608
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-694-5
Currently, networking technology is experiencing its third major wave of revolution. The first was the move from circuit-switched mode to packet-switched mode, and the second from hardwired to wireless mode. The third revolution, which we examine in this book, is the move from hardware to software mode. Let us briefly examine these three revolutions, before focusing more particularly on the third, which will be studied in detail in this book.
A circuit is a collection of hardware and software elements, allocated to two users – one at each end of the circuit. The resources of that circuit belong exclusively to those two users; nobody else can use them. In particular, this mode has been used in the context of the public switched telephone network (PSTN). Indeed, telephone voice communication is a continuous application for which circuits are very appropriate.
A major change in traffic patterns brought about the first great revolution in the world of networks, pertaining to asynchronous and non-uniform applications. The data transported for these applications make only very incomplete use of circuits, but are well suited to packet-switched mode. When a message needs to be sent from a transmitter to a receiver, the data for transmission are grouped together in one or more packets, depending on the total size of the message. For a short message, a single packet may be sufficient; for a long message, several packets are needed. The packets then pass through intermediary transfer nodes between the transmitter and the receiver, and ultimately make their way to the end-point. The resources needed to handle the packets include memories, the links between the nodes, and the sender and receiver themselves. These resources are shared between all users. Packet-switched mode requires a physical architecture and protocols – i.e. rules – to achieve end-to-end communication. Many different architectural arrangements have been proposed, using protocol layers and associated algorithms. In the early days, each hardware manufacturer had its own architecture (e.g. SNA, DNA/DECnet, etc.). Then, the OSI (Open Systems Interconnection) model was introduced in an attempt to make all these different architectures mutually compatible. The failure of compatibility between hardware manufacturers, even with a common model, led to the re-adoption of one of the very first architectures introduced for packet-switched mode: TCP/IP (Transmission Control Protocol/Internet Protocol).
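The packet-switching principle can be sketched in a few lines of Python; the 1000-byte payload size and the packet fields used here are illustrative assumptions, not any particular protocol's format.

```python
# Illustrative sketch of packet-switched transfer: a message is split into
# numbered packets, each carrying addressing information so intermediary
# nodes can forward it independently. The 1000-byte payload size is an
# arbitrary assumption for the example.
PAYLOAD_SIZE = 1000

def packetize(message: bytes, src: str, dst: str) -> list[dict]:
    """Split a message into numbered packets with source/destination."""
    packets = []
    for seq, offset in enumerate(range(0, len(message), PAYLOAD_SIZE)):
        packets.append({
            "src": src,
            "dst": dst,
            "seq": seq,  # lets the receiver re-order packets
            "payload": message[offset:offset + PAYLOAD_SIZE],
        })
    return packets

def reassemble(packets: list[dict]) -> bytes:
    """Receiver side: re-order by sequence number and concatenate."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
```

As described above, a short message fits in a single packet, while a long one is spread over several that may arrive out of order and must be reassembled at the end-point.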
The second revolution was the switch from hardwired mode to wireless mode. Figure I.1 shows that, by 2020, terminal connection should be essentially wireless, established using Wi-Fi or 3G/4G/5G technology. In fact, increasingly, the two techniques are used together, as they are becoming mutually complementary rather than competing with one another. In addition, when we look at the curve shown in Figure I.2, plotting worldwide user demand against the growth of what 3G/4G/5G technology is capable of delivering, we see that the gap is so significant that only Wi-Fi technology is capable of handling the demand. We shall come back to wireless architectures, because the third revolution also has a significant impact on this transition toward radio-based technologies.
Figure I.1. Terminal connection by 2020
Figure I.2. The gap between technological progress and user demand. For a color version of the figure, see www.iste.co.uk/pujolle/software.zip
The third revolution, which is our focus in this book, pertains to the move from hardware-based mode to software-based mode. This transition is taking place because of virtualization, whereby physical networking equipment is replaced by software fulfilling the same function.
Let us take a look at the various elements which are creating a new generation of networks. To begin with, we can cite the Cloud. The Cloud is a set of resources which, instead of being held at the premises of a particular company or individual, are hosted on the Internet. The resources are de-localized, and brought together in resource centers, known as datacenters.
The reasons for the Cloud’s creation stem from the low degree of use of server resources worldwide: only around 10% of servers’ capacity is actually used. This low value derives from the fact that servers are hardly used at all at night-time, and see relatively little use outside of peak hours, which represent no more than 4-5 hours each day. In addition, the relatively low cost of hardware meant that, generally, servers were greatly oversized. Another factor which needs to be taken into account is the rising cost of the personnel needed to manage and control the resources. In order to optimize the cost both of resources and of engineers, those resources need to be shared. The purpose of Clouds is to facilitate such sharing in an efficient manner.
Figure I.3 shows the growth of the public Cloud services market. Certainly, that growth is impressive, but in the final analysis, it is relatively low in comparison to what it could have been if there were no problems of security. Indeed, as the security of the data uploaded to such systems is rather lax, there has been a massive increase in private Clouds, taking the place of public Cloud services. In Chapter 7, we shall examine the advances made in terms of security, with the advent of secure Clouds.
Figure I.3. Public Cloud services market and their annual growth rate
Virtualization is also a key factor, as indicated at the start of this chapter. The increase in the number of virtual machines is undeniable: in 2015, more than two-thirds of the servers available throughout the world are virtual machines. Physical machines are able to host increasing numbers of virtual machines. This trend is illustrated in Figure I.4. In 2015, each physical server hosts around eight virtual machines.
Figure I.4. Number of virtual machines per physical server
The use of Cloud services has meant a significant increase in the data rates being sent over the networks. Indeed, processing is now done centrally, and both the data and the signaling must be sent to the Cloud and then returned after processing. We can see this increase in data rate requirement by examining the market of Ethernet ports for datacenters. Figure I.5 plots shipments of 1 Gbps Ethernet ports against those of 10 Gbps ports. As we can see, 1 Gbps ports, which are already fairly fast, are being replaced by ports that are ten times more powerful.
Figure I.5. The rise in power of Ethernet ports for datacenters
The world of the Cloud is, in fact, rather diverse, if we look at the number of functions which it can fulfill. There are numerous types of Clouds available, but the three categories indicated in Figure I.6 are sufficient to clearly differentiate them. The category which offers the greatest potential is the SaaS (Software as a Service) Cloud. SaaS makes all services available to the user: processing, storage and networking. With this solution, a company asks its Cloud provider to supply all necessary applications; in effect, the company subcontracts its IT system to the Cloud provider. With the second solution – PaaS (Platform as a Service) – the company remains responsible for its applications. The Cloud provider offers a complete platform, leaving only the management of the applications to the company. Finally, the third solution – IaaS (Infrastructure as a Service) – leaves a great deal more initiative in the hands of the client company. The provider still offers the processing, storage and networking, but the client remains responsible for the applications and the environments necessary for those applications, such as the operating systems and databases.
Figure I.6. The three main types of Cloud
More specifically, we can define the three Cloud architectures as follows.
– IaaS (Infrastructure as a Service): this is the very first approach, with a portion of the virtualization being handled by the Cloud, such as the network servers, the storage servers, and the network itself. The Internet network is used to host PABX-type machines, firewalls or storage servers, and more generally, the servers connected to the network infrastructure;
– PaaS (Platform as a Service): this is the second Cloud model whereby, in addition to the infrastructure, there is an intermediary software program corresponding to the Internet platform. The client company’s own servers only handle the applications;
– SaaS (Software as a Service): with SaaS, in addition to the infrastructure and the platform, the Cloud provider actually provides the applications themselves. Ultimately, nothing is left to the company, apart from the Internet ports. This solution, which is also called Cloud Computing, outsources almost all of the company’s IT and networks.
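The three models above differ only in where the provider/client boundary sits. A small sketch of that division, using conventional layer names; the split follows the description above and is illustrative, not a formal taxonomy:

```python
# Which layers the Cloud provider manages under each model, following the
# description above: the provider takes on more layers as we move from
# IaaS to PaaS to SaaS. Layer names are the conventional ones, used here
# only for illustration.
LAYERS = ["networking", "storage", "processing", "OS/databases", "applications"]

PROVIDER_MANAGED = {
    "IaaS": {"networking", "storage", "processing"},
    "PaaS": {"networking", "storage", "processing", "OS/databases"},
    "SaaS": {"networking", "storage", "processing", "OS/databases", "applications"},
}

def client_managed(model: str) -> list[str]:
    """Layers the client company remains responsible for."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGED[model]]
```

With SaaS, `client_managed` returns an empty list: nothing is left to the company apart from its Internet ports, exactly as described above.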
Figure I.7 shows the functions of the different types of Cloud in comparison with the classical model in operation today.
Figure I.7. The different types of Clouds
The main issue for a company that operates a Cloud is security. Indeed, there is nothing to prevent the Cloud provider from scrutinizing the data, or – as much more commonly happens – the data from being requisitioned by the countries in which the physical servers are located; the providers must comply. The rise of sovereign Clouds is also noteworthy: here, the data are not allowed to pass beyond the geographical borders. Most states insist on this for their own data.
The advantage of the Cloud lies in the power of the datacenters, which are able to handle a great many virtual machines and provide the power necessary for their execution. Multiplexing between a large number of users greatly decreases costs. Datacenters may also serve as hubs for software networks and host virtual machines to create such networks. For this reason, numerous telecommunications operators have set up companies which provide Cloud services for the operators themselves and also for their customers.
In the techniques which we shall examine in detail hereafter, we find SDN (Software-Defined Networking), whereby multiple forwarding tables are defined, and only datacenters have sufficient processing power to perform all the operations necessary to manage these tables. One of the problems is determining the necessary size of the datacenters, and where to build them. Very roughly, there is a whole range of sizes, from absolutely enormous datacenters, with a million servers, to femto-datacenters, with the equivalent of only a few servers, and everything in between.
The rise of this new generation of networks, based on datacenters, has an impact on energy consumption in the world of ICT. This consumption is estimated to account for between 3% and 5% of the total carbon footprint, depending on which study we consult. However, this proportion is increasing very quickly with the rapid rollout of datacenters and antennas for mobile networks. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with ten such datacenters would consume 1 GW, which is the equivalent of one unit of a nuclear power plant. This total number of servers has already been achieved or surpassed by ten well-known major companies. Similarly, the number of 2G/3G/4G antennas in the world is already more than 10 million. Given that, on average, consumption is 1500 W per antenna (2000 W for 3G/4G antennas but significantly less for 2G antennas), this represents around 15 GW worldwide.
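The orders of magnitude quoted above can be checked with a quick back-of-the-envelope calculation; the 100 W per server is simply what the quoted totals imply, not a measured value.

```python
# Back-of-the-envelope check of the consumption figures quoted above.
# The 100 W per server is inferred from "1 million servers ~ 100 MW".
servers_per_datacenter = 1_000_000
watts_per_server = 100

datacenter_mw = servers_per_datacenter * watts_per_server / 1e6   # 100 MW
ten_datacenters_gw = 10 * datacenter_mw / 1000                    # 1 GW

antennas = 10_000_000         # 2G/3G/4G antennas worldwide
watts_per_antenna = 1500      # average quoted in the text
antennas_gw = antennas * watts_per_antenna / 1e9                  # 15 GW
```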
Continuing in the same vein, the carbon footprint attributable to energy consumption in the world of ICT is projected to reach 20% of the total by 2025. Therefore, it is absolutely crucial to find solutions to offset this rise. We shall come back to this in the last chapter of this book, but some solutions already exist and are beginning to be used. Virtualization is one good solution, whereby multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby mode (low power) when not in use. Processors also need to be able to drop to very low speeds of operation whenever possible, since power consumption is strongly proportional to processor speed. When a processor has nothing to do, it almost stops, and then speeds up again depending on the workload received.
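The standby mechanism described above amounts to simple arithmetic. A minimal sketch, with assumed figures (100 W active, 10 W in standby, and the 4-5 busy hours per day mentioned earlier):

```python
# Illustrative daily energy saving from standby mode. The wattages are
# assumptions for the example, not figures from the text.
ACTIVE_W = 100      # server drawing full power
STANDBY_W = 10      # same server in low-power standby
ACTIVE_HOURS = 5    # cf. the 4-5 peak hours per day noted earlier

always_on_wh = ACTIVE_W * 24
with_standby_wh = ACTIVE_W * ACTIVE_HOURS + STANDBY_W * (24 - ACTIVE_HOURS)
saving = 1 - with_standby_wh / always_on_wh   # roughly 71% with these figures
```

Under these assumptions, putting an idle server into standby for the 19 off-peak hours saves roughly 70% of its daily energy, which is why consolidation onto fewer, busier physical machines pays off.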
Mobility is another argument in favor of adopting a new form of network architecture. As we have seen, by 2020, 95% of devices will be connected to the network by a wireless solution, so the mobility problem must be managed. The first order of business is the management of multihoming – i.e. being able to connect to several networks simultaneously. The term “multihoming” stems from the fact that the terminal receives several IP addresses, assigned by the different connected networks. These multiple addresses are complex to manage, and the task requires specific mechanisms. With simultaneous connections to several networks, the packets of a flow can, on the basis of certain criteria (to be determined), be separated and sent via different networks; they then need to be re-ordered when they arrive at their destination, which can cause numerous problems. Mobility also raises the issues of addressing and identification. An IP address can be interpreted in two different ways: as an identifier, which tells us who the user is, and as a locator, which tells us where that user is. The difficulty lies in dealing with these two concepts simultaneously. When a customer moves far enough to leave the subnetwork with which he/she is registered, a new IP address must be assigned to the device, which is fairly complex from the point of view of identification. One possible solution, as we can see, is to give two IP addresses to the same user: one reflecting his/her identity and the other his/her location.
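The two-address idea at the end of this discussion – one stable identifier, one changing locator – can be sketched as follows; all names and addresses here are hypothetical.

```python
# Sketch of the identifier/locator separation suggested above: a stable
# identity address and a topological locator address that changes as the
# terminal moves. Addresses are made up for the example.
class MobileTerminal:
    def __init__(self, identity_addr: str, locator_addr: str):
        self.identity = identity_addr   # stable: "who" the user is
        self.locator = locator_addr     # topological: "where" the user is

    def handover(self, new_locator: str) -> None:
        """Moving to a new subnetwork changes only the locator."""
        self.locator = new_locator

terminal = MobileTerminal("2001:db8:id::1", "2001:db8:net1::42")
terminal.handover("2001:db8:net2::7")
# the identity address is unchanged; only the location part was updated
```

Protocols such as HIP and LISP, discussed later in the book, institutionalize exactly this split between identification and location.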
Another revolution currently under way pertains to the “Internet of Things” (IoT): billions of things will be connected within the next few years – the prediction is that 50 billion objects will be connected by 2020. In other words, the number of connections will likely increase tenfold in the space of only a few years. The “things” belong to a variety of domains: 1) domestic, with household electrical goods, home health care, home management, etc.; 2) medicine, with all sorts of sensors both on and in the body to measure, analyze and perform actions; 3) business, with light-level sensors, temperature sensors, security sensors, etc. Numerous problems arise in this new universe, such as identity management and the security of communications with the sensors. The price of identification is often set at $40 per object, which is absolutely incompatible with the cost of a sensor, often less than $1. Security is also a complex factor, because a sensor has very little power, and is incapable of performing sufficiently sophisticated encryption to ensure the confidentiality of transmissions.
Finally, there is one last reason to favor migration to a new network: security. Security requires a precise view and understanding of the problems at hand, which range from physical security to computer security, with the need to lay contingency plans for attacks that are sometimes entirely unforeseeable. The world of the Internet today is like a bicycle tire which is now made up entirely of patches (having been punctured and repaired multiple times), and every time an attack succeeds, a new patch is added. Such a tire is still roadworthy at the moment, but there is the danger that it will burst if no new solution is envisaged in the next few years. At the end of this book, in Chapter 7, we shall look at the secure Cloud, whereby, in a datacenter, a whole set of solutions is built around specialized virtual machines to provide new elements, the aim of which is to enhance the security of the applications and networks.
An effective security mechanism must include a physical element: a strongbox to protect the critical elements necessary to ensure confidentiality, authentication, etc. Software security is a reality and, to a large extent, may be sufficient for numerous applications. However, secure elements can always be circumvented when all of the defenses are software-based. This means that, for new generations, there must be a physical element, either local or remote. This hardware element is a secure microprocessor known as a “secure element”. A classic example of this type of device is the smartcard, used very widely by telecom operators and banks.
Depending on whether it belongs to the world of business or of consumer electronics, the secure element may be found in the terminal, near to it, or far away from it. We shall examine the different solutions in the subsequent chapters of this book.
Virtualization also has an impact on security: the power of the Cloud, with specialized virtual machines, means that attackers have remarkable striking force at their disposal. In the last few years, hackers’ ability to break encryption algorithms has increased by a factor of 5-6.
Another important point which absolutely must be integrated in networks is “intelligence”. So-called “intelligent networks” have had their day, but the intelligence in this case was not really what we mean by “intelligence” in this field. Rather, it was a set of automatic mechanisms, employed to deal with problems perfectly determined in advance, such as a signaling protocol for providing additional features in the telephone system. Here, intelligence pertains to learning mechanisms and intelligent decisions based on the network status and user requests. The network needs to become an intelligent system, capable of making decisions on its own. One solution to help move in this direction was introduced by IBM in the early 2000s: “autonomic”. “Autonomic” means autonomous and spontaneous – autonomous in the sense that every device in the network must be able to independently make decisions with knowledge of the situated view, i.e. the state of the nodes surrounding it within a certain number of hops. The solutions that have been put forward to increase the smartness of the networks are influenced by Cloud technology. We shall discuss them in detail in the chapter on “smart edges” (Chapter 3).
Finally, one last point, which could be viewed as the fourth revolution, is concretization – i.e. the opposite of virtualization. Indeed, the problem with virtualization is a significant reduction in performance, stemming from the replacement of hardware with software. There are a variety of solutions that have been put forward to regain the performance: software accelerators and, in particular, the replacement of software with hardware, in the step of concretization. The software is replaced by reconfigurable hardware, which can transform depending on the software needing to be executed. This approach is likely to create morphware networks, which will be described in Chapter 8.
In conclusion, the world of networks is changing greatly, for the reasons listed above. It is changing more quickly than might have been expected a few years ago. One initial proposition was put forward, but failed: starting again from scratch. This is known as the “Clean Slate Approach”: eliminating everything and starting again from nothing. Unfortunately, no concrete proposition has been adopted, and the transfer of IP packets continues to be the solution for data transport. However, in the numerous propositions, virtualization and the Cloud are the two main avenues which are widely used today and upon which this book focuses.
In this chapter, we introduce virtualization, which is at the root of the revolution in the networking world, as it involves constructing software networks to replace hardware networks.
Figure 1.1 illustrates the process of virtualization. We simply need to write code which performs exactly the same function as the hardware component. With only a few exceptions, which we shall explore later on, all hardware machines can be transformed into software machines. The basic problem associated with virtualization is the significant reduction in performance. On average (though the reality is extremely diverse), virtualization reduces performance by a factor of 1000: the resulting software, executed on a general-purpose machine in place of the hardware it replaces, runs 1000 times more slowly. In order to recover from this loss of performance, we simply need to run the program on a machine that is 1000 times more powerful. This power is to be found in the datacenters hosted in Cloud environments that are under development in all corners of the globe.
A certain number of elements cannot be virtualized, such as antennas or sensors, since there is no piece of software capable of picking up electromagnetic signals or detecting temperature. Thus, we still need to keep hardware elements such as the metal wires and optical links, or the transmission/reception ports of routers and switches. Nevertheless, all of the signal-processing operations can be virtualized perfectly well, and increasingly, we find virtualization in wireless systems.
More and more, to speed up software processing, it is possible to move to a mode of concretization – i.e. the reverse of virtualization, but with one very significant difference: the hardware behaves like software. It is possible to replace the software, typically executed on a general-purpose machine, with a machine that can be reconfigured almost instantly, and thus behaves like a software program. The components used are derived from FPGAs (field-programmable gate arrays) and, more generally, reconfigurable microprocessors. A great deal of progress still needs to be made in order to obtain extremely fast concretizations, but this is only a question of a few years.
The virtualization of networking equipment means we can replace the hardware routers with software routers, and do the same for any other piece of hardware that could be made into software, such as switches, LSRs (Label Switching Routers), firewalls, diverse and varied boxes, DPI (Deep Packet Inspection), SIP servers, IP-PBXs, etc. These new machines are superior in a number of ways. To begin with, one advantage is their flexibility. Let us look at the example given in
