MANAGEMENT OF DATA CENTER NETWORKS

Discover state-of-the-art developments in DCNs from leading international voices in the field

In Management of Data Center Networks, accomplished researcher and editor Dr. Nadjib Aitsaadi delivers a rigorous and insightful exploration of the network management challenges that arise within intra- and inter-data center networks, including reliability, routing, and security. The book also discusses new architectures found in data center networks that aim to minimize the complexity of network management while maximizing Quality of Service, such as wireless/wired DCNs, server-only DCNs, and more. As DCNs become increasingly popular with the spread of cloud computing and multimedia social networks employing new transmission technologies such as 5G wireless and wireless fiber, the editor provides readers with chapters written by world-leading authors on topics like routing, the reliability of inter-data center networks, energy management, and security.

The book also offers:

* A thorough overview of the architectures of data center networks, including the classification of switch-centric, server-centric, enhanced, optical, and wireless DCN architectures
* An exploration of resource management in wired and wireless data center networks, including routing and wireless channel allocation and assignment challenges and criteria
* Practical discussions of inter-data center networks, including an overview of basic virtual network embedding
* Examinations of energy and security management in data center networks

Perfect for academic and industrial researchers studying the optimization of data center networks, Management of Data Center Networks is also an indispensable guide for anyone seeking a one-stop resource on the architectures, protocols, security, and tools required to effectively manage data centers.
Cover
Title Page
Copyright
About the Editor
Contributors
Acronyms
Introduction
1 Architectures of Data Center Networks: Overview
1.1 Taxonomy of DCN Architectures
1.2 Comparison Between DCN Architectures
1.3 Proposed HDCN Architecture
1.4 Conclusion
References
2 Data Center Optimization Techniques
2.1 Ethernet Switching and Routing
2.2 Data Center Optimization Techniques
2.3 Conclusion
Bibliography
Notes
3 Resource Management in Hybrid (Wired/Wireless) Data Center Networks
3.1 Routing and Wireless Channel Allocation Problematic in HDCN
3.2 Wireless Channel Allocation Strategies for One‐Hop Communications in HDCN
3.3 Online Joint Routing and Wireless Channel Allocation Strategies in HDCN
3.4 Joint Batch Routing and Channel Allocation Strategies in HDCN
3.5 Joint Batch Routing and Channel Allocation Strategies in HDCN
3.6 Summary
3.7 Conclusion
References
4 Inter‐Data Center Networks: Routing and Reliability in Virtual Network Backbone
4.1 Overview of Basic Virtual Network Embedding Without Reliability Constraint
4.2 Overview of Virtual Network Embedding with Reliability Constraint
4.3 Conclusion
References
5 An Evaluation Method of Optimal Cost Saving in a Data Center with Proactive Management
5.1 Introduction
5.2 Related Work
5.3 Framework for DC Modeling
5.4 Cost Formulation
5.5 Application to a Real DC
5.6 Conclusion
References
Index
End User License Agreement
Chapter 1
Table 1.1 Summary and analysis of DCN architectures
Chapter 2
Table 2.1 Summary of cloud network overlay protocols.
Chapter 3
Table 3.1 Summary of routing and channel allocation strategies in HDCN.
Chapter 4
Table 4.1 Overview of reliable embedding strategies
Chapter 5
Table 5.1 Notations
Table 5.2 The server groups in the Google DC
Chapter 1
Figure 1.1 Taxonomy of DCN architectures.
Figure 1.2 Traditional tree‐based DCN architecture.
Figure 1.3 ... with ... = 4.
Figure 1.4 ... with ... = 4.
Figure 1.5 Switched‐beam antenna model: (a) spherical coordinate system and ...
Figure 1.6 Hybrid CISCO MSDC architecture of a DCN.
Chapter 2
Figure 2.1 Spanning tree protocol.
Figure 2.2 Traditional ethernet frame vs. 802.1Q frame.
Figure 2.3 Provider bridges: IEEE 802.1ad (QinQ).
Figure 2.4 802.1ah frame.
Figure 2.5 Virtual network embedding.
Figure 2.6 MPLS traffic engineering.
Figure 2.7 A reference network example for MMF and PF allocations.
Chapter 5
Figure 5.1 Proactive and reactive actions to provide requested energy ...
Figure 5.2 Requests processed in the time interval ..., in gray ...; ... is the en...
Figure 5.3 Costs paid for (a) over estimation and (b) underestimation.
Figure 5.4 Example of energy predicted and consumed using proactive and reac...
Figure 5.5 True relative energy consumed vs. its linear and nonlinear predic...
Figure 5.6 Upper bound for ... seconds (a), ... seconds (b).
Figure 5.7 RES for CPU requests using the linear optimal predictor for ... sec...
Figure 5.8 RES for CPU requests using the nonlinear optimal predictor for ...
445 Hoes Lane
Piscataway, NJ 08854
Ekram Hossain, Editor in Chief
Jón Atli Benediktsson
Xiaoou Li
Jeffrey Reed
Anjan Bose
Lian Yong
Diomidis Spinellis
David Alan Grier
Andreas Molisch
Sarah Spurgeon
Elya B. Joffe
Saeid Nahavandi
Ahmet Murat Tekalp
Edited by
Nadjib Aitsaadi
Universités Paris‐Saclay, UVSQ, DAVID, F‐78035, Versailles, France
Copyright © 2021 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data applied for:
ISBN: 9781119647423
Cover Design: Wiley
Cover Image: © Bill Donnelley/WT Design
To my daughters L. H. S.
Nadjib Aitsaadi, PhD, is a Full Professor in Networks and Telecommunications at UVSQ Paris‐Saclay University in France. He is a member of the DAVID Laboratory and leads the Next Generation Networks Team. Prof. Aitsaadi earned a PhD in Networks & Telecommunications from Sorbonne University in 2010 and obtained a "Habilitation" diploma from University Paris Est (UPE) in 2016. His main research fields are the security and QoS optimization of cellular networks (5G, 6G, HAP), IoT, DCN, V2X, MEC, NFV/SDN, and more. His results have been published in major journals such as IEEE JSAC, IEEE TVT, Elsevier ComNet, and Elsevier ComCom, and in major conferences such as IEEE SECON, IEEE LCN, IEEE MASS, ACM MSWiM, IEEE/IFIP NOMS/IM, IEEE ICC, and IEEE GLOBECOM. Prof. Aitsaadi chairs many tracks in IEEE/IFIP conferences such as IEEE GLOBECOM, IEEE/IFIP IM, and IEEE/IFIP CIoT, and he is very active in the IEEE Technical Committee TCIIN.
Nadjib Aitsaadi, Universités Paris‐Saclay, UVSQ, DAVID, Versailles, France
Dallal Belabed, Airbus Defense and Space, Airbus, Saint‐Quentin en Yvelines, Elancourt, France
Selma Boumerdassi, CEDRIC, CNAM, Paris, France
Boutheina Dab, VMware, Hauts‐de‐Seine, La Defense, France
and LiSSi Lab, UPEC, Val de Marne, Vitry sur Seine, France
Ilhem Fajjari, Orange Labs, Orange, Hauts‐de‐Seine, Chatillon, France
Ruben Milocco, GCAyS, UNComahue, Neuquen, Argentina
Pascale Minet, Inria, Paris, France
Eric Renault, LIGM, Univ. Gustave Eiffel, CNRS, ESIEE Paris, Marne‐la‐Vallée, France
Oussama Soualah, OS‐Consulting, Athis‐Mons, Essonne, France
and LiSSi Lab, UPEC, Vitry sur Seine, Val de Marne, France
DCN
Data Center Network
ECMP
Equal Cost Multi‐Path
GMPLS
Generalized Multi‐Protocol Label Switching
HDCN
Hybrid Data Center Network
IGP
Interior Gateway Protocol
LISP
Locator/Identifier Separation Protocol
MSDC
Massively Scalable Data Center
MPTCP
Multipath TCP
NFV
Network Function Virtualization
OSPF
Open Shortest Path First
QoS
Quality of Service
SCTP
Stream Control Transmission Protocol
SDN
Software Defined Network
SN
Substrate Network
STP
Spanning Tree Protocol
STT
Stateless Transport Tunneling
TCI
Tag Control Information
TE
Traffic Engineering
ToR
Top of Rack
TPID
Tag Protocol IDentifier
TRILL
Transparent Interconnection of Lots of Links
VDC
Virtual Data Center
VLC
Visible Light Communication
VNE
Virtual Network Embedding
VXLAN
Virtual Extensible LAN
WTU
Wireless Transmission Unit
Thanks to the advent of the long‐awaited fifth‐generation (5G) mobile networks, mobile data and online services are becoming widely accessible. Discussions of this new standard have taken place in both industry and academia to design this emerging architecture. The main near‐future objective is to ensure the capability to respond to different application needs, such as video, gaming, and web searching, while ensuring a higher data rate and enhanced Quality of Service (QoS). While no official 5G standard has yet been delivered, experts agree that the impressive proliferation of smart devices will lead to an explosion of traffic demand. Billions of connected users are expected to deploy a myriad of applications.
In this respect, recent statistics published by the CISCO Visual Networking Index (VNI) highlight that annual global IP traffic will roughly triple over the next 5 years, reaching 2.3 zettabytes by 2020. More specifically, smart phone traffic is expected to grow impressively as a share of total IP traffic between 2015 and 2020. Mobile data traffic per month will grow from 7 exabytes in 2016 to 49 exabytes by 2021. In particular, tremendous video traffic will cross IP networks, accounting for the dominant share of total IP traffic. It is also expected that the number of connected mobile devices will be more than three times the size of the global population by 2020. In this regard, future networks are anticipated to support and connect a plethora of devices, while offering higher data rates and lower latency.
To cope with this unprecedented traffic explosion, the service providers are urged to rethink their network architectures. In fact, efficient scalable physical infrastructures, e.g., Data Centers (DCs), are required to support the drastically increasing number of both online services and users.
To manage their DC infrastructures, many giant service providers are resorting to virtualization technologies, making use of Software Defined Networking (SDN) and Network Functions Virtualization (NFV).
On the one hand, SDN controllers offer the opportunity to implement more powerful algorithms thanks to real‐time centralized control leveraging an accurate view of the network. Indeed, thanks to the separation of the forwarding and control planes, the management complexity of the network infrastructure is considerably reduced, while providing tremendous computational power compared to legacy devices. On the other hand, thanks to the NFV paradigm, network functions and communication services are first softwarized and then cloudified, so that they can be orchestrated and managed on demand as cloud‐native IT applications. It is straightforward to see that these approaches are complementary. They offer a new way to design and manage data centers while guaranteeing a high level of flexibility and scalability. The emerging SDN and NFV technologies require scalable infrastructures. To that end, a great deal of effort has been devoted to the design of efficient DC architectures. Indeed, Internet giants have ramped up their investment in data center/IT infrastructures and poured in billions of dollars to widen their global presence and improve their competitiveness in the Cloud market.
In this context, the Capital Expenditure (CAPEX) of the five largest‐scale Internet operators, Apple, Google, Microsoft, Amazon, and Facebook, increased markedly in 2016 as they invested in designing their DCs. Over the past years, these companies have spent, in total, $115 billion to build out their DCs. For instance, Google has invested millions of dollars in expanding its data centers spread all over the world: Taiwan, Latin America, Singapore, etc. Facebook has, since 2010, been building out its own DCs in Altoona, Iowa, and North Carolina. In this regard, efficiently designing data centers is a crucial task to ensure the scalability required to meet today's massive workload of Cloud applications. Moreover, it is mandatory to deploy the proper mechanisms for routing and resource allocation to communication flows in DCs.
To deal with these challenges, we investigate in this book a radically new methodology that changes the design of traditional Data Center Networks (DCNs) while ensuring scalability and enhancing performance. First, in Chapter 1, we will overview data center network architectures. Then, in Chapter 2, we will summarize the main related DC network routing strategies at layer 2, layer 3, and upper layers, together with an overview of Traffic Engineering (TE) techniques, from link‐state routing to TCP fairness models. Next, in Chapter 3, we will overview the related work addressing intra‐data center resource allocation and routing in both wired and/or wireless (i.e. hybrid) data center networks. Afterwards, in Chapter 4, we will summarize the main reliable virtual network embedding strategies connecting geographically distributed data centers. Thanks to network function virtualization, CAPEX and OPEX are deeply reduced, but ensuring the requested quality of service becomes more complex. Finally, in Chapter 5, we will provide a methodology to evaluate the energy cost reduction in DCs brought by proactive management, while keeping a high level of user satisfaction.
Boutheina Dab1,2, Ilhem Fajjari3, Dallal Belabed4, and Nadjib Aitsaadi5
1 VMware, Hauts‐de‐Seine, La Defense, France
2 LiSSi Lab, UPEC, Val de Marne, Vitry sur Seine, France
3 Orange Labs, Orange, Hauts‐de‐Seine, Chatillon, France
4 Airbus Defense and Space, Airbus, Saint‐Quentin en Yvelines, Elancourt, France
5 Universités Paris‐Saclay, UVSQ, DAVID, F‐78035, Versailles, France
To deal with the widespread use of cloud services and the unprecedented traffic growth, the scale of data centers has considerably increased. Therefore, it is crucial to design novel, efficient network architectures able to satisfy bandwidth requirements. As a key physical infrastructure, the design of Data Center Networks (DCNs) has long been a hot research focus.
This chapter reviews the main DCN architectures proposed in the literature. To do so, a taxonomy of DCN designs will be presented, analyzing in depth each structure of the given classification. Then, we will provide a qualitative comparison between these different DCN groups. Finally, we will present a hybrid DCN architecture combining wired and wireless links.
In this section, we present a taxonomy of the existing Data Center Network (DCN) architectures with a detailed review of each class. In general, several criteria have to be considered to design robust DCNs, namely, high network performance, efficient resource utilization, full available bandwidth, high scalability, easy cabling, etc. To deal with the aforementioned challenges, a panoply of solutions has been designed. Mainly, we can distinguish two research directions. In the first one, wired DCN architectures have been upgraded to build advanced, cost‐effective topologies able to scale up data centers. The second approach resorts to deploying new network techniques within existing DCNs so as to handle the challenges encountered in the prior architectures. Hereafter, we will give a detailed taxonomy of these techniques.
With regard to the aforementioned research directions, we can identify three main groups of DCN architectures, namely, switch‐centric DCN, server‐centric DCN, and enhanced DCN. Each group includes a variety of categories that we will detail hereafter.
Switch‐centric DCN architecture
: switches are, mostly, responsible for network‐related functions, whereas the servers handle processing tasks. The focus of such a design is to improve the topology so as to increase network scale, reduce oversubscription, and speed up flow transmission. Switch‐centric architectures can be classified into three main categories according to their structural properties:
Traditional tree‐based DCN architecture
: represents a specific kind of switch‐centric architecture, where switches are linked in a multirooted form.
Hierarchic DCN architecture
: is a switch‐centric DCN, where network components are arranged in multiple layers. Each layer characterizes traffic differently.
Flat DCN architecture
: compresses the three switch layers into only one or two switch layers, in order to simplify the management and maintenance of the DCN.
Server‐centric DCN architecture
: servers are enhanced to handle networking functions, whereas switches are used only to forward packets. Basically, servers are simultaneously end‐hosts and relaying nodes for multihop communications. Usually, server‐centric DCN are recursively defined multilevel topologies.
Enhanced DCN architecture
: is a specific DCN which is tailored for future Cloud computing services. Indeed, the future research direction attempts to deploy networking techniques so as to deal with wired DCN designs limitations. Recently, a variety of technologies have been used in this context, namely, optical switching and wireless communications. Accordingly, we distinguish two main classes of enhanced DCN architectures:
Figure 1.1 Taxonomy of DCN architectures.
Optical DCN
: makes use of optical devices to speed up communications. It can be either: (i) all‐optical DCN (i.e. with completely optical devices) or (ii) hybrid optical DCN (i.e. both optical and Ethernet switches).
Wireless DCN
: deploys wireless infrastructure in order to enhance network performance, and may be: (i) fully wireless DCN (i.e. only wireless devices) or (ii) Hybrid DCN (i.e. both wireless and wired devices).
Figure 1.1 illustrates the taxonomy of current DCN architectures. In the following, we will detail each category and discuss their impact on Cloud computing performance.
The traditional DCN is typically based on a multi‐rooted tree architecture. The latter is a three‐tier topology composed of three layers of switches. The top level (i.e. root) represents the core layer, the middle level is the aggregation layer, while the bottom level is known as the access layer. The core devices are characterized by high capacities compared with aggregation and access switches. Typically, the core switches' uplinks connect the data center to the Internet. On the other hand, the access layer switches commonly use 1 Gbps downlink interfaces and 10 Gbps uplink interfaces, while aggregation switches provide 10 Gbps links. Access switches (i.e. top of rack, ToR) interconnect servers in the same rack. The aggregation layer connects the access switches and handles data forwarding. It is worth noting that the above network interface card throughputs are continuously increasing; for instance, nowadays it is easy and not really expensive to deploy 25 and 100 Gbps interfaces. An illustration of the tree‐based DCN architecture is depicted in Figure 1.2.
Figure 1.2 Traditional tree‐based DCN architecture.
Unfortunately, traditional DCNs struggle to cope with the increasing traffic demand. First, core switches are prone to bottleneck issues as soon as workloads reach their peak. Moreover, in such a DCN, several downlinks of a ToR switch share the same uplink, which limits the available bandwidth. Second, DCN scalability strongly depends on the number of switch ports, so the only way to scale this topology is to increase the number of network devices. However, such solutions result in high construction costs and energy consumption. Third, the tree‐based DCN suffers from serious resiliency problems. For instance, if a failure happens on some of the aggregation switches, then servers are likely to lose connection with the others. In addition, resource utilization is not efficiently balanced. For all the aforementioned reasons, researchers have put forward alternative DCN topologies.
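As a rough illustration of the shared-uplink bottleneck described above, the oversubscription ratio of a ToR switch can be computed from its port counts. This is a minimal sketch; the port counts and speeds below are hypothetical but typical values, not figures from the text.

```python
# Illustrative sketch: oversubscription ratio of a ToR switch in a
# three-tier tree DCN. Port counts/speeds are assumed, not from the book.

def oversubscription(down_ports: int, down_gbps: float,
                     up_ports: int, up_gbps: float) -> float:
    """Ratio of aggregate downlink capacity to aggregate uplink capacity."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# A ToR with 48 x 1 Gbps server-facing links and 4 x 10 Gbps uplinks:
ratio = oversubscription(48, 1, 4, 10)
print(ratio)  # 1.2 -> servers can offer 1.2x more traffic than uplinks carry
```

A ratio above 1.0 means the uplinks cannot carry the servers' full offered load, which is exactly the bandwidth limitation noted above.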
Hierarchical topology arranges the DCN components in multiple layers. The key insight behind this model is to reduce the congestion by minimizing the oversubscription in lower‐layer switches using the upper‐layer devices. In the literature, we find several hierarchic DCN examples, namely, CLOS, FatTree, and VL2. Hereafter, we will describe each one of them.
CLOS‐Based DCN Is an advanced tree‐based network architecture. It was first introduced by Charles Clos, from Bell Labs, in 1953 to create nonblocking multistage topologies able to provide higher bandwidth than a single switch. Typically, CLOS‐based DCNs come with three layers of switches: (i) the access layer (ingress), composed of the ToR switches directly connected to the servers in the rack; (ii) the aggregation layer (middle), formed by aggregation switches referred to as spines and connected to the ToRs; and (iii) the core layer (egress), formed by core switches serving as edges to manage traffic in and out of the DCN (Chen et al., 2016).
The CLOS network has been widely used to build modern IP fabrics, generally referred to as spine‐and‐leaf topologies. Accordingly, in this kind of DCN, commonly named a folded‐CLOS topology, the spine layer represents the aggregation switches (i.e. spines), while the leaf layer is composed of the ToR switches (i.e. leaves). In other words, in a CLOS topology, (i) the leaf layer is composed of ToR switches and (ii) the spine layer is composed of aggregation switches. The spine layer is responsible for interconnecting leaves. CLOS inhibits the transit of traffic through horizontal links (i.e. inside the same layer). Moreover, the CLOS topology scales up the number of ports and makes huge interconnections possible using only a small number of switches. Indeed, increasing the number of switch ports widens the spine layer and, hence, alleviates network congestion. In general, each leaf switch is connected to all spines. In other words, the number of up (respectively, down) ports of each ToR is equal to the number of spines (respectively, leaves). Accordingly, in a DCN of n leaves and m spines, there are n × m wired links. The main reason behind this link redundancy is to enable multipath routing and to mitigate the oversubscription caused by the conventional link‐state Open Shortest Path First (OSPF) routing protocol. In doing so, the CLOS network provides multiple paths for communications to be switched without being blocked.
The CLOS architecture succeeds in ensuring better scalability and path diversity than conventional tree‐based Data Center (DC) topologies. Moreover, this design reduces the bandwidth limitation in the aggregation layer. However, this architecture requires homogeneous switches and deploys a huge number of links.
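The leaf-spine sizing discussed above can be made concrete with a short sketch, assuming (as stated in the text) that every leaf switch connects to every spine; the leaf and spine counts used here are illustrative assumptions.

```python
# Sketch of folded-CLOS (leaf-spine) sizing, assuming each leaf switch
# connects to every spine switch. Leaf/spine counts are illustrative.

def clos_links(leaves: int, spines: int) -> int:
    # One cable per (leaf, spine) pair when every leaf reaches all spines.
    return leaves * spines

def leaf_to_leaf_paths(spines: int) -> int:
    # Two leaves can communicate through any spine: one 2-hop path per spine.
    return spines

print(clos_links(16, 4))      # 64 wired links in a 16-leaf, 4-spine fabric
print(leaf_to_leaf_paths(4))  # 4 equal-cost paths between any pair of leaves
```

The path count per leaf pair equals the number of spines, which is why widening the spine layer directly increases the multipath capacity available for load balancing.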
Fat‐Tree DCN Is a special instance of CLOS‐based DCN introduced by Al‐Fares et al. (2008) in order to remedy the network bottleneck problem existing in the prior tree‐based architectures. Specifically, Fat‐Tree comes with a new way to interconnect commodity Ethernet switches. Typically, it is organized in k pods, where each pod contains two layers of k/2 switches. Each k‐port switch in the lower layer is directly connected to k/2 hosts, and to k/2 of the ports in the aggregation layer. Therefore, there is a total of (k/2)² k‐port core switches, each one connected to one port of each of the k pods. Accordingly, a Fat‐Tree built with k‐port switches supports k³/4 hosts.
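The Fat-Tree sizing rules of Al-Fares et al. (2008) can be checked with a few lines: for k-port switches, the topology has k pods, (k/2)² core switches, and k³/4 hosts. A minimal sketch:

```python
# Fat-Tree sizing from Al-Fares et al. (2008) for k-port switches
# (k must be even): k pods, (k/2)^2 core switches, k^3/4 hosts.

def fat_tree_sizing(k: int) -> dict:
    assert k % 2 == 0, "Fat-Tree requires an even port count k"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,    # k pods x k/2 edge switches each
        "agg_switches": k * half,     # k pods x k/2 aggregation switches each
        "core_switches": half * half, # (k/2)^2 core switches
        "hosts": k ** 3 // 4,         # k/2 hosts per edge switch
    }

print(fat_tree_sizing(4))
# k=4: 4 pods, 8 edge, 8 aggregation, 4 core switches, 16 hosts
```

For commodity 48-port switches this already yields 27648 hosts, which illustrates why the design scales well despite using only cheap, identical switches.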
The main advantage of the Fat‐Tree topology is its capability to deploy identical, cheap switches, which alleviates the cost of designing the DCN. Further, it guarantees an equal number of links in the different layers, which inhibits communication blockage among servers. In addition, this design can considerably mitigate congestion effects thanks to the large number of redundant paths available between any two communicating ToR switches. Nevertheless, the Fat‐Tree DCN suffers from complex connections, and its scalability is closely dependent on the number of switch ports. Moreover, this structure is impacted by possible lower‐layer device failures, which may entail a degradation of DCN performance.
This architecture has been improved by designing new structures based on the Fat‐Tree model, namely, ElasticTree (Heller et al., 2010), PortLand (Mysore et al., 2009), and Diamond (Sun et al., 2014). The main advantage of such topologies is to reduce maintenance cost and enhance scalability by reducing the number of switch layers.
Valiant Load Balancing DCN Architecture Valiant load balancing (VLB) was introduced in order to handle traffic variation and alleviate hotspots when random traffic transits through multiple paths. In the literature, we find mainly two kinds of VLB architectures. First, VL2 is a three‐layer CLOS architecture introduced by Microsoft in Greenberg et al. (2009a). Contrary to Fat‐Tree, VL2 resorts to connecting all servers through a virtual two‐layer Ethernet, located in the same local area network (LAN) as the servers. Moreover, VL2 implements the VLB mechanism and OpenFlow to perform routing while enhancing load balancing. To forward data over multiple equal‐cost paths, it makes use of the equal‐cost multi‐path (ECMP) protocol. The VL2 architecture is characterized by its simple connections and does not require software or hardware modifications. Nevertheless, it still suffers from scalability issues and does not take reliability into account, since the single‐node failure problem persists.
Second, Monsoon architecture (Greenberg et al., 2008), aims to alleviate over‐subscription based on a two‐layer network that connects servers and a third layer for core switches/routers. Unfortunately, it is not compatible with the existing wired DCN architecture.
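The ECMP forwarding used by VL2 can be illustrated with a minimal sketch: the switch hashes a flow's 5-tuple and uses the result to pick one of the equal-cost next hops, so all packets of a flow follow the same path while different flows spread across the fabric. The hash function and names below are illustrative assumptions, not VL2's actual implementation.

```python
# Minimal sketch of ECMP next-hop selection (illustrative, not VL2's
# real hash): hash the flow 5-tuple, then index into the path list so
# every packet of a flow takes the same path.

import hashlib

def ecmp_next_hop(five_tuple: tuple, paths: list) -> str:
    key = "|".join(map(str, five_tuple)).encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)  # deterministic hash
    return paths[digest % len(paths)]               # pick one equal-cost path

paths = ["spine1", "spine2", "spine3", "spine4"]
flow = ("10.0.0.1", "10.0.1.2", 49152, 80, "tcp")
# The same flow always maps to the same spine, avoiding packet reordering,
# while distinct flows are spread across the available spines:
assert ecmp_next_hop(flow, paths) == ecmp_next_hop(flow, paths)
```

Because the mapping is per-flow rather than per-packet, ECMP preserves in-order delivery for TCP, at the cost of possible imbalance when a few large flows hash onto the same path.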
The main idea of flat switch‐centric architectures is to flatten the multiple switch layers down to only two or even one single layer, so as to simplify maintenance and resource management tasks. Several topologies have been proposed for this kind of architecture. First, the authors of Abts et al. (2010) conceive the flattened butterfly (FBFLY) architecture to build an energy‐aware DCN. Specifically, it considers power consumption proportional to the traffic load, and so replaces the 40 Gbps links by several links of lower capacity according to the requested traffic in each scenario. Colored flattened butterfly (C‐FBFLY) (Csernai et al., 2015) is an improved version of FBFLY which makes use of an optical infrastructure in order to reduce cabling complexity while keeping the same control plane. Then, FlatNet (Lin et al., 2012) is also a two‐layer DCN architecture: layer 1 includes a single switch connecting a group of servers, whereas the second layer is recursively formed from one‐layer FlatNets. In doing so, this architecture reduces the number of deployed links and switches compared to the classical three‐layer FatTree topology, while keeping the same performance level. Moreover, FlatNet guarantees fault tolerance thanks to its two‐layer structure and ensures load balancing using efficient routing protocols.
Discussion In conclusion, switch‐centric architectures succeed in relatively enhancing traffic load balancing, and most of these structures support multipath routing. Nevertheless, such a design generally brings at least three layers of switches, which strongly increases cabling complexity and hence limits network scalability. Moreover, the commodity switches commonly deployed in these architectures do not provide the fault tolerance of high‐end switches.
In general, these DCN architectures are conceived in a recursive way where a high‐level structure is formed by several low‐level structures connected in a specific manner. The key insight behind this design is to avoid the bottleneck of a single element failure and enhance network capacity.
The main server‐centric DCN architectures found in the literature include BCube, a recursive server‐centric architecture (Guo et al., 2009a) that makes use of specific topological properties to enable custom routing protocols. Another one is DCell, a recursive architecture built on switches and servers with multiple network interface cards (NICs) (Guo et al., 2008b), whose objective is to increase the number of servers. Finally, CamCube (Abu‐Libdeh et al., 2010) is a switch‐free DCN architecture, specifically modeled as a 3D topology where each server connects to exactly two servers in each of the three dimensions; however, the suppression of switching in favor of direct connections between servers seems unfeasible. In the following, we
