
Energy Efficiency of an Integrated Intra-Data-Center and Core Network With Edge Caching

Matteo Fiorani, Slavisa Aleksic, Paolo Monti, Jiajia Chen, Maurizio Casoni, and Lena Wosinska

Abstract—The expected growth of traffic demand may lead to a dramatic increase in the network energy consumption, which needs to be handled in order to guarantee scalability and sustainability of the infrastructure. There are many efforts to improve energy efficiency in communication networks, ranging from the component technology to the architectural and service-level approaches. Because data centers and content delivery networks are responsible for the majority of the energy consumption in the information and communication technology sector, in this paper we address network energy efficiency at the architectural and service levels and propose a unified network architecture that provides both intra-data-center and inter-data-center connectivity together with interconnection toward legacy IP networks. The architecture is well suited for the carrier cloud model, where both the data-center and telecom infrastructure are owned and operated by the same entity. It is based on the hybrid optical switching (HOS) concept for achieving high network performance and energy efficiency; therefore, we refer to it as an integrated HOS network. The main advantage of integrating core and intra-data-center networks comes from the possibility to avoid the energy-inefficient electronic interfaces between data centers and telecom networks. Our results verify that the integrated HOS network offers clear benefits in terms of energy efficiency and network delay compared to the conventional nonintegrated solution. At the service level, recent studies demonstrated that the use of distributed video cache servers can be beneficial in reducing the energy consumption of intra-data-center and core networks. However, these studies only take into consideration conventional network solutions based on IP electronic switching, which are characterized by relatively high energy consumption. When a more energy-efficient switching technology, such as HOS, is employed, the advantage of using distributed video cache servers becomes less obvious. In this paper we evaluate the impact of video servers employed at the edge nodes of the integrated HOS network to understand whether edge caching could have any benefit for carrier cloud operators utilizing a HOS network architecture. We demonstrate that if the distributed video cache servers are not properly dimensioned they may have a negative impact on the benefit obtained by the integrated HOS network.

Index Terms—Backbone networks; Edge caching; Energy consumption; Hybrid optical switching; Intra-data-center networks; Performance analysis.

I. INTRODUCTION

Although information and communication technology (ICT) can play a fundamental role in enabling a low-carbon economy, the energy and carbon impact of the ICT sector itself is already significant, and it is expected to grow rapidly with the proliferation of connected devices and with the emergence of new services. The energy consumption of the ICT sector can be divided into (i) energy consumed by the user devices, (ii) energy consumed by the telecommunication network infrastructure, and (iii) energy consumed by the data centers. While end user devices are the largest single contributor, the sum of the energy consumed by the telecommunication networks and the data centers amounts to 51% [1] of the total ICT consumption. With the expected growth in Internet and data center traffic [2,3], the energy consumption of telecommunication networks and data centers is destined to increase drastically if the network energy efficiency is not improved. In addition to low-power device technologies, this problem can be addressed on the architectural and service levels.

Taking into consideration the architectural level, we observe that telecommunication networks can generally be divided into three areas: access, metro, and core. Several research papers address the energy consumption of the different network areas [4,5]. It was shown that although access networks are currently the major contributor, the energy consumption of core networks is expected to grow rapidly, as they must support very high capacities in the range of several hundred terabits per second or even petabits per second per node [2]. As for data centers, their energy consumption is divided into energy consumed by the information technology (IT) equipment, energy consumed by the cooling system, and energy consumed by the power supply chain. According to the latest specifications, data centers are designed in such a way that the IT equipment consumes nearly all the energy within the data center.

http://dx.doi.org/10.1364/JOCN.6.000421
Manuscript received September 27, 2013; revised February 15, 2014; accepted February 17, 2014; published March 31, 2014 (Doc. ID 198202).

M. Fiorani (e-mail: [email protected]) and M. Casoni are with the Department of Engineering "Enzo Ferrari," University of Modena and Reggio Emilia, Modena, Italy.

S. Aleksic is with the Institute of Telecommunications, Vienna University of Technology, Vienna, Austria.

P. Monti, J. Chen, and L. Wosinska are with the KTH Royal Institute of Technology, Kista, Sweden.

Fiorani et al. VOL. 6, NO. 4/APRIL 2014/J. OPT. COMMUN. NETW. 421

1943-0620/14/040421-12$15.00/0 © 2014 Optical Society of America


This also means that in modern data centers major energy savings can be achieved by reducing the power consumption of the IT equipment. According to [6], the intra-data-center network, which handles the traffic inside a data center as well as that destined to external networks, currently represents 23% of the IT equipment energy consumption. This number is expected to grow in the future due to the forecasted increase in data center traffic [3]. It is, therefore, of the utmost importance to define new energy-efficient intra-data-center network technologies. In [5,7] it has been shown that the switching infrastructure consumes the major part of the energy in core and intra-data-center networks, and it was pointed out that future research needs to focus on improving the energy efficiency of switching devices. Today, core and intra-data-center networks are based on electronic switching; that is, data transmission is performed in the optical domain, whereas switching and control are done by electronic equipment. Consequently, electrical-to-optical (E/O) and optical-to-electrical (O/E) conversions are performed at each hop, which leads to high energy consumption. To solve this problem, several optical switching solutions have been proposed for core [8,9] and intra-data-center [10,11] networks. In particular, [9,11] proposed two architectures based on hybrid optical switching (HOS) for achieving high performance and energy efficiency in core and intra-data-center networks, respectively. The term hybrid is used to describe the coexistence of different optical switching paradigms, namely packet, burst, and circuit switching.

Meanwhile, the latest Cisco Visual Networking Index forecast [2] reports that consumer Internet video traffic will increase from 57% to 69% of total Internet traffic between 2012 and 2017. As a consequence, energy-efficient video distribution systems are an important tool for maintaining sustainable Internet growth. Video content can either be stored and distributed from a few centralized servers located in large data centers (referred to as the centralized approach), or the most popular video contents can be replicated in cache servers located close to the end users (referred to as the distributed approach). From the energy consumption perspective it is not obvious which approach (i.e., centralized or distributed) is more beneficial. In fact, storing content only in a centralized server decreases the energy consumption for storage while increasing the transport energy requirements. On the other hand, replicating some content closer to the users in distributed cache servers decreases transport energy while increasing the storage energy requirements. A few recent studies [12–15] address this trade-off and conclude that the highest energy efficiency is achieved by storing popular content in cache servers close to the end users.

Recently, communication service providers have been looking to cloud solutions to reduce costs and create a new level of efficiency. In this context, one of the most promising solutions is the carrier cloud model, where both the data centers and the core network are owned by the same entity and the resources are virtualized and shared by multiple tenants. Several large telecom operators are considering a move to this novel business model [16,17]. Carrier clouds could overcome several problems that occur in existing cloud solutions, such as unpredictable and nondeterministic network performance and insufficient availability and security, which severely complicate or even preclude carrier-grade service level agreements. In order to increase both the adaptability to different traffic types and the energy efficiency at the architectural level, this paper proposes a unified network architecture for carrier cloud operators. The architecture is based on HOS and provides both intra-data-center and inter-data-center connectivity as well as interconnection capabilities toward legacy IP networks. This architecture is referred to as an integrated HOS network, in which traffic is carried in the optical domain along the entire path from an aggregation switch inside a data center up to another aggregation switch (in the same or a different data center) or to an edge node serving as an interface to legacy IP networks. In our study, we analyze the structure of such an integrated architecture and evaluate its benefits compared to a nonintegrated HOS architecture as well as a conventional IP network based on electronic switching.

Regarding the service level, we observe that the studies in [12–15] take into consideration only traditional core and intra-data-center networks based on electronic switches. Since these networks are characterized by low energy efficiency, the reduction in transport energy introduced by the distributed storage approach generally outweighs by far the energy consumption of the cache servers. If we consider instead a carrier cloud operator that relies on the integrated HOS network, which is able to achieve high energy efficiency, the energy advantage of the distributed approach might become less obvious. For this reason, in this paper we deploy distributed cache servers at the edge nodes of the integrated HOS network and evaluate their impact on the network performance and energy consumption. To the best of our knowledge, the performance of edge caching in combination with an energy-efficient network concept based on optical switching has not been evaluated so far.

To summarize, the contribution of this paper is twofold, namely, (i) we propose and evaluate an integrated intra-data-center and core network architecture based on the HOS concept for carrier cloud operators, along with a study of its benefits, and (ii) we assess the impact of distributed video cache servers on the proposed integrated HOS network architecture.

The remainder of the paper is organized as follows. In Section II we describe the proposed integrated core and intra-data-center HOS network. Section III introduces the approach used to model the video cache servers. In Section IV the reference network used for the simulations and the energy consumption model are described. Section V presents the simulation results, while Section VI contains some concluding remarks.

II. INTEGRATED INTRA-DATA-CENTER AND CORE NETWORK

Figure 1 shows a high-level representation of the proposed integrated core and intra-data-center network based on HOS. The integrated network provides three different types of interconnection using a unified all-optical infrastructure and a common control plane. The first type of interconnection is between servers inside the same data center, referred to as an intra-data-center interconnection. In Fig. 1 it is represented by a red dotted line to highlight the path over which data are sent using the HOS paradigm. The second type of interconnection is between servers located in different data centers. We refer to it as an inter-data-center interconnection and use a blue dashed line in Fig. 1 to indicate the part of the path traversed in the HOS domain. The third type of interconnection is between servers inside a data center and HOS edge nodes; that is, it provides the server-to-edge interconnections. An example is indicated in Fig. 1 by a green solid line for the HOS path.

It should be noted that in the proposed integrated network, the core and the data centers employ the same unified control plane. A first attempt at an integrated control plane for intra-data-center and core networks in a carrier cloud has recently been proposed in [17]. The authors present a proof of concept for an integrated control plane based on the software-defined networking mechanism. However, the proposed solution is still based on traditional electronic interfaces between data centers and core networks, and hence it is not optimized from the energy-efficiency point of view. In order to minimize energy consumption, in this paper we propose a novel control plane for intra-data-center and core networks based on the HOS network model described in [9,11]. It consists of two layers: the generalized multiprotocol label switching (GMPLS) control layer and the HOS forwarding layer. The GMPLS control layer is in charge of configuring and managing the network virtual topology and consists of three building blocks: routing, signaling, and link management. The HOS forwarding layer performs data aggregation, data scheduling, and resource reservation. It supports three different optical transport mechanisms, namely circuits, bursts, and packets. The HOS forwarding layer has the unique feature of employing a common control packet for managing all three switching paradigms, enabling circuits, bursts, and packets to dynamically share the optical resources. The use of optical bursts in combination with packets and circuits allows the dynamic implementation of different service classes, leading to efficient quality-of-service differentiation.

A. HOS Core Network

The HOS core network provides connectivity among different data centers as well as between data centers and legacy IP networks. As shown in Fig. 1, each node in the HOS core network includes a HOS core switch. If the node is located at the edge of the HOS core network, it is equipped with an electronic switch for interdomain connectivity.

An electronic switch in the HOS edge node ensures interoperability between the core network and the legacy IP networks. In the direction toward the HOS core network, the HOS edge node performs traffic classification and traffic aggregation. In other words, each incoming IP packet is classified based on the value of the differentiated services code point (DSCP) field in the IP header and mapped onto the best-suited optical transport mechanism, as described in [9]. In the direction toward the legacy IP networks, the HOS edge node extracts IP packets and performs IP routing. The HOS edge node is divided into two logical building blocks: the first consists of an electronic switch that performs IP routing, and the second includes all the electronic components required to (i) perform traffic aggregation and classification in the direction toward the HOS core network and (ii) perform IP packet extraction in the direction toward the legacy IP networks. For simplicity, we will refer to the second block as the traffic aggregation block.
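As a concrete illustration of this classification step, the sketch below maps DSCP values to the three HOS transport mechanisms. The grouping of code points and the class-to-mechanism assignment are illustrative assumptions made here; the actual mapping used by the HOS edge node is the one defined in [9].

```python
# Illustrative sketch (not the mapping from [9]): classify an incoming IP packet
# by its DSCP value and pick a HOS transport mechanism for it. The DSCP groups
# and the class-to-mechanism choices are assumptions for illustration only.

# Standard DSCP code points (RFC 4594 service classes), grouped coarsely.
EF = {46}                           # expedited forwarding: lowest loss and delay
AF_HIGH = {34, 36, 38, 26, 28, 30}  # AF4x/AF3x: assured forwarding, high priority
AF_LOW = {18, 20, 22, 10, 12, 14}   # AF2x/AF1x: assured forwarding, low priority

def transport_for(dscp: int) -> str:
    """Map a DSCP value to one of the HOS optical transport mechanisms."""
    if dscp in EF:
        return "circuit"      # two-way reservation, lossless
    if dscp in AF_HIGH:
        return "long_burst"   # long offset time, very low loss
    if dscp in AF_LOW:
        return "short_burst"  # shorter offset time, moderate loss
    return "packet"           # best effort, lowest priority

if __name__ == "__main__":
    for dscp in (46, 34, 18, 0):
        print(dscp, "->", transport_for(dscp))
```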

High-capacity optical switches provide connectivity inside the core network. A HOS core switch can be logically divided into two building blocks, namely the electronic control logic and the optical switching fabric. The electronic control logic consists of three electronic blocks for implementing the GMPLS control layer, the HOS forwarding layer, and the switch control unit.

Fig. 1. Representation of the proposed integrated intra-data-center and core network based on hybrid optical switching.


The optical switching fabric is composed of two large optical switches. A fast optical switch, based on semiconductor optical amplifiers (SOAs), takes care of the transmission of packets and short bursts. A slow optical switch, based on microelectromechanical systems (MEMS), handles the transmission of circuits and long bursts. In the optical switching fabric block we also include the following active optical components: optical amplifiers (OAs), tunable wavelength converters (TWCs), and control information extraction/reinsertion (CIE/R) blocks, which compensate for signal losses in components, reduce the blocking probability, and encode the control information together with the data payload on the same optical carrier, respectively.

For a detailed description of the HOS core network we refer to [9].

B. HOS Intra-Data-Center Network

The HOS intra-data-center network provides connectivity among the servers inside a data center and connects the data center to the HOS core network. It is organized in a three-tier fat-tree topology. The first tier consists of electronic top-of-rack (ToR) switches. In a conventional high-end data center, servers are organized in racks, with each rack typically hosting 48 blade servers. The ToR switches interconnect the servers inside a rack and connect the racks to the second tier of the intra-data-center network, which is composed of the HOS aggregation nodes. The HOS aggregation nodes perform the same functions inside a data center as the HOS edge nodes in the HOS core network. In particular, in the direction toward the network core, the HOS aggregation nodes perform traffic classification and traffic aggregation, while in the direction toward the data center servers, they extract the IP packets and perform IP routing. The HOS aggregation nodes consist of the same logical building blocks as the HOS edge nodes. The main difference between HOS edge and HOS aggregation nodes is that the HOS edge nodes can also include the video cache servers, which will be further elaborated in Section III. The third tier of the intra-data-center network is represented by a single large HOS core node. This node has exactly the same architecture as the HOS core switch used in the core network. For more details we refer to [11].
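The three-tier structure described above can be written down directly as a list of links. The sketch below uses the rack size quoted here (48 servers per ToR switch) together with the data-center dimensions introduced later in Section IV (76,800 servers and 64 ToR switches per aggregation node); it is only an illustration of the topology, not part of the authors' C++ simulator.

```python
# Build the three-tier intra-data-center topology described above as a simple
# list of links: 48 servers per ToR switch, 64 ToR switches per HOS aggregation
# node, and every aggregation node attached to the single HOS core node.

def build_intra_dc_topology(n_servers=76_800, servers_per_tor=48, tor_per_aggr=64):
    n_tor = n_servers // servers_per_tor        # 1600 ToR switches
    n_aggr = n_tor // tor_per_aggr              # 25 HOS aggregation nodes
    links = []
    for t in range(n_tor):
        links.append((f"aggr{t // tor_per_aggr}", f"tor{t}"))      # tier 2 <-> tier 1
        links.extend((f"tor{t}", f"server{t * servers_per_tor + s}")
                     for s in range(servers_per_tor))              # tier 1 <-> servers
    links.extend(("dc-core", f"aggr{a}") for a in range(n_aggr))   # tier 3 <-> tier 2
    return links

if __name__ == "__main__":
    links = build_intra_dc_topology()
    # 76,800 server links + 1,600 ToR uplinks + 25 aggregation uplinks
    print(len(links))
```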

III. EDGE CACHE

To evaluate the impact of distributed video cache servers on the proposed integrated intra-data-center and core HOS network, we extend the HOS edge node architecture described in Section II to include the video cache servers. The extended architecture of a HOS edge node with cache servers is shown in Fig. 2. It can be logically divided into three building blocks. Two of them have already been mentioned in Section II, namely the electronic switch block and the traffic aggregation block. The former includes the switch, the GMPLS module, and the input electronic line cards, while the latter comprises the classifier, the conditioner, the assembler, the resource allocator, and the packet extractor. The last block, which represents an extension to the architecture previously presented in [9,11], is related to the caching operations and consists of the content tracker, the ToR switch, and the video cache servers. The content tracker interacts with the HOS control plane in order to keep track of all the video content inside the cache servers, process the incoming video requests, and update the cache servers.
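The paper specifies what the content tracker does (track cached videos, process requests, update the cache servers) but not its replacement policy. The minimal sketch below therefore assumes a least-recently-used policy and a single byte budget across the cache servers of one edge node; it is only meant to make the role of the block concrete.

```python
from collections import OrderedDict

class ContentTracker:
    """Minimal sketch of a content tracker for a HOS edge node cache.

    Assumptions (not from the paper): LRU replacement and one flat byte
    budget covering all cache servers hosted in the node.
    """

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.index = OrderedDict()  # video_id -> size in bytes, kept in LRU order

    def request(self, video_id: str, size_bytes: int) -> bool:
        """Return True on a cache hit, False on a miss (content is then cached)."""
        if video_id in self.index:
            self.index.move_to_end(video_id)    # refresh recency on a hit
            return True
        # Miss: the content is fetched from the origin data center, then cached.
        while self.used + size_bytes > self.capacity and self.index:
            _, evicted_size = self.index.popitem(last=False)  # evict the LRU item
            self.used -= evicted_size
        if size_bytes <= self.capacity:
            self.index[video_id] = size_bytes
            self.used += size_bytes
        return False
```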

Fig. 2. Architecture of the HOS edge node with video cache servers.


As already mentioned in Section I, the impact of distributed cache servers on network energy efficiency has been addressed in previous studies [12–15], which mainly focused on electronically switched networks. The rationale behind these works is that distributed cache servers reduce the traffic load, leading to a lower number of electronic switch ports used in the core and intra-data-center networks. However, the electronic switching devices commercially available today do not implement dynamic switching-off of the ports, and thus their energy consumption is almost independent of the traffic load. Techniques for dynamically switching off the line cards (LCs) have been proposed in [18,19], but their efficiency in real network scenarios has still to be proven. In fact, scheduling the switching-off of the LCs in a packet-switched network is a very challenging task because of the stochastic nature of the traffic and the usually very small interarrival time between two successive packets. The novelty of our approach consists in applying the caching concept to a HOS network, where we assume that all the optical components (in the optical switching fabric of the HOS core nodes) are turned off when they are inactive. This is not as challenging as turning off electronic switch ports [9]. In fact, with two parallel optical switches, only one needs to be active to serve traffic from a particular port at a specified time. In addition, in a HOS network, circuits and bursts are scheduled a priori; thus the incoming traffic is more predictable than in a traditional packet-switched network, that is, one where the traffic is processed on a packet-by-packet basis.

IV. MODELING APPROACH

In this section, we describe the assumptions used to model and evaluate the performance of the proposed integrated intra-data-center and core HOS network with edge caching. First, we present the power consumption model, followed by the description of the reference network scenario, and finally we introduce the performance metrics.

A. Power Consumption Model

The total power consumption of the integrated core and intra-data-center HOS network is given by the sum of the power consumed by each node in the core network ($P_{\rm Node}^{i}$) and the power consumed by the data centers ($P_{\rm DC}^{j}$):

$P_{\rm Network} = \sum_{i=1}^{N_{\rm Node}} P_{\rm Node}^{i} + \sum_{j=1}^{N_{\rm DC}} P_{\rm DC}^{j}$,   (1)

where $N_{\rm Node}$ is the number of nodes in the core network and $N_{\rm DC}$ is the number of data centers. Each node in the HOS core network performs both edge and core functions. The power consumption of the $i$th node in the network is determined by

$P_{\rm Node}^{i} = P_{\rm Edge}^{i} + P_{\rm Core}^{i}$,   (2)

where $P_{\rm Edge}^{i}$ is the power consumption of the $i$th HOS edge part and $P_{\rm Core}^{i}$ is the power consumption of the $i$th HOS core switch. The power consumption of the $i$th HOS edge part is given by the sum of the power consumption of its building blocks (Section II):

$P_{\rm Edge}^{i} = N_{\rm F}^{{\rm Edge},i} \cdot N_{\rm W} \cdot (P_{\rm ES} + P_{\rm A}) + P_{\rm Cache}^{i}$,   (3)

where $N_{\rm F}^{{\rm Edge},i}$ is the total number of fibers connected to HOS edge node $i$ and $N_{\rm W}$ is the number of wavelengths per fiber, which is assumed to be the same for all nodes. In the formula, $P_{\rm ES}$ is the power consumption of the electronic switch block per port and $P_{\rm A}$ is the power consumption of the traffic aggregation block per port. The number of ports of the switch is given by the product of the number of wavelength channels per fiber and the number of fibers ($N_{\rm F}^{{\rm Edge},i} \cdot N_{\rm W}$); that is, it represents the total number of wavelength channels at a HOS edge node. Finally, $P_{\rm Cache}^{i}$ is the power consumption of the cache block. The power consumption of the cache block of the $i$th HOS edge node is obtained through the following formula:

$P_{\rm Cache}^{i} = P_{\rm CT} + P_{\rm ToR} + N_{\rm CS}^{i} \cdot P_{\rm CS}$,   (4)

where $P_{\rm CT}$ is the power consumption of the content tracker, $P_{\rm ToR}$ is the power consumption of the ToR switch, and $P_{\rm CS}$ is the power consumption of a cache server. Finally, $N_{\rm CS}^{i}$ represents the number of cache servers hosted in the $i$th HOS edge node. The cache servers are assumed to have a fixed storage capacity of 1 TByte. The power consumption of the $i$th HOS core switch is also computed by summing up the power consumption of its building blocks, as defined by Eq. (5):

$P_{\rm Core}^{i} = P_{\rm ECL}^{i} + P_{\rm OSF}^{i}$,   (5)

where $P_{\rm ECL}^{i}$ is the power consumption of the electronic control logic and $P_{\rm OSF}^{i}$ is the power consumption of the optical switching fabric of the $i$th HOS core switch. The power consumption of the control logic of the $i$th HOS core switch is given by Eq. (6):

$P_{\rm ECL}^{i} = N_{\rm F}^{{\rm Core},i} \cdot N_{\rm W} \cdot P_{\rm GMPLS} + P_{\rm HOS} + P_{\rm SC}$,   (6)

where $N_{\rm F}^{{\rm Core},i}$ is the total number of fibers connected to HOS core node $i$. In Eq. (6), $P_{\rm GMPLS}$ is the power consumption of the GMPLS block per port, $P_{\rm HOS}$ is the power consumption of the HOS forwarding layer, and $P_{\rm SC}$ is the power consumption of the switch control unit. The power consumption of the optical switching fabric of a HOS core node depends on the traffic because we assume that optical switch ports can be turned off when they are inactive. To compute the power consumption of the optical switching fabric of the $i$th HOS core switch, we use Eq. (7):

$P_{\rm OSF}^{i} = N_{\rm SOA}^{{\rm active},i} \cdot P_{\rm SOA} + N_{\rm MEMS}^{{\rm active},i} \cdot P_{\rm MEMS} + N_{\rm TWC}^{{\rm active},i} \cdot P_{\rm TWC} + N_{\rm F}^{{\rm Core},i} \cdot (N_{\rm W} \cdot P_{\rm CIE/R} + 2 \cdot P_{\rm EDFA})$.   (7)

Here, $N_{\rm SOA}^{{\rm active},i}$, $N_{\rm MEMS}^{{\rm active},i}$, and $N_{\rm TWC}^{{\rm active},i}$ represent the numbers of active SOA-switch ports, MEMS-switch ports, and TWCs of the $i$th HOS core node, respectively. These values depend on the traffic load and are computed through simulations. In Eq. (7), $P_{\rm SOA}$, $P_{\rm MEMS}$, and $P_{\rm TWC}$ are the power consumption of the SOA switch per port, the MEMS switch per port, and a TWC, respectively. Finally, $P_{\rm CIE/R}$ and $P_{\rm EDFA}$ are the power consumption of the CIE/R block and of the OAs. When computing the power consumption of the HOS intra-data-center networks we exclude from our analysis the power consumed by the servers and consider only the power consumed by the network equipment, that is, by the intra-data-center network itself. The power consumption of the $j$th intra-data-center network is computed using Eq. (8):

$P_{\rm DC}^{j} = N_{\rm ToR}^{j} \cdot P_{\rm ToR} + N_{\rm Aggr}^{j} \cdot P_{\rm Aggr} + P_{\rm Core}^{j}$,   (8)

where $N_{\rm ToR}^{j}$ and $N_{\rm Aggr}^{j}$ are the numbers of ToR switches and HOS aggregation switches in the $j$th data center, respectively. Here, $P_{\rm Aggr}$ represents the power consumption of a HOS aggregation switch and $P_{\rm Core}^{j}$ represents the power consumption of the HOS core switch inside the $j$th data center. We assume that each HOS aggregation switch is connected to the corresponding HOS core switch in the data center using one fiber and that the number of ToR switches connected to each aggregation node is equal to the number of wavelength channels per fiber ($N_{\rm W}$). To calculate the power consumption of a HOS aggregation switch we use Eq. (9):

$P_{\rm Aggr} = N_{\rm W} \cdot (P_{\rm ES} + P_{\rm A})$.   (9)

Finally, the power consumption of the HOS core switch inside the $j$th data center is computed using Eq. (5), replacing the index $i$ with the index $j$. The power consumption values of all the considered network components are reported in Table I; they have been obtained by collecting data from data sheets as well as from research papers [9,11].

B. Reference Network Scenario

To assess the performance of the proposed integrated intra-data-center and core HOS network with edge caching, we developed a custom event-driven C++ simulator. In the following we report the main parameters used in our simulations and present the model applied to generate the network traffic.

We denote by $N_{\rm Node}$ the number of nodes in the network and by $N_{\rm DC}$ the number of nodes connected to a data center. We consider the Pan-European network [20], composed of 28 nodes (i.e., $N_{\rm Node} = 28$) and 41 links, as the reference network topology. We assume that 25% of the network nodes are connected to a data center, that is, $N_{\rm DC} = 7$. In each simulation we randomly connect the data centers to different nodes of the network. We assume that all the data centers have the same size and are equipped with 76,800 servers organized in racks. In each rack, 48 servers are connected to a ToR switch using dedicated 1 Gbps links [21]. The number of ToR switches per data center is given by the ratio between the number of servers and the number of servers per rack, that is, $N_{\rm ToR}^{j} = N_{\rm ToR} = 1600$ for every data center $j$. As many as 64 ToR switches are connected to a HOS aggregation switch using 40 Gbps links, so each data center is equipped with $N_{\rm Aggr}^{j} = N_{\rm Aggr} = 25$ HOS aggregation nodes. Each HOS aggregation node is connected to the HOS core node inside the data center using one fiber. The HOS core switch inside a data center is therefore equipped with 25 fiber ports for interconnecting all the HOS aggregation switches; in addition, it employs 7 fiber ports for the interconnection toward the Pan-European network, giving 32 fiber ports in total. The number of fiber ports for the interconnection between a data center and the Pan-European network has been chosen according to [3], which reports that currently 76% of the traffic generated inside a data center is directed to a server within the same data center (internal traffic).

We assume that each core node in the Pan-European network also provides edge functionality. As described before, each data center is connected to a node of the Pan-European network using seven fibers. To ensure that the network nodes have enough capacity to support the connection toward a data center without becoming a bottleneck, we assume that each link in the network is composed of four fibers. We also assume that each HOS core node is connected to the corresponding HOS edge node using a number of fibers equal to the node degree. As a result, the number of fibers attached to the $i$th HOS edge node ($N_{\rm F}^{{\rm Edge},i}$) is equal to the node degree, while the number of fibers connected to the $i$th HOS core node ($N_{\rm F}^{{\rm Core},i}$) is equal to five times the node degree (four times the node degree for the interconnection toward other HOS core nodes and one time the node degree for the interconnection toward the HOS edge node), plus seven fibers in case the HOS core node is directly connected to a data center. Each fiber carries 64 wavelength channels ($N_{\rm W} = 64$), each operated at 40 Gbps. As for the edge caching, we assume that all the network nodes that are not directly connected to a data center are equipped with the same number of cache servers ($N_{\rm CS}^{i} = N_{\rm CS}$). The network nodes that connect data centers with the HOS core network do not comprise any cache servers.
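The fiber counts stated above follow mechanically from these assumptions; a short sketch that reproduces them is given below, with the node degree left as an input because it differs from node to node in the Pan-European topology.

```python
# Fiber counts implied by the reference scenario: each network link carries four
# fibers, each HOS core node also connects to its co-located HOS edge node with
# one fiber per unit of node degree, and seven extra fibers connect an attached
# data center. Each fiber carries N_W = 64 wavelengths at 40 Gbps.

DC_FIBERS = 7   # fibers between a data center and its HOS core node

def edge_node_fibers(node_degree: int) -> int:
    """N_F^{Edge,i}: one fiber per unit of node degree."""
    return node_degree

def core_node_fibers(node_degree: int, hosts_dc: bool) -> int:
    """N_F^{Core,i}: 4*degree toward other core nodes plus degree toward the
    co-located edge node, plus 7 fibers when a data center is attached."""
    return 5 * node_degree + (DC_FIBERS if hosts_dc else 0)

def dc_core_switch_fiber_ports(n_aggr: int = 25) -> int:
    """Fiber ports of the HOS core switch inside a data center: 25 + 7 = 32."""
    return n_aggr + DC_FIBERS

if __name__ == "__main__":
    print(core_node_fibers(3, hosts_dc=True), dc_core_switch_fiber_ports())  # 22 32
```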

TABLE I
POWER CONSUMPTION OF THE NETWORK COMPONENTS [9,11]

Component                                                     Power [W]
Electronic switching block per port ($P_{\rm ES}$)            320
Traffic aggregation block per port ($P_{\rm A}$)              159
Content tracker ($P_{\rm CT}$)                                330
Top-of-rack switch ($P_{\rm ToR}$)                            650
Cache server ($P_{\rm CS}$)                                   450
GMPLS control layer per port ($P_{\rm GMPLS}$)                6.75
HOS forwarding layer ($P_{\rm HOS}$)                          570
Switch control unit ($P_{\rm SC}$)                            300
SOA switch per port ($P_{\rm SOA}$)                           20
MEMS switch per port ($P_{\rm MEMS}$)                         0.1
Tunable wavelength converter ($P_{\rm TWC}$)                  1.69
Control information extraction/reinsertion ($P_{\rm CIE/R}$)  17
Optical amplifier ($P_{\rm EDFA}$)                            14
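To make the power model concrete, the sketch below evaluates Eqs. (3)-(9) with the Table I values for one data center and one edge node of the reference scenario. The numbers of active SOA ports, MEMS ports, and TWCs in Eq. (7) are outputs of the simulation in the paper, so here they are simply arbitrary inputs chosen for illustration.

```python
# Minimal evaluation of Eqs. (3)-(9) with the Table I values (all in watts).
# The active SOA/MEMS/TWC counts in Eq. (7) are simulation outputs in the paper
# and are therefore plain inputs here, with arbitrary example values below.

P = dict(ES=320, A=159, CT=330, ToR=650, CS=450, GMPLS=6.75, HOS=570,
         SC=300, SOA=20, MEMS=0.1, TWC=1.69, CIE_R=17, EDFA=14)
N_W = 64  # wavelengths per fiber

def p_edge(n_fibers, n_cache_servers):                               # Eqs. (3)-(4)
    p_cache = P["CT"] + P["ToR"] + n_cache_servers * P["CS"]
    return n_fibers * N_W * (P["ES"] + P["A"]) + p_cache

def p_core(n_fibers, n_soa_active, n_mems_active, n_twc_active):     # Eqs. (5)-(7)
    p_ecl = n_fibers * N_W * P["GMPLS"] + P["HOS"] + P["SC"]
    p_osf = (n_soa_active * P["SOA"] + n_mems_active * P["MEMS"]
             + n_twc_active * P["TWC"]
             + n_fibers * (N_W * P["CIE_R"] + 2 * P["EDFA"]))
    return p_ecl + p_osf

def p_data_center(n_tor, n_aggr, core_kwargs):                       # Eqs. (8)-(9)
    p_aggr = N_W * (P["ES"] + P["A"])
    return n_tor * P["ToR"] + n_aggr * p_aggr + p_core(**core_kwargs)

if __name__ == "__main__":
    # One data center of the reference scenario: 1600 ToR switches, 25 aggregation
    # switches, a core switch with 32 fibers; active port counts chosen arbitrarily.
    dc = p_data_center(1600, 25, dict(n_fibers=32, n_soa_active=100,
                                      n_mems_active=400, n_twc_active=200))
    print(f"one data-center network: {dc / 1e6:.2f} MW")
    # One HOS edge node with node degree 3 and 10 cache servers (10 TByte).
    print(f"one edge node: {p_edge(3, 10) / 1e3:.1f} kW")
```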


The cache size of a HOS edge node is defined as the sum of the storage capacities, expressed in bytes, of all the video cache servers hosted in the node. Furthermore, we define the video content hit rate as the probability that a video request arriving at a HOS edge node results in a cache hit and is thus served by the local cache servers. The cache hit rate depends mainly on the cache size. Several studies report the cache hit rate as a function of the cache size in real networks based on the YouTube video distribution infrastructure [22,23]. The results of these studies show that high video hit rates can be achieved even with small cache sizes and that the cache hit rate exhibits a logarithmic growth as a function of the cache size. As a consequence, increasing the cache size beyond a certain value has a limited impact on the video content hit rate. In our simulation model, we assume that the video content popularity follows a Zipf distribution with a library of 2 million objects and a skew parameter equal to 0.6. These are typical assumptions for simulating a YouTube-like video content delivery service [14,15], and they lead to cache hit rates consistent with those presented in [22,23]. We also assume that the size of the video contents is uniformly distributed between 100 and 500 MByte [15], with an average video size of 300 MByte; consequently, the library amounts on average to 600 TByte.
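A quick way to see the log-like behavior is to compute the best-case hit rate of a cache that always holds the most popular objects under the stated Zipf popularity (2 million objects, skew 0.6, 300 MByte average size). This is an optimistic bound, not the caching dynamics of the simulator, which serves requests with whatever content happens to be cached.

```python
import itertools

# Optimistic (best-case) cache hit rate under a Zipf popularity law: a cache of
# C bytes holding the most popular objects captures the cumulative popularity of
# the first k = C / avg_size objects. Real caches track popularity imperfectly.

LIBRARY_SIZE = 2_000_000      # number of video objects in the library
SKEW = 0.6                    # Zipf skew parameter
AVG_VIDEO_BYTES = 300e6       # 300 MByte average video size

# Zipf popularity weights and their running totals, computed once.
weights = [1.0 / (rank ** SKEW) for rank in range(1, LIBRARY_SIZE + 1)]
cumulative = list(itertools.accumulate(weights))

def best_case_hit_rate(cache_bytes: float) -> float:
    """Cumulative popularity of the most popular objects that fit in the cache."""
    k = min(int(cache_bytes / AVG_VIDEO_BYTES), LIBRARY_SIZE)
    return cumulative[k - 1] / cumulative[-1] if k > 0 else 0.0

if __name__ == "__main__":
    for tbyte in (5, 10, 20, 40):
        print(f"{tbyte:>3} TByte cache -> hit rate <= {best_case_hit_rate(tbyte * 1e12):.2f}")
```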

The IP traffic arriving at the HOS edge nodes from the legacy IP networks is modeled using a Poisson process. We assume that 57% of this traffic consists of requests for video content [2]. A request for video content can either be served locally by the video cache servers in the HOS edge node, if the required content is available in the cache, or be forwarded to the origin server located in one of the data centers. In our simulations we also take into account the possibility that some of the traffic that arrives at a HOS edge node is destined to another network node, that is, not to a data center. We refer to this traffic as edge-to-edge traffic. Even if it is not directly related to our analysis, the edge-to-edge traffic is important because it has an impact on the data losses and the delays as well as on the energy consumption. For the traffic generated by the servers, we implemented a more complex traffic model. According to [24], the interarrival time distribution of the packets generated inside a data center can be modeled using a lognormal distribution. We therefore model the servers as finite-state machines with two states, namely the lognormal state and the video-transfer state. In the lognormal state, the servers generate IP packets with lognormally distributed interarrival times. The IP packets generated by the servers in the lognormal state can be addressed either to a server in the same data center, to a server in a different data center, or to a specific legacy IP network connected to a HOS edge node. When a server receives a request for video content from an edge node, it switches to the video-transfer state, in which it transmits IP packets at a constant bit rate to the requesting HOS edge node. When all the video content has been transmitted, the server automatically switches back to the lognormal state.
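The two-state server model can be sketched as follows. The lognormal parameters, the video streaming bit rate, and the packet size below are placeholders, since the paper does not report their numerical values here.

```python
import random

class ServerTrafficModel:
    """Two-state traffic source for one data-center server (sketch).

    In the 'lognormal' state, IP packets are generated with lognormally
    distributed interarrival times [24]; on a video request the server moves to
    the 'video' state and streams at a constant bit rate until the content has
    been sent. The mu/sigma, bit-rate, and packet-size values are placeholders,
    not parameters taken from the paper.
    """

    def __init__(self, mu=-3.0, sigma=1.0, video_rate_bps=10e6, packet_bits=12_000):
        self.mu, self.sigma = mu, sigma
        self.video_rate_bps = video_rate_bps
        self.packet_bits = packet_bits
        self.state = "lognormal"
        self.remaining_video_bits = 0

    def start_video(self, video_bytes: float) -> None:
        """Switch to the video-transfer state for a content of the given size."""
        self.state = "video"
        self.remaining_video_bits = video_bytes * 8

    def next_interarrival(self) -> float:
        """Seconds until this server emits its next IP packet."""
        if self.state == "video":
            self.remaining_video_bits -= self.packet_bits
            if self.remaining_video_bits <= 0:
                self.state = "lognormal"        # transfer finished
            return self.packet_bits / self.video_rate_bps  # constant bit rate
        return random.lognormvariate(self.mu, self.sigma)
```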

C. Performance Metrics

The performance of the proposed integrated intra-data-center and core network architecture based on HOS is assessed in terms of energy consumption, average delay, and average data loss. The energy consumption is measured in joules per bit (J/b) and is computed as the ratio between the total network power consumption in watts and the total network throughput in bits per second.

The delay is defined as the time difference between the instant when an IP packet is generated (i.e., either by a server in a data center, by a cache server in a HOS edge node, or by a user of a network connected to a HOS edge node) and the instant when the IP packet is received (i.e., either by the destination server or by the destination HOS edge node). The global average network delay is defined as the mean value of the delays over all IP packets measured during a simulation run. The IP packets that traverse the HOS network can be carried over different transport mechanisms. We refer to the packet delay as the delay experienced by IP packets that are transmitted as optical packets through the HOS network. Similarly, the short burst delay, long burst delay, and circuit delay are the delays experienced by IP packets that are transmitted through the HOS network over a short burst, a long burst, or a circuit, respectively.

While computing the data loss rates, we assume that all the electronic switches introduce negligible losses. As a consequence, losses in the core and intra-data-center networks may happen only in the HOS core switches. We define the packet loss rate as the ratio between the number of optical packets that are lost along a path through the HOS network and the total number of generated packets. Similarly, the short burst and long burst loss rates are defined as the ratio between the number of lost and the number of generated short and long bursts, respectively. Circuits are established using a two-way reservation mechanism, and consequently the data transmitted over circuits do not experience any losses. However, in heavily loaded networks a circuit establishment request could be refused (i.e., blocked) by a core node. We therefore define the circuit establishment failure probability as the ratio between the number of blocked and the number of generated circuits.

We evaluate the above-mentioned performance metrics for different values of the network load. We define the load as the ratio between the total amount of traffic offered to the network by external sources (servers and legacy IP networks) and the maximum amount of traffic that can be handled by the network, that is, the network capacity.
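For reference, the metrics just described can be summarized compactly as follows; the symbols $E_b$, $L_x$, and $T$ are shorthand introduced here, not notation from the paper, and all quantities are measured per simulation run.

```latex
\[
  E_b = \frac{P_{\mathrm{Network}}\,[\mathrm{W}]}{T\,[\mathrm{bit/s}]},
  \qquad
  L_x = \frac{\text{lost units of type } x}{\text{generated units of type } x},
  \quad x \in \{\text{packet},\ \text{short burst},\ \text{long burst}\},
\]
\[
  \text{load} = \frac{\text{traffic offered by servers and legacy IP networks}}
                     {\text{network capacity}}.
\]
```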

V. NUMERICAL RESULTS

This section presents a performance analysis of the proposed integrated intra-data-center and core HOS network architecture with edge caching. First we comment on the benefits that a carrier cloud operator can achieve by employing the integrated HOS network instead of either a nonintegrated HOS network or a conventional IP network. Then we present and discuss the impact of using distributed cache servers on an integrated HOS network.

A. Integrated HOS Network

To better understand the results presented in this section, we first explain the difference between the integrated and nonintegrated HOS architectures. In the nonintegrated HOS architecture, to interconnect data centers and core networks, we employ (i) HOS edge nodes (one per data center) at the core network side and (ii) HOS data-center-to-core interfaces at the data center side. These components are shown in Fig. 3. The HOS data-center-to-core interfaces perform traffic classification, conditioning, and assembling according to the data center policies, while the HOS edge nodes perform the same functions according to the policies used in the core network. The internal architecture of the HOS data-center-to-core interfaces is the same as that of the HOS aggregation switches used inside the data center. In the integrated HOS network, there is no need for HOS edge nodes and HOS data-center-to-core interfaces to connect data centers to the core network, because both data center and core network policies are considered when processing the data center traffic in the HOS aggregation nodes. Here, the HOS edge nodes are only needed at the customer (cloud consumer) end.

In Fig. 4 we compare the energy consumption per bit as a function of the network load for the integrated HOS network, the nonintegrated HOS network, and a conventional IP network. The conventional IP network has a core and an intra-data-center network based on electronic switching. For comparative purposes, we also consider an IP core network able to put the LCs into sleep mode dynamically during idle times [18,19]. In our simulations, we assumed that all the network nodes that are not directly connected to a data center are equipped with $N_{\rm CS} = 10$ cache servers, resulting in a total cache size of 10 TByte per node, which corresponds to 1/60 of the library.
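The 1/60 figure follows directly from the assumptions stated in Section IV:

```latex
\[
  \text{library} = 2\times 10^{6}\ \text{videos} \times 300\ \mathrm{MByte}
                 = 600\ \mathrm{TByte},
  \qquad
  \text{cache per node} = N_{\mathrm{CS}} \times 1\ \mathrm{TByte} = 10\ \mathrm{TByte}
                        = \tfrac{1}{60}\ \text{of the library}.
\]
```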

Fig. 3. Interconnection between data center and core network in the nonintegrated HOS architecture.

Fig. 4. Energy consumption per bit as a function of the input load. The energy consumption per bit is the ratio between the network power consumption and the network throughput. (a) Overall for core and intra-data-center networks and (b) core and intra-data-center networks shown separately.


In Fig. 4(a) we show the overall energy consumption per bit. The figure shows that by employing a sleep-based technique it is possible to achieve large energy savings with respect to current electronic IP networks, which leave all LCs in the active mode during idle times. It is also evident that, even if a sleep-based technique is employed in an IP network, a HOS network is still able to achieve significantly lower energy consumption values. This is because the HOS networks are based on an energy-efficient optical switching technology that benefits from transmitting circuits and long bursts using an optical switch with low power consumption and relatively slow switching time, while using a small number of fast optical switches for the transmission of packets and short bursts. The benefit of using HOS becomes more evident at high loads, where sleeping is not able to provide a significant improvement. Figure 4(a) also shows the improvement in energy efficiency offered by the integrated HOS network with respect to the nonintegrated HOS network. This increment in energy efficiency may seem small, but it is worth noting that at the very high traffic volumes forecasted in [2,3] and assumed in this paper, even a reduction of a few nanojoules per bit can result in significant overall energy savings. For instance, at a network load of 35% the integrated HOS network consumes 4 nJ/b less than the nonintegrated HOS network, which translates into a total of almost 2 MW saved.
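The 2 MW figure is simply the per-bit saving multiplied by the aggregate throughput; the throughput value below is the one implied by the quoted numbers rather than a quantity reported separately in the text.

```latex
\[
  \Delta P = \Delta E_b \cdot T
           \approx 4\ \mathrm{nJ/bit} \times 5\times 10^{14}\ \mathrm{bit/s}
           = 2\times 10^{6}\ \mathrm{W} = 2\ \mathrm{MW}.
\]
% The quoted 2 MW saving thus corresponds to an aggregate throughput of roughly
% 500 Tbit/s at the 35% load point.
```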

In Fig. 4(b) we show separately the energy consumption per bit in the core network and in the intra-data-center networks. The energy consumption per bit of the core network is given as the ratio of the core network power consumption to the core network throughput. In the nonintegrated HOS network the power consumption of the core includes the HOS edge nodes dedicated to the interconnection toward the data centers. Similarly, the energy consumption per bit of the intra-data-center networks is given as the ratio of the total power consumption of the intra-data-center networks to their total throughput. In the nonintegrated HOS architecture the power consumption of the intra-data-center networks includes the HOS aggregation switches dedicated to the interconnection toward the core. From the figure we draw two important observations. First, the core network is more energy efficient than the intra-data-center networks. In fact, according to our calculations the power consumption of the intra-data-center networks is always much higher than the power consumption of the core network for similar amounts of carried traffic. The difference mainly comes from the ToR switches, which introduce an extra level of aggregation in the intra-data-center networks that is not present in the core network [see Eqs. (2) and (8)]. The ToR switches consume a very large amount of power and dominate the power consumption of the intra-data-center networks because of their very large number in current high-capacity data centers based on the three-tier fat-tree network topology. This fact is more evident for the integrated HOS network, where we observe that at a network load of 35% the energy consumption per bit of the core network is 5 times lower than that of the intra-data-center network. Second, the integrated approach has a higher beneficial impact on the energy consumption per bit of the core network than on that of the intra-data-center networks. In fact, when comparing the energy consumption of the integrated and the nonintegrated HOS networks, we observe that at a network load of 35% the integrated approach reduces the energy consumption per bit by 30.5% for the core network and by 3.5% for the intra-data-center network. This is because the additional HOS edge nodes, used in the nonintegrated HOS network to connect toward the data centers, have a strong impact on the total power consumption of the core network. This impact is higher than the impact of the additional HOS aggregation switches used inside the intra-data-center networks.

In Fig. 5 we compare the average network delays as a function of the network load. Figure 5(a) shows the average delays in the integrated HOS network, while Fig. 5(b) presents the average delays in the nonintegrated HOS network. The figures clearly demonstrate that the integrated approach leads to better delay performance and reduces the global average delay of IP packets by more than 1 ms at all loads. In particular, the integrated approach significantly reduces the delays of IP packets transmitted over short and long bursts.

Fig. 5. Average network delays as a function of the input load for the integrated and the nonintegrated HOS networks. (a) Integrated HOS network and (b) nonintegrated HOS network.


This is due to the fact that bursts employ a mixed timer-length assembly algorithm [9] that may take from several hundreds of microseconds up to a few milliseconds. In the nonintegrated HOS network, the bursts must be disassembled and reassembled in the electronic interfaces between a data center and the core network, leading to a strong increase in the overall network delay.

In this paper we assume that the electronic components introduce negligible losses. As a consequence, the data loss rates in the integrated HOS network and in the nonintegrated HOS network are the same. In Fig. 6 we show the average data loss rates as a function of the network load. The optical packets are scheduled with the lowest priority, and thus they experience the highest losses. Optical bursts are scheduled a priori thanks to the offset time, so they receive a sort of prioritized handling in comparison to packets. In particular, long bursts are characterized by long offset times and show loss rates almost three orders of magnitude lower than packets and almost two orders of magnitude lower than short bursts. Finally, circuits are scheduled with the highest priority and achieve lossless operation and negligible establishment failure probabilities in our simulations. To understand where in the network the highest losses occur, we also plot in Fig. 6 the average loss rates for the intra-data-center, inter-data-center, and server-to-edge interconnections. We observe that the average loss rates of the inter-data-center interconnections are always the highest. This is because in the inter-data-center interconnections the data need to cross on average the highest number of HOS core switches (in both the HOS core network and the intra-data-center networks). The lowest average loss rates are instead achieved by the intra-data-center interconnections, where data always cross a single HOS core switch inside the data center.

B. Impact of Edge Caching

In Fig. 7 we show the energy consumption per bit of the integrated HOS network against the network load for different values of the cache size. To vary the cache size we change the number of cache servers per HOS edge node, that is, the value of $N_{\rm CS}$. We always assume that the network nodes connected to a data center are not equipped with local cache servers. To understand the results shown in Fig. 7 it should be noted that we do not consider dynamic switching-off of the electronic LCs, and consequently the energy consumption of the electronic components is independent of the network load. Only the power consumption of the optical switching fabric of the HOS core nodes, that is, $P_{\rm OSF}$, changes with the network load. Furthermore, it should be recalled that the energy consumption per bit is defined as the ratio between the network power consumption in watts and the network throughput in bits per second. Figure 7 shows that at low and moderate loads, the larger the cache size, the higher the energy consumption per bit. In fact, in our simulations, the increase in storage power introduced by the distributed cache servers ($P_{\rm Cache}$) is always higher than the reduction in transport power obtained by switching off the unused optical switch ports of the HOS core nodes. When increasing the load, we observe that the larger the cache size, the faster the decrease of the energy consumption per bit. This is due to the fact that increasing the number of distributed cache servers reduces the average data loss rates in the network: the larger the cache size, the higher the network throughput, especially at high loads. However, the network throughput does not increase linearly with the cache size. In fact, as shown in [22,23], the network throughput increases in a log-like way with the cache size, which means that it saturates when the cache size is increased beyond a certain value. On the other hand, increasing the cache size leads to an almost linear increase of the storage power consumption. As a consequence, at high loads there is a trade-off between cache size and energy consumption per bit. In our simulations, when the load is higher than 50%, the best results in terms of energy consumption are achieved using a cache of 1/60 of the size of the library, that is, setting $N_{\rm CS} = 10$.

Fig. 6. Average data loss rate as a function of the input load.

Fig. 7. Energy consumption per bit against the input load for different cache sizes.


In Fig. 8(a) we present the global average network delay as a function of the network load for different values of the cache size. The figure highlights that the larger the cache size, the lower the global average network delay. In particular, increasing the cache size from 0 to 1/30 of the size of the library (i.e., from 0 to 20 TByte) reduces the global average delay in the network by about 2 ms. A further increase of the cache size from 1/30 to 1/15 of the size of the library (i.e., from 20 to 40 TByte) has a very limited impact on the global average network delays.

Finally, in Figs. 8(b)–8(d) we show the average loss rates of packets, short bursts, and long bursts as a function of the network load for different values of the cache size. The circuit establishment failure probability is always null in the considered configurations. The figures show that the larger the cache size, the lower the average loss rates. This is due to the fact that a larger cache keeps the traffic more local, since a higher share of the end-user requests is served directly by the cache servers. This leads to a reduction of the traffic in the core and in the intra-data-center networks and consequently to lower loss ratios. Figure 8 also shows that increasing the cache size from 0 to 1/60 of the size of the library (i.e., from 0 to 10 TByte) yields a large reduction in the loss rates, while increasing the cache size beyond 1/60 of the size of the library (i.e., over 10 TByte) has a very limited impact on the loss rates.
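The locality effect described above can be sketched with a simple model: requests that hit an edge cache never traverse the core or the intra-data-center network, so the residual core load scales with the cache miss rate. The short Python snippet below illustrates this under assumed values; the video share of the traffic and the log-like hit-rate curve are placeholders, not the parameters of our simulator.

import math

def residual_core_load(offered_load, n_cs, video_fraction=0.5):
    # Assumed log-like cache hit rate in the number of cache servers NCS,
    # capped at 90%; the coefficients are illustrative only.
    hit_rate = min(0.9, 0.2 * math.log1p(n_cs))
    # Only video requests can be served locally; cache misses and
    # non-video traffic still cross the core network.
    return offered_load * (1.0 - video_fraction * hit_rate)

# Example: load left for the core with 0, 10, and 40 servers at 80% offered load.
for n in (0, 10, 40):
    print(n, round(residual_core_load(0.8, n), 3))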

VI. CONCLUSIONS

In this paper we have proposed a unified network architecture that provides both intra-data-center and inter-data-center connectivity together with interconnection toward legacy IP networks. This architecture is tailored for future carrier cloud operators running both the data centers and the core network. The architecture is referred to as an integrated core and intra-data-center network and is based on the HOS technology. The main advantage of integrating core and intra-data-center networks in a single infrastructure comes from avoiding electronic interfaces between the data centers and the core network. We evaluated the energy consumption along with the delay and loss performance of the integrated HOS network and made extensive comparisons with respect to a nonintegrated HOS solution and a conventional IP network based on electronic switching.

Fig. 8. Average delays and average data loss rates as a function of the input load for different values of the cache size: (a) average network delays, (b) packet loss rates, (c) short burst loss rates, and (d) long burst loss rates. Each panel compares the case without cache with caches of 1/120, 1/60, 1/30, and 1/15 of the library (5, 10, 20, and 40 TByte).


We conclude that the integrated HOS network achieves by far the highest energy efficiency. Furthermore, we demonstrated that the integrated HOS network considerably reduces the average network delays with respect to a nonintegrated HOS solution. As a consequence, the integrated HOS network is well suited for application in carrier clouds.

Furthermore, we studied the impact of distributed video cache servers on the energy consumption as well as on the delay and loss performance of the integrated HOS network. The existing literature on this topic only takes into account conventional core and intra-data-center networks based on IP electronic switching, which are characterized by low energy efficiency. The aim of this study was to identify whether a carrier cloud operator that relies on the integrated HOS network concept could increase energy efficiency by employing edge caching. To this end, we extended the HOS edge node architecture to include cache servers and content trackers. The content trackers interact with the HOS control plane to update the servers and to process incoming video requests. We also developed a novel analytical model for evaluating the energy consumed by the cache. According to the results, we conclude that achieving both low delay and data loss and high energy efficiency in an integrated HOS network requires careful dimensioning of the cache size. In particular, at low and moderate loads we observed that the highest energy efficiency is achieved without any edge caching. Furthermore, our analysis leads to the following general conclusion: when upgrading a traditional electronic switching-based network to a more energy-efficient one, operators have to reconsider their edge caching strategy in order to achieve the best network performance.

REFERENCES

[1] “SMART2020: Enabling the Low Carbon Economy in the Information Age,” The Climate Group, Global eSustainability Initiative, Tech. Rep., 2008 [Online]. Available: www.smart2020.org.

[2] “Cisco Visual Networking Index: Forecast and Methodology, 2012–2017,” Cisco White Paper, May 2013.

[3] “Cisco Global Cloud Index: Forecast and Methodology, 2011–2016,” Cisco White Paper, May 2012.

[4] Y. Zhang, P. Chowdhury, M. Tornatore, and B. Mukherjee, “Energy efficiency in telecom optical networks,” IEEE Commun. Surv. Tutorials, vol. 12, no. 4, pp. 441–458, Fourth Quarter 2010.

[5] R. S. Tucker, “Green optical communications part II: Energy limitations in networks,” IEEE J. Sel. Top. Quantum Electron., vol. 17, no. 2, pp. 245–260, Mar./Apr. 2011.

[6] “Where does power go?” GreenDataProject, 2008 [Online]. Available: http://www.greendataproject.org.

[7] C. Kachris and I. Tomkos, “A survey on optical interconnects for data centers,” IEEE Commun. Surv. Tutorials, vol. 14, no. 4, pp. 1021–1036, Fourth Quarter 2012.

[8] R. Veisllari, S. Bjornstad, and D. Hjelme, “Experimental demonstration of high throughput, ultra-low delay variation packet/circuit fusion network,” Electron. Lett., vol. 49, no. 2, pp. 141–143, Jan. 2013.

[9] M. Fiorani, M. Casoni, and S. Aleksic, “Hybrid optical switching for energy-efficiency and QoS differentiation in core networks,” J. Opt. Commun. Netw., vol. 5, no. 5, pp. 484–497, May 2013.

[10] O. Liboiron-Ladouceur, I. Cerutti, P. Raponi, N. Andriolli, and P. Castoldi, “Energy-efficient design of a scalable optical multiplane interconnection architecture,” IEEE J. Sel. Top. Quantum Electron., vol. 17, no. 2, pp. 377–383, Mar./Apr. 2011.

[11] M. Fiorani, S. Aleksic, and M. Casoni, “Hybrid optical switching for data center networks,” J. Electr. Comput. Eng., vol. 2014, 139213, 2014.

[12] J. Baliga, R. Ayre, K. Hinton, and R. S. Tucker, “Architectures for energy-efficient IPTV networks,” in Optical Fiber Communication Conf. (OFC), 2009, paper OThQ5.

[13] C. Jayasundara, A. Nirmalathas, E. Wong, and C. Chan, “Energy efficient content distribution for VoD services,” in Optical Fiber Communication Conf. (OFC), 2011, paper OWR3.

[14] C. Chan, E. Wong, A. Nirmalathas, A. Gygax, and C. Leckie, “Energy efficiency of on-demand video caching systems and user behavior,” Opt. Express, vol. 19, no. 26, pp. B260–B269, Dec. 2011.

[15] N. Osman, T. El-Gorashi, and J. Elmirghani, “The impact of content popularity distribution on energy efficient caching,” in Proc. Int. Conf. on Transparent Optical Networks (ICTON), 2013, pp. 1–6.

[16] D. Cai and S. Natarajan, “The evolution of the carrier cloud networking,” in Proc. IEEE Symp. on Service-Oriented System Engineering (SOSE), 2012, pp. 286–291.

[17] A. Autenrieth, J. Elbers, P. Kaczmarek, and P. Kostecki, “Cloud orchestration with SDN/OpenFlow in carrier transport networks,” in Proc. Int. Conf. on Transparent Optical Networks (ICTON), 2013, pp. 1–4.

[18] F. Idzikowski, S. Orlowski, C. Raack, H. Woesner, and A. Wolisz, “Saving energy in IP-over-WDM networks by switching off line cards in low-demand scenarios,” in Proc. Conf. on Optical Network Design and Modeling (ONDM), 2010, pp. 1–6.

[19] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, “Reducing network energy consumption via sleeping and rate-adaptation,” in Proc. USENIX Symp. on Networked Systems Design and Implementation, 2008, pp. 323–336.

[20] A. Betker, C. Gerlach, R. Hulsermann, M. Jager, M. Barry, S. Bodamer, J. Spath, C. Gauger, and M. Kohn, “Reference transport network scenarios,” MultiTeraNet Report, July 2003.

[21] “Connectivity solutions for the evolving data center,” Emulex White Paper, May 2011.

[22] M. Zink, K. Suh, Y. Gu, and J. Kurose, “Characteristics of YouTube network traffic at a campus network: Measurements, models, and implications,” Comput. Netw., vol. 53, no. 4, pp. 501–514, Mar. 2009.

[23] L. Braun, A. Klein, G. Carle, H. Reiser, and J. Eisl, “Analyzing caching benefits for YouTube traffic in edge networks: A measurement-based evaluation,” in Proc. IEEE Network Operations and Management Symp. (NOMS), 2012, pp. 311–318.

[24] T. Benson, A. Akella, and D. A. Maltz, “Network traffic characteristics of data centers in the wild,” in Proc. Internet Measurement Conf. (IMC), 2010, pp. 267–280.
