
© The Author 2011. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved. For Permissions, please email: [email protected]

doi:10.1093/comjnl/bxr110

Strategies and Metric for Resilience in Computer Networks

Ronaldo M. Salles∗ and Donato A. Marino Jr.

Instituto Militar de Engenharia, Seção de Engenharia de Computação, Praça Gen. Tibúrcio 80, Rio de Janeiro, RJ 22290-270, Brazil

∗Corresponding author: [email protected]

The use of the Internet for business-critical and real-time services is growing day after day. Random node (link) failures and targeted attacks against the network affect all types of traffic, but mainly critical services. For these services, most of the time it is not possible to wait for the complete network recovery; the best approach is to act in a proactive way by improving redundancy and network robustness. In this paper, we study network resilience and propose a resilience factor to measure the network level of robustness and protection against targeted attacks. We also propose strategies to improve resilience by simple alterations in the network topology. Our proposal is compared with previous approaches, and experimental results on selected network topologies confirmed the effectiveness of the approach.

Keywords: network resilience; network topology; graph theory; k-connectivity; targeted attacks and failures

Received 29 April 2011; revised 29 July 2011
Handling editor: Ing-Ray Chen

1. INTRODUCTION

Today, computer networks are highly complex heterogeneous systems. This is mostly due to the exponential growth of the Internet and its use as a convergence medium for all sorts of traffic and applications. There is a common perception in the field that engineers and researchers have the right knowledge to design and operate such complex networks but they still lack a thorough understanding of system behaviour under stress conditions and anomalies. Providing fault tolerance and prompt attack recovery capabilities to networks is still a matter for further investigation.

Random node/link failures and targeted attacks against the network affect all types of traffic, but mainly critical services (e.g. e-commerce and e-government in the civilian sphere; command and control data transmission in military tactical operations). For these services, most of the time it is not possible to wait for complete network recovery; the best approach is to act in a proactive way by improving redundancy and network robustness beforehand.

Network resilience against failures and attacks relies mostly on the topology redundancy and the nodes’ connectivity. For instance, if a single link failure disconnects network nodes, the network is clearly not robust enough: its redundancy is weak. On the other hand, assuming a full mesh topology where each node is connected to all others, if a given set of nodes is destroyed by a targeted attack, the remaining nodes are still connected and may continue to communicate with each other. It is therefore important to provide a way to quantify this notion in order to evaluate the resilience capacity of computer networks, mainly the ones that operate in critical scenarios.

Regarding the Internet, the authors in [1] showed that although it is susceptible to both random failures and malicious attacks, the latter problem can cause greater damage to the network due to the particular Internet topological characteristics [2]. A targeted attack on a highly connected node (hub) can severely degrade performance, disconnect a whole region or isolate some network section from crucial services.

An important aspect to consider is that nodes may have different roles in the network. Some nodes are central, acting as network hubs; if they fail (or are destroyed), there will be a considerable impact on the network since the network depends heavily on them. Other peripheral nodes may not have such an impact if they are put out of service. Social network metrics are useful to determine the degree of centrality of each network node.

Hence, strategies to improve network resilience should not only consider redundancy by adding extra links between nodes but also try to reduce network dependency on some central nodes. Resilience strategies may be applied in the design of new network topologies or in the modification of existing ones.

The contribution of this paper is twofold:

(i) The resilience factor is proposed to quantify the degree of resilience of a given network topology. It is based on the k-connectivity property of a graph;

(ii) Two strategies are proposed to improve network resilience: Proposed preferential Addition (PropAdd) and Proposed preferential Rewiring (PropRew). The strategies use social network centrality metrics and employ link addition and rewiring.

The remainder of this paper is organized as follows. In Section 2 related works on network resilience are discussed. Section 3 reviews some important concepts applied in our proposal. Section 4 presents the resilience factor along with some numerical results. The proposed strategies are presented in Section 5, as well as numerical results comparing them with similar approaches. Finally, the work is concluded in Section 6 and a list of references cited in this paper is presented.

2. NETWORK RESILIENCE

Resilience is a broad term used to study several different types of systems and networks, ranging from socioeconomic, finance and even terrorist networks [3] to computer networks. In [4] the authors comment that resilience is an important service primitive for various computer systems and networks but its quantification has not been done well.

According to [5], network resilience is defined as: the ability for an entity to tolerate (resist and autonomically recover from) three types of severe impacts on the network and its applications: challenging network conditions, coordinated attacks and traffic anomalies.

Challenging network conditions occur, for instance, in some dynamic and hostile environments where nodes are weakly and episodically connected by wireless links given their high mobility and topography conditions.

Coordinated attacks can be logical or physical. In the first case, the main targets are network protocols and services; such attacks are typically classified as denial-of-service attacks (DoS or distributed DoS) [6, 7]. Physical attacks consist of the destruction of network infrastructure by an enemy in a war operation, terrorism or even natural disasters.

Traffic anomalies are any kind of unpredictable behaviour or failure that severely impacts network services, especially mission-critical applications.

Thus, network resilience is a broad topic of research and may include or be related to robustness, survivability, network recovery, fault and disruption tolerance. The next subsections present the context on which this work focuses.

2.1. Related work

One of the first works that studied network resilience and presented a measure to evaluate network fault tolerance was due to [8]. The measure was defined as the number of faults that a network may suffer before being disconnected. The authors computed an analytical approximation of the probability of the network becoming disconnected and validated their proposal using Monte Carlo simulation results. The simulation scenario employed three particular classes of graphs to represent network topologies: cube-connected cycles, torus and binary n-cubes, all of them symmetric and with fixed node degrees.

Percolation theory has also been employed to characterize network robustness and fragility. The idea is to determine a certain threshold, p_c, which represents a fraction of network nodes and their connections that, when removed, compromises the integrity of the network: it disintegrates into smaller, disconnected parts [9]. Below such a critical threshold, disconnections may occur but there still exists a large connected component spanning most of the network. A network with a higher percolation threshold is preferred in terms of resilience since it will be more difficult to break. Other works in the literature also studied resilience under a percolation perspective; for instance, in [10] percolation was used to characterize Internet resilience.

The main goal of the work in [11] is to quantify network resilience so that it is possible to compare different networks according to this property. The authors used the percentage of traffic loss due to network failures as their resilience metric; they also considered a scalability parameter with respect to network size, fault probability and network traffic volume. Resilience was evaluated taking into account uniform traffic patterns and dependent or independent link failures, with or without protection. Input traffic is also modelled by Poisson processes. The authors concluded that complete topologies (mesh) are the most resilient and that, considering regular topologies with the same number of nodes and links, Moore graph topologies presented a better performance.

The work in [8], and later [11], presented resilience metrics based on probability computations, and also relied on particular environments with uniform traffic patterns and regular network topologies. It is important to evaluate resilience in a more realistic scenario, considering for instance real network topologies.

Another important work that studies resilience in packet-switched networks is [12]. The authors pointed out that some network failure combinations may lead to loss of ingress–egress connectivity or to severe congestion due to rerouted traffic. They provided a thorough framework to analyse resilience according to failures and changed interdomain routing of traffic. The framework is based on the calculation of the border-to-border availability of a network and the complementary cumulative distribution function of the link load depending on network failures and traffic fluctuations.

Dekker and Colbert [13] associated the concept of resilience with the capacity of a given network to tolerate attacks and faults on nodes. The work focused on network topologies and studied connectivity and symmetry properties of the corresponding graphs. They also considered other metrics in their evaluation, such as link connectivity, distance between nodes, graph regularity (degree distribution of nodes), etc. Network topologies were divided and studied in groups: Cayley graphs, random graphs, two-level networks and scale-free networks. Their main conclusion is that, to reach a good level of resilience, the network should have a high degree and low diameter, be symmetric and contain no large subrings. However, they did not evaluate realistic networks in terms of resilience nor propose ways to improve their robustness.

Resilience of networks against attacks is also studied in [14], where the authors modelled the cost to the attacker to disable a node as a function of the node degree. Depending on this cost function, a certain type of network (Poissonian, Power Law, etc.) may become easier (harder) to crack, i.e. less (more) resilient. Such a cost function may depend on the particular scenario under study and also may be difficult to determine.

The work in [15] considered three important metrics to evaluate network resilience with respect to random failures and targeted attacks. The first one is the largest connected component (LCC), which gives the size of the largest subgraph that still remains connected after the network is attacked or disconnected by failures. The second metric studied was the average shortest path length (ASPL), which varies according to topology alterations. In fact, the authors used the average inverse shortest path length (AISPL) to avoid numerical problems when nodes become disconnected. Finally, the network diameter was also considered to assess network robustness.

2.2. Research problems

Related works provided a glimpse of the vast literature on network resilience and also showed different approaches to the problem. In this subsection, we direct our attention to some points that deserve further investigation and constitute the focus of this work.

First of all, it has been shown that current communication networks have topological weaknesses that seriously reduce their attack survivability [1, 2]. Such weaknesses could be exploited by malicious agents through the execution of directed attacks. This work studies topological aspects of the network and focuses on resilience against targeted attacks on nodes.

Another observation is that most previous works concentrate on synthetic topologies for analysis and some used these topologies to test their proposals. We realized that a more practical and direct approach was missing and could add value to the problem. Hence, our approach focuses on working directly with real topologies of network domains, although we illustrate in some parts of the text how our approach works for some regular and uniform topologies (full mesh, line).

Both [13, 15] worked with scale-free topologies, which present heavy-tailed degree distributions following a power law. The Internet and other large-scale networks (biological, social, etc.) have been shown to exhibit this property [2, 16, 17]. However, network operators and ISPs are most interested in studying the resilience of their own backbones and domains, which are not necessarily modelled by scale-free graphs. This paper deals with this issue and investigates network resilience considering realistic ISP topologies.

The work in [15] adopted three different metrics (LCC, ASPL and diameter) to represent resilience; however, we argue that those metrics are not consistent in several cases. For instance, ASPL and diameter are inconsistent when the network gets disconnected, while the LCC does not give any information about the remaining subgraphs. This work investigates this matter and proposes a new metric, the resilience factor RF, that does not suffer from those problems.

Finally, [15] also proposed strategies to improve the resilience of a given network; however, the strategies are based on random link additions and rewirings. We believe that more sophisticated approaches may achieve a better performance. In fact, this paper proposes novel strategies based on graph analysis and node qualification (centrality metrics [18]). Such strategies provide better reasoning for link additions and rewirings, which is reflected in improved performance.

3. BASIC CONCEPTS

The network topology is modelled by a graph G = (V, E), where V is the set of vertices or network nodes (routers, switches or any other infrastructure equipment) and E is the set of edges or network links (fibre, cable, wireless, etc.) connecting two nodes. A graph G can be represented by the connectivity matrix M of size n × n, where n = |V|. Each element of M is 1 if there is a corresponding link in E; otherwise it is 0.

The degree of a node v, d(v), is the number of links connecting the node: d(v) = \sum_{i \in V, i \neq v} M(v, i). For undirected graphs, M is symmetric and d(v) could also be defined in terms of M(i, v). Note that this is the case considered in this paper; however, directed graphs can also be applied without loss of generality.

The average degree of a network topology is given by

    d(G) = \frac{1}{n} \sum_{v \in V} d(v)    (1)

and the degree distribution by

    P(k) = \frac{n(k)}{n},    (2)

where n(k) is the number of elements v ∈ V such that d(v) = k. For instance, in scale-free networks Equation (2) follows a power-law distribution, i.e. P(k) ∼ k^{−γ}, where 2 < γ < 3 for most real networks [19].
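As a quick, hedged illustration (ours, not part of the paper), the quantities above can be computed directly from the connectivity matrix M; the 4-node ring used below is a hypothetical example.

from collections import Counter

def degrees(M):
    """d(v) for every node v; M is an n x n list-of-lists of 0/1 entries."""
    return [sum(row) for row in M]

def average_degree(M):
    """Equation (1): mean node degree of the topology."""
    d = degrees(M)
    return sum(d) / len(d)

def degree_distribution(M):
    """Equation (2): P(k) = n(k)/n for every degree k observed in the topology."""
    d = degrees(M)
    n = len(d)
    return {k: count / n for k, count in Counter(d).items()}

# Hypothetical example: the 4-node ring 1-2-3-4-1 (every node has degree 2).
M = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(average_degree(M))       # 2.0
print(degree_distribution(M))  # {2: 1.0}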

The distance between nodes is also an important property to be studied. The geodesic path g(u, v), where u, v ∈ V, is defined as the number of edges (links) in the shortest path connecting u to v. From this concept the network diameter (D) is defined as the largest geodesic path in the network,

    D = \max_{u,v \in V} g(u, v).    (3)

A low D may indicate a redundant and robust topology. For instance, D = 1 in a full-mesh network and D = n − 1 in a line network with n nodes.

Another important concept related to paths is the average shortest path length (ASPL), which is given by

    ASPL = \frac{2 \sum_{u,v \in V} g(u, v)}{n(n - 1)}.    (4)

It is also common to consider the inverse parameter, AISPL, since in the study of resilience the network may become disconnected, yielding g(u, v) = ∞ for some pairs:

    AISPL = \frac{2 \sum_{u,v \in V} 1/g(u, v)}{n(n - 1)}.    (5)
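The sketch below (our illustration under the definitions above, not code from the paper) computes geodesic distances by breadth-first search and then ASPL and AISPL; pairs with no connecting path are skipped in the ASPL sum and contribute zero to AISPL.

from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source; adj maps each node to the set of its neighbours."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # unreachable nodes are absent, i.e. g(u, v) = infinity

def aspl_and_aispl(adj):
    """Equations (4) and (5) as averages over all unordered node pairs."""
    nodes = sorted(adj)
    n = len(nodes)
    total = total_inv = 0.0
    for idx, u in enumerate(nodes):
        dist = bfs_distances(adj, u)
        for v in nodes[idx + 1:]:
            d = dist.get(v)
            if d is not None:          # finite geodesic path
                total += d
                total_inv += 1.0 / d   # infinite paths add 0 to AISPL
    pairs = n * (n - 1) / 2
    return total / pairs, total_inv / pairs

# Hypothetical example: the line topology 1-2-3-4.
line = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(aspl_and_aispl(line))  # (1.666..., 0.722...)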

Connectivity is also a fundamental property of a graph, especially in the study of network resilience. A graph is said to be connected if there is a path between any two nodes in V.

If the graph is not connected, the size of the largest connected subgraph is usually considered instead. The LCC is computed using breadth-first search or depth-first search techniques [20]. Note that, in case the graph is connected, the LCC is the graph itself. The LCC is usually given as the size of the largest connected subgraph.
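A minimal sketch of the LCC computation with breadth-first search follows; it is an illustration we add here, not the authors' code.

from collections import deque

def largest_connected_component(adj):
    """Return the node set of the largest connected subgraph of adj."""
    unvisited, best = set(adj), set()
    while unvisited:
        start = next(iter(unvisited))
        component, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in component:
                    component.add(v)
                    queue.append(v)
        unvisited -= component
        best = max(best, component, key=len)
    return best

# Hypothetical example: two fragments 1-2-3 and 4-5; the LCC has size 3.
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4}}
print(len(largest_connected_component(adj)))  # 3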

One of the properties of a graph that is most closely related to robustness, fault tolerance and redundancy is k-connectivity [21]. Several works in the literature employ k-connectivity to represent resilience in different applications and network scenarios [22–24]. This concept is based on Menger's theorem [25]: let G = (V, E) be a graph and u and v be two non-adjacent vertices; then the minimum vertex cut separating u and v is equal to the maximum number of disjoint paths connecting u and v.

According to [26], k-connectivity can be defined as follows.

Definition 3.1. Let G be a k-connected graph; then, for any two vertices u and v, there are at least k vertex-disjoint paths between u and v.

A direct implication of the above definition¹ is that a graph is k-connected when, after removing any k − 1 of its vertices, the resulting subgraph remains connected. This property is usually related to network resilience since it indicates topology tolerance to faults and/or attacks on nodes. Algorithms to compute the k-connectivity of a graph are based on Min-Cut Max-Flow theorems [27, 28].

¹There is also a similar definition considering edges (k-edge-connectivity) instead of vertices (k-vertex-connectivity). However, in this work we focus on k-vertex-connectivity given its closer relation to attacks on nodes.
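For the small domain topologies considered in this paper, the implication above can also be checked by exhaustive removal, as in the sketch below; this is an illustrative assumption of ours, not the Min-Cut Max-Flow algorithms of [27, 28].

from collections import deque
from itertools import combinations

def is_connected(adj, removed=frozenset()):
    """Breadth-first search connectivity test on adj with some nodes removed."""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return False
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

def k_connectivity(adj):
    """Largest k such that removing any k-1 vertices leaves the graph connected."""
    n = len(adj)
    if not is_connected(adj):
        return 0
    for k in range(2, n):
        if any(not is_connected(adj, frozenset(c))
               for c in combinations(adj, k - 1)):
            return k - 1
    return n - 1  # a full mesh on n nodes is (n-1)-connected

# Hypothetical examples: a 4-node ring is 2-connected, a 4-node line is 1-connected.
ring = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
line = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(k_connectivity(ring), k_connectivity(line))  # 2 1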

Regarding network nodes, centrality metrics play an essential role to characterize the relative importance of each node in the topology. These metrics are commonly employed in the theory of social networks [29]. In general terms, a central node can be seen as a popular actor in the social topology.

In the case studied in this paper, a node with much higher centrality than all others may pose a problem for the network in terms of resilience. An attack or failure on this node may disconnect the network; thus it is important to study centrality and investigate how it relates to network resilience.

The first and simplest measure of centrality is known as Degree Centrality (DC). It is simply defined as the degree of a node; in fact, it has already been presented above as d(v). If a given node v has d(v) = 1, there is no further implication for the network in terms of resilience since this node is a network leaf. On the other hand, if d(v) is high, v can be considered an important node for the connectivity of the network.

The second measure of centrality considered in this work is Closeness Centrality (CC), which defines how close the node is to the centre of the network. It is computed from the sum of distances between the node and all others. The lower the sum, the closer the node is to the centre, or in other words, the fewer intermediate nodes it requires to reach all other nodes. The following Equation (6) defines CC for node u [18]:

    CC(u) = \left[ \sum_{v \in V, v \neq u} g(u, v) \right]^{-1}.    (6)

The third and last centrality measure studied in this paper is Betweenness Centrality (BC), which defines how often a node takes part in the geodesic paths between all other nodes. Nodes that occur on many shortest paths between other nodes have higher betweenness than those that do not. The following Equation (7) defines BC for node u [18]:

    BC(u) = \sum_{j<k} \frac{\sigma_u(j, k)}{\sigma(j, k)}, \quad j, k \in V,    (7)

where σ(j, k) is the number of shortest paths from j to k, and σ_u(j, k) is the number of shortest paths from j to k that pass through node u.
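The three metrics can be sketched as follows (our illustration, not the authors' code); distances and shortest-path counts come from breadth-first search, and Equation (6) assumes a connected topology so that all distances are finite.

from collections import deque
from itertools import combinations

def bfs_dist_and_counts(adj, s):
    """Hop distance and number of shortest paths from s to every reachable node."""
    dist, sigma, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                queue.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def degree_centrality(adj, u):
    return len(adj[u])

def closeness_centrality(adj, u):
    dist, _ = bfs_dist_and_counts(adj, u)
    return 1.0 / sum(dist[v] for v in adj if v != u)   # Equation (6)

def betweenness_centrality(adj, u):
    dist, sigma = {}, {}
    for s in adj:
        dist[s], sigma[s] = bfs_dist_and_counts(adj, s)
    bc = 0.0
    for j, k in combinations([v for v in adj if v != u], 2):
        if k not in dist[j]:
            continue                                   # no path between j and k
        through_u = dist[j].get(u, float("inf")) + dist[u].get(k, float("inf"))
        if through_u == dist[j][k]:                    # u lies on some shortest path
            bc += sigma[j][u] * sigma[u][k] / sigma[j][k]   # Equation (7)
    return bc

# Hypothetical example: in a star, the centre 0 lies on every shortest path.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_centrality(star, 0), closeness_centrality(star, 0), betweenness_centrality(star, 0))
# 3 0.333... 3.0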

4. THE RESILIENCE FACTOR

From the study of previous works in the literature, it can be said that connectivity plays a major role regarding network resilience and should be considered in the construction of the resilience factor. The analysis of attacks over networks, either physical or cyber attacks, shows that the topology may become disconnected, and so high node connectivity is desired for the topology as a proactive protection [13, 30–32].

The principal metric that describes connectivity is k-connectivity; however, other metrics such as AISPL and LCC (see Section 3) have also been proposed in previous works and will be discussed and analysed in this section.

One key observation towards our proposal is that, by analysing most commercial network backbones, it can be verified that such topologies are 2-connected (or 3-connected); i.e. if a node (or any pair of nodes) is put out of service, the network continues to operate, maintaining the remaining nodes connected.

The question that follows is whether all those topologies have the same level of resilience, or in other words, how can two different 2-connected topologies be compared in terms of resilience.

4.1. Proposal

The k-connectivity property of a graph is used as a basis for the proposed resilience factor of network topologies. However, as posed by the above question, our proposal is to specialize this property to better express the notion of resilience. We propose the use of the partial k-connectivity property of a graph to represent the resilience factor.

The idea behind partial k-connectivity is explained as follows. Assume that there are two different 2-connected network topologies, T1 and T2, with the same number of nodes. Since they are 2-connected, any node can be deleted without causing disconnections, but there is at least one pair of nodes that, if deleted, disconnects the topologies; that is, they are not 3-connected. However, assume that T1 has just a single pair of nodes whose removal disconnects the topology, while T2 has two different pairs of nodes that cannot be deleted (one pair at a time) without disconnecting it.

Although both T1 and T2 are not 3-connected, T1 is better than T2 in terms of 3-connectivity since there is just one case (one pair of nodes) capable of disconnecting it, while in T2 the chances are twice as high (two pairs of nodes). It is therefore natural to consider that T2 is somehow less resilient than T1. We consolidate this idea in the resilience factor (RF) below:

    RF = \frac{\sum_{i=2}^{n-1} k(i)}{n - 2},    (8)

where n is the number of nodes in the topology and k(i) is the percentage of node combinations that guarantees partial i-connectivity. It is assumed that all networks considered are 1-connected and n-connected, thus these cases are excluded from the computations.

Note that, for a line network topology RF = 0 and for a full-mesh topology RF = 1 (100%); all other arrangements fall in between these two cases. This conforms to the idea that a line topology presents very poor levels of resilience while a full-mesh topology enjoys full resilience, which makes RF consistent in this sense. Another important aspect is that the use of percentages gives the factor a certain independence from the number of nodes in the topology.

Figure 1 illustrates the computation of RF for the network topology in Fig. 1a. It can be seen from Fig. 1b that the topology is 2-connected, since if a single node fails or is put out of service the remaining topology is still connected, or in other words, there are at least two disjoint paths connecting any pair of nodes. According to our notation, k(2) = 1 in this case.

FIGURE 1. Example on the computation of the resilience factor RF. (a) Network topology. (b) Subgraphs with one node deleted, k(2) = 5/5 = 1. (c) Subgraphs with two nodes deleted, k(3) = 9/10 = 0.9. (d) Subgraphs with three nodes deleted, k(4) = 7/10 = 0.7. (e) Subgraphs with four nodes deleted.

Figure 1c checks the topology for 3-connectivity. The only case that fails is when nodes 1 and 3 are deleted at the same time, and node 2 gets disconnected from the rest of the network (second subgraph). Hence, the topology is not 3-connected. However, this is the only case out of 10 possibilities, and so such information should be taken into account for resilience matters. This is considered by partial 3-connectivity, k(3) = 0.9. Then, in Fig. 1d, k(4) is computed. Figures 1e and 1a are extreme cases (assumed to be connected) and are not considered in RF as described in (8): RF = (1 + 0.90 + 0.7)/3 = 0.8666 (86.66%).
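The RF computation can be reproduced by brute force as sketched below; this is our reading of Equation (8), not the authors' implementation, and the 5-node graph used as an example is hypothetical, chosen only because it yields the same k(i) values as Fig. 1 (it is not necessarily the topology drawn there).

from collections import deque
from itertools import combinations

def is_connected(adj, removed=frozenset()):
    """Breadth-first search connectivity test on adj with some nodes removed."""
    nodes = [v for v in adj if v not in removed]
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

def resilience_factor(adj):
    """Equation (8): mean over i of the fraction k(i) of (i-1)-node removals
    that leave the surviving nodes connected."""
    n = len(adj)
    partial = []
    for removed in range(1, n - 1):                   # i = 2 .. n-1 in Equation (8)
        combos = list(combinations(adj, removed))
        ok = sum(is_connected(adj, frozenset(c)) for c in combos)
        partial.append(ok / len(combos))              # k(i)
    return sum(partial) / (n - 2)

# Full mesh on 5 nodes: every k(i) = 1, so RF = 1.
mesh = {i: set(range(1, 6)) - {i} for i in range(1, 6)}
# Hypothetical 5-node graph with k(2) = 1, k(3) = 0.9, k(4) = 0.7, as in Fig. 1.
example = {1: {2, 4, 5}, 2: {1, 3}, 3: {2, 4, 5}, 4: {1, 3, 5}, 5: {1, 3, 4}}
print(resilience_factor(mesh))     # 1.0
print(resilience_factor(example))  # 0.866...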

The next subsection compares the proposed resilience factor with other resilience metrics and verifies whether the proposal is consistent or not.

4.2. Numerical results

In this subsection, we evaluate the proposed resilience factor (RF) and compare its results with other well-known metrics (AISPL and Diameter) already discussed in previous sections.

Results are obtained for three real network topologies: Telcordia (Fig. 2), Cost239 (Fig. 3) and UKnet (Fig. 4). The topologies were used to represent typical network domains, as we wish to consider real networks in our study to be closer to practical scenarios. The topologies differ in the number of nodes, links, layout and degree distribution; we believe they reasonably cover the scenario we wanted to study.

The letters ‘B’, ‘C’ and ‘D’ in the topologies mark the nodes with the highest BC, CC and DC, respectively, while the label ‘2’ is used when the same node has both the highest CC and BC (Fig. 3) or the highest BC and DC (Fig. 4).

Figures 5–7 show the results obtained for RF against AISPL (as in Equation (5)) and Diameter (as in Equation (3)) for the Telcordia, Cost239 and UKnet topologies, respectively. However, in order to display the three metrics on the same scale, the Diameter was normalized (Diameter*); it is important to observe the variations of this parameter for each situation.

FIGURE 2. Telcordia/Bellcore topology (based on the New Jersey commercial backbone): 15 nodes, 28 links (bidirectional), degree distribution [2(13%), 3(47%), 4(13%), 5(13%), 6(13%)].

FIGURE 3. Cost239 topology (see Ref. [33]): 19 nodes, 40 links (bidirectional), degree distribution [3(21%), 4(47%), 5(21%), 6(11%)].

FIGURE 4. UKnet topology (based on the UK national backbone): 30 nodes, 52 links (bidirectional), degree distribution [2(20%), 3(40%), 4(23%), 5(13%), 6(3%)].

FIGURE 5. RF, AISPL and diameter comparison for the Telcordia topology.

FIGURE 6. RF, AISPL and diameter comparison for the Cost239 topology.

FIGURE 7. RF, AISPL and diameter comparison for the UKnet topology.

The topologies were subjected to seven different situations: original topology, highest DC node removed, highest 10% DC nodes removed, highest CC node removed, highest 10% CC nodes removed, highest BC node removed and highest 10% BC nodes removed, indicating different anomalies in the topologies. RF, AISPL and Diameter* were computed for each situation and the results displayed in bar graphs.

The first black bar in the graphs represents the resilience of the original topology according to the three different metrics. It can be observed that the RF and AISPL results are close in all of Figs 5–7. In fact, the RF and AISPL results are of the same magnitude in all situations; however, RF provides greater discrimination between different situations. In Fig. 5 it can be seen that although the RF absolute results are closer to AISPL, the RF variations followed the Diameter* profile more closely. Similar behaviour is also observed for the other topologies in Figs 6 and 7. This suggests that RF enjoys important properties of both the AISPL and Diameter metrics.

Another important result that comes out of the graphs is that the Cost239 topology is more resilient than the other two topologies. In Fig. 6, the RF results are higher for all seven situations when compared with Fig. 5 (Telcordia) and Fig. 7 (UKnet). This was somehow expected given the more homogeneous layout of Cost239 and is confirmed by the degree distribution of nodes. Looking at the caption of Fig. 3, Cost239 has no node of degree 2 and a higher percentage of degree 4, 5 and 6 nodes. Also, it does not suffer severely from the removal of nodes; in Fig. 6, results within the same metric are close for all situations.

Regarding Telcordia and UKnet, a different behaviour is observed. While the Telcordia original topology is more resilient than UKnet, it suffers a greater impact when nodes are removed. The RF results indicate that when the highest 10% of BC or DC nodes are removed, the resulting subgraphs are even less resilient than the UKnet subgraphs.

Finally, it is important to realize that in all situations presented the topologies and their corresponding subgraphs (after node removals) remained connected, i.e. there was always a path connecting any pair of remaining nodes. This was necessary to keep the AISPL and Diameter metrics consistent and to ensure a fair comparison with RF. Both AISPL and Diameter are based on shortest path computation; as nodes are removed from the topology, paths grow in size, providing a steady behaviour to the metrics (either they increase or decrease). However, when a node removal produces a disconnected topology (for instance, dividing the nodes into two disconnected subgraphs), there is no more sense in taking the computation of shortest paths into account. This constitutes a serious drawback for most metrics, but it does not affect RF since it is in fact based on connectivity properties.

4.3. Discussion concerning RF

The first issue important to discuss about RF is the complexity involved in its computation. It is necessary to evaluate all possible combinations of node removals (excluding the trivial cases: no removals and (n − 1) node removals) to obtain RF, which requires 2^n − (n + 1) tests. However, there are some considerations that make this calculation possible for the problems of interest.

The resilience factor was primarily proposed to assess the resilience of a single network domain against attacks. Thus the number of nodes, n, is constrained by the size of real network domains, which we consider to be <40 in general (Figs 2–4 illustrate some network domains used in our study). For a very large network that comprises several domains, our approach could be applied if each domain is taken separately, one at a time, and then the backbone that connects all domains.

Another important aspect concerning the computation of RF is that it is obtained off-line, before a problem occurs. As already mentioned, our study focuses on proactive ways to improve resilience and not on how to react to a network anomaly condition. Hence, there are no strict restrictions on the computational time to return RF, provided it is viable.

In the particular case where a single network domain is very large, making the computation of RF not viable in practice, we can still apply our approach with the following small modification. Instead of considering all node combinations, which in this case may be prohibitive, RF can be computed using only an initial set of combinations, say combinations of up to 10% of n. It is important to realize that what really impacts the calculation of RF are the middle cases, where the number of combinations \binom{n}{k} is maximum, for k around n/2 (Fig. 1b and c in the example).

Regarding the definition of RF as presented in Equation (8), it can be seen that all k(i) are equally weighted in the mean. From a failure perspective, for instance, this may not be the case, since a single failure is more likely to happen than a double failure, and so on. Thus, according to this view, each individual component k(i) could be associated with a different weight w_i in the computation of RF, for instance RF = \sum_{i=2}^{n-1} w_i k(i) / \sum_{i=2}^{n-1} w_i (if all w_i = 1, we go back to Equation (8)). However, our main focus in this paper is on resilience against targeted attacks, and in this case strategies may be employed to impact a single node or multiple nodes at the same time. It is not straightforward to tell which strategy is more likely to happen and how they could be differently weighted. Another example is natural disasters, where it is also difficult to predict beforehand the size of the impact on the network. Therefore, we decided to keep RF in its simplest form, not depending on the definition of n − 2 extra parameters (weights).
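Both adjustments discussed above, truncating the enumeration and weighting the k(i), can be folded into a small variant of the RF computation; the sketch below is an assumption of ours about how this could be realized (including how a truncated sum is normalized), not the paper's procedure.

from collections import deque
from itertools import combinations

def _connected_after_removal(adj, removed):
    """Breadth-first search connectivity test with the nodes in 'removed' deleted."""
    nodes = [v for v in adj if v not in removed]
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

def resilience_factor_variant(adj, max_removals=None, weights=None):
    """RF with optional truncation (e.g. max_removals = ceil(0.1 * n)) and
    optional weights {i: w_i}; the defaults reproduce Equation (8)."""
    n = len(adj)
    limit = min(n - 2, max_removals or (n - 2))
    num = den = 0.0
    for r in range(1, limit + 1):          # r removals correspond to i = r + 1
        combos = list(combinations(adj, r))
        k_i = sum(_connected_after_removal(adj, frozenset(c)) for c in combos) / len(combos)
        w = 1.0 if weights is None else weights[r + 1]
        num += w * k_i
        den += w
    return num / den                       # assumption: normalize by the included weights

# Example: single failures weighted twice as much as any multiple failure.
mesh = {i: set(range(1, 6)) - {i} for i in range(1, 6)}
w = {i: (2.0 if i == 2 else 1.0) for i in range(2, 5)}
print(resilience_factor_variant(mesh, weights=w))  # 1.0 for a full mesh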

It is important to briefly discuss how RF relates to other possible metrics, besides diameter and AISPL, also used to characterize resilience. One of these metrics, presented in Section 2.1 to represent resilience, is percolation. The main focus of percolation theory applied to network resilience is to search for phase transitions, or in other words, to find from what point or above which threshold the network percolates and loses its previous structure. If this point is more difficult to reach, given a higher percolation threshold, the network can be considered more robust than another. In our proposal we did not focus on a single specific property of the network; actually, we considered all disconnection scenarios through the computation of RF. In fact, one may say that RF takes into account all scenarios below and above percolation. We also believe that percolation theory is more relevant when the network under investigation is really large (some studies consider infinite topologies), such that simple disconnections do not affect the system; in this case it is more relevant to search for phase transitions that may indeed compromise the structure of the system as a whole. However, in the case of a single network domain it is important to consider every impact to the network, and the proposed resilience factor accounts for all of that.

Finally, the proposed factor does not take into account individual link problems, but only the links connected to the removed nodes. Also, it does not evaluate the impact of traffic losses due to network alterations, since this requires knowledge of the traffic matrices, routing protocols and service level agreements of the networks under investigation, which is not part of our topological study. Such an investigation is complementary to our approach and can also be considered to provide a wider view of the problem regarding the specific network operation.

5. STRATEGIES TO IMPROVE NETWORK RESILIENCE

After studying resilience and proposing the resilience factor RF in the previous section, we are ready to move forward and investigate what can be done to improve network resilience.

It is important to evaluate if topology alterations would improve the resilience of a given network and what alterations would be more advantageous for the case under study. Of course, alterations are not always possible given physical and financial restrictions, but the objective here is to provide insightful information to support network manager decisions on how to achieve a more resilient network.

Basically, there are two types of alterations that are commonly employed to increase network robustness and tolerance to failures and attacks: link additions and link rewirings (note that a common procedure to upgrade network infrastructure is to increase link capacity; however, this operation does not provide alterations in connectivity or in the general layout of the topology and thus it is not related to resilience in our context).

In practical terms, link additions tend to be more effective than rewirings since they directly increase redundancy and network resources; however, there may be situations where rewirings are preferred. For instance, suppose there is a radio link connecting nodes A and B, but the network manager comes to the conclusion that a much more important connection for the network would be between nodes A and C. Then, he may decide to remove the radio equipment from node B and install it in node C, establishing the radio link now between nodes A and C. Such a rewiring operation may be several times less expensive than contracting a new link between those nodes. Therefore, unless some practical restriction is imposed on the problem, we treat additions and rewirings as possible strategies to be considered in the effort to improve network resilience.

5.1. Previous work

The work in [15] proposed two addition and four rewiring strategies to improve network robustness: random addition (S1a), preferential addition (S1b), random edge rewiring (S2a), random neighbour rewiring (S2b), preferential rewiring (S3a) and preferential random edge rewiring (S3b). According to the results presented in the paper, the best rewiring strategy was achieved by S3a and the best addition strategy by S1b:

(i) preferential addition (S1b): add a new edge by connecting two unconnected nodes having the lowest degrees in the network;

(ii) preferential rewiring (S3a): disconnect a random edge from a highest-degree node, and reconnect that edge to a random node.

The strategies proposed in this paper were motivated by and compared with those presented in [15], particularly S1b and S3a.
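The two reference strategies above can be sketched as follows on a dict-of-sets adjacency; this is our reading of their one-line descriptions (for S3a we assume it is the hub end of the edge that is moved), not the implementation of [15].

import random

def s1b_preferential_addition(adj):
    """S1b: connect the two unconnected nodes having the lowest degrees."""
    pairs = [(len(adj[u]) + len(adj[v]), u, v)
             for u in adj for v in adj if u < v and v not in adj[u]]
    if not pairs:
        return None                        # already a full mesh
    _, u, v = min(pairs)
    adj[u].add(v); adj[v].add(u)
    return u, v

def s3a_preferential_rewiring(adj):
    """S3a: detach a random edge from a highest-degree node and reattach it
    to a randomly chosen node (assumed here to replace the hub endpoint)."""
    hub = max(adj, key=lambda v: len(adj[v]))
    old = random.choice(sorted(adj[hub]))
    candidates = [v for v in adj if v not in (hub, old) and v not in adj[old]]
    if not candidates:
        return None                        # no valid reattachment point
    new = random.choice(candidates)
    adj[hub].discard(old); adj[old].discard(hub)   # remove edge hub-old
    adj[old].add(new); adj[new].add(old)           # add edge old-new
    return (hub, old), (old, new)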

5.2. The proposed strategies

Before analysing rewiring and addition strategies, it is important to study the role each node may play towards achieving network resilience.

Regarding the degree of nodes, low DC nodes are very critical since they limit network connectivity. Let κ be the k-connectivity of the network and DC_min the minimum DC; it can be directly proved that κ ≤ DC_min [34]. Thus, a reasonable and simple strategy towards improving network resilience is to add links to the lowest degree nodes in order to increase DC_min; that is what is done with S1b.

On the other hand, to decrease the impact of targeted attacks, it is important to reduce network dependency on certain nodes. For instance, a node of very high degree may be selected as a good target for an attack on the network; this is the reasoning behind S3a.

However, we believe that strategies S1b and S3a could still be improved to provide better resilience to the network. The strategies are based on a single centrality metric (DC) and do not take into account other important information provided by CC or BC. Moreover, S3a provides reconnection to a random node, which may sometimes end up being ineffective or may even worsen network resilience since there is no control over this choice.

In this work we propose the following addition and rewiring strategies:

(i) PropAdd: add a new edge by connecting the lowest DC node to the lowest CC node.

(ii) PropRew: disconnect from the highest-degree node the edge with its highest CC neighbour, and reconnect that edge to the lowest DC node in the network.

While S1b improves only DC, PropAdd tries to improve two centrality metrics, CC and DC, at the same time.

Regarding the proposed rewiring strategy, it tries to regularize the network topology by decreasing the degree of the highest DC node and increasing the degree of the lowest DC node. In addition to that, PropRew also brings the lowest DC node closer to the network centre since it is connected to the highest CC neighbour. The effect of that is a possible reduction in the network diameter.
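A sketch of the two proposed strategies follows; it is our interpretation of the definitions above (ties and already-existing links are resolved by the simple assumptions noted in the comments), not the authors' code, and it assumes a connected topology so that closeness is well defined.

from collections import deque

def closeness(adj, u):
    """Closeness centrality as in Equation (6), via breadth-first search."""
    dist, queue = {u: 0}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return 1.0 / sum(dist[v] for v in adj if v != u)

def prop_add(adj):
    """PropAdd: link the lowest-DC node to the lowest-CC node
    (restricted here to nodes not already adjacent to it)."""
    low_dc = min(adj, key=lambda v: len(adj[v]))
    low_cc = min((v for v in adj if v != low_dc and v not in adj[low_dc]),
                 key=lambda v: closeness(adj, v))
    adj[low_dc].add(low_cc); adj[low_cc].add(low_dc)
    return low_dc, low_cc

def prop_rew(adj):
    """PropRew: remove the edge between the highest-DC node and its highest-CC
    neighbour, then reconnect that neighbour to the lowest-DC node
    (assumed not already adjacent to it)."""
    hub = max(adj, key=lambda v: len(adj[v]))
    nbr = max(adj[hub], key=lambda v: closeness(adj, v))
    low_dc = min((v for v in adj if v not in (hub, nbr) and v not in adj[nbr]),
                 key=lambda v: len(adj[v]))
    adj[hub].discard(nbr); adj[nbr].discard(hub)   # drop edge hub-nbr
    adj[nbr].add(low_dc); adj[low_dc].add(nbr)     # reconnect towards the periphery
    return (hub, nbr), (nbr, low_dc)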

It is important to observe that the strategies discussed in this section do not take into account practical issues, such as costs to launch or rent a communication link, technology, type of link (wired/wireless), length of links and geographic distances, political matters, etc. Therefore, they may return alterations in the topology that are not viable in a real network scenario.

It is up to the network manager to decide whether he can adopt the returned solutions or not. Such a decision is very much related to the particular network under administration and the corresponding practical restrictions. The strategies are also important for the network manager to get insights about good moves towards improving resilience.

The next subsection evaluates the results presented by our proposed strategies (PropAdd and PropRew) when compared with the previous S1b and S3a.

5.3. Numerical results

The next figures illustrate the results obtained when employing the strategies presented in Section 5 to the three topologies under study: Telcordia (Fig. 8), Cost239 (Fig. 9) and UKnet (Fig. 10). In the graphs, PropAdd results are presented in blue solid lines with dots, S1b in red solid lines with triangles, PropRew in green dashed lines with stars and S3a in brown dashed lines with squares.

Each strategy starts from the original topology (step 0) and is applied again in five consecutive iterations (steps 1–5). We considered that in a practical scenario it may not be feasible to produce a large number of alterations in the network topology, and hence we allowed no more than five alterations.

FIGURE 8. Strategies comparison for the Telcordia topology.

FIGURE 9. Strategies comparison for the Cost239 topology.

FIGURE 10. Strategies comparison for the UKnet topology.

Since the objective of the strategies is to improve resilience, the resilience factor (RF) should increase at each step. For all the 60 points presented in the graphs, the resilience factor decreased in just six cases: 1 for PropRew (Fig. 8, from step 4 to step 5) and 5 for S3a (3 in Fig. 8 and 2 in Fig. 9). Note that for the only case where PropRew failed to produce a positive result, the decrease in RF was minimal (<1%) and the two points may be considered equivalent in a practical situation. S3a produced the worst results among all the strategies, providing a decrease in RF even in the first iteration for the Telcordia and Cost239 topologies.

Some general conclusions about the behaviour of the strategies can be observed from the results presented in the graphs and will be discussed next.

First of all, strategies based on link addition (PropAdd and S1b) provided a better performance than those based on link rewiring (PropRew and S3a). This was expected since a link addition in fact adds resources to the network and should directly contribute to increased redundancy.

The proposed strategies (PropAdd and PropRew) presented better results than the previous ones (S1b and S3a). This confirmed our expectations about the simplicity of S1b and S3a, which are based on random procedures and use only DC as a metric.

Among all four strategies, PropAdd provided the highest gains in terms of resilience in all cases for the three topologies under study. In fact, PropAdd provided a steady increase in resilience, with all results above the other strategies. If link additions are possible in the network under study, PropAdd should be adopted as the main strategy. However, if rewirings are preferred by the network operator, PropRew may also be a good choice given the gain observed over the previous rewiring strategy (S3a).

S3a performed poorly when compared with the other strategies and could be discarded. The poor results may be due to the two random operations employed by S3a.

Tables 1 and 2 summarize the gains in RF obtained by the proposed strategies when compared with the resilience of the original topologies (Telcordia, Cost239 and UKnet) and also the gains over the previous strategies S1b and S3a.

TABLE 1. PropAdd gains on resilience (RF) over original topologies and the S1b strategy.

                     Telcordia (%)   Cost239 (%)   UKnet (%)
Original topology        37.84          22.97        41.71
S1b (max)                 9.74           7.15         6.26
S1b (step 5)              3.05           7.15         6.25

TABLE 2. PropRew gains on resilience (RF) over original topologies and the S3a strategy.

                     Telcordia (%)   Cost239 (%)   UKnet (%)
Original topology        18.15          11.22        23.30
S3a (max)                12.94           9.66        13.48
S3a (step 5)              8.77           9.38        13.48

It can be seen from Table 1 that PropAdd improved the resilience of the UKnet topology by more than 40%. PropAdd also provided an improvement over S1b of about 6%: more precisely, a maximum gain of 6.26% and a 6.25% gain at step 5. The best result of PropAdd when compared with S1b occurred for the Telcordia topology: 9.74%.

Table 2 shows the results obtained with PropRew. PropRew improved the resilience of the UKnet topology by 23.30% and provided a gain of more than 13% when compared with S3a.

6. CONCLUSIONS

This paper studied resilience in computer networks. We showed the importance of quantifying the notion of resilience in order to measure the capacity of the network to tolerate failures and targeted attacks. A resilience factor (RF) was proposed based on connectivity properties of a graph representing the network topology under study.

The resilience factor can be applied in practice as an important tool for the network manager (designer) to evaluate how a given alteration in the topology impacts resilience against targeted attacks. It can also be used to construct strategies for future network expansions or protection. The factor was compared with other metrics previously employed to quantify resilience in computer networks and the advantages of using our approach were shown.

After that, the resilience factor was employed to evaluate two proposed strategies designed to improve the resilience of a given network. The strategies were based on link additions and rewirings, and also on centrality properties of the topology graph.

The strategies do not take into account some practical restrictions (geographic, economic, political, etc.) that may affect the network; such issues depend on the particular scenario under investigation and are out of the scope of this work. However, even in the cases where a strategy suggests a network alteration that is not viable, it can still be used as a reference for the network manager to compare with other planned actions and to give insights about good moves towards improving network resilience. The strategies were compared with previous work and shown to have a better performance.

Future work intends to study resilience in mobile wireless networks, where the topology changes frequently according to the movement of nodes. Another interesting study is to apply our approach with the focus on the link, and so work with partial k-link-connectivity instead of partial k-node-connectivity. This provides better adherence to the problems affecting links, such as failures, but increases the computational complexity since the number of links is usually higher than the number of nodes in a network topology. We also want to apply our study to other types of networks such as electric power distribution, water supply, social networks, etc.

ACKNOWLEDGEMENTS

The authors thank the Military Institute of Engineering (Brazilian Army) for all the support received during this research.

FUNDING

This work was also sponsored by the CNPq (Brazilian Ministry of Science and Technology) under grant number 305626/2007-8.

REFERENCES

[1] Park, S., Khrabrov, A., Pennock, D., Lawrence, S., Giles, C. and Ungar, L. (2003) Static and Dynamic Analysis of the Internet's Susceptibility to Faults and Attacks. Proc. IEEE INFOCOM 2003, San Francisco, USA, March 30–April 3, pp. 2144–2154. IEEE, NJ, USA.

[2] Albert, R., Jeong, H. and Barabási, A.-L. (2000) Error and attack tolerance of complex networks. Nature, 406, 378–382.

[3] Gutfraind, A. (2010) Optimizing topological cascade resilience based on the structure of terrorist networks. PLoS ONE, 5, 1–20.

[4] Trivedi, K.S., Kim, D.S. and Ghosh, R. (2009) Resilience in Computer Systems and Networks. Proc. IEEE/ACM Int. Conf. Computer-Aided Design, San Jose, USA, November 2–5, pp. 74–77. IEEE, NJ, USA.

[5] Aggelou, G. (2008) Wireless Mesh Networking. McGraw-Hill Professional, ISBN 0071482563.

[6] Douligeris, C. and Mitrokotsa, A. (2004) DDoS attacks and defense mechanisms: classification and state-of-the-art. Comput. Netw., 44, 643–666.

[7] Liu, S. (2009) Surviving distributed denial-of-service attacks. IT Prof., 11, 51–53.

[8] Najjar, W. and Gaudiot, J.L. (1990) Network resilience: a measure of network fault tolerance. IEEE Trans. Comput., 39, 174–181.

[9] Callaway, D., Newman, M.E.J., Strogatz, S. and Watts, D.J. (2000) Network robustness and fragility: percolation on random graphs. Phys. Rev. Lett., 85, 5468–5471.

[10] Cohen, R., Erez, K., ben-Avraham, D. and Havlin, S. (2000) Resilience of the Internet to random breakdowns. Phys. Rev. Lett., 85, 4626–4628.

[11] Liu, G. and Ji, C. (2009) Scalability of network-failure resilience: analysis using multi-layer probabilistic graphical models. IEEE/ACM Trans. Netw., 17, 319–331.

[12] Menth, M., Duelli, M., Martin, R. and Milbrandt, J. (2009) Resilience analysis of packet-switched communication networks. IEEE/ACM Trans. Netw., 17, 1950–1963.

[13] Dekker, A.H. and Colbert, B.D. (2004) Network Robustness and Graph Topology. Proc. 27th Australasian Conf. Computer Science, Dunedin, New Zealand, January 26, pp. 359–368. ACM, Australian Computer Society.

[14] Annibale, A., Coolen, A.C.C. and Bianconi, G. (2010) Network resilience against intelligent attacks constrained by degree-dependent node removal cost. J. Phys. A: Math. Theor., 43, 1–25.

[15] Beygelzimer, A., Grinstein, G., Linsker, R. and Rish, I. (2005) Improving network robustness by edge modification. Phys. A: Stat. Mech. Appl., 357, 593–612.

[16] Faloutsos, M., Faloutsos, P. and Faloutsos, C. (1999) On Power-Law Relationships of the Internet Topology. Proc. SIGCOMM'99, Cambridge, USA, August 31–September 3, pp. 251–262. ACM, New York, USA.

[17] Barabasi, A.-L., Ravasz, E. and Vicsek, T. (2001) Deterministic scale-free networks. Phys. A: Stat. Mech. Appl., 299, 559–564.

[18] Wasserman, S., Faust, K. and Iacobucci, D. (1994) Social Network Analysis: Methods and Applications. Cambridge University Press.

[19] Barabasi, A.-L. and Albert, R. (2002) Statistical mechanics of complex networks. Rev. Mod. Phys., 74, 47–97.

[20] Hopcroft, J. and Tarjan, R. (1973) Efficient algorithms for graph manipulation. Commun. ACM, 16, 372–378.

[21] Bertsekas, D. and Gallager, R. (1987) Data Networks. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.

[22] Yang, L. (2006) Building k-connected neighborhood graphs for isometric data embedding. IEEE Trans. Pattern Anal. Mach. Intell., 28, 827–831.

[23] Jia, X., Kim, D., Makki, S., Wan, P. and Yi, C. (2005) Power assignment for k-connectivity in wireless ad hoc networks. J. Comb. Optim. (Kluwer Academic Publishers), 9, 213–222.

[24] Bredin, J., Demaine, E.D., Hajiaghayi, M. and Rus, D. (2005) Deploying Sensor Networks with Guaranteed Capacity and Fault Tolerance. Proc. 6th ACM Int. Symp. Mobile Ad Hoc Networking and Computing, IL, USA, May 25–28, pp. 309–319. ACM, New York, USA.

[25] Menger, K. (1927) Zur allgemeinen Kurventheorie. Fundam. Math., 10, 96–115.

[26] Skiena, S. (2008) The Algorithm Design Manual. Springer.

[27] Kleitman, D.J. (1969) Methods for investigating connectivity of large graphs. IEEE Trans. Circuit Theory, CT-16, 232–243.

[28] Kammer, F. and Täubig, H. (2004) Graph Connectivity. Institut für Informatik, Technische Universität München.

[29] Freeman, L.C. (1979) Centrality in social networks: conceptual clarification. Soc. Netw., 1, 215–239.

[30] Sam, S.B., Sujatha, S., Kannan, A. and Vivekanandan, P. (2006) Network topology against distributed denial of service attacks. Inf. Technol. J., 5, 489–493.

[31] Dekker, A.H. and Colbert, B. (2004) Scale-Free Networks and Robustness of Critical Infrastructure Networks. Proc. 7th Asia-Pacific Conf. Complex Systems, Cairns, Australia, December 6–10, pp. 1–15.

[32] Frantz, T. and Carley, K.M. (2005) Relating Network Topology to the Robustness of Centrality Measures. Technical Report CMU-ISRI-05-117. School of Computer Science, Carnegie Mellon University, USA.

[33] O'Mahony, M.J. (1996) Results from the COST 239 Project. Proc. 22nd European Conf. Optical Communication, Oslo, Norway, September 19, pp. 3–11. IEEE, NJ, USA.

[34] Gibbons, A. (1985) Algorithmic Graph Theory. Cambridge University Press, Cambridge, NY, USA.
