712 NATURE PHYSICS | VOL 10 | OCTOBER 2014 | www.nature.com/naturephysics
news & views
MULTILAYER NETWORKS

Dangerous liaisons?

Many networks interact with one another by forming multilayer networks, but these structures can lead to large cascading failures. The secret that guarantees the robustness of multilayer networks seems to be in their correlations.

Ginestra Bianconi

Natural complex systems evolve according to chance and necessity — trial and error — because they are driven by biological evolution. The expectation is that networks describing natural complex systems, such as the brain and biological networks within the cell, should be robust to random failure. Otherwise, they would not have survived under evolutionary pressure. But many natural networks do not live in isolation; instead they interact with one another to form multilayer networks — and evidence is mounting that random networks of networks are acutely susceptible to failure.
experiment: put a candle at a distance and try to extinguish it by either exhaling or inhaling. You’ll find that reversing the sign of the boundary conditions does not simply reverse the flow — at large Reynolds numbers fluid dynamics is highly nonlinear and convective effects dominate. These flows are also prone to deterministic chaos, known in this context as turbulence.
These effects are all captured by the Navier–Stokes equations, which also describe the intricate flow patterns of whirling eddies, turbulent flows and the shock waves of transonic flights. Computing high-Reynolds-number flows is still very demanding, even on the fastest computers available. Although knowing the exact flow patterns in detail is an appealing idea, in the end one is quite often interested only in scalar quantities — in this case, the swimming speed of fish. Furthermore, in biology there is no need to squeeze out the last digit of precision, as is necessary, for example, in turbine design. Gazzola et al.1 therefore took a promising approach by estimating the magnitude of such scalar quantities based on available experimental data.
The speed of swimming is determined by a balance of thrust and drag. Hydrodynamic friction arises from the relative motion of the fish skin with respect to the surrounding liquid. Specifically, the rate at which the fluid is sheared shows a characteristic decay as a function of distance from the swimmer, defining the boundary layer in which the viscous losses take place and kinetic energy is dissipated as heat2. This boundary layer becomes thinner, the faster the flow — a classic effect, well known to engineering students for the more simplified geometry of a flat plate. More than a century ago, Blasius investigated this type of problem3. He found self-similar solutions of the velocity profile, rescaled according to the Reynolds number. Gazzola et al.1 applied this idea of
a viscous boundary layer to estimate the friction of a marine swimmer (Fig. 1), and its dependence on the swimmer’s size, to derive a scaling exponent for the swimming speed. The theoretical prediction is indeed consistent with the biological data, as long as the amplitude of the undulatory body movements is smaller than the thickness of the boundary layer.
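The flat-plate boundary-layer scaling behind this estimate can be sketched in a few lines. The prefactor 5.0 and the water viscosity used below are textbook illustrative values for laminar flow, not figures taken from Gazzola et al.

```python
import math

def boundary_layer_thickness(x, speed, nu):
    """Laminar flat-plate (Blasius) boundary-layer thickness a distance x
    from the leading edge: delta ~ 5 x / sqrt(Re_x), with Re_x = U x / nu."""
    re_x = speed * x / nu
    return 5.0 * x / math.sqrt(re_x)

# Illustrative numbers (not from the article): a 10 cm swimmer in water,
# kinematic viscosity nu ~ 1e-6 m^2/s.
delta_slow = boundary_layer_thickness(0.1, 0.5, 1e-6)  # roughly millimetres
delta_fast = boundary_layer_thickness(0.1, 2.0, 1e-6)  # thinner at higher speed
```

The second call illustrates the classic effect mentioned above: the boundary layer becomes thinner, the faster the flow, because the thickness decays as the inverse square root of the Reynolds number.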
What happens for swimmers that are even faster? At such high Reynolds numbers, the viscous boundary layer is very thin and the deceleration of the fluid towards the body becomes important, resulting in a load through the conversion of kinetic energy into dynamic pressure, as known from Bernoulli’s law. This effect is used by pilots, for example, when measuring their velocity with a pitot tube. Gazzola et al.1 found a second scaling relation for this regime of high Reynolds numbers.
The authors’ analysis showed that data from fish larvae, goldfish, alligators and whales can all be fitted with these two scaling laws, revealing a cross-over between viscous- and pressure-dominated regimes. An extensive set of two-dimensional simulations — treating the swimming creatures essentially as waving sheets — corroborates their findings. Two-dimensional calculations have a long tradition in fluid dynamics and have already been used4 to understand self-propulsion at low Reynolds numbers. Strictly speaking, ignoring the third spatial dimension is a strong simplification. However, Gazzola et al.1 compared selected three-dimensional simulations, some of them the largest ever conducted, to their two-dimensional results, and confirmed an analogous scaling relation.
What remains elusive is the transition point between the drag and pressure force regimes. It is an appealing idea that biology may have found ways to shift this point to low values and minimize the overall losses.
Indeed, at high Reynolds numbers, sharks are known to reduce drag through special patterning of their skin5.
The present work is an example of how physical laws — in this case, the physics of fluid flow — determine the operational range of biological mechanisms such as swimming. Physics sets effective constraints for biological evolution. The beauty of physical descriptions is that they often hold irrespective of a given length scale, and can thus describe phenomena occurring over a wide range of sizes. The absolute scale of lengths, times and forces can always be eliminated from a physical equation, leaving only dimensionless physical quantities. As these dimensionless quantities usually reflect biological design principles that are conserved across scales, universal scaling laws emerge.
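The elimination of absolute scales can be made explicit for the fluid-dynamics case at hand: nondimensionalizing the incompressible Navier–Stokes equations with a characteristic length L and speed U leaves the Reynolds number as the only parameter. This is a standard textbook step, sketched here for illustration.

```latex
% Scales: x' = x/L, \; \mathbf{u}' = \mathbf{u}/U, \; t' = tU/L, \; p' = p/(\rho U^2)
\frac{\partial \mathbf{u}'}{\partial t'} + (\mathbf{u}' \cdot \nabla')\,\mathbf{u}'
  = -\nabla' p' + \frac{1}{Re}\,\nabla'^{2} \mathbf{u}',
\qquad Re = \frac{U L}{\nu}
```

All dimensional quantities have been absorbed into Re, which is why swimmers of very different sizes can fall on the same scaling curve.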
It is interesting to compare this instance of physics constraining biological function to earlier work of allometric scaling laws, where it was argued that the hydrodynamics of blood flow in the transport networks of terrestrial animals define scaling relations that relate body size and metabolic activity6. The dawn of quantitative biology may yet reveal novel examples of such general scaling laws. ❐
Johannes Baumgart and Benjamin M. Friedrich are at the Max Planck Institute for the Physics of Complex Systems, 01187 Dresden, Germany. e-mail: [email protected]
References
1. Gazzola, M., Argentina, M. & Mahadevan, L. Nature Phys. 10, 758–761 (2014).
2. Schlichting, H. & Gersten, K. Boundary-Layer Theory 8th edn (Springer, 2000).
3. Blasius, H. Grenzschichten in Flüssigkeiten mit kleiner Reibung [in German] PhD thesis, Univ. Göttingen (1907).
4. Taylor, G. Proc. R. Soc. Lond. A 211, 225–239 (1952).
5. Reif, W.-E. & Dinkelacker, A. Neues Jahrbuch für Geologie und Paläontologie, Abhandlungen 164, 184–187 (1982).
6. West, G. B., Brown, J. H. & Enquist, B. J. Science 276, 122–126 (1997).
Published online: 14 September 2014
© 2014 Macmillan Publishers Limited. All rights reserved
Writing in Nature Physics, Saulo Reis and colleagues1 have now identified the key correlations responsible for maintaining robustness within these multilayer networks.
In the past fifteen years, network theory2,3 has provided solid grounds for the expectation that natural networks resist failure. It has also extended the realm of robust systems to man-made self-organized networks that do not obey a centralized design, such as the Internet or the World Wide Web. In fact, it has been shown that many isolated complex biological, technological and social networks are scale free, meaning that their nodes are characterized by a large heterogeneity in terms of the number of connections. Moreover, we now know that this universally observed scale-free property is responsible for the robustness of networks in isolation. If we damage the nodes of a large network with probability 1 − p, where p is the probability that a node is spared, the critical value p = pc at which the extensive connected component of the network is destroyed is pc ≈ 0. In other words, the network always contains a connected component formed by an extensive number of nodes.
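This robustness of a single scale-free network is easy to see in a toy simulation: damage nodes at random and watch the giant component survive. Everything below (the configuration-model construction, γ = 2.5, minimum degree 2, the damage probabilities) is an illustrative sketch, not the calculation behind the cited results.

```python
import random
from collections import defaultdict

def scale_free_edges(n, gamma=2.5, kmin=2, seed=0):
    """Configuration-model edge list with P(k) ~ k**(-gamma) (stub matching).
    Self-loops and multi-edges are tolerated, as usual for a quick sketch."""
    rng = random.Random(seed)
    degrees = []
    for _ in range(n):
        # inverse-transform sampling of a truncated continuous power law
        k = int(kmin * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0)))
        degrees.append(min(k, n - 1))
    stubs = [i for i, k in enumerate(degrees) for _ in range(k)]
    if len(stubs) % 2:
        stubs.pop()
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

def giant_component_fraction(n, edges, p, seed=1):
    """Keep each node with probability p and return the size of the largest
    surviving connected component as a fraction of n (union-find)."""
    rng = random.Random(seed)
    alive = [rng.random() < p for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        if alive[a] and alive[b]:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    sizes = defaultdict(int)
    for i in range(n):
        if alive[i]:
            sizes[find(i)] += 1
    return max(sizes.values(), default=0) / n

edges = scale_free_edges(2000)
f_mild = giant_component_fraction(2000, edges, 0.9)   # mild damage
f_heavy = giant_component_fraction(2000, edges, 0.05)  # heavy damage
```

For a finite network the giant component does of course shrink under heavy damage; the scale-free point is that the threshold pc tends to zero as the network grows.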
The realization that many networks do not live in isolation, but instead interact with one another, is reasonably recent4,5. But multilayer networks are everywhere: from infrastructure networks formed by systems such as the power grid, the Internet and financial markets, to multilayer social and communication networks, to the brain and biological networks of the cell. Just as we cannot understand the living cell without integrating information from all of its biological networks, the function of one network often depends on the function of another. The same applies to infrastructures and financial markets.
When links are placed between different networks forming multilayer network structures, the robustness of the entire system can be strongly affected6–8. In particular, these interconnections can imply interdependencies, meaning that a node in one network is damaged if any of the interdependent nodes in the other layers is damaged. In such cases, the interlinks can trigger cascades of failures propagating back and forth between the two layers and destroying the two networks in an abrupt way characterized by a discontinuous percolation transition at p = pc. In this way, multilayer networks can show a surprising fragility with respect to random damage.
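The back-and-forth cascade described above can be sketched with a minimal simulation of two one-to-one interdependent layers, pruned to their mutually connected giant component. The ring topologies, the one-to-one dependency and the parameter values are illustrative assumptions, not the models used in the cited works.

```python
import random

def components(nodes, edges):
    """Connected components restricted to the surviving node set."""
    adj = {v: [] for v in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].append(b)
            adj[b].append(a)
    seen, comps = set(), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u])
        comps.append(comp)
    return comps

def giant(nodes, edges):
    comps = components(nodes, edges)
    return max(comps, key=len) if comps else set()

def cascade(n, edges_a, edges_b, p, seed=0):
    """One-to-one interdependency: node i in layer A depends on node i in
    layer B. Remove a fraction 1-p of nodes at random, then iteratively
    prune to the mutually connected giant component of both layers."""
    rng = random.Random(seed)
    alive = {i for i in range(n) if rng.random() < p}
    while True:
        mutual = giant(alive, edges_a) & giant(alive, edges_b)
        if mutual == alive:
            return len(alive) / n
        alive = mutual  # damage propagates back and forth until a fixed point

# Two toy ring layers on the same 50 nodes (step 3 keeps layer B connected).
n = 50
ring_a = [(i, (i + 1) % n) for i in range(n)]
ring_b = [(i, (i + 3) % n) for i in range(n)]
```

Running `cascade(n, ring_a, ring_b, p)` for decreasing p shows how interdependency amplifies damage: nodes that survive the initial removal can still fail because their partner in the other layer fell out of its giant component.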
Although this scenario clearly explains the fragility of complex infrastructures such as power grids, financial networks and the interdependent Internet6, it seems at odds with the expected robustness of natural multilayer networks. So what are the properties that keep natural multilayer networks stable? Reis et al.1 solved this puzzle by investigating theoretical models of multilayer networks and by studying interconnected brain networks, which were found to validate their theory.
Whereas previous results had considered only the effect of random links between the layers, Reis et al.1 revealed the important role of correlations in these networks of networks. Correlations are known to be ubiquitous in complex networks9, but there can be different types in multilayer networks10,11 and this study succeeded in determining the kind of correlation responsible for improved robustness.
This means we can distinguish between trustworthy and dangerous links — those that either improve or reduce the robustness of the multilayer networks (Fig. 1). Whereas random links between interdependent networks represent dangerous liaisons, enhancing the fragility of the entire system, the trustworthy interlinks between the networks are not random, but correlated in a specific way.
The multilayer network formed by scale-free networks exhibits improved robustness indicated by a smaller value of the percolation threshold (pc) when two
conditions are fulfilled. First, the interlinks between the layers must be such that the highly connected nodes, or hubs, of the single layers are also the nodes with more interlinks. And second, there must be multilayer assortativity. This means that for two layers, A and B, the hubs in layer A (layer B) are more likely to be linked with the nodes in layer B (layer A) that are connected with other hubs in layer B (layer A).
Reis et al.1 analysed the percolation properties of two interacting scale-free networks, in which each node of one network could interact with several nodes in the other network. Two different dynamical rules determining the role of interlinks were considered: the conditional interaction and the redundant interaction. When the conditional interaction was taken into account, a node in layer A could not function, and was removed from the network, if all its connectivity with layer B was removed. A similar rule was adopted for nodes in layer B. When the redundant interaction was considered, a node in layer A was deemed functional if it belonged to the giant component in layer A, irrespective of the state of the linked nodes in the other layer, with a similar rule for nodes in layer B. The trustworthy interlinks were found to improve the robustness of multilayer networks with both conditional and redundant interactions.
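The two dynamical rules can be condensed into survival predicates. This is a deliberately minimal sketch of the definitions as described above, with the full percolation machinery omitted; the function names are my own.

```python
def functional_conditional(in_giant_own_layer, has_live_interlink):
    """Conditional interaction (sketch): a node survives only if it sits in
    its own layer's giant component AND keeps at least one live interlink."""
    return in_giant_own_layer and has_live_interlink

def functional_redundant(in_giant_own_layer, has_live_interlink):
    """Redundant interaction (sketch): membership of the own layer's giant
    component suffices; interlinks add redundancy, not a requirement."""
    return in_giant_own_layer
```

The contrast is visible in one case: a node in its layer's giant component that has lost all interlinks dies under the conditional rule but survives under the redundant rule.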
Figure 1 | Reis et al.1 have shown that correlations between intra- (red) and interlayer (blue dotted) interactions influence the robustness of multilayer networks. a, In the brain, each network layer has multilayer assortativity and the hubs in each layer are also the nodes with more interlinks, so liaisons between layers are trustworthy. b, In complex infrastructures (such as power grids and the Internet), if the interlinks are random, the resulting multilayer network is affected by large cascades of failures6, and liaisons can be considered dangerous.
The theory was validated by looking at the properties of two multilayer brain networks12 reconstructed from functional magnetic resonance imaging studies. The first multilayer brain network was extracted from resting-state data and the second was obtained from dual-task data. The multilayer brain networks reconstructed from these studies were organized into local brain networks formed by strongly correlated links, connected by weak interlinks between the layers.
The two dynamical rules considered in the study1 are both highly relevant to local brain networks. Tasks like processing different sensory features are independent of the interactions with other local networks (as in the redundant-interaction scenario), whereas processes like integrating perceptual information are possible only due to the coordinated activity of different local brain networks (as for conditional interactions). By analysing the brain as a multilayer network, Reis et al.1 found that
the interlinks are trustworthy, in agreement with their theoretical results and wider expectations related to the robustness of biological networks.
Reis and colleagues show that degree correlations can allow the realization of multilayer networks with improved robustness properties. This pivotal result solves the puzzle posed by the existence of multilayer structures in biological networks. Moreover, a new question arises from this result: how can we economically design robust multilayer infrastructure or financial networks with trustworthy links? ❐
Ginestra Bianconi is in the School of Mathematics at Queen Mary University of London, London E1 4NS, UK. e-mail: [email protected]
References
1. Reis, S. D. S. et al. Nature Phys. 10, 762–767 (2014).
2. Albert, R. & Barabási, A.-L. Rev. Mod. Phys. 74, 47–97 (2002).
3. Newman, M. E. J. SIAM Rev. 45, 167–256 (2003).
4. Boccaletti, S. et al. Phys. Rep. http://doi.org/vhg (2014).
5. Kivelä, M. et al. Preprint at http://arxiv.org/abs/1309.7233 (2013).
6. Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Nature 464, 1025–1028 (2010).
7. Gao, J., Buldyrev, S. V., Stanley, H. E. & Havlin, S. Nature Phys. 8, 40–48 (2012).
8. Bianconi, G. & Dorogovtsev, S. N. Phys. Rev. E 89, 062814 (2014).
9. Pastor-Satorras, R., Vázquez, A. & Vespignani, A. Phys. Rev. Lett. 87, 258701 (2001).
10. Min, B., Do Yi, S., Lee, K. M. & Goh, K.-I. Phys. Rev. E 89, 042811 (2014).
11. Nicosia, V. & Latora, V. Preprint at http://arxiv.org/abs/1403.1546 (2014).
12. Bullmore, E. & Sporns, O. Nature Rev. Neurosci. 10, 186–198 (2009).
Published online: 14 September 2014
LETTERS
PUBLISHED ONLINE: 14 SEPTEMBER 2014 | DOI: 10.1038/NPHYS3081

Avoiding catastrophic failure in correlated networks of networks

Saulo D. S. Reis1,2, Yanqing Hu1, Andrés Babino3, José S. Andrade Jr2, Santiago Canals4, Mariano Sigman3,5 and Hernán A. Makse1,2,3*

Networks in nature do not act in isolation, but instead exchange information and depend on one another to function properly1–3. Theory has shown that connecting random networks may very easily result in abrupt failures3–6. This finding reveals an intriguing paradox7,8: if natural systems organize in interconnected networks, how can they be so stable? Here we provide a solution to this conundrum, showing that the stability of a system of networks relies on the relation between the internal structure of a network and its pattern of connections to other networks. Specifically, we demonstrate that if interconnections are provided by network hubs, and the connections between networks are moderately convergent, the system of networks is stable and robust to failure. We test this theoretical prediction on two independent experiments of functional brain networks (in task and resting states), which show that brain networks are connected with a topology that maximizes stability according to the theory.
The theory of networks of networks relies largely on unstructured patterns of connectivity between networks3,4,6. When two stable networks are fully interconnected with one-to-one random connections, such that every node in a network depends on a randomly chosen node in the other network, small perturbations in one network are amplified by the interaction between networks3,6. This process leads to cascading failures, which are thought to underpin catastrophic outcomes in man-made infrastructures, such as blackouts in power grids3,4.
By contrast, many stable living systems, including the brain9 and cellular networks10, are organized in interconnected networks. Random networks are very efficient mathematical constructs to develop theory, but the majority of networks observed in nature are correlated11,12. Correlations, in turn, provide structure and are known to influence the dynamical and structural properties of interconnected networks, as has been recently shown13.
Most natural networks form hubs, increasing the relevance of certain nodes. This adds a degree of freedom to the system, in determining whether hubs broadcast information to other networks or, conversely, whether cross-network communication is governed by nodes with less influence in their own network.

We develop a full theory for systems of structured networks, identifying a structural communication protocol that ensures the system of networks is stable (less susceptible to catastrophic failure) and optimized for fast communication across the entire system. The theory establishes concrete predictions of a regime of correlated connectivity between the networks composing the system.

We test these predictions with two different systems of brain connectivity based on functional magnetic resonance imaging (fMRI) data. The brain organizes in a series of interacting networks9,14, presenting a paradigmatic case study for a theory of connected correlated networks. We show that for two independent experiments of functional networks in task and resting states in humans, the systems of brain networks organize optimally, as predicted by the theory.

Our results provide a plausible explanation for the observation that natural networks do not show frequent catastrophic failure as expected by theory. They offer a specific theoretical prediction of how structured networks should be interconnected to be stable. And they demonstrate, using two examples of functional brain connectivity, that the structure of cross-network connections coincides with theoretical predictions of stability for different functional architectures.
We present a theory based on a recursive set of equations to study the cascading failure and percolation process for two correlated interconnected networks. The theory is a generalization of an analytical approach for single networks previously developed15 to study cascading behaviour in interconnected correlated networks (analytic details in Supplementary Section I). Here we refer to the most important aspects of the theory and the corresponding set of predictions. The theory can be extended to n-interconnected networks by following ref. 16.

We consider two interconnected networks, each one having a power-law degree distribution characterized by exponent γ, P(k_in) ∼ k_in^(−γ), valid up to a cutoff kmax imposed by their finite size. Here k_in is the number of links of a node towards nodes in the same network. This power law implies that a few nodes will be vastly connected within the network (hubs) whereas the majority of nodes will be weakly connected to other nodes in the network.
The structure between interconnected networks can be characterized by two parameters: α and β (Fig. 1a). The parameter α, defined as

k_out ∼ k_in^α    (1)

where k_out is the degree of a node towards nodes in the other network, determines the likelihood that hubs of each network are also the principal nodes connecting both networks. For α > 0 the nodes in networks A and B which connect both networks will typically be hubs in A and B respectively (Fig. 1a, right panels). Instead, for α < 0 the two networks will be connected
1Levich Institute and Physics Department, City College of New York, New York, New York 10031, USA. 2Departamento de Física, Universidade Federal do Ceará, 60451-970 Fortaleza, Ceará, Brazil. 3Departamento de Física, FCEN-UBA, Ciudad Universitaria, (1428) Buenos Aires, Argentina. 4Instituto de Neurociencias, CSIC-UMH, Campus de San Juan, Avenida Ramón y Cajal, 03550 San Juan de Alicante, Spain. 5Universidad Torcuato Di Tella, Sáenz Valiente 1010, C1428BIJ Buenos Aires, Argentina. *e-mail: [email protected]
Figure 1 | Modelling degree–degree correlations between interconnected networks. a, Hubs (red nodes) and non-hubs (blue nodes) have k_out outgoing links (wiggly blue links) according to the parameter α. When α < 0, the outgoing links are more likely to be found attached to non-hub nodes. When α > 0, hubs are favoured over non-hub nodes. Nodes from different networks are connected according to β. When β > 0, nodes with similar degree prefer to connect between themselves, and when β < 0, nodes connect disassortatively. For simplicity we exemplify the outgoing links emanating from only a few nodes in network A according to (α, β). b, Conditional mode of failure: a node fails every time it becomes disconnected from the largest component of its own network, or loses all its outgoing links. All stable nodes have at least one outgoing link. We exemplify only one cascading path for simplicity. In reality, we investigate the cascading produced by removal of 1−p nodes from both networks. With the failure of the hub indicated in the figure (Stage 1), all its non-hub neighbours also fail because they become isolated from the giant component in A (Stage 2). In Stage 3 the upper hub from network B fails, owing to the conditional interaction, because it loses connectivity with network A even though it is still connected in B. With the failure of this second hub all its non-hub neighbours become isolated, leading to their failure (Stage 4). This leads to a further removal of the second outgoing link and the cascading failure propagates back to network A (Stage 5). Because no more nodes become isolated, the cascading failure stops with the mutual giant component shown in Stage 5. At this point we measure the fraction of nodes in the giant component of A and B. c, Redundant interaction: the failure of a node only leads to further failure if its removal isolates its neighbours in the same network. The failure of the hub (Stage 1) does not propagate the damage to the other network (Stages 2 and 3) and therefore there is no cascading in this interaction. We measure the fraction of nodes in the mutually connected giant component. We note that nodes can be stable even if they do not have outgoing links, as long as they belong to the mutually connected component. Thus, the mutually connected giant component may contain nodes which are not part of the single giant component of one of the networks, as shown in Stage 3, network A.
preferentially by nodes of low degree within each network (Fig. 1a,left panels).
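Correlations of the kind in equation (1) can be generated, for instance, by giving each node an expected outdegree proportional to k_in^α. The normalization and the toy degree sequence below are illustrative assumptions, not the authors' construction.

```python
def expected_outdegrees(kin, alpha, mean_kout=2.0):
    """Assign each node an expected number of interlinks kout ~ kin**alpha,
    normalized so the average outdegree equals mean_kout (an illustrative
    normalization, not taken from the paper)."""
    weights = [k ** alpha for k in kin]
    scale = mean_kout * len(kin) / sum(weights)
    return [w * scale for w in weights]

kin = [1, 1, 2, 2, 3, 5, 8, 20]  # toy indegree sequence with one clear hub
hub_coupled = expected_outdegrees(kin, alpha=1.0)    # hubs carry the interlinks
hub_shielded = expected_outdegrees(kin, alpha=-1.0)  # low-degree nodes carry them
```

With α = 1 the hub (k_in = 20) receives the largest share of interlinks; with α = −1 it receives the smallest, mirroring the two regimes sketched in Fig. 1a.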
The parameter β defines the indegree–indegree internetwork correlations as11,12:

k_in^nn ∼ k_in^β    (2)

where k_in^nn is the average indegree of the nearest neighbours of a node in the other network. It determines the convergence of connections between networks—that is, the likelihood that a link connecting networks A and B coincides in the same type of node. Intuitively, equations (1) and (2) can be seen as a compromise between redundancy and reach of connections between both networks. For β > 0 connections between networks are convergent (assortative, Fig. 1a, top panels), whereas for β < 0 they are divergent (disassortative, Fig. 1a, bottom panels). Uncorrelated networks have α = 0 and β = 0.
We analyse how the system of two correlated networks breaks down after random failure (random attack) of a fraction 1−p of nodes for different patterns of between-network connectivity characterized by (α, β). We adopt the conventional percolation criterion of stability and connectivity, measuring how the largest connected component breaks down following the attack3. In classic percolation of single networks, two nodes of a network are randomly linked with probability p (ref. 17). For low p, the network is fragmented into subextensive components. Percolation theory of random networks demonstrates that as p increases, there is a critical phase transition in which a single extensive cluster or giant component spans the system (the critical p is referred to as pc).
A robust notion of stability in a system of networks can be obtained by identifying the pc at which a cohesive mutually connected network breaks down into disjoint subcomponents under different forms of attack. Network topologies with low pc are robust, as this indicates that the majority of nodes ought to be removed to break it down. In contrast, high values of pc are indicative of a fragile network which breaks down by removing only a few nodes.
Here we analyse two qualitatively different manners in which the networks interact and propagate failure. In one mode (conditional interaction, Fig. 1b) a node in network B cannot function (and hence is removed) if it loses all connectivity with network A after the attack3. In the second condition (redundant interaction, Fig. 1c) a node in network B may survive even if it is completely decoupled from network A, if it remains attached to the largest component of network B (ref. 4). To understand why these two responses to
Figure 2 | Stability phase diagram of pc(α, β) for conditional and redundant failure. Percolation threshold pc(α, β) predicted by theory for coupled networks for generic values γ = 2.5 and kmax = 100 in conditional interaction (a) and redundant interaction (b). We use a bounded power law for closer comparison with experimental data. For a given system, the results are independent of a large enough cutoff. For the conditional interaction the system is more stable (low value of pc) when α < 0 as well as for α ≈ 1 and β > 0, and exhibits a maximum in pc (unstable) close to α ≈ 0.25 and β < 0. The redundant interaction is instead more unstable for α < 0 and becomes stable for α ≈ 1 and β > 0. Thus the best compromise between both modes of failure is for values located in the upper-right quadrant (α ≈ 1, β > 0).
Figure 3 | Analysis of interconnected functional brain networks. a, Clustering analysis to obtain the system of networks for resting-state data for a typical subject out of 12 scans analysed. Left plot shows the fraction of nodes in the largest network versus T. We identify one percolation-like transition with the jump at Tc = 0.854. Strong ingoing links define the networks and correspond to T > Tc (ref. 14). At Tc, the two largest networks, shown in the right panel in the network representation and in the inset in the brain, merge. Interconnecting weak outgoing links are defined for 0.781 ≤ T < Tc (plotted in grey). b, The same clustering analysis is done to identify the interconnected network in dual task14. We show a typical scan out of a total of 16 subjects. The strong ingoing links have T > Tc = 0.914, and weak outgoing links are defined for 0.864 ≤ T < Tc. c, Indegree k_in distribution for the resting-state experiment. d, Indegree k_in distribution for the dual-task experiment. The black lines in c and d are fits to the data in accordance with the methods presented in Supplementary Section IIA. The tail of the distributions follows P(k_in) ∼ k_in^γ, with γ = −2.85 and γ = −2.25 respectively. e, Outdegree k_out as a function of k_in for resting-state and dual-task experiments, according to equation (1). f, k_in^nn as a function of k_in for resting-state and dual-task experiments, according to equation (2). The black lines in e and f are linear fits to the data.
failure are pertinent in real networks, it helps to exemplify the interaction between power and data networks. If electricity can flow only through the cables of the power network, a node in the data network unplugged from the power system shuts off and stops functioning. This situation corresponds to two networks coupled in a conditional manner; a case treated in ref. 3 considering one-to-one random connections between networks. Consider instead the case of a printer or any peripheral which can be plugged to the main electricity network but can also receive power through a USB cable from the computer. A node may still function even if it is disconnected from the other network, if it remains connected to its local network. This corresponds to the redundant interaction as treated by ref. 4 in the unstructured case.
We first investigate the stability of two interacting scale-free networks for a value of γ set arbitrarily to 2.5 and kmax = 100, in a regime where each isolated network is stable and robust to attack18. The attack starts with the removal of a fraction 1−p of nodes chosen at random from both networks. This attack produces extra failures of, for instance, nodes in B. In the case of conditional interaction: if the nodes in B disconnect from the giant component of network A or disconnect from the giant component of B. In the case of redundant interaction: if the nodes in B disconnect from the giant component of network A and the giant component of network B. In the conditional mode, this process may lead to new failures in network A, producing a cascade if they lose connectivity in B. Other nodes in A may also fail as they get disconnected from the giant
Figure 4 | Stability phase diagram for brain networks. Percolation threshold pc(α, β) obtained from theory for two coupled networks with power-law exponents and cutoff given by the brain networks in resting state (a, γ = 2.85, kmax = 133) and dual task (b, γ = 2.25, kmax = 139). The left panels are for conditional interactions and the right panels for redundant interactions. The white circles represent the data points of the real brain networks. They indicate that the brain structure results from a compromise of optimal stability between both modes of failure.
component in A, and the cascading process iterates until converging to a final configuration. By definition, only the conditional mode may produce cascading effects, but not the redundant mode. The theoretical analysis of this process leads to a set of recursive equations (Supplementary Section I) that provides a stability phase diagram for the critical percolation threshold pc(α, β) under attack in redundant and conditional failures for a given (γ, kmax), as seen in Fig. 2.
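The cascade described above can be simulated directly. The sketch below is a deliberately simplified toy, not the model of the paper: it uses Erdős–Rényi layers rather than correlated scale-free networks, and one-to-one interlinks as in the case of ref. 3. It iterates the two failure rules to a fixed point and shows that only the conditional mode cascades to collapse well above the single-network threshold:

```python
import random
from collections import deque

def er_graph(n, c, rng):
    """Random graph with n nodes and mean degree c (adjacency sets)."""
    adj = [set() for _ in range(n)]
    m = int(c * n / 2)
    while m > 0:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v); adj[v].add(u); m -= 1
    return adj

def giant(nodes, adj):
    """Largest connected component restricted to the given node set."""
    nodes, seen, best = set(nodes), set(), set()
    for s in nodes:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v); comp.add(v); q.append(v)
        if len(comp) > len(best):
            best = comp
    return best

def cascade(adj_a, adj_b, p, mode, rng):
    """Attack both layers (node i of A is interlinked with node i of B),
    then iterate the failure rule until nothing more fails.
    Returns the final functional fraction of nodes."""
    n = len(adj_a)
    f = {i for i in range(n) if rng.random() < p}  # survivors of the attack
    while True:
        ga, gb = giant(f, adj_a), giant(f, adj_b)
        if mode == "conditional":   # need own giant AND partner's giant
            new_f = ga & gb
        else:                       # redundant: either giant suffices
            new_f = f & (ga | gb)
        if new_f == f:
            return len(f) / n
        f = new_f

rng = random.Random(42)
A, B = er_graph(2000, 4.0, rng), er_graph(2000, 4.0, rng)
print(cascade(A, B, 0.90, "conditional", rng))  # large mutual giant survives
print(cascade(A, B, 0.45, "conditional", rng))  # cascades towards collapse
print(cascade(A, B, 0.45, "redundant", rng))    # same attack, no cascade
```

For this special uncorrelated one-to-one case the conditional pair collapses near p ≈ 2.4554/⟨k⟩ (the threshold of ref. 3), whereas each isolated layer would survive down to p = 1/⟨k⟩.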
Figure 2 reveals that the relation between a network's internal structure and the pattern of connection between networks critically determines whether attacks lead to catastrophic cascading failures (high pc) or not (low pc). For conditional interactions, the system of networks is stable when α < 0 (indicated by low pc(α, β), left blue region in Fig. 2a) or for α ≳ 0.5 and β > 0 (light-blue top-right quadrant), and becomes particularly unstable for intermediate values of 0 < α < 0.5 and β < 0. This result shows that the system of networks is stable when the hubs are protected (α < 0) by being isolated from network–network connectivity or when, in contrast, the bulk of connectivity within and across networks is sustained exclusively by a very small set of hubs (large α, β). Intermediate configurations, where hubs interconnect with low-degree nodes, are highly unstable because hubs can be easily attacked via conditional interactions, and lead to catastrophic cascading after attack. Similar unstable configurations appear in the one-to-one random interconnectivity3.
When two networks interact in a redundant manner, the system of networks is less vulnerable to attacks (Fig. 2b). This expected result is manifested by the fact that, even for small values of p ∼ 0.1, the system of networks remains largely connected for any (α, β). The non-intuitive observation is that the relation between network internal structure and the pattern of connection between networks which optimizes stability differs from the conditional interaction (Fig. 2a). In fact, α < 0 leads to the least stable configurations (larger value of pc in Fig. 2b, red region), and the only region which maximizes stability corresponds to high values of α and β > 0 (blue region in Fig. 2b); that is, an interaction where connection between networks is highly redundant and carried only by a few hubs of each network. Thus, the parameters that maximize stability for both interactions lie in the region α ≈ 1 and β > 0.
Systems of brain networks present an ideal candidate to examine this theory, for the following two reasons. First, local brain networks organize according to a power-law degree distribution19,20. Second, some aspects of local function are independent of long-range global interactions with other networks (as in the redundant interaction), such as the processing of distinct sensory features, whereas other aspects of local connectivity can be shut down when connectivity to other networks is shut down (as in the conditional interaction), such as integrative perceptual processing21. Hence, the theory predicts that, to assure stability for both modes of dependencies, brain networks ought to be connected with positive and high values of α and positive values of β.
Next, we examine this hypothesis for two independent functional magnetic resonance imaging (fMRI) experiments: human resting-state data obtained from the NYU public repository22 and human dual-task data23 previously used to investigate brain network topology14,24,25 (see Methods and Supplementary Section II for
NATURE PHYSICS | VOL 10 | OCTOBER 2014 | www.nature.com/naturephysics 765
© 2014 Macmillan Publishers Limited. All rights reserved
LETTERS NATURE PHYSICS DOI: 10.1038/NPHYS3081
Table 1 | Parameters characterizing the studied human brain networks.
Data set               γ            α            β            kmax
Human resting state    2.85 ± 0.04  1.02 ± 0.02  0.66 ± 0.03  133
Human dual task        2.25 ± 0.07  0.92 ± 0.02  0.79 ± 0.04  139
details). We first identify functional networks (resting state, Fig. 3a, and dual task, Fig. 3b) made of nodes connected by strong links; that is, by highly correlated fMRI signals14. These networks are interconnected by weak links (low correlation in the fMRI signal) following the methods of ref. 14. The in-degree distribution of the system of networks follows a bounded power law (Fig. 3c,d and Table 1), and the exponents α and β show high positive values for both experiments (Fig. 3e,f and Table 1).
To examine whether these values are optimal for the specific (γ, kmax)-parameters of these networks, for each experiment we projected the measured values of α and β onto the theoretically constructed stability phase diagram quantified by pc(α, β) in the conditional and redundant modes (Fig. 4). Remarkably, the experimental values of α and β (white circles) lie within the relatively narrow region of parameter space that minimizes failure for conditional and redundant interactions. Overall, these results demonstrate that brain networks tested under distinct mental states share the topological features that confer stability to the system.
Our result hence provides a theoretical revision to the current view that systems of networks are highly unstable. We show that structured systems of networks are stable if the interconnections are provided by the hubs of the networks (α > 0.5) and the degree of convergence of the internetwork connections is moderate (β > 0). This stability holds for the conditional interaction3 and for the more robust topology of the redundant interaction4. The redundant condition is equivalent to stating that the system of networks merges into a single network (ingoing and outgoing links are treated as the same); hence the condition of optimality for this topology equates to saying that the size of the giant component formed by the connection of both networks is optimized. As a consequence, maximizing robustness for both conditions is equivalent to maximizing robustness in the more conventional conditional interaction, where links of one network are strictly necessary for the proper function of the other, together with a notion of information flow and storage based on the classic percolation-theory definition of the size of the maximal mutual component across both networks. In other words, these parameters define a set of interacting nodes which is maximally large in size and robust to failure.
The most natural metaphors for man-made systems of networks are electricity (wires) and the Internet or voice connectivity (data). A more direct analogue to this case in a living system such as the brain would be the interaction between anatomic, metabolic and vascular networks (wires) and their coupling to functional correlations (data)26. Here, instead, we adopted the theory of networks of networks to investigate the optimality of coupled functional brain modules. The consistency between experimental data and theoretical predictions even in this broader notion of coupled networks is suggestive of the possible broad scope of the theory, making it a candidate to study a wider range of interconnected networks27.
Methods
Experimental analysis. The interdependent functional brain networks are constructed from fMRI data following the methods of ref. 14. First, the blood oxygen level dependent (BOLD) signal from each brain voxel (node) is used to construct the functional network topology based on standard methods19,20 using the equal-time cross-correlation matrix, Cij, of the activity of pairs of voxels (Supplementary Section II).
The derivation of a binary graph from a continuous connectivity matrix relies on a threshold T, where the links between two nodes (voxels) i and j are occupied if T < Cij (refs 14,19), as in bond percolation. A natural and non-arbitrary choice of threshold can be derived from a clustering bond-percolation process. The size of the largest connected component of voxels as a function of T reveals clear percolation-like transitions14 in the two data sets, identified by the jumps in the size of the largest component in Fig. 3a,b. The emergent networks in resting state correspond to the medial prefrontal cortex, posterior cingulate and lateral temporoparietal regions, all of them part of the default mode network (DMN) typically seen in resting-state data22. In dual task, as expected for an experiment involving visual and auditory stimuli and bi-manual responses, the responsive regions include bilateral visual occipito-temporal cortices, bilateral auditory cortices, motor, premotor and cerebellar cortices, and a large-scale bilateral parieto-frontal structure.
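The thresholding construction can be reproduced on synthetic data. In the sketch below the two-module signal model and all parameter values are invented for illustration (they are not taken from the paper); sweeping T exposes the percolation-like jumps in the size of the largest component:

```python
import numpy as np
from collections import Counter

def largest_component_fraction(C, T):
    """Fraction of nodes in the largest component of the graph that keeps
    a link (i, j) whenever C[i, j] > T, computed with a union-find."""
    n = C.shape[0]
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] > T:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return max(Counter(find(i) for i in range(n)).values()) / n

# Synthetic stand-in for BOLD time series: two modules of 50 "voxels",
# each sharing a common driving signal plus private noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 500))
bold = np.concatenate([base[g] + 0.8 * rng.normal(size=(50, 500))
                       for g in (0, 1)])
C = np.corrcoef(bold)  # equal-time cross-correlation matrix C_ij

for T in (0.02, 0.5, 0.8):
    print(T, largest_component_fraction(C, T))
# Low T: one merged giant; intermediate T: the two modules split apart;
# high T: the network dissolves into isolated voxels.
```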
Scaling of correlations in the brain. We identify functional networks (see Fig. 3a,b, right panels) made of nodes connected by strong links (strong BOLD signal correlation Cij), which are interconnected by weak links (weak BOLD signal correlation)14,28. Statistical analysis based on standard maximum-likelihood and KS methods29 (Supplementary Section IIA) yields the values of the in-degree exponents of each functional brain network: γ = 2.85 ± 0.04 and kmax = 133 for resting state, and γ = 2.25 ± 0.07, kmax = 139 for dual task (Fig. 3c,d). The obtained exponents α show high positive values for both experiments: α = 1.02 ± 0.02 and 0.92 ± 0.02 for resting-state and dual-task data, respectively (Fig. 3e). The internetwork connections show positive exponents for both systems: β = 0.66 ± 0.03 and β = 0.79 ± 0.04 for resting state and dual task, respectively (Fig. 3f).
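The maximum-likelihood step can be sketched with the continuous-approximation estimator of Clauset, Shalizi and Newman (ref. 29) applied to the tail k ≥ kmin; the synthetic degree sample and the choice kmin = 5 below are illustrative assumptions, not the paper's data:

```python
import math, random

def fit_gamma(degrees, kmin):
    """MLE of the power-law exponent for the tail k >= kmin, using the
    half-integer shift of Clauset et al. (ref. 29) for discrete data:
    gamma_hat = 1 + n / sum_i ln(k_i / (kmin - 1/2))."""
    ks = [k for k in degrees if k >= kmin]
    return 1.0 + len(ks) / sum(math.log(k / (kmin - 0.5)) for k in ks)

# Sanity check on synthetic degrees drawn (by inverse transform) from a
# continuous power law with gamma = 2.5, rounded to integers:
rng = random.Random(1)
deg = [max(1, round((1 - rng.random()) ** (-1 / 1.5))) for _ in range(200_000)]
print(round(fit_gamma(deg, kmin=5), 2))  # close to the true gamma = 2.5
```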
Hence, in accordance with the predictions of the theory, these two interdependent brain networks derived from qualitatively distinct mental states (resting state and strong engagement in a task which actively coordinates visual, auditory and motor function) show consistently high values of α and positive values of β. Figure 4 shows the theoretical phase diagram pc(α, β) in conditional and redundant modes calculated for coupled networks with the experimental values γ = 2.25 and 2.85. Left panels show the prediction of pc(α, β) in the conditional mode of failure and right panels correspond to the redundant mode. The experimental (α, β) are shown as white circles lying in stable regions of the phase diagram (low pc). Interestingly, the convergence of internetwork connections, β, is slightly higher under task conditions, adding a new degree of freedom to the system of networks: the dynamic allocation of functional connections governed by context-dependent processes such as attention or learning for the case of brain networks. Further research is planned to investigate the neuronal mechanisms underlying the internetwork communication routines specified by β.
Received 22 March 2014; accepted 30 July 2014; published online 14 September 2014
References
1. Little, R. G. Controlling cascading failure: Understanding the vulnerabilities of interconnected infrastructures. J. Urban Technol. 9, 109–123 (2002).
2. Rosato, V. Modeling interdependent infrastructures using interacting dynamical models. Int. J. Crit. Infrastruct. 4, 63–79 (2008).
3. Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 464, 1025–1028 (2010).
4. Leicht, E. A. & D'Souza, R. M. Percolation on interacting networks. Preprint at http://arxiv.org/abs/0907.0894 (2009).
5. Brummitt, C. D., D'Souza, R. M. & Leicht, E. A. Suppressing cascades of load in interdependent networks. Proc. Natl Acad. Sci. USA 109, E680–E689 (2012).
6. Gao, J., Buldyrev, S. V., Stanley, H. E. & Havlin, S. Networks formed from interdependent networks. Nature Phys. 8, 40–48 (2012).
7. Bianconi, G., Dorogovtsev, S. N. & Mendes, J. F. F. Mutually connected component of network of networks. Preprint at http://arxiv.org/abs/1402.0215 (2014).
8. Bianconi, G. & Dorogovtsev, S. N. Multiple percolation transitions in a configuration model of network of networks. Phys. Rev. E 89, 062814 (2014).
9. Dosenbach, N. U. F. et al. Distinct brain networks for adaptive and stable task control in humans. Proc. Natl Acad. Sci. USA 104, 11073–11078 (2007).
10. Vidal, M., Cusick, M. E. & Barabási, A-L. Interactome networks and human disease. Cell 144, 986–998 (2011).
11. Pastor-Satorras, R., Vázquez, A. & Vespignani, A. Dynamical and correlation properties of the Internet. Phys. Rev. Lett. 87, 258701 (2001).
12. Gallos, L. K., Song, C. & Makse, H. A. Scaling of degree correlations and its influence on diffusion in scale-free networks. Phys. Rev. Lett. 100, 248701 (2008).
13. Radicchi, F. Driving interconnected networks to supercriticality. Phys. Rev. X 4, 021014 (2014).
14. Gallos, L. K., Makse, H. A. & Sigman, M. A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proc. Natl Acad. Sci. USA 109, 2825–2830 (2012).
15. Moore, C. & Newman, M. E. J. Exact solution of site and bond percolation on small-world networks. Phys. Rev. E 62, 7059–7064 (2000).
16. Gao, J., Buldyrev, S. V., Havlin, S. & Stanley, H. E. Robustness of a network formed by n interdependent networks with a one-to-one correspondence of dependent nodes. Phys. Rev. E 85, 066134 (2012).
17. Bollobás, B. Random Graphs (Academic, 1985).
18. Cohen, R., Ben-Avraham, D. & Havlin, S. Percolation critical exponents in scale-free networks. Phys. Rev. E 66, 036113 (2002).
19. Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. & Apkarian, A. V. Scale-free brain functional networks. Phys. Rev. Lett. 94, 018102 (2005).
20. Bullmore, E. & Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Rev. Neurosci. 10, 186–198 (2009).
21. Sigman, M. et al. Top-down reorganization of activity in the visual pathway after learning a shape identification task. Neuron 46, 823–835 (2005).
22. Shehzad, Z., Kelly, A. M. C. & Reiss, P. T. The resting brain: Unconstrained yet reliable. Cereb. Cortex 10, 2209–2229 (2009).
23. Sigman, M. & Dehaene, S. Brain mechanisms of serial and parallel processing during dual-task performance. J. Neurosci. 28, 7585–7598 (2008).
24. Russo, R., Herrmann, H. J. & de Arcangelis, L. Brain modularity controls the critical behavior of spontaneous activity. Sci. Rep. 4, 4312 (2014).
25. Gallos, L. K., Sigman, M. & Makse, H. A. The conundrum of functional brain networks: small-world efficiency or fractal modularity. Front. Physiol. 3, 123 (2012).
26. Honey, C. J. et al. Predicting human resting-state functional connectivity from structural connectivity. Proc. Natl Acad. Sci. USA 106, 2035–2040 (2009).
27. Schneider, C. M., Yazdani, N., Araújo, N. A. M., Havlin, S. & Herrmann, H. J. Towards designing robust coupled networks. Sci. Rep. 3, 1969 (2013).
28. Schneidman, E., Berry, M. J., Segev, R. & Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012 (2006).
29. Clauset, A., Shalizi, C. R. & Newman, M. E. J. Power-law distributions in empirical data. SIAM Rev. 51, 661–703 (2009).
Acknowledgements
This work was funded by NSF-PoLS PHY-1305476 and NIH-NIGMS 1R21GM107641. We thank N. A. M. Araújo, S. Havlin, L. Parra, L. Gallos, A. Salles and T. Bekinschtein for clarifying discussions. Additional financial support was provided by the Brazilian agencies CNPq, CAPES and FUNCAP, the Spanish MINECO BFU2012-39958, CONICET and the James McDonnell Foundation 21st Century Science Initiative in Understanding Human Cognition—Scholar Award. The Instituto de Neurociencias is a Severo Ochoa center of excellence.
Author contributions
All authors contributed equally to the work presented in this paper.
Additional information
Supplementary information is available in the online version of the paper. Reprints and permissions information is available online at www.nature.com/reprints. Correspondence and requests for materials should be addressed to H.A.M.
Competing financial interests
The authors declare no competing financial interests.
SUPPLEMENTARY INFORMATION
Avoiding catastrophic failure in correlated network of networks
Reis, Hu, Babino, Andrade, Canals, Sigman, Makse
I. THEORY OF CORRELATED NETWORK OF NETWORKS
We first illustrate the theory to calculate the percolation threshold for a single uncorrelated network following the standard calculations done by Moore and Newman [1]. We then
generalize this theory to the case of two correlated interconnected networks to calculate pc
under redundant and conditional modes of failures.
A. Calculation of percolation threshold for a single network [1]
The percolation problem of a single network can be solved by the calculation of the
probability X to reach the giant component by following a randomly chosen link [1]. First,
choose a link of a single network at random. After that, select one of its ends with equal
probability. The probability 1 − X is the probability that, by following this link in the chosen direction, we do not arrive at the giant component, but instead connect to a finite component.
Since the degree distribution of an end node of a chosen link is given by kP(k)/⟨k⟩, one can write down a recursive equation for X as:

    X = 1 - \sum_k \frac{k P(k)}{\langle k \rangle} (1 - X)^{k-1}.    (1)

The sum is for the probability that, by following the chosen link, we arrive at a node with degree k which is not attached to the giant component through its remaining k − 1 connections. We rewrite the previous equation as follows:

    X = 1 - \sum_k \frac{k P(k)}{\langle k \rangle} G(X),    (2)

where

    G(X) = (1 - X)^{k-1}.    (3)
Once the probability X is known, we can use it to write the probability 1 − S that a randomly chosen node does not belong to the giant component. Again, this is a sum of
probabilities: the probability that this node has no links attached to it, plus the probability that this node has one link and this link does not lead to the giant component, plus the probability that this node has two links and none of them leads to the giant component, and so on. In other words:

    1 - S = \sum_k P(k) (1 - X)^k.    (4)

Again, we can rewrite this equation as:

    S = 1 - \sum_k P(k) H(X),    (5)

where

    H(X) = (1 - X)^k.    (6)
Note that the probability S not only stands for the probability of choosing one node from the giant component at random, but also gives the fraction of nodes in the network occupied by the giant component. SI-Eq. (5) thus provides the probability of a node belonging to the giant component and is the main quantity to be calculated by the theory, from which the percolation threshold is obtained as the largest value pc such that S(pc) = 0.
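As a concrete check of SI-Eqs. (1)–(6), the sketch below iterates the recursion numerically for a single Erdős–Rényi network (Poisson degree distribution of mean c is my illustrative choice; the attack prefactor p is the generalization introduced later in SI-Eq. (7), and the truncation kmax = 80 is only a numerical convenience):

```python
import math

def percolation_S(p, c, kmax=80, iters=2000):
    """Fraction S of nodes in the giant component of an Erdos-Renyi
    network (Poisson degrees, mean c) after random removal of a
    fraction 1-p, by fixed-point iteration of SI-Eqs. (2)/(5) with
    the attack factor p of SI-Eq. (7)."""
    P = [math.exp(-c) * c**k / math.factorial(k) for k in range(kmax + 1)]
    X = 1.0
    for _ in range(iters):
        # X = p * [1 - sum_k (k P(k)/<k>) (1-X)^(k-1)]   (SI-Eqs. 1-3)
        X = p * (1 - sum(k * P[k] / c * (1 - X)**(k - 1)
                         for k in range(1, kmax + 1)))
    # S = p * [1 - sum_k P(k) (1-X)^k]                   (SI-Eqs. 4-6)
    return p * (1 - sum(P[k] * (1 - X)**k for k in range(kmax + 1)))

# Above pc = 1/c the giant component survives; below it S vanishes.
print(percolation_S(0.5, 4.0))   # well above pc = 0.25
print(percolation_S(0.2, 4.0))   # below pc: S -> 0
```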
B. Analytical approach for two interconnected networks with correlations
Now, we present a generalization of the above approach suited to both problems studied in our work, namely, the redundant and conditional interactions of two interconnected networks with generic correlations. We have also developed an analogous theoretical framework based on the generating-function approach used in Ref. [2]. However, we find that the generating-function approach [2] is more mathematically cumbersome if one wants to take into account the correlations between the networks to calculate the mutually connected giant component. Since the size of the giant component is the only quantity needed in this study, we find that the approach of Moore and Newman is more transparent and, furthermore, allows us to take into account both modes of failure in a single theory. Indeed, the whole theory can be cast into a few equations, while the generating-function approach is more involved.
We define two probabilities for network A (and their equivalents for network B). As we did for the case of a single network, we will take advantage of functions similar to G(X) and H(X). By doing this, the following recursive equations are general and can be applied to the redundant and to the conditional interaction cases, depending on the way the functions G(•) and H(•) are written for each case. Therefore, below we develop the theory for both modes of failure and later specialize to each interaction.
First, we define the probability X_A as the probability that, by following a randomly chosen link of network A, we reach a node from the largest connected component of network A. The second probability, Y_{k^A_in}, is the probability of choosing at random a node from network A with in-degree k^A_in connected with a node from the largest component of network B. Analogously, we define probabilities X_B and Y_{k^B_in} for network B.
Thus, if we initially remove a fraction 1 − p_A of nodes from network A chosen at random, and a fraction 1 − p_B of nodes from network B, we can write X_A and X_B in analogy with SI-Eq. (2) [we note that when network A and network B have the same number of nodes, p = (p_A + p_B)/2]:

    X_A = p_A \left[ 1 - \sum_{k^A_{in}, k^A_{out}} \frac{k^A_{in} P(k^A_{in}, k^A_{out})}{\langle k^A_{in} \rangle} G(X_A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) \right].    (7)
Here, the correlations between k^A_in and k^A_out from Eq. (1) in the main text are quantified by P(k^A_in, k^A_out), the joint probability distribution of in- and out-degrees of nodes from network A, from which Eq. (1) in the main text can be derived. The probability function G(X_A, Y_{k^A_in}, k^A_in, k^A_out) in SI-Eq. (7) is analogous to SI-Eq. (3). It stands for the probability that, by following a randomly chosen link from network A, we reach a node which has in-degree k^A_in and out-degree k^A_out and which is not part of the giant component of network A and/or is not connected with a node from the giant component of network B (here and in what follows, "and/or" refers to the nature of the two cases of study: the redundant and conditional interactions, respectively). To write down SI-Eq. (7) we use the joint in- and out-degree distribution of an end node of a randomly chosen in-link, k^A_in P(k^A_in, k^A_out)/⟨k^A_in⟩. Finally, the term in the square brackets stands for the probability X_A = X_A(p_A = 1) before removing the fraction 1 − p_A, which is the generalization of SI-Eq. (2). Thus, after the removal of a fraction 1 − p_A, the probability of following a randomly selected in-link to reach a node which belongs to the giant cluster of A is X_A(p_A = 1) times the probability p_A of this node being a survival node. In a similar fashion, we write the probability X_B, the joint degree distribution P(k^B_in, k^B_out) and the probability function G(X_B, Y_{k^B_in}, k^B_in, k^B_out) for network B:
    X_B = p_B \left[ 1 - \sum_{k^B_{in}, k^B_{out}} \frac{k^B_{in} P(k^B_{in}, k^B_{out})}{\langle k^B_{in} \rangle} G(X_B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) \right].    (8)
For the probability Y_{k^A_in} of choosing at random a node from network A with in-degree k^A_in connected through an out-link with a node from the giant component of B, we write down the following expression:

    Y_{k^A_{in}} = p_B \left[ 1 - \sum_{k^B_{in}} P(k^B_{in} \mid k^A_{in}) (1 - X_B)^{k^B_{in}} \right].    (9)
The sum inside the square brackets is the probability of choosing a node from network B which is not part of the giant component of B and is connected with a node from network A of in-degree k^A_in. Naturally, Y_{k^A_in} is one minus this probability, times the probability p_B of the B-node being a survival node after the removal of a fraction 1 − p_B of nodes from network B. To write down this equation, we use the conditional probability P(k^B_in | k^A_in) of a node from network B with in-degree k^B_in being connected with a node with in-degree k^A_in from network A, and the probability (1 − X_B) that, by following an in-link from B, we do not reach the giant component of B. The conditional probability P(k^B_in | k^A_in) quantifies the correlations expressed by Eq. (2) in the main text. A similar equation can be written for Y_{k^B_in}:
    Y_{k^B_{in}} = p_A \left[ 1 - \sum_{k^A_{in}} P(k^A_{in} \mid k^B_{in}) (1 - X_A)^{k^A_{in}} \right].    (10)
With X_A, X_B, Y_{k^A_in} and Y_{k^B_in} in hand, it is possible to compute the fraction of survival nodes in the giant component of network A, S_A, and in network B, S_B, through the relations analogous to SI-Eq. (5):

    S_A = p_A \left[ 1 - \sum_{k^A_{in}, k^A_{out}} P(k^A_{in}, k^A_{out}) H(X_A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) \right],    (11)

and

    S_B = p_B \left[ 1 - \sum_{k^B_{in}, k^B_{out}} P(k^B_{in}, k^B_{out}) H(X_B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) \right].    (12)
The probability function H(X_A, Y_{k^A_in}, k^A_in, k^A_out) generalizes SI-Eq. (6), and stands for the probability of randomly selecting a node from network A with in-degree k^A_in and out-degree k^A_out which is not in the giant component of A and/or is not connected with the giant component of B (again, and/or refers to the redundant and conditional modes of interaction, respectively).
Due to the different meanings that the probability function H(X_A, Y_{k^A_in}, k^A_in, k^A_out) may assume depending on the mode of interaction, in this general approach the nature of the quantities S_A and S_B differs conceptually from the quantity S presented in SI-Eq. (5) for a single network. See SI-Fig. 1 for more details. For the conditional mode, a node, or a set of nodes from network A, for example, will fail if (i) it loses connection with the largest component of network A, or if (ii) it loses connection with the largest component of network B. Thus H(X_A, Y_{k^A_in}, k^A_in, k^A_out) describes the probability of picking a node at random from network A that is not part of the largest component of A (by condition (i) this node will fail) or that is not connected to the largest cluster of network B (by condition (ii) this node will also fail). Thus, S_A (and its counterpart S_B for network B) is the fraction occupied by the largest component of survival nodes in network A. For a finite-size network, S_A = n_A/N_A, where n_A is the number of nodes in the largest component and N_A the number of nodes in network A. It is important to note that, due to condition (ii), this fraction is necessarily the same as the size of the giant connected component of network A. S_A may also be interpreted as the fraction of network A that is part of the mutually connected giant component S_AB, as in Ref. [2]. The same applies to network B. In other words, the number of nodes in the mutually connected giant component belonging to B is the same as the number of nodes in the giant connected component of B as calculated after the attack as if B were a single network.
For the redundant mode, since there is no cascading propagation of damage due to the failure of a neighbour, H(X_A, Y_{k^A_in}, k^A_in, k^A_out) describes the probability of picking a node at random, for example from network A, which is not connected to the largest component of its own network, network A, and is not connected to the largest component of network B via an outgoing link. Therefore, the quantity S_A provides the fraction of "active" nodes; in other words, the fraction of survival nodes that may be part of the largest component of network A, plus the fraction of nodes from network A that are disconnected from that largest component but have not failed because they are still connected to the largest component of network B via an outgoing link. Thus, the mutually connected giant component S_AB has a different structure in this mode compared to the conditional mode. This situation is illustrated in Fig. 1c in the main text and SI-Fig.
NATURE PHYSICS | www.nature.com/naturephysics
SUPPLEMENTARY INFORMATION DOI: 10.1038/NPHYS3081
© 2014 Macmillan Publishers Limited. All rights reserved.
X^B = p^B \left[ 1 - \sum_{k^B_{in}, k^B_{out}} \frac{k^B_{in} \, P(k^B_{in}, k^B_{out})}{\langle k^B_{in} \rangle} \, G(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) \right].   (8)
For the probability Y_{k^A_in} of choosing at random a node from network A with in-degree k^A_in that is connected through an out-link to a node from the giant component of B, we write down the following expression:

Y_{k^A_{in}} = p^B \left[ 1 - \sum_{k^B_{in}} P(k^B_{in} | k^A_{in}) \, (1 - X^B)^{k^B_{in}} \right].   (9)
The summation inside the square brackets is the probability of choosing a node from network B that is not part of the giant component of B and is connected with a node of in-degree k^A_in from network A. Naturally, Y_{k^A_in} is the complement of this probability times the probability p^B of the B-node being a surviving node after the removal of a fraction 1 - p^B of nodes from network B. To write down this equation, we use the conditional probability P(k^B_in | k^A_in) of a node from network B with in-degree k^B_in being connected with a node of in-degree k^A_in from network A, and the probability (1 - X^B) that, by following an in-link in B, we do not reach the giant component of B. The conditional probability P(k^B_in | k^A_in) quantifies the correlations expressed by Eq. (2) in the main text. A similar equation can be written for Y_{k^B_in}:

Y_{k^B_{in}} = p^A \left[ 1 - \sum_{k^A_{in}} P(k^A_{in} | k^B_{in}) \, (1 - X^A)^{k^A_{in}} \right].   (10)
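Numerically, SI-Eqs. (9) and (10) reduce to a matrix-vector product once the conditional probability P(k^B_in | k^A_in) is tabulated. The sketch below is ours, not the authors' code, and uses a small hypothetical 2x2 conditional-probability table:

```python
import numpy as np

def Y_from_conditional(P_cond, X_other, p_other, k_vals):
    """Evaluate SI-Eq. (9)/(10): Y_{k_in} = p [1 - sum_{k'} P(k'|k_in) (1 - X)^{k'}].

    P_cond[i, j] = P(k'_j | k_in_i), rows summing to 1.
    X_other: probability that a link of the other network leads to its giant component.
    p_other: fraction of surviving nodes in the other network.
    k_vals:  in-degree values k' indexing the columns of P_cond.
    """
    not_in_gc = (1.0 - X_other) ** np.asarray(k_vals)  # (1 - X)^{k'}
    return p_other * (1.0 - P_cond @ not_in_gc)

# Hypothetical example: in-degrees k' in {1, 3}, mild degree-degree correlation.
P_cond = np.array([[0.7, 0.3],
                   [0.2, 0.8]])
Y = Y_from_conditional(P_cond, X_other=0.5, p_other=1.0, k_vals=[1, 3])
```

Each entry of Y is the probability that an out-neighbour with the corresponding in-degree survives and belongs to the giant component of the other network.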
With X^A, X^B, Y_{k^A_in}, and Y_{k^B_in} at hand, it is possible to compute the fraction of surviving nodes in the giant component of network A, S^A, and in network B, S^B, through relations analogous to SI-Eq. (5):

S^A = p^A \left[ 1 - \sum_{k^A_{in}, k^A_{out}} P(k^A_{in}, k^A_{out}) \, H(X^A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) \right],   (11)

and

S^B = p^B \left[ 1 - \sum_{k^B_{in}, k^B_{out}} P(k^B_{in}, k^B_{out}) \, H(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) \right].   (12)
The probability function H(X^A, Y_{k^A_in}, k^A_in, k^A_out) generalizes SI-Eq. (6): it gives the probability of randomly selecting a node from network A with in-degree k^A_in and out-degree k^A_out that is not in the giant component of A and/or is not connected with the giant component of B (again, "and/or" refers to the redundant and conditional modes of interaction, respectively).

Because the probability function H(X^A, Y_{k^A_in}, k^A_in, k^A_out) assumes different meanings depending on the mode of interaction, in this general approach the quantities S^A and S^B differ conceptually from the quantity S given by SI-Eq. (5) for a single network; see SI-Fig. 1 for more details. In the conditional mode, a node (or a set of nodes) from network A, for example, will fail if (i) it loses connection with the largest component of network A, or (ii) it loses connection with the largest component of network B. Thus H(X^A, Y_{k^A_in}, k^A_in, k^A_out) is the probability of picking a node at random from network A that is not part of the largest component of A (by condition (i) this node fails) or that is not connected to the largest cluster of network B (by condition (ii) this node also fails). S^A (and its counterpart S^B for network B) is then the fraction occupied by the largest component of surviving nodes in network A. For a finite network, S^A = n^A/N^A, where n^A is the number of nodes in the largest component and N^A the number of nodes in network A. It is important to note that, due to condition (ii), this fraction is necessarily the same as the size of the giant connected component of network A. S^A may also be interpreted as the fraction of network A that belongs to the mutually connected giant component S^{AB}, as in Ref. [2]. The same applies to network B. In other words, the number of nodes of the mutually connected giant component belonging to B is the same as the number of nodes in the giant connected component of B calculated after the attack as if B were a single network.

For the redundant mode, since there is no cascading propagation of damage due to the failure of a neighbor, H(X^A, Y_{k^A_in}, k^A_in, k^A_out) is the probability of picking a node at random, for example from network A, that is not connected to the largest component of its own network A and is not connected to the largest component of network B via an outgoing link. Therefore, the quantity S^A gives the fraction of "active" nodes: the surviving nodes that belong to the largest component of network A, plus the nodes of network A that are disconnected from that largest component but have not failed because they are still connected to the largest component of network B via an outgoing link. Thus, the mutually connected giant component S^{AB} has a different structure in this mode compared to the conditional mode. This situation is illustrated in Fig. 1c of the main text and SI-Fig.
1. At the end of the attack process, there is a remaining node in network A which is not connected to the giant component of A calculated as if A were a single network. Such a node is still "on", since it is connected to B via an outgoing link; thus, the mutually connected giant component contains this node.

Furthermore, a node that has lost all its outgoing links will fail under the conditional interaction, even if it is still connected to its own giant component. In the redundant mode, however, a node without outgoing links may still function as long as it remains connected to the giant component of its own single network. For instance, many nodes are still functioning in Fig. 1c of the main text (redundant mode) even though they are not interconnected, whereas in the conditional interaction of Fig. 1b of the main text all stable nodes need to have outgoing links. That is, in the redundant mode the nodes can still receive power via either the same network or the other network, while in the conditional mode they need outgoing connectivity at all times. Taking these considerations into account, in the conditional mode the value of p_c is obtained from the behavior of the giant component of either of the networks, while in the redundant mode the value of p_c is obtained from the size of the mutually connected giant component. In this last case, however, it is statistically equivalent to obtain p_c from the giant component of one of the networks as well. In what follows, the calculations of the giant components are done by considering two networks of equal size N and damaging each network with a fraction 1 - p of nodes.
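On the simulation side, the fraction S^A = n^A/N^A requires the size of the largest component among surviving nodes. A minimal breadth-first-search sketch (the adjacency-list representation and function name are our own assumptions):

```python
from collections import deque

def largest_component_fraction(adj, alive):
    """Return n/N: size of the largest connected component among surviving
    nodes, divided by the total network size N.

    adj:   dict node -> iterable of neighbours (intra-network links).
    alive: set of nodes that survived the 1 - p removal.
    """
    N = len(adj)
    seen, best = set(), 0
    for s in adj:
        if s in alive and s not in seen:
            comp, q = 0, deque([s])
            seen.add(s)
            while q:  # breadth-first search over surviving nodes only
                u = q.popleft()
                comp += 1
                for v in adj[u]:
                    if v in alive and v not in seen:
                        seen.add(v)
                        q.append(v)
            best = max(best, comp)
    return best / N

# Toy 5-node chain with node 2 removed: surviving components {0,1} and {3,4}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
S = largest_component_fraction(adj, alive={0, 1, 3, 4})
```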
Next, we explicitly write the probability functions G and H for both the redundant and the conditional interactions on interacting networks after a random failure of 1 - p^A and 1 - p^B nodes. It is important to note that the probabilities G and H describe the probability of randomly choosing a node that is not part of the giant component of one network and/or is not connected to a node from the giant component of the adjacent network; in other words, a node picked at random that is not part of the giant component of the whole system. We treat the general case where both networks are attacked: p^A ≠ 1 and p^B ≠ 1. The theory can be applied to an attack on only one network by setting p^B = 1.
Redundant interaction: We consider the total fraction 1 - p of nodes removed from the two networks. If network A and network B have the same number of nodes, then p = (p^A + p^B)/2. For the redundant interaction, two events are important. The first is the probability that, by following a randomly chosen link of a network, we do not reach the giant component of that network; for network A, this probability can be written as (1 - X^A). The second is the probability of choosing at random a node from one network, say network A, with in-degree k^A_in that is not connected to a node from the giant component of network B; this probability can be written as (1 - Y_{k^A_in}). In the case of the redundant interaction (with none of the cascading of the conditional mode), these two probabilities are independent, since the lack of connectivity with network B does not imply failure of a node from network A. Thus, the probability function G(X^A, Y_{k^A_in}, k^A_in, k^A_out) that, by following a randomly selected link, we arrive at a node with in-degree k^A_in and out-degree k^A_out that is not part of the giant cluster of its own network and is not connected to a node from the giant cluster of the adjacent network can be written as:

G(X^A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) = (1 - X^A)^{k^A_{in} - 1} \, (1 - Y_{k^A_{in}})^{k^A_{out}}.   (13)
Similarly, the probability function H(X^A, Y_{k^A_in}, k^A_in, k^A_out) of picking at random a node with in-degree k^A_in and out-degree k^A_out from one network that is not part of the giant cluster of its own network and is not connected to a node from the giant cluster of the adjacent network is:

H(X^A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) = (1 - X^A)^{k^A_{in}} \, (1 - Y_{k^A_{in}})^{k^A_{out}}.   (14)
Again, we can write equivalent expressions for G(X^B, Y_{k^B_in}, k^B_in, k^B_out) and H(X^B, Y_{k^B_in}, k^B_in, k^B_out):

G(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) = (1 - X^B)^{k^B_{in} - 1} \, (1 - Y_{k^B_{in}})^{k^B_{out}},   (15)

and

H(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) = (1 - X^B)^{k^B_{in}} \, (1 - Y_{k^B_{in}})^{k^B_{out}}.   (16)
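Since SI-Eqs. (13)-(16) are products of two independent factors, they translate directly into code. A sketch with our own function names:

```python
def G_redundant(X, Y, k_in, k_out):
    """SI-Eq. (13)/(15): probability that a node reached by a random link is
    outside its own giant component AND not connected to the other one."""
    return (1.0 - X) ** (k_in - 1) * (1.0 - Y) ** k_out

def H_redundant(X, Y, k_in, k_out):
    """SI-Eq. (14)/(16): same two conditions for a uniformly picked node."""
    return (1.0 - X) ** k_in * (1.0 - Y) ** k_out
```

Only the simultaneous occurrence of both events disables a node in the redundant mode, hence the plain product of the two factors.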
Conditional interaction: This interaction leads to cascading processes. In the conditional interaction process, we are interested in the cascading effects on the coupled networks A and B due to an initial random failure of a portion of nodes in both networks, where p^A ≠ 1 and p^B ≠ 1. In the case of attacking network A only, the fraction p^B is set equal to one, such that a node from network B can fail only due to the conditional interaction. For the conditional interaction, G(X^A, Y_{k^A_in}, k^A_in, k^A_out) depends on the probability that, by following a link from network A, we do not arrive at a node with in-degree k^A_in connected to the giant component of its own network, (1 - X^A)^{k^A_in - 1}, and on the probability that none of the k^A_out outgoing links of a randomly chosen node of network A reaches the giant component of network B, (1 - Y_{k^A_in})^{k^A_out}. We also have the probability H(X^A, Y_{k^A_in}, k^A_in, k^A_out) of picking up a node from one network
which is not part of the giant component of its own network, or of picking up a node from one network that is not connected with a node from the giant component of the adjacent network; this probability also depends on (1 - X^A) and (1 - Y_{k^A_in}).

Different from the redundant mode, these probabilities, (1 - X^A) and (1 - Y_{k^A_in}), are not mutually exclusive in the conditional interaction. Thus:

G(X^A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) = (1 - X^A)^{k^A_{in} - 1} + (1 - Y_{k^A_{in}})^{k^A_{out}} - (1 - X^A)^{k^A_{in} - 1} (1 - Y_{k^A_{in}})^{k^A_{out}},   (17)
and

H(X^A, Y_{k^A_{in}}, k^A_{in}, k^A_{out}) = (1 - X^A)^{k^A_{in}} + (1 - Y_{k^A_{in}})^{k^A_{out}} - (1 - X^A)^{k^A_{in}} (1 - Y_{k^A_{in}})^{k^A_{out}}.   (18)
We can write the equivalent expressions for G(X^B, Y_{k^B_in}, k^B_in, k^B_out) and H(X^B, Y_{k^B_in}, k^B_in, k^B_out) as follows:

G(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) = (1 - X^B)^{k^B_{in} - 1} + (1 - Y_{k^B_{in}})^{k^B_{out}} - (1 - X^B)^{k^B_{in} - 1} (1 - Y_{k^B_{in}})^{k^B_{out}},   (19)

and

H(X^B, Y_{k^B_{in}}, k^B_{in}, k^B_{out}) = (1 - X^B)^{k^B_{in}} + (1 - Y_{k^B_{in}})^{k^B_{out}} - (1 - X^B)^{k^B_{in}} (1 - Y_{k^B_{in}})^{k^B_{out}}.   (20)
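The conditional-mode expressions, SI-Eqs. (17)-(20), follow the inclusion-exclusion pattern P(E1 or E2) = P(E1) + P(E2) - P(E1)P(E2) for the two independent, but not mutually exclusive, failure events. A sketch, again with our own function names:

```python
def G_conditional(X, Y, k_in, k_out):
    """SI-Eq. (17)/(19): union of the two failure events for a node reached
    by a random link (inclusion-exclusion with independent events)."""
    a = (1.0 - X) ** (k_in - 1)   # not in own giant component
    b = (1.0 - Y) ** k_out        # no out-link to the other giant component
    return a + b - a * b

def H_conditional(X, Y, k_in, k_out):
    """SI-Eq. (18)/(20): same union for a uniformly picked node."""
    a = (1.0 - X) ** k_in
    b = (1.0 - Y) ** k_out
    return a + b - a * b
```

Either event alone disables a node in the conditional mode, which is why the union rather than the intersection of the events appears here.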
With the set of SI-Eqs. (13)-(14) and SI-Eqs. (17)-(18), and their equivalents for network B, SI-Eqs. (15)-(16) and SI-Eqs. (19)-(20), it is possible to solve both problems, the redundant and the conditional interactions, on a system of two coupled networks interconnected through degree-degree-correlated outgoing links. The correlation between the coupled networks is represented by the joint in-/out-degree distribution P(k^A_in, k^A_out) and by the conditional probability P(k^B_in | k^A_in). In the following section, we present the network model used to generate a system of two networks interconnected with correlations described by power-law functions with exponents α and β. These networks are used in the calculation of the distributions P(k^A_in, k^A_out) and P(k^B_in | k^A_in) for each pair (α, β). The final result is the probability for a node to belong to the giant component of network A or B, as given by SI-Eqs. (11) and (12), as a function of the fraction of removed nodes 1 - p (with p^A = p^B = p), from which the percolation threshold p_c can be evaluated via S^A(p_c) = 0 and S^B(p_c) = 0 as a function of the three exponents defining the networks, γ, α and β, and the cutoff k_max of the degree distribution. We use two networks of equal size N = 1500 nodes each.
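As a sanity check of how SI-Eqs. (8)-(12) fit together, the coupled system can be solved by simple fixed-point iteration. The sketch below assumes two statistically identical networks (p^A = p^B = p), the conditional-mode G and H of SI-Eqs. (17)-(18), and our own array conventions; it is an illustration, not the authors' code:

```python
import numpy as np

def solve_S(p, P_joint, P_cond, k_in, k_out, tol=1e-10, max_iter=100000):
    """Fixed-point iteration of SI-Eqs. (8)-(10) for two statistically
    identical networks in the conditional mode; returns S of SI-Eq. (11).

    P_joint[i, j] = P(k_in[i], k_out[j]); P_cond[i, i'] = P(k_in[i'] | k_in[i]).
    """
    k_in = np.asarray(k_in, dtype=float)
    k_out = np.asarray(k_out, dtype=float)
    mean_k_in = (P_joint.sum(axis=1) * k_in).sum()
    X, Y = p, np.full(k_in.size, p)  # initial guesses
    for _ in range(max_iter):
        # SI-Eq. (9)/(10): Y_{k_in} = p [1 - sum_{k'} P(k'|k_in) (1 - X)^{k'}]
        Y_new = p * (1.0 - P_cond @ (1.0 - X) ** k_in)
        # Conditional-mode G, SI-Eq. (17), on the (k_in, k_out) grid
        a = ((1.0 - X) ** (k_in - 1.0))[:, None]
        b = (1.0 - Y_new[:, None]) ** k_out[None, :]
        G = a + b - a * b
        # SI-Eq. (8): X = p [1 - sum k_in P(k_in, k_out) G / <k_in>]
        X_new = p * (1.0 - (k_in[:, None] * P_joint * G).sum() / mean_k_in)
        done = abs(X_new - X) < tol and np.max(np.abs(Y_new - Y)) < tol
        X, Y = X_new, Y_new
        if done:
            break
    # SI-Eq. (11) with the conditional-mode H, SI-Eq. (18)
    a = ((1.0 - X) ** k_in)[:, None]
    b = (1.0 - Y[:, None]) ** k_out[None, :]
    H = a + b - a * b
    return p * (1.0 - (P_joint * H).sum())

# Degenerate test case: every node has k_in = 2, k_out = 1, no correlations.
P_joint = np.array([[1.0]])
P_cond = np.array([[1.0]])
S_full = solve_S(1.0, P_joint, P_cond, k_in=[2], k_out=[1])
S_low = solve_S(0.1, P_joint, P_cond, k_in=[2], k_out=[1])
```

Sweeping p over a grid and locating where the returned S vanishes gives a numerical estimate of p_c.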
C. Network model. Test of theory

In order to test the percolation theory using the above formalism, we need to generate a system of interacting networks with the prescribed set of exponents and degree cutoff. The first step of our network model is to generate two networks, A and B, with the same number N of nodes and with the desired in-degree distribution P(k_in) as defined by γ and the maximum degree k_max. To do this we use the standard "configuration model", which has been extensively used to generate different network topologies with arbitrary degree distribution [3]. The algorithm of the configuration model basically consists of assigning a randomly chosen degree sequence to the N nodes of the network in such a way that this sequence is distributed as P(k_in) ~ k_in^{-γ}, with 1 ≤ k_in ≤ k_max and P(k_in) = 0 for k_in > k_max. After that, we select pairs of nodes at random, both with k_in > 0, and connect them.
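A minimal sketch of the in-degree sampling step (our implementation; the function name and the use of NumPy are assumptions, not the authors' code):

```python
import numpy as np

def sample_in_degrees(N, gamma, k_max, rng):
    """Draw N in-degrees with P(k_in) proportional to k_in^(-gamma),
    restricted to 1 <= k_in <= k_max."""
    k = np.arange(1, k_max + 1)
    prob = k ** (-float(gamma))
    prob /= prob.sum()                      # normalize the truncated power law
    return rng.choice(k, size=N, p=prob)

rng = np.random.default_rng(0)
kin = sample_in_degrees(1500, gamma=2.5, k_max=100, rng=rng)
```

With γ = 2.5 most nodes have k_in = 1 and degrees above the cutoff never occur, as required by the truncation.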
The next step of the model is to connect networks A and B in such a way that their outgoing nodes have degree-degree correlations described by the parameters α and β, as defined in Eqs. (1) and (2) of the main text. In order to do this, we use an algorithm inspired by the configuration model. First, we assign a sequence of out-degrees k_out to the nodes of each network. This process is performed independently for each network by adding the same number of outgoing links. Each outgoing link is added individually to nodes chosen at random with a probability proportional to k_in^α. Thus, an out-degree sequence is assigned to the nodes of each network in such a way that k_out ~ k_in^α, according to Eq. (1) of the main text. This process results in a set of outgoing stubs attached to every node in networks A and B. The next step is to join these stubs in such a way that we satisfy the correlations given by Eq. (2) of the main text.
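The preferential assignment of outgoing stubs can be sketched as follows (our code; the realized k_out is proportional to k_in^α only on average):

```python
import numpy as np

def assign_out_stubs(kin, alpha, n_links, rng):
    """Distribute n_links outgoing stubs, each given to a node chosen with
    probability proportional to k_in^alpha, so that k_out ~ k_in^alpha
    on average, as in Eq. (1) of the main text."""
    w = kin.astype(float) ** alpha
    w /= w.sum()
    targets = rng.choice(len(kin), size=n_links, p=w)
    return np.bincount(targets, minlength=len(kin))  # k_out per node

rng = np.random.default_rng(1)
kin = np.array([1, 2, 4, 8, 16])
kout = assign_out_stubs(kin, alpha=1.0, n_links=2000, rng=rng)
```

For α = 1 the node with k_in = 16 collects roughly sixteen times as many stubs as the node with k_in = 1.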
The next step is to choose two nodes, one from each network, such that ⟨k^nn_in⟩ = A × k_in^β, and then connect them if they have available outgoing links. Here, we choose the factor A such that ⟨k^nn_in⟩ = 1 for k_in = 1 when β = 1, and ⟨k^nn_in⟩ = k_max for k_in = 1 when β = -1. Thus, the factor takes the value A = A(k_max, β) = k_max^{(1-β)/2}.
The algorithm works as follows: we randomly choose one node i from one network. After that, we choose another node j from the second network, with in-degree k^j_in drawn with a probability that follows a Poisson distribution P(k^j_in, λ) with mean value λ = ⟨k^nn_in⟩. We connect nodes i and j if they are not connected yet.
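One possible reading of this matching step, sketched in code (names and details are our assumptions; the SI does not specify how candidate nodes are enumerated):

```python
import numpy as np

def pick_partner(kin_B, free_stubs_B, mean_knn, rng):
    """One matching step: draw a target in-degree from a Poisson law with
    mean lambda = <k_in^nn>, then pick a node of the second network with
    that in-degree and a free outgoing stub; None if no candidate exists."""
    target = rng.poisson(mean_knn)
    candidates = np.flatnonzero((kin_B == target) & (free_stubs_B > 0))
    return int(rng.choice(candidates)) if candidates.size else None

# Toy second network whose node i has in-degree i; all stubs free.
rng = np.random.default_rng(42)
kin_B = np.arange(9)
free = np.ones(9, dtype=int)
picks = [pick_partner(kin_B, free, mean_knn=3.0, rng=rng) for _ in range(2000)]
mean_in_degree = np.mean([kin_B[j] for j in picks if j is not None])
```

Averaged over many draws, the in-degree of the chosen partner concentrates around λ, which is what enforces ⟨k^nn_in⟩ = A × k_in^β.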
It should be noted that Eqs. (1) and (2) in the main text may not be self-consistent for
all values of α and β. For instance, for very low values of β, e.g., β = -1, the degree correlations between coupled networks are not always self-consistent with the structural relation between k_in and k_out described by α. Since β measures the convergence of connections between networks, when β is negative hubs prefer to connect with low-degree nodes. To better understand these features, consider β = -1 and nodes with k_in = 1 and k_in = k_max. With this configuration, nodes with k_in = 1 are likely to be connected with nodes from the adjacent network with k_in = k_max. When α = 1, most of the links are attached to the highly active nodes, notably nodes with k_in = k_max, and are less likely to be attached to nodes with k_in = 1. In this regime, there are not enough low-degree nodes with outgoing links to be connected with the high-degree nodes, so the desired relation between k^nn_in and k_in cannot be realized. The other possible situation is when α is negative. In this regime, most of the outgoing links are attached to low-degree nodes; consequently, the few hubs of the network are unlikely to receive an outgoing link, and even when one does, a single hub does not have enough outgoing links to be connected to the stubs of all the low-degree nodes. For these reasons we limit our study to α > -1 and β > -0.5, where the relations are found to be self-consistent.
For every initial pair (α, β), we generate a network with the above algorithm and then recalculate the effective values of (α, β), which are then used to plot the phase diagram p_c(α, β) in Figs. 2 and 4 of the main text.
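The SI does not spell out how the effective exponents are recomputed; a natural choice, and the assumption made in this sketch, is a least-squares fit in log-log scale of the realized ⟨k_out⟩(k_in) against k_in:

```python
import numpy as np

def effective_alpha(kin, kout):
    """Estimate the effective alpha from the realized scaling k_out ~ k_in^alpha
    by fitting log <k_out>(k_in) against log k_in (our convention)."""
    ks = np.unique(kin[kout > 0])
    mean_kout = np.array([kout[kin == k].mean() for k in ks])
    slope, _ = np.polyfit(np.log(ks), np.log(mean_kout), 1)
    return slope

# Synthetic check: an exact power law k_out = k_in^2 should give alpha close to 2.
kin = np.array([1, 2, 4, 8, 16, 1, 2, 4, 8, 16])
kout = kin ** 2
alpha_eff = effective_alpha(kin, kout)
```

The effective β could be estimated the same way from the realized ⟨k^nn_in⟩ versus k_in.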
D. Calculation of the giant components and percolation threshold p_c(γ, α, β, k_max)
With the networks generated in the previous section we are able to compute the functions P(k^A_in, k^A_out) and P(k^B_in | k^A_in). We then apply the recursive equations derived previously to calculate the sizes of the giant components S^A and S^B from SI-Eqs. (11) and (12). We perform this calculation for different values of p for the cases under study and then extract the percolation threshold p_c at which the giant components S^A and S^B vanish in the conditional mode.
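Numerically, p_c can be read off such a sweep as the smallest p at which the giant component rises above a small tolerance; a sketch with a hypothetical tolerance eps:

```python
import numpy as np

def percolation_threshold(p_values, S_values, eps=1e-4):
    """Smallest p in an ascending sweep for which S(p) exceeds eps;
    eps is a small numerical tolerance standing in for S = 0."""
    p_values = np.asarray(p_values)
    S_values = np.asarray(S_values)
    above = np.flatnonzero(S_values > eps)
    return p_values[above[0]] if above.size else None

# Toy curve vanishing at p = 0.3 on an 11-point grid.
p_grid = np.linspace(0.0, 1.0, 11)
S_grid = np.maximum(0.0, p_grid - 0.3)
pc = percolation_threshold(p_grid, S_grid)
```

Finer p grids (or bisection around the jump) sharpen the estimate, which matters near first-order transitions where S changes discontinuously.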
SI-Figure 2 shows the predictions of the theory in the conditional mode for a network with γ = 2.5, α = 0.5, β = 0.5 and k_max = 100. We plot the relative sizes of the giant components in A and B, S^A and S^B, as predicted by SI-Eqs. (11) and (12). As one can see in SI-Fig. 2, there is a well-defined critical value at which the A giant component vanishes, which defines the percolation threshold p_c(γ, α, β, k_max) = 0.335 for these particular parameters.
SI-Figure 2 also presents the comparison between theoretical results and direct simulations. We test the theory by randomly attacking the generated correlated networks and numerically calculating the giant components versus the fraction of removed nodes 1 - p. The results show good agreement, corroborating the theory.

After testing the theory, a full analysis is done by spanning a large parameter space, varying the four parameters defining the theory: (γ, α, β, k_max). The results are plotted in Figs. 2 and 4 of the main text for the stated values of the parameters. Beyond the calculation of p_c(α, β), we also identify regimes of first-order phase transitions in the conditional interaction, found especially when p_c is high, beyond the standard second-order percolation transition; this result will be expanded in subsequent papers.
II. EXPERIMENTS: ANALYSIS OF INTERCONNECTED BRAIN NETWORKS
Our functional brain networks are based on functional magnetic resonance imaging (fMRI). The fMRI data consist of time series, known as blood oxygen level-dependent (BOLD) signals, from different brain regions. The brain regions are represented by voxels. In this work we use data sets gathered in two different and independent experiments. The first is the NYU public data set from resting-state human participants; the NYU CSC TestRetest resource is available at http://www.nitrc.org/projects/nyu_trt/. The second data set was gathered in a dual-task experiment on humans previously produced by our group [4] and recently analyzed in Ref. [5]. The brain networks analyzed here can be found at: http://lev.ccny.cuny.edu/~hmakse/soft_data.html. Both data sets were collected in healthy volunteers using 3.0 T MRI systems equipped with echo-planar imaging (EPI). The first study was approved by the institutional review boards of the New York University School of Medicine and New York University. The second study is part of a larger neuroimaging research program headed by Denis Le Bihan and approved by the Comité Consultatif pour la Protection des Personnes dans la Recherche Biomédicale, Hôpital de Bicêtre (Le Kremlin-Bicêtre, France).
Resting state experiments: A total of 12 right-handed participants were included (8 women and 4 men; mean age 27, range 21-49). During the scan, participants were instructed to rest with their eyes open while the word "Relax" was centrally projected in white against a black background. A total of 197 brain volumes were acquired. For fMRI, a gradient-echo (GE) EPI sequence was used with the following parameters: repetition time
1110 NATURE PHYSICS | www.nature.com/naturephysics
SUPPLEMENTARY INFORMATION DOI: 10.1038/NPHYS3081
© 2014 Macmillan Publishers Limited. All rights reserved.
all values of α and β. For instance, for very low values of β, e.g., β = −1, the degree correlations between coupled networks are not always self-consistent with the structural relations between k_in and k_out described by α. Since β measures the convergence of connections between networks, when β is negative hubs prefer to connect with low-degree nodes. To better understand these features, consider β = −1 and nodes with k_in = 1 and k_in = k_max. With this configuration, nodes with k_in = 1 are likely to be connected with nodes from the adjacent network with k_in = k_max. When α = 1, most of the links are attached to the highly active nodes, notably the nodes with k_in = k_max, and are less likely to be attached to nodes with k_in = 1. In this regime there are not enough low-degree nodes with outgoing links to be connected with the high-degree nodes, so the desired relation of k_in^nn versus k_in cannot be realized. The other possible situation is when α is negative. In this regime most of the outgoing links are attached to low-degree nodes; consequently, the few hubs of the network are unlikely to receive an outgoing link, and even when one does, a single hub does not have enough outgoing links to be connected to the stubs of the low-degree nodes. For these reasons we limit our study to α > −1 and β > −0.5, where the relations are found to be self-consistent.
For every initial pair (α, β), we generate a network with the above algorithm and then recalculate the effective values of (α, β), which are then used to plot the phase diagram p_c(α, β) in Figs. 2 and 4 of the main text.
D. Calculation of the giant components and percolation threshold p_c(γ, α, β, k_max)
With the networks generated in the previous section we are able to compute the functions P(k_in^A, k_out^A) and P(k_in^B | k_in^A). We then apply the recursive equations derived previously to calculate the sizes of the giant components, S_A and S_B, from SI-Eqs. (11) and (12). We perform this calculation for different values of p for each case of study and then extract the percolation threshold p_c at which the giant components S_A and S_B vanish in the conditional mode.
SI-Figure 2 shows the predictions of the theory in the conditional mode for a network with γ = 2.5, α = 0.5, β = 0.5 and k_max = 100. We plot the relative sizes of the giant components of A and B, S_A and S_B, as predicted by SI-Eqs. (11) and (12). As one can see in SI-Fig. 2, there is a well-defined critical value at which the A giant component vanishes, which defines the percolation threshold p_c(γ, α, β, k_max) = 0.335 for these particular parameters.
SI-Figure 2 also presents the comparison between theoretical results and direct simulations. We test the theory by randomly attacking the generated correlated networks and numerically calculating the giant components versus the fraction of removed nodes, 1 − p. The results show good agreement, corroborating the theory.
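As a rough illustration of this numerical test, the sketch below attacks a single synthetic network at random and measures the surviving giant component as a function of the occupation probability p. It is a minimal single-network stand-in, not the coupled-network recursions of SI-Eqs. (11) and (12); the toy graph, sizes and seeds are arbitrary choices of ours.

```python
import random

def giant_component_after_attack(edges, n, p, seed=0):
    """Keep each of the n nodes with probability p (a random attack removes
    the rest) and return the relative size of the largest surviving cluster,
    computed with a union-find pass over the surviving edges."""
    rng = random.Random(seed)
    kept = {v for v in range(n) if rng.random() < p}
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if u in kept and v in kept:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    sizes = {}
    for v in kept:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / n

# Toy network: a random graph with mean degree about 8.
n = 2000
g = random.Random(1)
edges = [(g.randrange(n), g.randrange(n)) for _ in range(4 * n)]
# Scanning p from 0 to 1 traces the curve S(p) used to locate p_c.
curve = [giant_component_after_attack(edges, n, p / 10) for p in range(11)]
```

Scanning p on a fine grid and locating where the giant component vanishes gives a numerical estimate of p_c for this toy graph.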
After testing the theory, a full analysis is performed spanning a large parameter space by changing the four parameters defining the theory: (γ, α, β, k_max). The results are plotted in Figs. 2 and 4 of the main text for the stated values of the parameters. Beyond the calculation of p_c(α, β), we also identify regimes of first-order phase transitions in the conditional interaction, found especially when p_c is high, beyond the standard second-order percolation transition; this result will be expanded upon in subsequent papers.
II. EXPERIMENTS: ANALYSIS OF INTERCONNECTED BRAIN NETWORKS
Our functional brain networks are based on functional magnetic resonance imaging (fMRI). The fMRI data consist of time series, known as blood-oxygen-level-dependent (BOLD) signals, from different brain regions. The brain regions are represented by voxels. In this work we use data sets gathered in two different and independent experiments. The first is the NYU public data set from resting-state human participants. The NYU CSC TestRetest resource is available at http://www.nitrc.org/projects/nyu_trt/. The second data set was gathered in a dual-task experiment on humans previously produced by our group [4] and recently analyzed in Ref. [5]. The brain networks analyzed here can be found at http://lev.ccny.cuny.edu/~hmakse/soft_data.html. Both datasets were collected from healthy volunteers using 3.0 T MRI systems equipped with echo-planar imaging (EPI). The first study was approved by the institutional review boards of the New York University School of Medicine and New York University. The second study is part of a larger neuroimaging research program headed by Denis Le Bihan and approved by the Comité Consultatif pour la Protection des Personnes dans la Recherche Biomédicale, Hôpital de Bicêtre (Le Kremlin-Bicêtre, France).
Resting state experiments: A total of 12 right-handed participants were included (8 women and 4 men, mean age 27, range 21 to 49). During the scan, participants were instructed to rest with their eyes open while the word Relax was centrally projected in white against a black background. A total of 197 brain volumes were acquired. For fMRI, a gradient-echo (GE) EPI sequence was used with the following parameters: repetition time (TR) = 2.0 s; echo time (TE) = 25 ms; flip angle = 90°; field of view (FOV) = 192 × 192 mm; matrix = 64 × 64; 39 slices, 3 mm thick. For spatial normalization and localization, a high-resolution T1-weighted anatomical image was also acquired using a magnetization-prepared gradient-echo sequence (MP-RAGE: TR = 2500 ms; TE = 4.35 ms; inversion time (TI) = 900 ms; flip angle = 8°; FOV = 256 mm; 176 slices). Data were processed using both AFNI (version AFNI 2011 12 21 1014, http://afni.nimh.nih.gov/afni) and FSL (version 5.0, www.fmrib.ox.ac.uk), with the help of the www.nitrc.org/projects/fcon_1000 batch scripts for preprocessing. The preprocessing consisted of: motion correction (AFNI) using Fourier interpolation; spatial smoothing (FSL) with a Gaussian kernel (FWHM = 6 mm); mean intensity normalization (FSL); FFT band-pass filtering (AFNI) with 0.01 Hz and 0.08 Hz bounds; removal of linear and quadratic trends; transformation into MNI152 space (FSL) with a 12-degree-of-freedom affine transformation (AFNI); and extraction of global, white-matter and cerebrospinal-fluid nuisance signals.
Dual task experiments: Sixteen participants (7 women and 9 men, mean age 23, range 20 to 28) were asked to perform two consecutive tasks with the instruction of providing fast and accurate responses to each of them. The first task was a visual task of comparing a given number (target T1) to a fixed reference; the second was an auditory task of judging the pitch of an auditory tone (target T2) [4]. The two stimuli were presented with a stimulus onset asynchrony (SOA) of 0, 300, 900 or 1200 ms. Subjects had to respond with a key press, using the right and left hands, indicating whether the number flashed on the screen or the tone was above or below a target number or frequency, respectively. Full details and preliminary statistical analysis of this experiment have been reported elsewhere [4, 5].
Subjects performed a total of 160 trials (40 for each SOA value) with a 12 s inter-trial interval, in five blocks of 384 s with a resting time of ∼5 min between blocks. In our analysis we use all scans, that is, scans coming from all SOAs. Since each of the 16 subjects performed four SOA experiments, we have a total of 64 brain scans. The experiments were performed on a 3 T fMRI system (Bruker). Functional images were obtained with a T2*-weighted gradient echo-planar imaging sequence [repetition time (TR) = 1.5 s; echo time = 40 ms; flip angle = 90°; field of view (FOV) = 192 × 256 mm; matrix = 64 × 64]. The whole brain was acquired in 24 slices with a slice thickness of 5 mm. Volumes were realigned using the first volume as reference, corrected for slice-acquisition timing differences, normalized to the standard template of the Montreal Neurological Institute (MNI) using a 12-degree-of-freedom affine transformation, and spatially smoothed (FWHM = 6 mm). High-resolution images (three-dimensional GE inversion-recovery sequence; TI = 700 ms; FOV = 192 × 256 × 256 mm; matrix = 256 × 128 × 256; slice thickness = 1 mm) were also acquired. We computed the phase and amplitude of the hemodynamic response of each trial as explained in M. Sigman, A. Jobert, S. Dehaene, Parsing a sequence of brain activations at psychological times using fMRI, NeuroImage 35, 655-668 (2007). We note that the present data contain a standard preprocessing spatial smoothing with a Gaussian kernel (FWHM = 6 mm), which was not applied in Ref. [5]. Such smoothing produces smaller percolation thresholds compared with those obtained in Ref. [5].
Construction of brain networks: In order to build brain networks in both experiments, we follow standard procedures in the literature [5-7]. We first compute the correlations C_ij between the BOLD signals of every pair of voxels i and j from the fMRI images. Each element of the resulting matrix takes a value in the range −1 ≤ C_ij ≤ 1. If one considers each voxel to be a node of the brain network in question, it is possible to assume that the correlations C_ij are proportional to the probability that nodes i and j are functionally connected. Therefore, one can define a threshold T such that if C_ij > T the nodes i and j are connected. We add the links progressively, from higher to lower values of T. This growing process can be compared to a bond-percolation process. As we lower the value of T, different clusters of connected nodes appear, and as the threshold T approaches a critical value T_c, multiple components merge, forming a giant component.
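The growth of clusters under a decreasing threshold can be sketched as follows. This is a hedged toy example on a synthetic correlation matrix (two blocks of "voxels" driven by two independent latent signals), not the fMRI data; the block sizes, noise level and thresholds are illustrative choices of ours.

```python
import numpy as np

def largest_cluster_vs_threshold(C, thresholds):
    """For each threshold T, connect every pair (i, j) with C[i, j] > T and
    return the size of the largest connected cluster, via union-find."""
    n = C.shape[0]
    out = []
    for T in thresholds:
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for i in range(n):
            for j in range(i + 1, n):
                if C[i, j] > T:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        parent[ri] = rj
        out.append(int(np.bincount([find(v) for v in range(n)]).max()))
    return out

# Two internally correlated blocks of 5 "voxels" that merge only at low T.
rng = np.random.default_rng(0)
latents = rng.standard_normal((2, 200))
signals = np.vstack([latents[0] + 0.3 * rng.standard_normal((5, 200)),
                     latents[1] + 0.3 * rng.standard_normal((5, 200))])
C = np.corrcoef(signals)
sizes = largest_cluster_vs_threshold(C, [0.97, 0.7, 0.3, -0.2])
```

As T is lowered, the largest cluster grows from near-isolated voxels to a single internally connected block and finally, once weak cross-block links enter, to the merged network, mimicking the sharp jumps described in the text.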
In random networks, the size of the largest component increases rapidly and continuously through a critical phase transition at T_c, in which a single incipient cluster dominates and spans the system [8]. Instead, since the connections in brain networks are highly correlated rather than random, the size of the largest component increases progressively through a series of sharp jumps. These jumps have been previously reported in Ref. [5]. This process reveals a multiplicity of percolation transitions: percolating networks successively merge at each discrete transition as T decreases further. We observe this structure in the two datasets investigated in this study: the human resting state in Fig. 3a of the main text and the human dual task in Fig. 3b of the main text.
For each dataset we identify the critical value of T, namely T_c, at which the two largest components merge, as one can see in Fig. 3 of the main text. While the anatomical
projection of the largest component varied across experiments, this merging pattern at T_c was clearly observed in each participant of the two experiments analyzed here; two examples are shown in Figs. 3a-b of the main text. The transition is confirmed by the measurement of the second-largest cluster, which shows a peak at T_c (see SI-Fig. 3).
For T values larger than T_c, the two largest brain clusters are disconnected, forming two independent networks. Each network is internally connected by a set of strong links, which correspond to k_in [5] in the notation of systems of networks. By lowering T to values smaller than T_c, the two networks become connected by a set of weak links, which correspond to k_out [5], i.e. the set of links connecting the two networks.
Our analysis of the structural organization of the weak links connecting different clusters is performed with T_0 < T < T_c. Here, T_0 is chosen in such a way that the average out-degree ⟨k_out⟩ of the nodes in the two largest clusters is ⟨k_out⟩ = 1. For lower values of T_0, for which ⟨k_out⟩ = 2 and ⟨k_out⟩ = 5, we found no relevant difference with respect to the studied case of ⟨k_out⟩ = 1.
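The choice of T_0 can be sketched as a scan over the cross-cluster correlation values, lowering the threshold until the average out-degree reaches the target. This is our own minimal illustration; it assumes each weak link contributes one outgoing stub to each of its two endpoints, a counting convention not spelled out in the text, and the helper name and example values are hypothetical.

```python
import numpy as np

def choose_T0(cross_correlations, n_nodes, target_mean_kout=1.0):
    """Return the threshold T0 at which, adding cross-cluster links from the
    strongest correlation downwards, the mean out-degree over the n_nodes
    voxels of the two clusters first reaches target_mean_kout.
    Assumption: each link adds one out-stub at both of its endpoints."""
    vals = np.sort(np.asarray(cross_correlations, dtype=float))[::-1]
    for m, v in enumerate(vals, start=1):
        if 2.0 * m / n_nodes >= target_mean_kout:
            return float(v)
    return float(vals[-1])  # even adding all links does not reach the target

# Hypothetical correlation values between voxels of the two clusters:
T0 = choose_T0([0.9, 0.8, 0.7, 0.6, 0.5], n_nodes=4)
```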
As done in previous network experiments based on the dual-task data [5], we create a mask in which we keep the voxels that were activated in more than 75% of the cases, i.e., in at least 48 of the 64 total cases considered. The resulting number of activated voxels in the whole brain is N ≈ 60,000, varying slightly for different individuals and stimuli. The 'activated or functional map' exhibits phases consistently falling within the expected response latency for a task-induced activation [4]. As expected for an experiment involving visual and auditory stimuli and bi-manual responses, the responsive regions included bilateral visual occipito-temporal cortices, bilateral auditory cortices, motor, premotor and cerebellar cortices, and a large-scale bilateral parieto-frontal structure. In the present analysis we follow [5] and do not explore the differences in networks between different SOA conditions. Rather, we consider them as independent equivalent experiments, generating a total of 64 different scans, one for each combination of temporal gap and subject.
The following emergent clusters are seen in the resting state: medial prefrontal cortex, posterior cingulate, and lateral temporoparietal regions, all of them part of the default mode network (DMN) typically seen in resting-state data and specifically found in our NYU dataset [9].
A. Computation of parameters γ, α, β, and k_max
Once T_c is determined, we are able to compute the degree distribution of the brain networks. For a given brain scan we search for all connected components of strong links with C_ij > T_c, where T_c is located at the first jump of the largest connected component, as seen in Fig. 3 of the main text. We then calculate P(k_in) using all brain networks for a given experiment; the results are plotted in Fig. 3 of the main text. We consider all nodes with k_in ≥ 1 at T_c from all the connected clusters. As one can see in Fig. 3b of the main text, for all data sets we found degree distributions that can be described by power laws, P(k_in) ∼ k_in^−γ, with a given cutoff k_max. For the resting state we found γ = 2.85 ± 0.04 and k_max = 133, while for the dual task we found γ = 2.25 ± 0.07 and k_max = 139 (see Table I in the main text). We use a statistical test based on maximum-likelihood methods and bootstrap analysis to determine the degree distribution of the networks. We follow the maximum-likelihood estimator method for discrete variables of Clauset, Shalizi and Newman, SIAM Review 51, 661 (2009), which was already used in our previous analysis of the dual-task data [5].
We fit the degree distribution assuming a power law within a given interval. For this, we use the generalized power-law form
P(k; k_min, k_max) = k^−γ / [ζ(γ, k_min) − ζ(γ, k_max)],   (21)
where k_min and k_max are the boundaries of the fitting interval and the Hurwitz ζ function is given by ζ(γ, α) = Σ_i (i + α)^−γ. We set k_min = 1.
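Equation (21) can be evaluated directly: the difference of Hurwitz ζ functions is just the finite normalizing sum over the fitting interval. A minimal sketch; we take the support as k = k_min, ..., k_max − 1 so that the normalization is exact, and whether k_max itself is included is a convention we assume.

```python
def powerlaw_pmf(k, gamma, kmin, kmax):
    """Eq. (21): P(k) = k**(-gamma) / (zeta(gamma, kmin) - zeta(gamma, kmax)).
    Since zeta(g, a) = sum_{i>=0} (i + a)**(-g), the denominator equals the
    finite sum of j**(-gamma) for j = kmin .. kmax - 1."""
    norm = sum(j ** -gamma for j in range(kmin, kmax))
    return k ** -gamma / norm

# The distribution is normalized on the fitting interval:
total = sum(powerlaw_pmf(k, 2.5, 1, 100) for k in range(1, 100))
```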
We calculate the slopes in successive intervals by progressively increasing k_max. For each interval we calculate the maximum-likelihood estimator through the numerical solution of
γ = argmax_γ { −γ Σ_{i=1}^M ln k_i − M ln [ζ(γ, k_min) − ζ(γ, k_max)] },   (22)
where the k_i are the degrees that fall within the fitting interval and M is the total number of nodes with degrees in this interval. The optimum interval was determined through the Kolmogorov-Smirnov (KS) test.
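The maximization in Eq. (22) can be sketched as a coarse grid search; the authors solve it numerically, and the grid, its range, and the synthetic sanity check below are our own illustrative choices.

```python
import math
import random

def fit_gamma(degrees, kmin, kmax, grid=None):
    """Return the gamma on the grid maximizing the log-likelihood of Eq. (22):
    L(g) = -g * sum(ln k_i) - M * ln(zeta(g, kmin) - zeta(g, kmax)),
    with the zeta difference evaluated as a finite sum over kmin .. kmax-1."""
    if grid is None:
        grid = [1.5 + 0.01 * i for i in range(201)]  # gamma in [1.5, 3.5]
    ks = [k for k in degrees if kmin <= k < kmax]
    log_sum, M = sum(math.log(k) for k in ks), len(ks)

    def loglike(g):
        norm = sum(j ** -g for j in range(kmin, kmax))
        return -g * log_sum - M * math.log(norm)

    return max(grid, key=loglike)

# Sanity check on synthetic power-law degrees with exponent 2.5:
rng = random.Random(0)
pop = list(range(1, 100))
weights = [k ** -2.5 for k in pop]
sample = rng.choices(pop, weights=weights, k=5000)
gamma_hat = fit_gamma(sample, kmin=1, kmax=100)
```

On synthetic data of this size the grid search recovers the generating exponent to within a few hundredths.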
For the goodness-of-fit test, we use the KS test, generating 10,000 synthetic random distributions following the best-fit power law. An analogous analysis is performed to test whether an exponential distribution could describe the data. We use the KS statistic to determine the optimum fitting intervals and also the goodness of fit. In all the cases where the power law was
accepted, we ruled out the possibility of an exponential distribution; see [5].
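The KS machinery itself can be sketched with a small helper operating on a discrete pmf over a finite support; this is our own minimal illustration, and the bootstrap step, repeating the distance computation for synthetic samples drawn from the fitted model, follows the same pattern.

```python
def ks_distance(sample, pmf, support):
    """Kolmogorov-Smirnov distance between the empirical CDF of `sample`
    and the model CDF obtained by accumulating `pmf` over `support`."""
    n = len(sample)
    counts = {k: 0 for k in support}
    for x in sample:
        counts[x] += 1
    d = emp = model = 0.0
    for k in support:
        emp += counts[k] / n
        model += pmf(k)
        d = max(d, abs(emp - model))
    return d

# A sample matching the model exactly has distance 0; a degenerate one does not.
d_good = ks_distance([1, 1, 2, 2], lambda k: 0.5, [1, 2])
d_bad = ks_distance([1, 1, 1, 1], lambda k: 0.5, [1, 2])
```

The goodness-of-fit p-value is then the fraction of the 10,000 synthetic samples whose KS distance exceeds the empirical one.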
In order to compute the correlations among k_in, k_out and k_in^nn, we consider the following statistics for the weak links and for the degrees of the external nearest neighbors of an outgoing node. This correlation is gathered from the calculation of the average in-degree ⟨k_in^nn⟩ of the external neighbors of a node with in-degree k_in. The strong links are those added to the network for T > T_c. The weak links are those added to the network for T_0 < T < T_c, until the average out-degree reaches ⟨k_out⟩ = 1. For the statistical determination of the scaling properties of the weak links, we consider that they may connect two nodes in different networks, or even nodes in the same component. To calculate the statistical scaling properties of the weak links, we consider the weak out-degree k_out of a node to be the number of all links added for T_0 < T < T_c.
Figure 3f of the main text shows that the correlation between ⟨k_in^nn⟩ and k_in is consistent with Eq. (2) of the main text. For the resting-state experiments (Fig. 3f of the main text) there is a positive correlation between the k_in of outgoing nodes placed in different functional networks. For the dual-task subjects (Fig. 3f of the main text) the correlation is also positive.
Moreover, when analyzing the relation between k_in and k_out for the same outgoing nodes, we find that it is described by the power-law correlations presented in Fig. 3e of the main text. Figures 3e-f of the main text depict the power-law fits obtained with the ordinary-least-squares method within a given interval of degrees. We assess the goodness of the fit in each interval via the coefficient of determination R². We accept fits with R² ≳ 0.9. The measured exponents are presented in Table I of the main text.
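The OLS power-law fit with its R² acceptance criterion can be sketched as a least-squares fit in log-log space; a minimal version (helper name and example data are ours):

```python
import math

def loglog_ols(x, y):
    """Fit y ~ C * x**e by ordinary least squares in log-log space and
    return (e, R2), where R2 is the coefficient of determination used
    as the acceptance criterion (R2 >~ 0.9)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(lx, ly))
    ss_tot = sum((b - my) ** 2 for b in ly)
    return slope, 1.0 - ss_res / ss_tot

# Exact power-law data y = x**2 is recovered with R2 = 1:
exponent, r2 = loglog_ols([1, 2, 4, 8, 16], [1, 4, 16, 64, 256])
```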
Figures 4a and b of the main text show the results found when we apply the theory presented in Section I of this Supplementary Information to two coupled networks with degree exponents γ = 2.85 and 2.25, and cutoffs k_max = 133 and 139, respectively, as given by the values for the human resting state and dual task. For γ = 2.25 and γ = 2.85, the values associated with the data gathered from humans, the results are similar to those presented in Fig. 2 of the main text for both theoretical cases, the conditional (left panels) and redundant (right panels) interactions. The main difference between the results for γ = 2.25 and 2.85 lies in the values found for p_c: those for γ = 2.25 are systematically smaller than those for γ = 2.85, going from p_c ≈ 0.1 to ≈ 0.6 for γ = 2.25, and from p_c ≈ 0.1 to ≈ 0.8 for γ = 2.85. These results can be understood from the knowledge gathered on the percolation of single networks [10]. For lower values of the degree exponent γ, the hubs of scale-free networks become more frequent, protecting the network from breaking apart. When comparing the two cases of Fig. 4 of the main text with the theoretical case of γ = 2.5 (Fig. 2 of the main text), one notices that the broader the degree distribution (i.e., the lower the value of γ), the more robust the system of coupled networks. These general trends are consistent with the calculations of p_c for unstructured interconnected networks with one-to-one connections done in Ref. [2]. The white circles in Fig. 4 of the main text correspond to the values of α and β measured from the real data. As one can see, the experimental values lie in the region that represents the best compromise between the predictions for optimal stability under conditional and redundant interactions.
It is also interesting to note that the extreme vulnerability predicted in Ref. [2] can be somewhat mitigated by decreasing the number of one-to-one interconnections, as shown in Ref. [11]. However, in this case the system of networks may be rendered non-operational due to the lack of interconnections. Indeed, by connecting both networks with one-to-one outgoing links placed at random, there is a high probability that a hub in one network will be connected to a low-degree node in the other network. These low-degree nodes are very likely to be chosen in a random attack, so the hubs become highly vulnerable through their conditional interaction with a low-degree node in the other network. This effect leads to the catastrophic cascading behavior found in [2].
Another way to protect a network in the conditional mode is to increase the number of outgoing links per node, since a node fails only when all of its inter-linked nodes have failed. Thus, simply by increasing the number of interlinks from one to many outgoing links emanating from a given node, larger resilience is obtained. If these links are distributed at random, this situation corresponds to α = β = 0 in our model. However, in this random conditional case the network may still be rendered non-operational due to the random nature of the interlink connectivity. A functional real network is expected to operate with correlations, and therefore the most efficient structure, when many correlated links connect the networks, is the one found for the brain networks investigated in the present work. In other words, assuming that a natural system like the brain functions with intrinsic correlations in its inter-network connectivity, the solution found here (large α and β > 0) appears to be the natural optimal structure for global stability and the avoidance of systemic catastrophic cascading effects.
1716 NATURE PHYSICS | www.nature.com/naturephysics
SUPPLEMENTARY INFORMATION DOI: 10.1038/NPHYS3081
© 2014 Macmillan Publishers Limited. All rights reserved.
accepted we ruled out the possibility of an exponential distribution, see [5].
In order to compute the correlations of k_in, k_out and k_nn^in, we consider the following statistics
for the weak links and the degrees of the external nearest neighbors of an outgoing node. This
correlation is obtained from the calculation of the average in-degree, ⟨k_nn^in⟩, of the external
neighbors of a node with in-degree k_in. The strong links are those links added to the network
for T > T_c. The weak links are those added to the network for values of T_0 < T < T_c
until the average out-degree reaches ⟨k_out⟩ = 1. For the statistical determination of the scaling
properties of weak links, we consider that they connect two nodes in different networks, or
even nodes in the same component. To calculate these scaling properties, we define the
out-weak-degree k_out of a node as the number of all links added for T_0 < T < T_c.
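As a concrete illustration of the statistic defined above, the average in-degree of a node's external neighbors, binned by the node's own in-degree, can be computed as follows. This is a minimal sketch, not the authors' code; the dictionary-based data format (per-node lists of external neighbors and per-node in-degrees) is an assumption made for illustration.

```python
from collections import defaultdict

def avg_neighbor_in_degree(inter_links, k_in):
    """Compute <k_nn^in> as a function of k_in.

    inter_links: dict mapping an outgoing node to the list of its external
                 neighbors (nodes in the other network) -- hypothetical format.
    k_in:        dict mapping every node to its in-degree within its own network.
    Returns a dict k -> average, over all outgoing nodes with k_in == k, of the
    mean in-degree of their external neighbors.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for node, nbrs in inter_links.items():
        if not nbrs:
            continue  # nodes without inter-links do not contribute
        knn = sum(k_in[v] for v in nbrs) / len(nbrs)  # mean in-degree of external neighbors
        sums[k_in[node]] += knn
        counts[k_in[node]] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

Plotting the returned dictionary against k on log-log axes gives the kind of correlation curve discussed next.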
Figure 3f in the main text shows that the scenario for the correlation between ⟨k_nn^in⟩ and k_in
is consistent with Eq. (2) in the main text. For the resting-state experiments (Fig. 3f
in the main text) there is a positive correlation between the k_in of outgoing nodes placed
in different functional networks. For the dual-task human subjects (Fig. 3f in the main text) the
correlation is also positive.
Moreover, when analyzing the relation between k_in and k_out for the same outgoing nodes,
they are described by the correlations presented in Fig. 3e of the main text using power laws.
Figures 3e-f in the main text depict the power-law fits, obtained with the ordinary least
squares method within a given interval of degree. We assess the goodness of fit in each interval via
the coefficient of determination R². We accept fits with R² ≳ 0.9. The measured
exponents are presented in Table I in the main text.
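The fitting procedure described above can be sketched as an ordinary least squares fit in log-log space, with the coefficient of determination computed on the logged data. This is a minimal illustration of the method named in the text, not the authors' implementation:

```python
import math

def fit_power_law(ks, ys):
    """OLS fit of log(y) = log(c) + a*log(k) over a chosen degree interval.

    Returns (a, R2): the power-law exponent a and the coefficient of
    determination R^2 of the fit in log-log space.
    """
    xs = [math.log(k) for k in ks]
    zs = [math.log(y) for y in ys]
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxz = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    a = sxz / sxx                      # slope = power-law exponent
    b = mz - a * mx                    # intercept = log of the prefactor
    ss_res = sum((z - (a * x + b)) ** 2 for x, z in zip(xs, zs))
    ss_tot = sum((z - mz) ** 2 for z in zs)
    r2 = 1.0 - ss_res / ss_tot
    return a, r2
```

A fit would then be accepted only if the returned R² exceeds the 0.9 threshold used in the text.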
Figures 4a and b in the main text show the results found when applying the theory
presented in Section I of this Supplementary Information to two coupled networks with
degree exponents γ = 2.85 and 2.25, respectively, with cutoffs k_max = 133 and 139,
respectively, as given by the values for human resting state and dual task. For γ = 2.25 and
γ = 2.85, the values associated with the data gathered from humans, the results are similar
to those presented in Fig. 2 of the main text in both theoretical cases, the conditional (left
panels) and redundant (right panels) interactions. The main difference between the results
for γ = 2.25 and γ = 2.85 lies in the values found for p_c: those found for γ = 2.25 are
systematically smaller than those found for γ = 2.85, ranging from p_c ≈ 0.1 to ≈ 0.6 for
γ = 2.25, and from p_c ≈ 0.1 to ≈ 0.8 for γ = 2.85. These results can be understood from the
knowledge gathered on the percolation of single networks [10]. For lower values of the degree
exponent γ, hubs in scale-free networks become more frequent, protecting the network
from breaking apart. When comparing the two cases of Fig. 4 in the main text with the theoretical
case of γ = 2.5 (Fig. 2 in the main text), one notices that the broader the degree distribution
(that is, the lower the value of γ), the more robust the system of coupled networks. These general
trends are consistent with the calculations of p_c for unstructured interconnected networks with
one-to-one connections done in Ref. [2]. The white circles in Fig. 4 in the main text correspond
to the values of α and β measured from real data. As one can see, the experimental values
lie in the region that represents the best compromise between the predictions for
optimal stability under conditional and redundant interactions.
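A degree sequence with a prescribed exponent γ and hard cutoff k_max, as used for the theoretical curves above, can be drawn by inverse-transform sampling of the discrete power-law distribution. This is only a sketch under the assumption of a hard cutoff; the function name and parameters are illustrative:

```python
import random

def sample_degrees(n, gamma, kmin=1, kmax=133, seed=0):
    """Sample n degrees from P(k) ~ k^(-gamma) on [kmin, kmax].

    Uses inverse-transform sampling on the normalized discrete distribution.
    """
    rng = random.Random(seed)
    ks = list(range(kmin, kmax + 1))
    weights = [k ** (-gamma) for k in ks]
    total = sum(weights)
    # build the cumulative distribution over the allowed degrees
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    out = []
    for _ in range(n):
        u = rng.random()
        # binary search: first degree whose cumulative probability reaches u
        lo, hi = 0, len(ks) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if cdf[mid] < u:
                lo = mid + 1
            else:
                hi = mid
        out.append(ks[lo])
    return out
```

Consistent with the discussion above, a lower γ puts more weight on the hubs, so the sampled sequence is broader and its mean degree is larger.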
It is also interesting to note that the extreme vulnerability predicted in Ref. [2] can be
somewhat mitigated by decreasing the number of one-to-one interconnections, as shown in
Ref. [11]. However, in this case, the system of networks may be rendered non-operational
due to the lack of interconnections. Indeed, by connecting both networks with one-to-one
outgoing links and by making these interconnections at random, there is a high probability
that a hub in one network will be connected with a low-degree node in the other network.
These low-degree nodes are highly likely to be chosen in a random attack, so the hubs
become very vulnerable due to the conditional interaction with a low-degree node in the
other network. This effect leads to the catastrophic cascading behavior found in Ref. [2].
Another way to protect a network in the conditional mode is to increase the number of
outgoing links per node, since the failure of a node occurs only when all its inter-linked nodes
have failed. Thus, by increasing the number of interlinks from one to many outgoing
links emanating from a given node, greater resilience is obtained. If these links are distributed
at random, this situation corresponds to α = β = 0 in our model. However, in this
random conditional case, the network may still be rendered non-operational due to the random
nature of the interlink connectivity. A functional real network is expected to operate
with correlations, and therefore the most efficient structure when there are many correlated
links connecting the networks is the one found for the brain networks investigated in the
present work. In other words, assuming that a natural system like the brain functions with
intrinsic correlations in inter-network connectivity, the solution found here (large α
and β > 0) appears to be the natural optimal structure for global stability and avoidance of
systemic catastrophic cascading effects.
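The conditional (mutual-percolation) cascade with one-to-one interconnections discussed above can be sketched as an iteration that alternately prunes each network to its giant component and removes nodes whose inter-linked partner has failed, in the spirit of Ref. [2]. This is a minimal pure-Python sketch, not the code used in this work; the adjacency-dict representation is an assumption:

```python
from collections import deque

def giant_component(nodes, adj):
    """Largest connected component restricted to the surviving `nodes`."""
    best, seen = set(), set()
    for s in nodes:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        if len(comp) > len(best):
            best = comp
    return best

def mutual_giant(adjA, adjB, pairing, removedA):
    """Iterate the conditional cascade until it stabilizes.

    pairing maps each A node to its one-to-one B partner; removedA is the
    initially attacked set in A. Returns the surviving A side of the
    mutually connected component.
    """
    inverse = {b: a for a, b in pairing.items()}
    aliveA = set(adjA) - set(removedA)
    aliveB = set(adjB)
    while True:
        gA = giant_component(aliveA, adjA)
        # a B node survives only if its A partner is in A's giant component
        gB = giant_component({b for b in aliveB if inverse[b] in gA}, adjB)
        newA = {a for a in gA if pairing[a] in gB}
        if newA == aliveA and gB == aliveB:
            return newA
        aliveA, aliveB = newA, gB
```

Because the alive sets can only shrink at each round, the iteration always terminates; removing a single node can nevertheless trigger the large cascades described in the text.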
Another problem of interest is the targeted attack on interdependent networks, as treated
in Ref. [12]. It would be of interest to determine how the present correlations affect a
targeted attack on, for instance, the highly connected nodes.
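In a targeted attack of the kind treated in Ref. [12], the most connected nodes are removed first rather than a random fraction. A hypothetical helper for selecting such a removal set (the name and signature are illustrative, not from Ref. [12]) might look like:

```python
def targeted_removal(adj, fraction):
    """Pick the highest-degree nodes for a targeted attack.

    adj: dict node -> list of neighbors; fraction: share of nodes to remove.
    Returns the set of removed nodes, hubs first.
    """
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    n_remove = int(round(fraction * len(adj)))
    return set(order[:n_remove])
```

The returned set would then seed the same cascade iteration used for random failures, allowing a direct comparison of the two attack modes.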
[1] Moore, C. & Newman, M. E. J. Exact solution of site and bond percolation on small-world
networks. Phys. Rev. E 62, 7059-7064 (2000).
[2] Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of
failures in interdependent networks. Nature 464, 1025-1028 (2010).
[3] Dorogovtsev, S. N. Lectures on Complex Networks (Oxford Univ. Press, Oxford, 2010).
[4] Sigman, M. & Dehaene, S. Brain mechanisms of serial and parallel processing during dual-task
performance. J. Neurosci. 28, 7585-7598 (2008).
[5] Gallos, L. K., Makse, H. A. & Sigman, M. A small world of weak ties provides optimal global
integration of self-similar modules in functional brain networks. Proc. Natl. Acad. Sci. USA
109, 2825-2830 (2012).
[6] Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. & Apkarian, A. V. Scale-free brain
functional networks. Phys. Rev. Lett. 94, 018102 (2005).
[7] Bullmore, E. & Sporns, O. Complex brain networks: graph theoretical analysis of structural
and functional systems. Nat. Rev. Neurosci. 10, 186-198 (2009).
[8] Bollobás, B. Random Graphs (Academic Press, London, 1985).
[9] Shehzad, Z., Kelly, A. M. C. & Reiss, P. T. The resting brain: unconstrained yet reliable.
Cereb. Cortex 19, 2209-2229 (2009).
[10] Cohen, R., Ben-Avraham, D. & Havlin, S. Percolation critical exponents in scale-free networks.
Phys. Rev. E 66, 036113 (2002).
[11] Parshani, R., Buldyrev, S. V. & Havlin, S. Interdependent networks: reducing the coupling
strength leads to a change from a first to second order percolation transition. Phys. Rev. Lett.
105, 048701 (2010).
[12] Huang, X., Gao, J., Buldyrev, S. V., Havlin, S. & Stanley, H. E. Robustness of interdependent
networks under targeted attack. Phys. Rev. E 83, 065101 (2011).
FIG. 1: Pictorial representation of the conditional (a-e) and redundant (a-b) modes of interaction.
a, One node is removed, or fails, in network A. b, As in a regular percolation process, this node is
removed together with its links. In the redundant mode of interaction, the neighbors of this node
are not removed, because they still maintain a connection with the giant component of network
B; c, in the conditional mode of interaction, however, the two nodes are removed, since they do not
belong to the giant component of network A. d, As a consequence of the removal of the nodes in
network A, all the nodes of network B that lose connectivity with network A are also removed.
e, Finally, the last node of network B is removed once it loses connectivity with the giant
component of network B. In the end, for the conditional mode of interaction, only the mutually
connected component remains.
FIG. 2: Giant components of networks A and B in the conditional mode of failure. We present
the prediction of the theory for N_A = N_B = 1500, γ = 2.5, α = 0.5, β = 0.5 and
k_max = 100, and compare with computer simulations of the giant component obtained numerically
by attacking the same network. We average over 100 different realizations. We attack a
fraction 1 − p of both networks and calculate the fraction of nodes belonging to the corresponding
giant components. The results show very good agreement between theory and simulations.
FIG. 3: First and second largest components in the brain networks corresponding to the resting
state and dual task. The largest component shows a jump while the second largest component
shows a peak, indicating a percolation transition at T_c. a, Resting state. b, Dual task.