Page 1: Daron Acemoglu - Communication Dynamics in Endogenous Social Networks (2010)

Communication Dynamics in Endogenous

Social Networks

Daron Acemoglu∗ Kostas Bimpikis† Asuman Ozdaglar‡

Abstract

We develop a model of information exchange through communication and investigate its implications for information aggregation in large societies. An underlying state (of the world) determines which action has a higher payoff. Each agent decides which other agents to form a communication link with, incurring the associated cost, and receives a private signal correlated with the underlying state. Agents then exchange information over the induced communication network until taking an (irreversible) action. We define asymptotic learning as the fraction of agents taking the correct action converging to one in probability as the society grows large. Under truthful communication, we show that asymptotic learning occurs if (and under some additional conditions, also only if) in the induced communication network most agents are a short distance away from “information hubs”, which receive and distribute a large amount of information. Asymptotic learning therefore requires information to be aggregated in the hands of a few agents. We also show that while truthful communication is not always optimal, when the communication network induces asymptotic learning (in a large society), truthful communication is an ε-equilibrium. We then provide a systematic investigation of what types of cost structures and associated social cliques (consisting of groups of individuals linked to each other at zero cost, such as friendship networks) ensure the emergence of communication networks that lead to asymptotic learning. Our results show that societies with too many and sufficiently large social cliques do not induce asymptotic learning, because each social clique would have sufficient information by itself, making communication with others relatively unattractive. Asymptotic learning results if social cliques are neither too numerous nor too large, in which case communication across cliques is encouraged. Finally, we show how these results can be applied to several commonly studied random graph models, such as preferential attachment and Erdős-Rényi graphs.

1 Introduction

Most social decisions, ranging from product and occupational choices to voting and political behav-

ior, rely on information agents gather through communication with friends, neighbors and co-workers

as well as information obtained from news sources and prominent webpages. A central question in

social sciences concerns the dynamics of communication and information exchange and whether such

∗Dept. of Economics, Massachusetts Institute of Technology
†Operations Research Center, Massachusetts Institute of Technology
‡Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology


dynamics lead to the effective aggregation of dispersed information that exists in a society. We construct a dynamic model to investigate this question. If information exchanges were non-strategic, timeless and costless, all information could be aggregated immediately by simultaneous communication across all agents. Thus, the key ingredient of our approach is dynamic and costly communication.

Our benchmark model features an underlying state of the world that determines which action has

higher payoff (which is the same for all agents). Because of discounting, earlier actions are preferred to

later ones. Each agent receives a private signal correlated with this underlying state. In addition, she

can communicate with others, but such communication first requires the formation of a communication

link, which may be costly. Therefore, our framework combines elements from models of social learning

and network formation. The network formation decisions of agents induce a communication graph for

the society. Thereafter, agents communicate with those others with whom they are connected until

they take an irreversible action. Crucially, information acquisition takes time because the “neighbors”

of an agent with whom she communicates acquire more information from their own neighbors over

time. Information exchange will thus be endogenously limited by two features: the communication

network formed at the beginning of the game, which allows communication only between connected

pairs, and discounting, which encourages agents to take actions before they accumulate sufficient

information.

We characterize the equilibria of this network formation and communication game and then inves-

tigate the structure of these equilibria as the society becomes large (i.e., for a sequence of games). Our

main focus is on how well information is aggregated, which we capture with the notion of asymptotic

learning. We say that there is asymptotic learning if the fraction of agents taking the correct action

converges to one (in probability) as the society becomes large.

Our analysis proceeds in several stages. First, we take the communication graph as given and

assume that agents are non-strategic in their communication. Also, we assume that they continue

passing new information to their neighbors even after they take an action. Under these assumptions,

we provide a condition that is both necessary and sufficient for asymptotic learning. Intuitively, this

condition requires that most agents are a short distance away from information hubs, which are agents

that have a very large (in the limit, infinite) number of connections. Two different types of information

hubs are the conduits of asymptotic learning. The first are information mavens, which have a large

in-degree, enabling them to aggregate information. If most agents are close to an information maven,

asymptotic learning is guaranteed. The second type of hubs are social connectors, which have large

out-degree, enabling them to communicate their information to a large number of agents.1 Social

connectors are only useful for asymptotic learning if they are close to mavens, so that they can

distribute their information. Thus, asymptotic learning is also obtained if most agents are close to a

social connector, who is in turn a short distance away from a maven.1

1 Both of these terms are inspired by Gladwell (2000).


Second, we generalize these results to environments in which individuals stop receiving new information or stop communicating after they take an action. We show that the sufficiency result for

asymptotic learning from the first environment applies to this more complicated setting. Moreover,

we show that the necessity result also carries over with the addition of a mild assumption. Furthermore, we study an environment in which individuals may misreport their information if they have an

incentive to do so. In particular, we show that individuals may in general choose to misreport their

information in order to delay the action of their neighbors, thus obtaining more information from them

in the future. Nevertheless, we establish that whenever truthful communication leads to asymptotic

learning, it is an ε-equilibrium of the strategic communication game to report truthfully. Interestingly,

the converse is not necessarily true: strategic communication may lead to asymptotic learning in some

special cases in which truthful communication precludes learning.

Our characterization results on asymptotic learning can be seen both as “positive” and “negative”.

On the one hand, communication structures that do not feature such hubs appear more realistic in

the context of social networks and communication between friends, neighbors and co-workers. Indeed,

the popular (though not always empirically plausible) random graph models such as preferential attachment and Poisson (Erdős-Rényi) graphs do not lead to asymptotic learning. On the other hand,

as discussed above, most individuals obtain key information from either individuals or news sources

(websites) that correspond to mavens and social connectors, which do play the role of aggregating

and distributing large amounts of information. Corresponding to such structures, we show that scale-free random graphs (in particular, power law graphs with small exponent γ ≤ 2),2 and hierarchical

graphs, where “special” agents are likely to receive and distribute information to lower layers of the

hierarchy, induce network structures that guarantee asymptotic learning. The intuition for why such

information hubs and almost all agents being close to information hubs are necessary for asymptotic

learning is instructive: were it not so, a large fraction of agents would prefer to take an action before

waiting for sufficient information to arrive and a nontrivial fraction of those would take the incorrect

action.

Third, armed with the analysis of information exchange over a given communication network, we

then turn to the analysis of the endogenous formation of this network. We assume that forming

communication links is costly, though there also exist social cliques, groups of individuals that are

linked to each other at zero cost. These can be thought of as “friendship networks,” which are linked

for reasons unrelated to information exchange and thus act as conduits of such exchange at low cost.

Agents have to pay a cost at the beginning in order to communicate (receive information) from those

who are not in their social clique. Even though network formation games have several equilibria, the

structure of our network formation and information exchange game enables us to obtain relatively

2 These models have been shown to provide good representations of peer-to-peer networks, scientific collaboration networks (in experimental physics), and traffic in networks (Jovanovic, Annexstein, and Berman (2001), Newman (2001), Toroczkai and Bassler (2004), H. Seyed-allaei and Marsili (1999)).


sharp results on what types of societies will lead to endogenous communication networks that ensure

asymptotic learning. In particular, we show that societies with too many (disjoint) and sufficiently

large social cliques induce behavior inconsistent with asymptotic learning. This is because each social

clique, which is sufficiently large, would have enough information to make communication with others

(from other social cliques) unattractive; the society gets segregated into a very large number of disjoint

social cliques not sharing information. In contrast, asymptotic learning obtains in equilibrium if social

cliques are neither too numerous nor too large so that it becomes advantageous at least for some

members of these cliques to communicate with members of other cliques, forming a structure in which

information is shared across (almost) all members of the society.

Our paper is related to several strands of literature on social and economic networks. First, it

is related to the large and growing literature on social learning. Much of this literature focuses on

Bayesian models of observational learning, where each individual learns from the actions of others

taken in the past. A key impediment to information aggregation in these models is the fact that

actions do not reflect all of the information that an individual has and this can induce a pattern

reminiscent of a “herd,” where individuals ignore their own information and copy the behavior of

others. (See, for example, Bikhchandani, Hirshleifer, and Welch (1992), Banerjee (1992), and Smith

and Sorensen (2000), as well as Bala and Goyal (1998), for early contributions, and Smith and Sorensen

(1998), Banerjee and Fudenberg (2004) and Acemoglu, Dahleh, Lobel, and Ozdaglar (2008) for models

of Bayesian learning with richer observational structures). While observational learning is important

in many situations, a large part of information exchange in practice is through communication.

Several papers in the literature study communication, though typically using non-Bayesian or

“myopic” rules (for example, DeMarzo, Vayanos, and Zwiebel (2003) and Golub and Jackson (2008)).

A major difficulty faced by these approaches, often precluding Bayesian and dynamic game theoretic

analysis of learning in communication networks, is the complexity of updating when individuals share

their ex-post beliefs. We overcome this difficulty by adopting a different approach, whereby individuals

can directly communicate their signals and there is no restriction on the total “bits” of communication.

This leads to a very tractable structure for updating of beliefs and enables us to study perfect Bayesian

equilibria of a game of network formation, communication and decision-making. It also reverses one

of the main insights of these papers, also shared by the pioneering contribution to the social learning

literature by Bala and Goyal (1998), that the presence of “highly connected” or “influential” agents,

or what Bala and Goyal (1998) call a “royal family,” acts as a significant impediment to the efficient

aggregation of information. On the contrary, in our model the existence of such highly connected

agents (information hubs, mavens or connectors) is crucial for the efficient aggregation of information.

Moreover, their existence also reduces incentives for non-truthful communication, and is the key input

into our result that truthful communication can be an ε-equilibrium.

Our analysis of asymptotic learning in large networks also builds on random graph models. In


particular, we use several tools and results from this literature to characterize the asymptotics of

beliefs and information. We also study information aggregation in the popular preferential attachment

and Erdős-Rényi graphs (e.g., Barabási and Albert (1999), Albert and Barabási (2002), Mitzenmacher

(2004), Durrett (2007)).

Our work is also related to the growing literature on network formation, since communication

takes place over endogenously formed networks. Although the network formation literature is large

and growing (see, e.g., Jackson and Wolinsky (1996), Bala and Goyal (2000) and Jackson (2004)), we

are not aware of other papers that endogenize the benefits of forming links through the subsequent

information exchange. It is also noteworthy that, while network formation games have a large number

of equilibria, the simple structure of our model enables us to derive relatively sharp results about

environments in which the equilibrium networks will lead to asymptotic learning.

Finally, our paper is related to the literature on strategic communication, pioneered by the cheap

talk framework of Crawford and Sobel (1982). While cheap talk models have been used for the study

of information aggregation with one receiver and multiple senders (e.g. Morgan and Stocken (2008))

and multiple receivers and single sender (e.g. Farrell and Gibbons (1989)), most relevant to our paper

are two recent papers that consider strategic communication over general networks: Galeotti, Ghiglino,

and Squintani (2009) and Hagenbach and Koessler (2009). A major difference between these works

and ours is that we consider a model where communication is allowed for more than one time period,

thus enabling agents to receive information outside their immediate neighborhood (at the cost of a

delayed decision) and we also endogenize the network over which communication takes place. On the

other hand, our framework assumes that an agent’s action does not directly influence others’ payoffs,

while such payoff interactions are the central focus of Galeotti, Ghiglino, and Squintani (2009) and

Hagenbach and Koessler (2009).

The rest of the paper is organized as follows. Section 2 develops the general model of network formation and subsequent information exchange over the induced communication graph. It also introduces the three main environments which we study. Section 3 provides a number of examples illustrating the information exchange process and providing economic insights on the model. Section 4 contains our main results on social learning given a communication graph under the different environments we study, and Section 5 goes a step further and discusses our results on network formation and their relation with asymptotic learning. Section 6 illustrates how our results can be applied to popular random graph models and, finally, Section 7 concludes. All proofs are presented in Appendices A, B and C.

2 The Model

In this section, we introduce the model for a finite set N^n = {1, 2, · · · , n} of agents and define the notion of equilibrium. We then describe the limit economy as n → ∞.


2.1 The Environment

Each agent i ∈ N^n has a choice among a finite set of alternatives. The payoff to each agent depends on her sequence of actions and an underlying state of the world θ. To simplify the exposition and without loss of generality, we assume that the underlying state is binary, θ ∈ {0, 1}, and both values of θ are equally likely, i.e., P(θ = 0) = P(θ = 1) = 1/2.

The state of the world θ is unknown. Agent i forms beliefs about the state of the world from a private signal s_i ∈ S_i (where S_i is a Euclidean space), as well as from information she can obtain from other agents through a communication network G^n, which will be described shortly. We assume that time is discrete and there is a common discount factor δ ∈ (0, 1). At each time period t = 0, 1, · · ·, agent

i can decide to take an irreversible action, 0 or 1, or wait for another time period. Her payoff is thus

u_i^n(x_i^n, θ) =
  δ^τ π   if x_{i,τ}^n = θ and x_{i,t}^n = “wait” for t < τ,
  0       otherwise,

where x_i^n = [x_{i,t}^n]_{t=0,1,···} denotes the sequence of agent i’s actions (x_{i,t}^n ∈ {“wait”, 0, 1}). Here, x_{i,t}^n = 0 or 1 denotes agent i taking action 0 or 1 respectively, while “wait” designates the agent deciding to wait for that time period without taking an action; π > 0 is the payoff from the correct action. Without loss of generality, we normalize π to be equal to 1. For the rest of the paper, we say that the agent “exits” if she chooses to take action 0 or 1. The discount factor δ ∈ (0, 1) implies that an earlier exit is preferred to a later one.
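The exit payoff above is easy to render in code. The following is a minimal illustrative sketch (the function name and the list encoding of the action sequence are our own, not part of the model’s formal apparatus), under the normalization π = 1:

```python
def payoff(actions, theta, delta):
    """Discounted payoff of one agent.

    actions: sequence over t = 0, 1, ... with entries in {"wait", 0, 1};
    the first non-"wait" entry is the irreversible exit action.
    Returns delta**tau if the action taken at time tau equals theta, else 0.
    """
    for tau, a in enumerate(actions):
        if a != "wait":
            return delta**tau if a == theta else 0.0
    return 0.0  # the agent never exits

# Exiting correctly at t = 2 with delta = 0.9 yields 0.9**2 = 0.81.
print(payoff(["wait", "wait", 1], theta=1, delta=0.9))
```

Waiting longer shrinks the payoff geometrically, which is exactly the force that will push agents to act before gathering all available information.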

2.2 Communication

Suppose that agents are situated in a communication network represented by the directed graph

G^n = (N^n, E^n), where N^n = {1, · · · , n} is the set of agents and E^n is the set of directed edges with which agents are linked. We say that agent j can obtain information from i, or that agent i can send information to j, if there is an edge from i to j in graph G^n, i.e., (i, j) ∈ E^n.

Let I_{i,t}^n denote the information set of agent i at time t, and ℐ_{i,t}^n denote the set of all possible information sets. Then, for every pair of agents i, j such that (i, j) ∈ E^n, we say that agent j communicates with agent i, or that agent i sends a message to agent j, and define the following mapping

m_{ij,t}^n : ℐ_{i,t}^n → M_{ij,t}^n for (i, j) ∈ E^n,

where M_{ij,t}^n denotes the set of messages that agent i can send to agent j at time t. The definition of m_{ij,t}^n captures the fact that communication is directed and is only allowed between agents that are linked in the communication network, i.e., j communicates with i if and only if (i, j) ∈ E^n. The direction of communication should be clear: when agent j communicates with agent i, agent i sends a message to agent j, which could in principle depend on the information set of agent i as well as the identity of agent j. Importantly, we assume that the cardinality (“dimensionality”) of M_{ij,t}^n is no less than that of ℐ_{i,t}^n, so that communication can take the form of agent i sharing all her information


Figure 1: The information set of Agent 1 under truthful communication. Panel (a), time t = 0: I_{1,0} = (s_1); panel (b), time t = 1: I_{1,1} = (s_1, s_2, s_4, s_5); panel (c), time t = 2: I_{1,2} = (s_1, s_2, s_4, s_5, s_3, s_6, s_7).

with agent j. This has two key implications. First, an agent can communicate (indirectly) with a

much larger set of agents than just her immediate neighbors, albeit with a time delay. As an example,

an agent can communicate with the neighbors of her neighbors in two time periods (see Figure 1).

Second, mechanical duplication of information can be avoided. For example, the second time agent j communicates with agent i, agent i can repeat her original signal, but this will not be recorded as an additional piece of information by agent j, since, given the size of the message space M_{ij,t}^n, each piece of information can be “tagged”. This ensures that under truthful communication, there need be no

confounding of new information and previously communicated information. Figure 1 also illustrates

this property.
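The “tagging” property can be mimicked by storing information as a set of (origin, signal) pairs, so that retransmitted signals are absorbed rather than double-counted. This is an illustrative encoding of our own, not the paper’s formal message space:

```python
def receive(info, message):
    """Merge a message (a set of (origin_agent, signal) pairs) into an
    information set; tagged duplicates are absorbed by set union."""
    return info | message

info_j = {(2, 1)}               # agent 2's own signal s_2 = 1
msg = {(1, 0), (2, 1)}          # agent 1 resends s_2 along with her own s_1
info_j = receive(info_j, msg)
print(sorted(info_j))           # [(1, 0), (2, 1)]  (no double counting)
```

Because each signal carries its originator’s tag, hearing the same signal twice, possibly along different paths, never changes the posterior.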

The information set of agent i at time t ≥ 1 is given by

I_{i,t}^n = {s_i, m_{ji,τ}^n, for all 1 ≤ τ < t and j such that (j, i) ∈ E^n}

and I_{i,0}^n = {s_i}. In particular, the information set of agent i at time t ≥ 1 consists of her private signal

and all the messages her neighbors sent to i in previous time periods. Agent i takes an action at every

time period. In particular, agent i’s action at time t is a mapping from her information set to the set

of actions, i.e.,

σ_{i,t}^n : ℐ_{i,t}^n → {“wait”, 0, 1}.

The tradeoff between taking an action (0 or 1) and waiting should be clear at this point. An agent would wait in order to communicate with a larger set of agents and potentially choose the correct action with higher probability. On the other hand, the future is discounted; therefore, delaying is costly.

We close the section with a number of definitions. We define a path between agents i and j in

network G^n as a sequence i_1, · · · , i_K of distinct nodes such that i_1 = i, i_K = j, and (i_k, i_{k+1}) ∈ E^n for k ∈ {1, · · · , K − 1}. The length of the path is defined as K − 1. Moreover, we define the distance of

agent i to agent j as the length of the shortest path from i to j in network Gn, i.e.,

dist^n(i, j) = min{ length of P | P is a path from i to j in G^n }.


Finally, the (indirect) neighborhood of agent i at time t is defined as

B_{i,t}^n = { j | dist^n(j, i) ≤ t },

where B_{i,0}^n = {i}, i.e., B_{i,t}^n consists of all agents that are at most t links away from agent i in graph G^n. Intuitively, if agent i waits for t periods and all of the intervening agents receive and communicate information truthfully, i will have access to all of the signals of the agents in the set B_{i,t}^n.
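The distance and neighborhood definitions above amount to breadth-first search over reversed edges, since B_{i,t} collects agents with a directed path *into* i. The following sketch uses a hypothetical seven-node edge list chosen to be consistent with Figure 1 (the actual edge set of the figure is not fully recoverable from the text):

```python
from collections import deque

def neighborhood(edges, i, t):
    """B_{i,t}: agents j with dist(j, i) <= t, where an edge (u, v)
    means u can send information to v. BFS over reversed edges from i."""
    preds = {}
    for u, v in edges:
        preds.setdefault(v, []).append(u)
    dist = {i: 0}
    queue = deque([i])
    while queue:
        v = queue.popleft()
        if dist[v] == t:          # do not expand past depth t
            continue
        for u in preds.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return set(dist)

# Hypothetical graph in the spirit of Figure 1: agents 2, 4, 5 send to 1;
# agent 3 sends to 2; agents 6 and 7 send to 5.
E = [(2, 1), (4, 1), (5, 1), (3, 2), (6, 5), (7, 5)]
print(sorted(neighborhood(E, 1, 1)))   # [1, 2, 4, 5]
print(sorted(neighborhood(E, 1, 2)))   # [1, 2, 3, 4, 5, 6, 7]
```

Under truthful communication, the second line corresponds to agent 1 having access to every signal in the society after two periods of waiting.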

2.3 Network Formation

The previous subsection described the protocol of communication among agents over a given communication network G^n = (N^n, E^n). In this section, we formally present how this communication network emerges.

We assume that link formation is costly. In particular, communication costs are captured by an n × n nonnegative matrix C^n, where C_{ij}^n denotes the cost that agent i has to incur in order to form the directed link (j, i) with agent j. As noted above, a link’s direction coincides with the direction of the flow of messages. In particular, agent i incurs a cost to form in-links. We refer to C^n as the communication cost matrix. We assume that C_{ii}^n = 0 for all i ∈ N^n.

We define agent i’s link formation strategy, g_i^n, as an n-tuple such that g_i^n ∈ {0, 1}^n, where g_{ij}^n = 1 implies that agent i forms a link with agent j. The cost agent i has to incur if she implements strategy g_i^n is given by

Cost(g_i^n) = Σ_{j ∈ N^n} C_{ij}^n · g_{ij}^n.

The link formation strategy profile g^n = (g_1^n, · · · , g_n^n) induces the communication network G^n = (N^n, E^n), where (j, i) ∈ E^n if and only if g_{ij}^n = 1.
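As an illustration, the induced edge set and link formation costs follow mechanically from a strategy profile and a cost matrix. A sketch with our own 0-indexed encoding (not notation from the paper):

```python
def induced_network_and_costs(g, C):
    """g[i][j] = 1 iff agent i forms a link with agent j (0-indexed).
    Returns the directed edge set E, where (j, i) means j sends to i,
    and each agent's total link formation cost."""
    n = len(g)
    edges = {(j, i) for i in range(n) for j in range(n) if g[i][j] == 1}
    costs = [sum(C[i][j] * g[i][j] for j in range(n)) for i in range(n)]
    return edges, costs

g = [[0, 1], [0, 0]]          # agent 0 forms a link with agent 1
C = [[0, 2.5], [1.0, 0]]
edges, costs = induced_network_and_costs(g, C)
print(edges, costs)           # agent 0 pays 2.5 to receive from agent 1
```

Note the direction reversal: agent i pays for in-links, so g_{ij} = 1 puts the edge (j, i) into E, along which j’s messages flow to i.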

2.4 Definition of Equilibria

The environment described so far defines a two-stage game Γ(C^n), where C^n denotes the communication cost matrix. We refer to this game as the Network Learning Game. The two stages of the network learning game can be described as follows:

Stage 1 [Network Formation Game]: Agents pick their link formation strategies. The link formation strategy profile g^n induces the communication network G^n = (N^n, E^n). We refer to stage 1 of the network learning game, when the communication cost matrix is C^n, as the network formation game and we denote it by Γ_net(C^n).

Stage 2 [Information Exchange Game]: Agents communicate over the induced network G^n. Each agent’s action σ_{i,t}^n is a mapping from the information set of agent i at time t to the set of actions. We refer to stage 2 of the network learning game, when the communication network is G^n, as the information exchange game and we denote it by Γ_info(G^n).

The expected payoff of agent i from the vector of actions x_i^n, such that x_{i,t}^n = σ_{i,t}^n(I_{i,t}^n), can be defined


recursively as

E_σ(π_{i,t}^n | I_{i,t}^n, x_{i,t}^n) =
  P(x_{i,t}^n = θ | I_{i,t}^n)                      if x_{i,t}^n ∈ {0, 1},
  δ E_σ(π_{i,t+1}^n | I_{i,t+1}^n, x_{i,t+1}^n)     if x_{i,t}^n = “wait”,

where E_σ refers to the expectation with respect to the probability measure induced by strategy profile σ. We next define the equilibria of the information exchange game Γ_info(G^n). Note that we use the standard notation g_{−i} and σ_{−i} to denote the strategies of agents other than i. Also, we let σ_{i,−t} denote the vector of actions of agent i at all times except t.

Definition 1. An action strategy profile σ^{n,∗} is a pure-strategy Perfect Bayesian Equilibrium of the information exchange game Γ_info(G^n) if, for every i ∈ N^n and time t, σ_{i,t}^{n,∗} maximizes the expected payoff of agent i given the strategies of the other agents σ_{−i}^{n,∗}, i.e.,

σ_{i,t}^{n,∗} ∈ arg max_{y ∈ {“wait”, 0, 1}} E_{((y, σ_{i,−t}^{n,∗}), σ_{−i}^{n,∗})}(π_i^n | I_{i,t}^n, y).

We denote the set of equilibria of this game by INFO(G^n).

Definition 2. A pair (g^{n,∗}, σ^{n,∗}) is a pure-strategy Perfect Bayesian Equilibrium of the network learning game Γ(C^n) if

(a) σ^{n,∗} ∈ INFO(G^n), where G^n is induced by the link formation strategy g^{n,∗}.

(b) For all i ∈ N^n, g_i^{n,∗} maximizes the expected payoff of agent i given the strategies of the other agents g_{−i}^{n,∗}, i.e.,

g_i^{n,∗} ∈ arg max_{z ∈ {0,1}^n} E_σ[Π_i(z, g_{−i}^{n,∗})] ≡ E_σ(π_{i,0}^n | I_{i,0}^n, σ_{i,0}^n(I_{i,0}^n)) − Cost(z),

for all σ ∈ INFO(G^n), where G^n is induced by the link formation strategy (z, g_{−i}^{n,∗}).

We denote the set of equilibria of this game by NET(C^n).

For the remainder of the paper, we refer to a pure-strategy Perfect Bayesian Equilibrium simply

as equilibrium (we do not study mixed strategy equilibria).

2.5 Learning in Large Societies

Our main focus in this paper is whether equilibrium behavior leads to information aggregation.

This is captured by the notion of “asymptotic learning”, which characterizes the behavior of agents

over communication networks with growing size. We first focus on asymptotic learning over a fixed

communication network, i.e., we study agents’ decisions along equilibria of the information exchange

game.

We consider a sequence of communication networks {G^n}_{n=1}^∞, where G^n = (N^n, E^n) with N^n = {1, · · · , n}, and refer to this sequence of communication networks as a society. A sequence of communication networks induces a sequence of information exchange games and, with a slight abuse of notation, we also use the term society to refer to this sequence of games. For any fixed n ≥ 1 and any


equilibrium of the information exchange game σ^n ∈ INFO(G^n), we introduce the indicator variable:

M_{i,t}^n =
  1   if agent i takes the correct decision by time t,
  0   otherwise.    (1)

Given our focus on sequences of communication networks and their equilibria, we use the term equilibrium to denote a sequence of equilibria of the sequence of information exchange games, or of the society {G^n}_{n=1}^∞. We denote such an equilibrium by σ = {σ^n}_{n=1}^∞, which designates that σ^n ∈ INFO(G^n) for all n. Similarly, we refer to a sequence of equilibria of the network learning game (g, σ) = {(g^n, σ^n)}_{n=1}^∞ as a network equilibrium.

The next definition introduces asymptotic learning for a given society.

Definition 3. We say that asymptotic learning occurs in society {G^n}_{n=1}^∞ along equilibrium σ if, for every ε > 0, we have

lim_{n→∞} lim_{t→∞} P( [ (1/n) Σ_{i=1}^n (1 − M_{i,t}^n) ] > ε ) = 0.

This definition equates asymptotic learning with all but a negligible fraction of the agents taking the correct action (as the society grows infinitely large). We say that asymptotic learning occurs in a network equilibrium (g, σ) if asymptotic learning occurs in the society {G^n}_{n=1}^∞, induced by the link formation strategy g, along equilibrium σ.

2.6 Assumptions

We restrict attention to a special class of private signals, which is introduced in the following

assumption.

Assumption 1 (Binary Private Signals). Let si denote the private signal for agent i. Then,

(a) s_i ∈ {0, 1} for all i, i.e., private signals are binary.

(b) L(1) = β/(1 − β) and L(0) = (1 − β)/β (β > 1/2), where L(x) denotes the likelihood ratio for private signal x, i.e., L(x) = P(x | θ = 1)/P(x | θ = 0).

We call β the (common) precision of the private signals.

Note that when β > δ, delaying is too costly and agents are better off acting with no delay. Since

our goal is to study information propagation in a network, we let β < δ.
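Since signals are conditionally i.i.d. given θ, posterior beliefs are obtained by multiplying likelihood ratios. A minimal sketch (the function names are ours, not the paper's):

```python
def likelihood_ratio(x, beta):
    """L(x) = P(x | theta = 1) / P(x | theta = 0) for a binary signal x."""
    return beta / (1 - beta) if x == 1 else (1 - beta) / beta

def posterior_theta1(signals, beta, prior=0.5):
    """Posterior P(theta = 1 | signals) by Bayes' rule: with k ones among m
    conditionally i.i.d. signals, the posterior odds equal the prior odds
    times (beta / (1 - beta))**(2k - m)."""
    odds = prior / (1 - prior)
    for x in signals:
        odds *= likelihood_ratio(x, beta)
    return odds / (1 + odds)

beta = 0.75
print(posterior_theta1([1], beta))        # one matching signal yields beta itself
print(posterior_theta1([1, 1, 1], beta))  # three unanimous signals: 27/28
```

With β = 0.75 these are exactly the magnitudes that appear later in Example 3.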

The communication model described in Section 2 is fairly general. In particular, we did not restrict

the set of messages that an agent can send or specify her information set. Throughout, we maintain

the assumption that once formed, the communication network Gn is common knowledge. Also, we

focus on the following three environments of increasing complexity, defined by Assumptions 2, 3 and

4 respectively.

Assumption 2 (Continuous Communication). Communication between agents is continuous if

m^n_{ij,t} = { s_ℓ for all ℓ ∈ B^n_{i,t} },


for all agents i, j and time periods t.

This assumption is adopted as a prelude to Assumptions 3 and 4, because it is simpler to work with and, as we show, the main results that hold under this assumption generalize to the more complex

environments generated by Assumptions 3 and 4. Intuitively, this assumption compactly imposes three

crucial features: (1) As already noted, communication takes place by sharing signals, so that when

agent j communicates with agent i at time t, then agent i sends to j all the information agent i has

obtained thus far, i.e., the private signals of all agents that are at a distance at most t from i (refer

back to Figure 1 for an illustration of the communication process centered at a particular agent); (2)

Communication is continuous in the sense that agents do not stop transmitting new information even

after taking their irreversible action (action 0 or action 1). This also implies that agents never exit

the social network, which would be a good approximation to friendship networks that exist for reasons

unrelated to communication; (3) Agents cannot strategically manipulate the messages they send, i.e., an agent’s private signal is hard information.

Assumption 3 relaxes the second feature above, the continuous transmission of information.

Assumption 3 (Non-Strategic Communication). Communication between agents is non-strategic, i.e.,

m^n_{ij,t} = { s_ℓ for all ℓ ∈ I^n_{i,t} },

for all agents i, j and time periods t.

Intuitively, it states that when an agent takes an irreversible action, then she no longer obtains new

information and, thus, can only communicate the information she has obtained until the time of her

decision. The difference between Assumptions 2 and 3 can be seen from the fact that in Assumption 3 we write I^n_{i,t} as opposed to B^n_{i,t}, which implies that an agent stops receiving and subsequently communicating new information as soon as she takes an irreversible action. We believe that this

assumption is a reasonable approximation to communication in social networks, since an agent that

engages in information exchange to make a decision would have weaker incentives to collect new

information after reaching that decision. Nevertheless, she can still communicate the information she

had previously obtained to other agents. We call this type of communication non-strategic to stress

the fact that the agents cannot strategically manipulate the information they communicate.3

Finally, we discuss the implications of relaxing Assumption 3 by allowing strategic communication,

i.e., when agents can strategically lie or babble about their information. In particular, we replace

Assumption 3 with Assumption 4.

3 Yet another variant of this assumption would be that agents exit the social network after taking an action and stop communicating entirely. In this case, the results are again similar if their action is observed by their neighbors. If they exit the social network, stop communicating altogether, and their action is not observable, then the implications are different. We do not analyze these variants in the current version to save space.


Assumption 4 (Strategic Communication). Communication between agents is strategic if

m^n_{ij,t} ∈ {0, 1}^{|I^n_{i,t}|},

for all agents i, j and time periods t.

This assumption makes it clear that in this case the messages need not be truthful. Allowing

strategic communication adds an extra dimension to an agent’s strategy, since the agent can choose to “lie” about (part of) her information set with some probability, in the hope that this increases her

expected payoff. Note that, in contrast with “cheap talk” models, externalities in our framework are

purely informational as opposed to payoff relevant. Thus, an agent may have an incentive to “lie” as

a means to obtain more information from the information exchange process.

3 Motivating Examples

To motivate subsequent discussion, we present a number of examples on the information exchange

game assuming that the communication network Gn between the n agents is fixed and given. Com-

munication is assumed to be non-strategic (cf. Assumption 3).

3.1 Equilibria of the Information Exchange Game

We start the discussion by identifying the tradeoffs that agents are faced with when exchanging

information over the communication network Gn. First, we show that even when the communication

network among the n agents is fixed and strategic communication is not allowed (cf. Assumption 3),

multiple equilibria can arise, which potentially have very different properties.

Example 1. Consider the communication network depicted in Figure 2(a) and let the discount factor

δ = 0.855 and the precision of private signals β = 0.635. Then, the following are equilibria of the

information exchange game defined on this communication network:

(a) Equilibrium 1: Agents A and 1, 2, 3, 4 take an irreversible action at time t = 0. Then, the expected utility for all five agents is simply E_{eq1}[π_i] = P_{eq1}(x_{i,0} = θ | I_{i,0}) = β = 0.635, for all i = A, 1, 2, 3, 4.

(b) Equilibrium 2: Agents A and 1,2,3,4 decide to “wait” at t = 0 and take an irreversible action at

time t ≥ 1. Then, their ex ante expected utility is

E_{eq2}[π_i] ≥ δ^2 · P_{eq2}(x_{i,2} = θ | I_{i,2}) ≈ 0.64 > E_{eq1}[π_i],

for all i = A, 1, 2, 3, 4.

An interesting question is whether there is a way to provide a ranking of the equilibria and identify

the “best” among them. A natural candidate for ranking equilibria is Pareto dominance. The following

proposition provides a sufficient condition for an equilibrium to Pareto dominate another equilibrium.

Informally, it states that an agent is better off delaying taking an irreversible action, as long as the rest

of the agents also choose to delay. Before stating the proposition, we define a partial order relation

on the set of equilibria INFO(Gn).

[Figure 2: Network topologies for Examples 1 and 2. (a) Example 1: Multiple Equilibria. (b) Example 2: No Pareto Dominant Equilibrium.]

Definition 4. Let σ1, σ2 ∈ INFO(Gn). We say that equilibrium σ1 is (weakly) more informative than σ2 (we write σ1 ⪰ σ2) if

σ_{2,(i,t)}(I_{i,t}) = “wait” ⇒ σ_{1,(i,t)}(I_{i,t}) = “wait” for all agents i and time periods t.

The order defined above is partial, since there may exist equilibrium profiles in INFO(Gn) that do not satisfy the condition above for all agents i. In Example 1, Equilibrium 2 is more informative than Equilibrium 1 (Equilibrium 2 ⪰ Equilibrium 1).

Proposition 1. Let Assumption 1 hold. Then, for any two equilibria σ1, σ2 ∈ INFO(Gn), it holds that

σ1 ⪰ σ2 ⇒ σ1 (weakly) Pareto dominates σ2.

Proof. For the sake of contradiction, assume that some agent i is strictly better off under strategy

profile σ2 than under σ1, i.e., E_{σ2}[π_i] > E_{σ1}[π_i]. Then, we show that σ1 cannot be an equilibrium strategy profile. In particular, consider a deviation σ from strategy profile σ1 for agent i such that σ_{i,t} = σ_{2,(i,t)} for all time periods t and σ_{j,t} = σ_{1,(j,t)} for all (j, t), j ≠ i. Next, note that the information set of agent i under strategy profile σ is always at least as large as under strategy profile σ2 (straightforward by induction). Thus,

E_σ[π_i] ≥ E_{σ2}[π_i] > E_{σ1}[π_i],

since taking the same actions as agent i would take under profile σ2 is a feasible strategy for agent i under σ. This implies that σ1 cannot be an equilibrium strategy profile, and thus we reach a contradiction.


Naturally, one might conjecture that the following, more general, intuition is true: more infor-

mation in earlier periods leads to higher expected utilities for all agents and consequently to Pareto

dominant equilibrium profiles. However, this is not necessarily the case, as Example 2 shows.

Example 2. Consider the communication network depicted in Figure 2(b) and let δ = 0.92 and

β = 0.55. Then, the following are equilibria of the information exchange game defined on this com-

munication network (and there is no equilibrium that Pareto dominates both):

(a) Equilibrium 1: All agents on the right of 1, · · · , 10 exit (taking an irreversible action) at time t = 0. Agents 1–10 wait until they communicate with the neighbors of agents A1–A10, respectively (4 time periods). Agent 0 waits for 5 time periods and communicates with the neighbors of agents A1, · · · , A10.

(b) Equilibrium 2: All neighbors on the right of 1, · · · , 10 wait 2 time periods, agents 1–10 exit with high probability at time t = 3, and agent 0 does not communicate with the neighbors of A1, · · · , A10.

Agent 0 is better off in Equilibrium 1 as opposed to Equilibrium 2. On the other hand, all other agents

are better off in Equilibrium 2.

The discussion above implies that, generally, a ranking of equilibria with respect to a Pareto

dominance relation is not possible. In particular, as the previous two examples illustrate, an agent’s

expected payoff is, generally, not monotonic with respect to her neighbors’ exit times. In Example 1

agent A is better off when agents 1–4 delay exiting, whereas in Example 2 agent 0 is better off if

agents A,B and her neighbors exit immediately (at time t = 0). However, we conjecture that there

are simple necessary and sufficient conditions on the structure of the network topology under which

we can guarantee the above monotonicity, which subsequently implies that the set of equilibria of

the information exchange game forms a lattice that has many desirable properties, such as a Pareto

dominant equilibrium. We leave the latter as future work.

3.2 Strategic Communication

The following example introduces strategic communication by showing that lying may lead to

higher ex ante expected utility.

Example 3. Consider the network topology depicted in Figure 3(a) and let δ = 0.9 and β = 0.75.

Let us consider the case where all agents send truthful messages and focus on agents A and B. Consider the case that agent A deviates and lies in the first time period. First, note that given the values of δ and β it is optimal for both A and B to take their irreversible decision at time t ≥ 1, since E[π_{exit at 0}] = β = 0.75 and E[π_{exit at t≥1}] ≥ δ · P_{exit at t≥1}(x_{i,1} = θ | I_{i,1}) ≈ 0.76 > 0.75. Moreover, if agent B observes three identical signals at time t = 1 (i.e., if her signal matches the messages received from agents A and D), then her posterior belief becomes β^3/(β^3 + (1 − β)^3) = 27/28 > δ, thus agent B takes an

irreversible action at t = 1 and does not delay further. This is, however, not the case if the messages that B received are not in agreement, in which case she will want to delay further (and subsequently pass the new information to agent A).

[Figure 3: Network topologies for Examples 3 and 4. (a) Example 3: Strategic Communication. (b) Example 4: Social Planner.]

We conclude that agent A has an incentive to lie in the first time period, since this increases the ex ante probability that she will communicate with agents 3–5. The same intuition applies to the case of babbling, i.e., agent A is ex ante better off if she babbles as opposed to sending a truthful message.
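The numbers in Example 3 can be verified directly. Reading Figure 3(a) as giving each of A and B three conditionally i.i.d. signals at t = 1 (her own plus two received) is our assumption; the helper function is ours:

```python
from math import comb

beta, delta = 0.75, 0.9

def p_majority_correct(m, beta):
    """Probability that the majority of m i.i.d. binary signals of precision
    beta matches the underlying state (m odd, so ties cannot occur)."""
    return sum(comb(m, k) * beta**k * (1 - beta)**(m - k)
               for k in range(m // 2 + 1, m + 1))

# Exiting at t = 0 on one's own signal yields beta = 0.75; waiting one period
# and deciding on three signals yields at least delta * P(majority of 3 correct)
# = 0.9 * 0.84375 = 0.759375 > 0.75, so both A and B prefer to wait.
print(delta * p_majority_correct(3, beta))

# The posterior after three unanimous signals exceeds delta, so B exits at
# t = 1 whenever her signal agrees with both messages: 27/28 > 0.9.
posterior = beta**3 / (beta**3 + (1 - beta)**3)
print(posterior, posterior > delta)
```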

The example above implies that it is optimal for agents to “lie” about (part of) their information

in certain situations. Thus, generally, truthful communication of information cannot be supported

at equilibrium. In particular, one can show that under strategic communication equilibria involve

agents mixing between truthful communication and “babbling”, i.e., sending uninformative messages

(uncorrelated with the underlying state). The mixing probabilities can, in principle, be computed as the outcome of the following tradeoff: on the one hand, revealing as little truthful information as possible in the current time period potentially delays exit decisions; on the other hand, communication should be sufficiently informative for neighboring agents to delay their exit decisions, so as to benefit from

communication in future time periods. The tradeoff described here makes the precise characterization

of equilibria when strategic communication is allowed a very challenging task. However, we will identify

conditions under which truthful communication remains (approximately) an equilibrium.

3.3 Socially Optimal Strategies

A question that arises naturally in this setting is whether a social planner can induce an allocation

that substantially increases the expected aggregate utility of the agents with respect to the equilibrium

allocation that maximizes the same metric. As the following example shows, the social planner can

definitely do better.


Example 4. Consider the network topology depicted in Figure 3(b) and let δ = 0.845 and β = 0.65.

Then, at the unique equilibrium of the information exchange game agents A,B,C,D exit at t = 0 and

as a consequence agents 1, · · · ,m exit at t = 0 as well. The aggregate expected utility of the set of

n = m + 20 agents is simply 0.65 · n. Consider the following allocation that the social planner can

implement: agents A,B,C,D exit at time t = 1 and the rest of the agents (1, · · · ,m) exit at time

t ≥ 1. The ex ante expected utility of this allocation profile is ≈ 0.66 · n.

Theorem 4 identifies conditions under which the social planner can (or cannot) outperform equilibrium allocations and relates the answer to this question to asymptotic learning.

4 Results

In this section, we present our main results on asymptotic learning and network formation and

discuss their implications. We relegate the proofs to Appendices A, B, and C.

4.1 Asymptotic Learning under Truthful Communication

We begin by introducing the concepts that are instrumental for asymptotic learning: the minimum

observation radius, k-radius sets, and leading agents or leaders. All three notions depend only on the

structure of the underlying communication network, i.e., they are equilibrium independent. Thus, we

will define them by assuming that communication is continuous (cf. Assumption 2) and as a result

is not affected by strategic considerations among the agents. Recall that given an equilibrium of the

information exchange game on communication network Gn, σn ∈ INFO(Gn), the optimal action of

agent i at time t, when agent i’s information set is I^n_{i,t}, is given by the following expression:

x^{n,∗}_{i,t}(I^n_{i,t}) ∈ arg max_{y ∈ {“wait”, 0, 1}} E_{((y, σ^n_{i,−t}), σ^n_{−i})}(π^n_i | I^n_{i,t}, y).

As noted above, under Assumption 2, there is no dependence of the expected payoff on the particular equilibrium strategies that players follow. Therefore, we may drop the subscript σ. We define the minimum observation radius of agent i as the following stopping time:

Definition 5. The minimum observation radius of agent i is defined as d^n_i, where

d^n_i = min_{I^n_{i,t} ∈ {0,1}^{|I^n_{i,t}|}} arg min_t { x^{n,∗}_{i,t}(I^n_{i,t}) ∈ {0, 1} }.

In particular, the minimum observation radius of agent i can be simply interpreted as the minimum

number of time periods that agent i will wait before she takes an irreversible action 0 or 1, given that

all other agents do not exit, over any possible realization of the private signals. In Appendix A we

present an efficient algorithm based on dynamic programming for computing the minimum observation

radius of each agent i.

Given the notion of a minimum observation radius, we define k-radius sets as follows.

Definition 6. Let V^n_k be defined as

V^n_k = { i ∈ N^n : |B^n_{i,d^n_i}| ≤ k }.

We refer to V^n_k as the k-radius set.

Intuitively, V^n_k includes all agents that may take an action before they receive signals from more than k other individuals: the size of their (indirect) neighborhood by the time their minimum observation radius d^n_i is reached is no greater than k. Equivalently, agent i belongs to set V^n_k if the number of agents that lie at distance less than d^n_i from i is at most k. From Definition 6 it follows immediately that

i ∈ V^n_k ⇒ i ∈ V^n_{k′} for all k′ > k. (2)
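Both the minimum observation radius and the k-radius set are mechanical once the network and a stopping rule are fixed. The sketch below is our own illustration, not the dynamic program of Appendix A: it uses the simplified rule that an agent exits at the first t at which even unanimous signals within B_{i,t} would push her posterior to at least δ (from that point on, waiting one more period discounts a payoff of at most 1 by δ, so exiting dominates); the in-neighbor lists and the star example are hypothetical.

```python
from collections import deque

def neighborhood_sizes(in_neighbors, i, n):
    """|B_{i,t}| for t = 0..n: how many agents' signals reach i within t
    communication steps (BFS over incoming links; i holds her own signal)."""
    dist = {i: 0}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in in_neighbors.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [sum(1 for d in dist.values() if d <= t) for t in range(n + 1)]

def min_observation_radius(in_neighbors, i, n, beta, delta):
    """Simplified d_i: first t at which m = |B_{i,t}| unanimous signals give a
    posterior beta^m / (beta^m + (1-beta)^m) >= delta, so exiting dominates
    waiting regardless of what further communication could reveal."""
    for t, m in enumerate(neighborhood_sizes(in_neighbors, i, n)):
        if beta**m / (beta**m + (1 - beta)**m) >= delta:
            return t
    return n  # threshold never reached within the horizon

def k_radius_set(in_neighbors, n, k, beta, delta):
    """V_k: agents whose neighborhood at time d_i has at most k members."""
    members = []
    for i in range(n):
        d = min_observation_radius(in_neighbors, i, n, beta, delta)
        if neighborhood_sizes(in_neighbors, i, n)[d] <= k:
            members.append(i)
    return members

# A 5-agent star with the values of Example 1 (beta = 0.635, delta = 0.855):
# the hub hears all four spokes at t = 1; the spokes hear no one.
star = {0: [1, 2, 3, 4]}  # in_neighbors[i] = agents whose signals i receives
print(k_radius_set(star, 5, 1, beta=0.635, delta=0.855))  # [1, 2, 3, 4]
```

In this toy society only the hub escapes V_1; the condition in Theorem 1 below requires the fraction of such bounded-radius agents to vanish as the society grows.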

Finally, we define leading agents or leaders. Let indeg^n_i = |B^n_{i,1}| and outdeg^n_i = |{ j : i ∈ B^n_{j,1} }| denote the in-degree and out-degree of agent i in communication network Gn, respectively.

Definition 7. A collection {S^n}∞n=1 of sets of agents is called a set of leaders if

(i) There exists k > 0, such that S^{n_j} ⊆ V^{n_j}_k for all j ∈ J, where J is an infinite index set.

(ii) lim_{n→∞} (1/n) · |S^n| = 0, i.e., the collection {S^n}∞n=1 contains a negligible fraction of the agents as the society grows.

(iii) lim_{n→∞} (1/n) · |S^n_{follow}| > ε, for some ε > 0, where S^n_{follow} denotes the set of followers of S^n. In particular, i ∈ S^n_{follow} if there exists j ∈ S^n such that j ∈ B^n_{i,d^n_i}.

The following theorem provides a necessary and sufficient condition for asymptotic learning to

occur in a society under Assumption 2.

Theorem 1. Let Assumptions 1 and 2 hold. Then, asymptotic learning occurs in society {Gn}∞n=1 (in any equilibrium σ) if and only if

lim_{k→∞} lim_{n→∞} (1/n) · |V^n_k| = 0. (3)
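Condition (3) rests on a simple statistical fact: a decision based on a bounded number of signals is wrong with probability bounded away from zero, and the error vanishes only as the number of signals grows. A quick illustration (the helper function is ours):

```python
from math import comb

def p_majority_correct(m, beta):
    """Probability that the majority of m i.i.d. binary signals of precision
    beta matches the state (m odd to rule out ties)."""
    return sum(comb(m, k) * beta**k * (1 - beta)**(m - k)
               for k in range(m // 2 + 1, m + 1))

beta = 0.635  # the precision used in Example 1
for m in (1, 5, 21, 101):
    print(m, round(p_majority_correct(m, beta), 4))
# For any fixed m the error probability stays bounded away from zero, which is
# why agents in V_k (for fixed k) fail to learn; condition (3) rules out a
# non-vanishing fraction of such agents.
```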

Intuitively, asymptotic learning is precluded if there exists a significant fraction of the society that will take an action before seeing a large set of signals, since in this case there is a positive probability that each individual makes a mistake, her decision being based on a small set of signals. Since we consider a sequence of networks becoming arbitrarily large (“a society”), this condition requires the fraction of individuals with any finite radius set to go to zero. The rest of this subsection

describes the implications of Theorem 1, which provide more economic intuition on the asymptotic

learning result. However, we first provide the analogue of the theorem under Assumption 3.

Theorem 2. Let Assumptions 1 and 3 hold. Then,

(i) Asymptotic learning occurs in society {Gn}∞n=1 in any equilibrium σ if condition (3) holds.

(ii) Conversely, if condition (3) does not hold for society {Gn}∞n=1 and the society does not contain a set of leaders, then asymptotic learning does not occur in any equilibrium σ.


This theorem shows that the major results in Theorem 1, in particular the sufficiency part of the theorem, continue to hold under Assumption 3. Put differently, provided that the society does not

contain a set of leaders, then asymptotic learning occurs under Assumption 3 if and only if it occurs

under Assumption 2. However, as we show below, asymptotic learning can occur under Assumption

3 when there exists a set of leaders, even if it does not occur under Assumption 2.

We next discuss the implications of Theorems 1 and 2. In particular, the next three corollaries

identify the role of certain types of agents in information spread in a given society.

Similarly to k-radius sets, we define sets U^n_k for scalar k > 0 as

U^n_k = { i ∈ N^n : there exists ℓ ∈ B^n_{i,d^n_i} with indeg^n_ℓ > k },

i.e., the set U^n_k consists of all agents that have an agent with in-degree greater than k within their minimum observation radius.

Corollary 1. Let Assumptions 1 and 3 hold. Then, asymptotic learning occurs in society {Gn}∞n=1

if

lim_{k→∞} lim_{n→∞} (1/n) · |U^n_k| = 1.

Intuitively, Corollary 1 states that if all but a negligible fraction of the agents are at a short distance

(at most the minimum observation radius) from an agent with a high in-degree, then asymptotic

learning occurs. This corollary therefore identifies a group of agents that is crucial for a society to permit asymptotic learning: information mavens, who have high in-degrees and can thus act as effective aggregators of information (a term inspired by Gladwell (2000)). Information mavens are one type of hub, the importance of which was already mentioned in the Introduction. Our next definition

formalizes this notion further and enables an alternative sufficient condition for asymptotic learning.

Definition 8. Agent i is called an information maven of society {Gn}∞n=1 if i has an infinite in-degree, i.e., if

lim_{n→∞} indeg^n_i = ∞.

Let MAVEN ({Gn}∞n=1) denote the set of mavens of society {Gn}∞n=1.

For any agent j, let d^{MAVEN,n}_j denote the shortest distance in communication network Gn between j and a maven k ∈ MAVEN({Gn}∞n=1). Finally, let W^n denote the set of agents that are at distance at most equal to their minimum observation radius from a maven in communication network Gn, i.e., W^n = { j : d^{MAVEN,n}_j ≤ d^n_j }.
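Membership in W^n can be checked with a multi-source breadth-first search from the mavens. The sketch below is ours: it assumes that a maven's information travels to followers along the maven's outgoing links, and the toy network and observation radii are hypothetical.

```python
from collections import deque

def dist_to_nearest_maven(out_neighbors, mavens, n):
    """d^MAVEN_j for every agent j: length of the shortest directed path over
    which a maven's signal can travel to j (multi-source BFS from the mavens
    along outgoing links)."""
    dist = {m: 0 for m in mavens}
    queue = deque(mavens)
    while queue:
        u = queue.popleft()
        for v in out_neighbors.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist.get(j, float("inf")) for j in range(n)]

# Toy network: agent 0 is the maven; it feeds agents 1 and 2, which feed 3 and 4.
out_neighbors = {0: [1, 2], 1: [3], 2: [4]}
d_maven = dist_to_nearest_maven(out_neighbors, mavens=[0], n=5)
print(d_maven)  # [0, 1, 1, 2, 2]

# W^n then consists of agents j with d_maven[j] <= d[j], their minimum
# observation radius; the radii here are hypothetical, for illustration only.
d = [1, 1, 1, 2, 3]
W = [j for j in range(5) if d_maven[j] <= d[j]]
print(W)  # [0, 1, 2, 3, 4]
```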

The following corollary highlights the importance of information mavens for asymptotic learning.

Informally, it states that if almost all agents have a short path to a maven, then asymptotic learning

occurs.

Corollary 2. Let Assumptions 1 and 3 hold. Then, asymptotic learning occurs in society {Gn}∞n=1 if

lim_{n→∞} (1/n) · |W^n| = 1.


Corollary 2 thus clarifies that asymptotic learning is obtained when there are information mavens and almost all agents are a “short distance” away from one (less than their minimum observation radius).4

As mentioned in the Introduction, a second type of information hubs also plays an important role

in asymptotic learning. While mavens have high in-degree and are thus able to effectively aggregate

dispersed information, because our communication network is directed, they may not be in the right

position to distribute this aggregated information. If so, even in a society that has several information

mavens, a large fraction of the agents may not benefit from their information. Social connectors, on

the other hand, are defined as agents with a high out-degree, and thus play the role of spreading the

information aggregated by the mavens.5 Before stating the corollary, we define social connectors.

Definition 9. Agent i is called a social connector of society {Gn}∞n=1 if i has an infinite out-degree, i.e., if

lim_{n→∞} outdeg^n_i = ∞.

Corollary 3. Let Assumptions 1 and 3 hold. Consider society {Gn}∞n=1, which is such that the

sequence of in- and out-degrees is non-decreasing for every agent and the set of information mavens does not grow at the same rate as the society itself, i.e.,

lim_{n→∞} |MAVEN({Gn}∞n=1)| / n = 0.

Then, for asymptotic learning to occur, the society should contain a social connector within a short distance of a maven, i.e.,

d^{MAVEN,n}_i ≤ d^n_i, for some social connector i.

Corollary 3 thus states that unless a non-negligible fraction of the agents belongs to the set of mavens (so that the rest can obtain information directly from a maven), information aggregated at the mavens is spread through the out-links of a connector (note that an agent can be

both a maven and a connector). Combined with the previous corollaries, this result implies that there

are essentially two ways in which society can achieve asymptotic learning. First, it may contain several

information mavens who not only collect and aggregate information but also distribute it to almost all

the agents in the society. Second, it may contain a sufficient number of information mavens, who pass

their information to social connectors, and almost all the agents in the society are a short distance

away from social connectors and thus obtain accurate information from them. This latter pattern has a greater plausibility in practice than one in which the same agents will collect and distribute dispersed information. For example, if a website or a news source can rely on information mavens (journalists, researchers or analysts) to collect sufficient information and then reach a large number of individuals, then information will be economically aggregated and distributed in the society.

4 This corollary is weaker than Corollary 1. This is simply a technicality because the sequence of communication networks {Gn}∞n=1 is arbitrary. In particular, we have not assumed that the in-degree of an agent is non-decreasing with n; thus the limit in the corollary may not be well defined for arbitrary sequences of communication networks.

5 For simplicity (and to avoid the technical issues mentioned in the previous footnote) we assume for this corollary that both the in- and out-degree sequences of agents are non-decreasing with n (note that we can rewrite the corollary for any sequence of in- and out-degrees at the expense of introducing additional notation).

Theorem 2 is not stated as an if and only if result because the fact that condition (3) does not hold in a society does not necessarily preclude asymptotic learning. The following example involves

a society for which condition (3) does not hold, in which the actions chosen by the entire population

depend on the equilibrium behavior of a small set of leaders. In particular, if the leaders delay their

irreversible decision long enough, then the actions of the rest of the society are solely based on the

information that these agents communicate and asymptotic learning fails. However, if leaders do not

coordinate at equilibrium, then they exit early and this leads the rest of the agents to take a delayed,

but more informed, irreversible action.

Example 5. Consider the communication network depicted in Figure 4 and let the discount factor

δ = 0.9 and the precision of private signals β = 0.55. Then, the following are equilibria of the

information exchange game defined on this communication network:

(a) Equilibrium 1: Agents A,B,C take an irreversible action at time t = 0, in which case agents

1, · · · , n are forced to communicate with the neighbors of agent-maven m and take the correct

action with arbitrarily high probability.

(b) Equilibrium 2: Agents A,B,C decide to “wait” at t = 0 and take an irreversible action at time

t ≥ 1. Then, there is positive probability that agents 1, · · · , n exit before they communicate

with the maven. They exit earlier, but there is positive probability that they take the wrong

action.

The results summarized in Theorem 2, Corollaries 1, 2 and 3 can be seen both as positive and

negative, as already noted in the Introduction. On the one hand, communication structures that do

not feature information mavens (or connectors) will not lead to asymptotic learning, and information

mavens may be viewed as unrealistic or extreme. On the other hand, as already noted above, much

communication in modern societies happens through agents that play the role of mavens and connectors

(see again Gladwell (2000)). These are highly connected agents that are able to collect and distribute

crucial information. Perhaps more importantly, most individuals obtain some of their information

from news sources, media, and websites, which exist partly or primarily for the purpose of acting as

information mavens and connectors.6

6 For example, a news website such as cnn.com acts as a connector that spreads the information aggregated by the journalists-mavens to interested readers. Similarly, a movie review website, e.g., imdb.com, spreads the aggregate knowledge of movie reviewers to interested movie aficionados.


[Figure 4: Learning in a society in which condition (3) does not hold.]

4.2 Strategic Communication and ε-equilibria

Next we explore the implications of relaxing the assumption that agents cannot manipulate the messages they send, i.e., that information on private signals is hard. In particular, we replace Assumption 3 with Assumption 4 and we allow agents to lie about their own private signal or any of the private signals they have obtained information about, i.e., m^n_{ij,t}(s_{i_1}, · · · , s_{i_{|I^n_{i,t}|}}) ≠ (s_{i_1}, · · · , s_{i_{|I^n_{i,t}|}}), where the latter is the true vector of private signals of agents in I^n_{i,t}. Informally, an agent has an incentive to misreport information, so as to delay her neighbors taking irreversible actions, which in turn prolongs the information exchange process.

Let (σn, mn) denote an action-message strategy profile, where m^n = {m^n_1, · · · , m^n_n} and m^n_i = [m^n_{ij,t}]_{t=0,1,···}, for j such that i ∈ B^n_{j,1}. Also, let the subscript (σn, mn) refer to the probability measure induced by the action-message strategy profile.

Definition 10. An action-message strategy profile (σ^{n,∗}, m^{n,∗}) is a pure-strategy ε-Perfect Bayesian Equilibrium of the information exchange game Γinfo(Gn) if for every i ∈ N^n and time t, we have

E_{(σ^{n,∗}, m^{n,∗})}(π^n_i | I^n_{i,t}, σ^{n,∗}_{i,t}(I^n_{i,t})) ≥ E_{((σ^n_i, σ^{n,∗}_{−i}), (m^n_i, m^{n,∗}_{−i}))}(π^n_i | I^n_{i,t}, σ^n_{i,t}(I^n_{i,t})) − ε,

for all m^n_i and σ^n_i.

We denote the set of ε-equilibria of this game by ε-INFO(Gn, Sn).

Informally, a strategy profile is an ε-equilibrium if there is no deviation from the profile such that an agent increases her expected payoff by more than ε. Similarly to Definition 3, we define ε-asymptotic learning for a given society.

Definition 11. We say that ε-asymptotic learning occurs in society {Gn}∞n=1 along ε-equilibrium (σ, m) if for every ζ > 0, we have

lim_{n→∞} lim_{t→∞} P_{(σ,m)}( [ (1/n) Σ_{i=1}^n (1 − M^n_{i,t}) ] > ζ ) = 0.

We show that strategic communication does not harm asymptotic learning. The main intuition

behind this result is that it is approximately optimal for an agent to report her private signal truthfully

to a neighbor with a high in-degree (maven).

Theorem 3. Let Assumption 1 hold. If asymptotic learning occurs in society {Gn}∞n=1 under As-

sumption 3, then there exists an ε-equilibrium (σ,m), such that ε-asymptotic learning occurs in society

{Gn}∞n=1 along ε-equilibrium (σ, m) when we allow strategic communication (cf. Assumption 4).

This theorem therefore implies that the focus on truthful reporting was without much loss of

generality as far as asymptotic learning is concerned. In any communication network in which there is

asymptotic learning, even if agents can strategically manipulate information, there is arbitrarily little

benefit in doing so. Thus, the main lessons about asymptotic learning derived above apply regardless

of whether communication is strategic or not.

However, this theorem does not imply that all learning outcomes are identical under truthful and

strategic communication. In particular, interestingly, we can construct examples in which strategic

communication leads agents to take the correct action with higher probability than under non-strategic

communication (cf. Assumption 3). The main reason for this (counterintuitive) fact is that under

strategic communication an agent may delay taking an action compared to the non-strategic environment. Therefore, the agent obtains more information from the communication network and,

consequently, chooses the correct action with higher probability.

4.3 Welfare

The following theorem relates the performance of a social planner with the question of asymptotic

learning. In particular, we show that the social planner can do considerably better than the best

equilibrium allocation in terms of aggregate expected welfare only for societies in which condition (3)

fails to hold. Let E_sp[π_i] and E_σ[π_i] denote the expected payoff of agent i in a communication network of n agents under the allocation imposed by the social planner and under equilibrium σ, respectively. The

following theorem states that when asymptotic learning occurs in a society, then the social planner

cannot achieve a significant increase in the expected aggregate welfare of the agents.

Theorem 4. Let Assumptions 1 and 3 hold. Consider society {G^n}_{n=1}^∞ and any equilibrium σ = {σ^n}_{n=1}^∞ (σ^n ∈ INFO(G^n) for all n). Then,

Condition (3) holds in society {G^n}_{n=1}^∞ ⇒ lim_{n→∞} ( Σ_{i∈N^n} E_sp[π_i] − Σ_{i∈N^n} E_σ[π_i] ) / n = 0.


However, as illustrated in Example 4, the social planner can outperform equilibrium allocations if certain conditions are met. In Proposition 2, we lay out a sufficient (but not necessary) condition on the structure of a society under which the social planner can achieve an allocation that significantly improves over any equilibrium in terms of aggregate welfare. Informally, the social planner performs better than equilibrium when the structure of the society exhibits sufficient heterogeneity, i.e., there is a large fraction of agents who obtain information through a small number of agents (an information bottleneck).

Definition 12. A collection {S^n}_{n=1}^∞ of sets of agents is called an information bottleneck of society {G^n}_{n=1}^∞ if

(i) lim_{n→∞} (1/n) · |S^n| = 0, i.e., the collection {S^n}_{n=1}^∞ contains a negligible fraction of the agents as the society grows.

(ii) there exists an ε > 0 such that lim sup_{n→∞} (1/n) · |S^n_f| > ε, where {S^n_f}_{n=1}^∞ denotes a subset of the followers of collection {S^n}_{n=1}^∞ that satisfies a number of properties. In particular, if i ∈ S^n_f, there exists j ∈ S^n such that

(a) d^n_i > d^n_j + dist^n(i, j), where recall that d^n_i is the minimum observation radius of agent i at G^n.

(b) there exists an agent k such that dist^n(i, k) = d^n_j + dist^n(i, j) + 1 and j ∈ P^n_{ik} for all paths P^n_{ik} from i to k in G^n.

Informally, Definition 12 refers to a small fraction of agents (collection {S^n}_{n=1}^∞) that blocks access to information for a large set of agents (collection {S^n_f}_{n=1}^∞).

The following proposition formalizes the discussion above by providing a particular subset of societies in which the social planner can significantly outperform equilibrium behavior.

Proposition 2. Let Assumptions 1 and 3 hold and consider a society {G^n}_{n=1}^∞ in which condition (3) fails to hold. Then, there exists an ε > 0 such that

{G^n}_{n=1}^∞ contains an information bottleneck {S^n}_{n=1}^∞ ⇒ lim sup_{n→∞} ( Σ_{i∈N^n} E_sp[π_i] − Σ_{i∈N^n} E_σ[π_i] ) / n > ε,

for every equilibrium σ.

Proposition 2 is a step toward establishing a connection between societies in which asymptotic learning does not occur and societies in which a social planner can achieve a significantly more efficient allocation than equilibrium. Although the specifics of the sufficient condition presented above make it rather restrictive, the connection is far more general. The challenge in relaxing the condition lies in quantifying the value of additional information under different realizations of the private signals.


Figure 5: Social Cliques. Two social cliques; links within a clique have cost 0 and links across cliques have cost c.

5 Network Formation

In this section, we investigate the first stage of the network learning game, where agents choose their

communication network by forming potentially costly links. The costs of forming links are captured

by a sequence of cost matrices (corresponding to the sequence of networks). Our main results identify

properties of the communication cost matrices that lead to equilibria of the information exchange

game that induce asymptotic learning.

As in the analysis of the information exchange game, we consider a sequence of communication cost matrices {C^n}_{n=1}^∞, where for fixed n,

C^n : N^n × N^n → R_+ and C^n_{ij} = C^{n+1}_{ij} for all i, j ∈ N^n. (4)

For the remainder of the section, we restrict our attention to the social cliques communication cost structure. The properties of this communication structure are stated in the next assumption.

Assumption 5. Let c^n_{ij} ∈ {0, c} for all pairs (i, j) ∈ N^n × N^n, where c < 1 − β. Moreover, let c^n_{ij} = c^n_{ji} for all i, j ∈ N^n (symmetry), and c^n_{ij} + c^n_{jk} ≥ c^n_{ik} for all i, j, k ∈ N^n (triangular inequality).

The assumption that c < 1 − β rules out the degenerate case where no agent forms a costly link.

The symmetry and triangular inequality assumptions are imposed to simplify the definition of a social

clique, which is introduced next. Let Assumption 5 hold. We define a social clique (cf. Figure 5) H^n ⊂ N^n as a set of agents such that

i, j ∈ H^n if and only if c^n_{ij} = c^n_{ji} = 0.

Note that this set is well-defined since, by the triangular inequality and symmetry assumptions, if an agent i does not belong to social clique H^n, then c^n_{ij} = c for all j ∈ H^n. Hence, we can uniquely

partition the set of nodes N^n into a set of K^n pairwise disjoint social cliques 𝓗^n = {H^n_1, · · · , H^n_{K^n}}. We use the notation 𝓗^n_k to denote the set of pairwise disjoint social cliques that have cardinality greater than or equal to k, i.e., 𝓗^n_k = {H^n_i, i = 1, . . . , K^n | |H^n_i| ≥ k}. We also use sc^n(i) to denote the social clique that agent i belongs to.
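As an illustrative aside (ours, not part of the paper's formal development), the clique partition can be computed directly from a cost matrix satisfying Assumption 5: the zero-cost relation is symmetric and, by the triangular inequality, transitive, so the cliques are exactly the connected components of the zero-cost graph. A minimal Python sketch with a hypothetical 5-agent cost matrix:

```python
from collections import defaultdict

def social_cliques(cost):
    """Partition agents into social cliques given a symmetric cost matrix
    with entries in {0, c}.  Under Assumption 5 the zero-cost relation is
    transitive, so cliques are the connected components of the zero-cost
    graph; a union-find over zero-cost pairs recovers them."""
    n = len(cost)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if cost[i][j] == 0:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    groups = defaultdict(list)
    for i in range(n):
        groups[find(i)].append(i)
    return sorted(groups.values())

# Two cliques {0, 1, 2} and {3, 4}; cross-clique links cost c = 0.5.
c = 0.5
cost = [[0, 0, 0, c, c],
        [0, 0, 0, c, c],
        [0, 0, 0, c, c],
        [c, c, c, 0, 0],
        [c, c, c, 0, 0]]
print(social_cliques(cost))  # prints: [[0, 1, 2], [3, 4]]
```

Union-find is used only for convenience; a breadth-first search over zero-cost links would do equally well.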

We consider a sequence of communication cost matrices {Cn}∞n=1 satisfying condition (4) and


Assumption 5, and we refer to this sequence as a communication cost structure. As shown above, the communication cost structure {C^n}_{n=1}^∞ uniquely defines the sequences {𝓗^n}_{n=1}^∞ and {𝓗^n_k}_{n=1}^∞ for k > 0 of sets of pairwise disjoint social cliques. Moreover, it induces network equilibria (g, σ) = {(g^n, σ^n)}_{n=1}^∞ such that (g^n, σ^n) ∈ NET(C^n) for all n.

Theorem 5. Let {C^n}_{n=1}^∞ be a communication cost structure and let Assumptions 1, 3 and 5 hold. Then, there exists a constant k = k(c) such that the following hold:

(a) Suppose that

lim sup_{n→∞} |𝓗^n_k| / n ≥ ε for some ε > 0. (5)

Then, asymptotic learning does not occur in any network equilibrium (g, σ).

(b) Suppose that

lim_{n→∞} |𝓗^n_k| / n = 0 and lim_{n→∞} |H^n_ℓ| = ∞ for some ℓ. (6)

Then, asymptotic learning occurs in all network equilibria (g, σ) when the discount factor δ satisfies √(c + β) < δ < 1.

(c) Suppose that

lim_{n→∞} |𝓗^n_k| / n = 0 and lim sup_{n→∞} |H^n_ℓ| < M for all ℓ, (7)

where M > 0 is a scalar, and let agents be patient, i.e., consider the case when the discount factor δ → 1. Then, there exists a c̄ > 0 such that

(i) If c ≤ c̄, asymptotic learning occurs in all network equilibria (g, σ).

(ii) If c > c̄, asymptotic learning depends on the network equilibrium considered. In particular, there exists at least one network equilibrium (g, σ) where there is no asymptotic learning and there exists at least one network equilibrium (g, σ) where asymptotic learning occurs.

The results in this theorem provide a fairly complete characterization of what types of environments

will lead to the formation of networks that will subsequently induce asymptotic learning. The key

concept is that of a social clique, which represents groups of individuals that are linked to each other at

zero cost. These can be thought of as “friendship networks,” which are linked for reasons unrelated to

information exchange and thus can act as conduits of such exchange at low cost. Agents can exchange

information without incurring any costs (beyond the delay necessary for obtaining information) within

their social cliques. However, if they wish to obtain further information, from outside their social

cliques, they have to pay a cost at the beginning in order to form a link. Even though network

formation games have several equilibria, the structure of our network formation and information

exchange game enables us to obtain relatively sharp results on what types of societies will lead to

endogenously formed communication networks that ensure asymptotic learning. In particular, the


Figure 6: Network Formation Among Social Cliques. (a) Equilibrium network when (6) holds: small cliques and individual agents link to cliques of unbounded size (size > k). (b) Equilibrium network when (7) holds: small cliques form a sender-receiver ring.

first part of Theorem 5 shows that asymptotic learning cannot occur in any equilibrium if the number

of social cliques increases at the same rate as the size of the society (or its lim sup does so). This is

intuitive; when this is the case, there will be many social cliques of sufficiently large size that none of

their members wish to engage in further costly communication with members of other social cliques.

But since several of these will not contain an information hub, they have a positive probability of not

taking the correct action, thus precluding social learning.

In contrast, the second part of the theorem shows that if the number of disjoint social cliques is

limited (grows less rapidly than the size of the society) and some of them are large enough to contain

information hubs, then asymptotic learning will take place (provided that the discount factor is not

too small). In this case, as shown by Figure 6(a), sufficiently many social cliques will connect to the

larger social cliques acting as information hubs, ensuring effective aggregation of information for the

great majority of the agents in the society. It is important that the discount factor is not too small,

otherwise smaller cliques will not find it beneficial to form links with the larger cliques.

Finally, the third part of the theorem outlines a more interesting configuration, potentially leading

to asymptotic learning. In this case, many small social cliques form an “informational ring” (Figure 6(b)). Each is small enough that it finds it beneficial to connect to another social clique, provided

that this other clique will also connect to others and obtain further information. This intuition also

clarifies why such information aggregation takes place only in some equilibria. The expectation that

others will not form the requisite links leads to a coordination failure. Interestingly, however, if the

discount factor is sufficiently large and the cost of link formation is not too large, the coordination

failure equilibrium disappears, because it becomes beneficial for each clique to form links with another

one, even if further links are not forthcoming.


6 Asymptotic Learning in Random Graphs

As an illustration of the results we outlined in Section 4, we apply them to a series of commonly

studied random graph models. We begin by providing the definitions for the graph models we focus

on. Note that in the present section we assume that communication networks are bidirectional, or equivalently that if agent i ∈ B^n_{j,1} then j ∈ B^n_{i,1}.

Definition 13. A sequence of communication networks {G^n}_{n=1}^∞, where G^n = {N^n, E^n}, is called

(i) complete if for every n we have

(i, j) ∈ E^n for all i, j ∈ N^n.

(ii) k-bounded degree for a scalar k > 0, if for every n we have

|B^n_{i,1}| ≤ k for all i ∈ N^n,

where recall that B^n_{i,1} denotes the agents that are one link away from agent i in communication network G^n.

(iii) a star if for every n we have

(i, 1) ∈ E^n and (i, j) ∉ E^n for all i ∈ N^n and j ≠ 1.

Definition 14 (Erdős-Rényi). A sequence of communication networks {G^n}_{n=1}^∞, where G^n = {N^n, E^n}, is called Erdős-Rényi if for every n we have

P((i, j) ∈ E^n) = p/n independently for all i, j ∈ N^n,

where p is a scalar such that 0 < p < 1.
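As a quick illustration (ours, not the paper's), sampling from this model amounts to including each potential link independently with probability p/n; the expected degree is then roughly p, so such networks are very sparse:

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample an undirected Erdos-Renyi network on agents 0..n-1,
    where each link is present independently with probability p/n."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p / n:
                edges.add((i, j))
    return edges

g = erdos_renyi(1000, 0.8)
# The expected number of edges is C(n, 2) * p / n = (n - 1) * p / 2, i.e.
# roughly 400 here, so the realized count should be in that vicinity.
print(len(g))
```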

Definition 15 (Power-Law). A sequence of communication networks {G^n}_{n=1}^∞, where G^n = {N^n, E^n}, is called Power-Law with exponent γ > 0 if we have

lim_{n→∞} (1/n) Σ_{i∈N^n} 1_{|B^n_{i,1}| = k} = c_k · k^{−γ} for every scalar k > 0,

where c_k is a constant. In other words, the fraction of nodes in the network having degree k, for every k > 0, follows a power law distribution with exponent γ.

Definition 16 (Preferential Attachment). A sequence of communication networks {G^n}_{n=1}^∞, where G^n = {N^n, E^n}, is called preferential attachment if it was generated by the following process:

(i) Begin the process with G^1 = {{1}, {(1, 1)}}, i.e., the communication network that contains agent 1 and a loop edge.

(ii) At step n, add agent n + 1 to G^n. Choose an agent w from G^n and let E^{n+1} = E^n ∪ {(n + 1, w)}. Agent w is chosen according to the preferential attachment rule, i.e., w = j for j ∈ N^n with probability

P(w = j) = deg(j) / Σ_{ℓ∈N^n} deg(ℓ),

where deg(j) denotes the degree of node j at step n.
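The process in Definition 16 can be sketched as follows (an illustration under our own conventions: agents are indexed from 0 and the initial self-loop is counted as contributing 2 to agent 0's degree, a common convention the paper does not spell out):

```python
import random

def preferential_attachment(n, seed=0):
    """Sketch of Definition 16.  Start from agent 0 with a self-loop;
    each new agent links to an existing agent chosen with probability
    proportional to its degree.  The list `urn` holds each agent once
    per unit of degree, so a uniform draw from it implements the
    preferential attachment rule in O(1) per step."""
    rng = random.Random(seed)
    edges = [(0, 0)]       # initial self-loop
    urn = [0, 0]           # the self-loop contributes 2 to agent 0's degree
    for new in range(1, n):
        w = rng.choice(urn)
        edges.append((new, w))
        urn += [new, w]    # both endpoints gain one unit of degree
    degree = {}
    for i in urn:
        degree[i] = degree.get(i, 0) + 1
    return edges, degree

edges, degree = preferential_attachment(10_000)
# Early agents tend to accumulate large degree, i.e., to become the
# "information hubs" discussed in the text.
```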


Figure 7: Example Society Structures. (a) Hierarchical society (layers 1-3). (b) Complete society. (c) Star society.

Definition 17 (Hierarchical). A sequence of communication networks {G^n}_{n=1}^∞, where G^n = {N^n, E^n}, is called ζ-hierarchical (or simply hierarchical) if it was generated by the following process:

(i) Agents are born and placed into layers. In particular, at each step n = 1, · · · , a new agent is born and placed in the current layer ℓ.

(ii) Layer index ℓ is initialized to 1 (i.e., the first node belongs to layer 1). A new layer is created (and subsequently the layer index increases by one) at time period n ≥ 2 with probability 1/n^{1+ζ}, where ζ > 0.

(iii) Finally, for every n we have

P((i, j) ∈ E^n) = p / |N^n_ℓ|, independently for all i, j ∈ N^n that belong to the same layer ℓ,

where N^n_ℓ denotes the set of agents that belong to layer ℓ at step n and p is a scalar such that 0 < p < 1. Moreover,

P((i, k) ∈ E^n) = 1 / |N^n_{<ℓ}| and Σ_{k∈N^n_{<ℓ}} P((i, k) ∈ E^n) = 1 for all i ∈ N^n_ℓ, k ∈ N^n_{<ℓ}, ℓ > 1,

where N^n_{<ℓ} denotes the set of agents that belong to a layer with index lower than ℓ at step n.
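A generative sketch of Definition 17 (ours; links are sampled once the layer assignment is fixed, `zeta` and `p` are free parameters, and cross-layer edges are stored as (newer, older) tuples):

```python
import random

def hierarchical_network(n, zeta, p, seed=0):
    """Sketch of the zeta-hierarchical process (Definition 17).
    Layer assignment: the agent born at step t joins the current layer;
    a new layer opens at step t >= 2 with probability 1/t**(1+zeta).
    Links: within a layer, each pair appears with probability p/|layer|;
    each agent in a layer > 1 also links to one uniformly chosen agent
    from the lower layers (matching the sum-to-one condition above)."""
    rng = random.Random(seed)
    layers = [[0]]                       # agent 0 sits in layer 1
    for t in range(2, n + 1):            # agent t-1 is born at step t
        if rng.random() < 1.0 / t ** (1 + zeta):
            layers.append([])            # open a new layer
        layers[-1].append(t - 1)
    edges = set()
    below = []                           # agents in strictly lower layers
    for members in layers:
        m = len(members)
        for a in range(m):               # within-layer links
            for b in range(a + 1, m):
                if rng.random() < p / m:
                    edges.add((members[a], members[b]))
        if below:                        # one uniform link to lower layers
            for i in members:
                edges.add((i, rng.choice(below)))
        below += members
    return layers, edges

layers, edges = hierarchical_network(300, 0.5, 0.5)
```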

Intuitively, a hierarchical sequence of communication networks resembles a pyramid, where the top contains only a few agents and the number of agents grows as we move towards the base. The following interpretation illustrates the model. Agents in top layers can be thought of as “special” nodes that the rest of the nodes have a high incentive to connect to. Moreover, agents tend to connect to other agents in the same layer, as they share common features with them (homophily). As a concrete example, academia can be thought of as such a pyramid: the top layer includes a few institutions, the next layer includes academic departments and research labs, and at the lower levels reside the home pages of professors and students.

Proposition 3. Let Assumptions 1 and 3 hold, and consider society {G^n}_{n=1}^∞ and discount factor δ < 1. Then,


(i) Asymptotic learning does not occur in society {G^n}_{n=1}^∞ if the sequence of communication networks {G^n}_{n=1}^∞ is k-bounded degree, for some constant k > 0.

(ii) For every ε > 0, asymptotic learning does not occur in society {G^n}_{n=1}^∞ with probability at least 1 − ε, if the sequence of communication networks {G^n}_{n=1}^∞ is preferential attachment.

(iii) For every ε > 0, asymptotic learning does not occur in society {G^n}_{n=1}^∞ with probability at least 1 − ε, if the sequence of communication networks {G^n}_{n=1}^∞ is Erdős-Rényi.

Proposition 4. Let Assumptions 1 and 3 hold and consider society {G^n}_{n=1}^∞. Then,

(i) Asymptotic learning occurs in society {G^n}_{n=1}^∞ if the sequence of communication networks {G^n}_{n=1}^∞ is complete and the discount factor δ is larger than some scalar δ_1 < 1.

(ii) Asymptotic learning occurs in society {G^n}_{n=1}^∞ if the sequence of communication networks {G^n}_{n=1}^∞ is a star and the discount factor δ is larger than some scalar δ_2 < 1.

(iii) Let ε > 0. Then, asymptotic learning occurs in society {G^n}_{n=1}^∞ with probability at least 1 − ε, if the sequence of communication networks {G^n}_{n=1}^∞ is Power-Law with exponent γ ≤ 2 and the discount factor δ is larger than some scalar δ_3(ε) < 1.

(iv) Let ε > 0. Then, asymptotic learning occurs in society {G^n}_{n=1}^∞ with probability at least 1 − ε, if the sequence of communication networks {G^n}_{n=1}^∞ is ζ(ε)-hierarchical and the discount factor δ is larger than some scalar δ_4(ε) < 1.

The results presented provide additional insights on the conditions under which asymptotic learning takes place. The popular preferential attachment and Erdős-Rényi graphs do not lead to asymptotic learning, which can be interpreted as implying that asymptotic learning is unlikely in several important networks. Nevertheless, these network structures, though often used in practice, do not provide a good description of the structure of many real life networks. In contrast, our results also showed that asymptotic learning takes place in power law graphs with small exponent γ ≤ 2, and such graphs appear to provide a better representation for many networks related to communication, including peer-to-peer networks, scientific collaboration networks (in experimental physics) and traffic in networks (Jovanovic, Annexstein, and Berman (2001), Newman (2001), Toroczkai and Bassler (2004), H. Seyed-allaei and Marsili (1999)). Asymptotic learning also takes place in hierarchical graphs, where “special” agents are likely to receive and distribute information to lower layers of the hierarchy.

7 Conclusion

In this paper, we develop a framework for the analysis of information exchange through commu-

nication and investigate its implications for information aggregation in large societies. An underlying

state (of the world) determines which action has higher payoff. Agents decide which agents to form

a communication link with incurring the associated cost and receive a private signal correlated with


an underlying state. They then exchange information over the induced communication network until

taking an (irreversible) action.

Our focus has been on asymptotic learning, defined as the fraction of agents taking the correct

action converging to one in probability as a society grows large. We showed that asymptotic learning

occurs if and, under some additional mild assumptions, only if the induced communication network

includes information hubs and most agents are at a short distance from a hub. Thus asymptotic

learning requires information to be aggregated in the hands of a few agents. This kind of aggregation

also requires truthful communication, which we show is an ε-equilibrium of the strategic communication game in large societies. We showed how these results can be applied to several commonly studied random

graph models. In particular, the popular preferential attachment and Erdos-Renyi graphs do not

lead to asymptotic learning. However, other plausible (and perhaps empirically more relevant) graph

structures do ensure asymptotic learning.

Using our analysis of information exchange over a given network, we then provided a systematic investigation of what types of cost structures, and associated social cliques (groups of individuals linked to each other at zero cost, such as friendship networks), ensure the emergence of communication networks that lead to asymptotic learning. Our main result on network formation shows that societies with too many (disjoint) and sufficiently large social cliques do not form communication networks that lead to asymptotic learning, because each social clique would have sufficient information by itself, making communication with others insufficiently attractive. Asymptotic learning results if social cliques are neither too numerous nor too large, in which case communication across cliques is encouraged. Our analysis was conducted under the simplifying assumption that all agents have the

same preferences. Interesting avenues for research include investigation of similar dynamic models of

information exchange and network formation in the presence of ex ante or ex post heterogeneity of

preferences as well as differences in the quality of information available to different agents, which may

naturally lead to the emergence of hubs.


Appendix A

Preliminaries

The following discussion provides a characterization of agents’ decisions at an equilibrium of the

information exchange game and an efficient algorithm to compute the minimum observation radius,

d^n_i, of agent i, introduced in Definition 5.

Given the information set of agent i at time t, we can define her belief as p^n_{i,t} = P_σ(θ = 1 | I^n_{i,t}).

Lemma 1. Let Assumption 3 hold. Given communication network G^n and equilibrium σ ∈ INFO(G^n), there exists a sequence of belief thresholds for each agent i, {p^{n,∗}_{σ,(i,t)}}_{t=0}^∞, that depend on the current time period t, the agent i, the communication network G^n and σ, such that the following hold:

(a) Agent i maximizes her expected utility at information set I^n_{i,t} by taking action x^n_{i,t}(I^n_{i,t}) defined as

x^n_{i,t}(I^n_{i,t}) = 0, if 1 − p^n_{i,t} ≥ p^{n,∗}_{σ,(i,t)}; 1, if p^n_{i,t} ≥ p^{n,∗}_{σ,(i,t)}; “wait”, otherwise.

(b) We have 1/2 ≤ p^{n,∗}_{σ,(i,t)} ≤ δ < 1 for every i and t.

Proof. At time period t, agent i has to decide whether to wait or take an irreversible action, 0 or 1, given her information set I^n_{i,t}. Recall that E[π^n_{i,t} | I^n_{i,t}, x] denotes agent i's payoff at information set I^n_{i,t} when she takes action x. Then,

E[π^n_{i,t} | I^n_{i,t}, x] = 1 − p^n_{i,t}, if she takes action 0, i.e., x = 0; p^n_{i,t}, if she takes action 1, i.e., x = 1; 0 + δ E[π^n_{i,t} | I^n_{i,t+1}, x^n_{i,t+1}], if she decides to wait, i.e., x = “wait”,

where x^n_{i,t+1} = σ^n_{i,t+1}(I^n_{i,t+1}). The agent will choose to take irreversible action 0 and not wait if

1 − p^n_{i,t} ≥ p^n_{i,t} and 1 − p^n_{i,t} ≥ 0 + δ E[π^n_{i,t} | I^n_{i,t+1}, x^n_{i,t+1}].

Similarly, she will choose to take irreversible action 1 and not wait if

p^n_{i,t} ≥ 1 − p^n_{i,t} and p^n_{i,t} ≥ 0 + δ E[π^n_{i,t} | I^n_{i,t+1}, x^n_{i,t+1}].

Part (a) of the lemma follows by letting p^{n,∗}_{σ,(i,t)} = max{1/2, δ E[π^n_{i,t} | I^n_{i,t+1}, x^n_{i,t+1}]}.

For part (b), note that by definition p^{n,∗}_{σ,(i,t)} ≥ 1/2 for every i and t. Moreover, the maximum expected payoff for an agent i is bounded above by the maximum possible payoff π̄, which we normalized to 1. This implies that p^{n,∗}_{σ,(i,t)} ≤ δ for every i and t.

Intuitively, belief thresholds p^{n,∗}_{σ,(i,t)} depend only on the value of future communication. In particular, they are non-decreasing in the amount of new information agent i is expected to obtain in future time periods. Note that the dependence on equilibrium σ is crucial, and different equilibria lead to different sequences of belief thresholds.


Lemma 1 holds independently of the private signal structure. For the rest of the paper, we restrict attention to the private signals introduced in Assumption 1. The next lemma, in particular, relates Lemma 1 to the information that an agent has received up to time t.

Lemma 2. Let Assumptions 1 and 3 hold. Consider communication network G^n and an equilibrium σ ∈ INFO(G^n). Also, let x^n_{i,t}(I^n_{i,t}) denote the action that maximizes the expected utility of agent i at information set I^n_{i,t}. Then,

x^n_{i,t}(I^n_{i,t}) = 0, if log L(s_i) + Σ_{j∈I^n_{i,t}, j≠i} log L(s_j) ≤ − log A^{n,∗}_{σ,(i,t)}; 1, if log L(s_i) + Σ_{j∈I^n_{i,t}, j≠i} log L(s_j) ≥ log A^{n,∗}_{σ,(i,t)}; “wait”, otherwise,

where A^{n,∗}_{σ,(i,t)} is a constant given by A^{n,∗}_{σ,(i,t)} = p^{n,∗}_{σ,(i,t)} / (1 − p^{n,∗}_{σ,(i,t)}), and p^{n,∗}_{σ,(i,t)} is the belief threshold defined in Lemma 1.

Proof. We prove that if

log L(s_i) + Σ_{j∈I^n_{i,t}, j≠i} log L(s_j) ≤ − log A^{n,∗}_{σ,(i,t)},

then agent i maximizes her expected utility by taking action x^n_{i,t} = 0. The remaining statements can be shown by similar arguments. From Lemma 1 we obtain that x^n_{i,t} = 0 if and only if 1 − p^n_{i,t} ≥ p^{n,∗}_{σ,(i,t)}, i.e., p^n_{i,t} ≤ 1 − p^{n,∗}_{σ,(i,t)}. Note that from Bayes' Rule,

p^n_{i,t} = dP_σ(I^n_{i,t} | θ = 1) P_σ(θ = 1) / [ Σ_{k=0}^1 dP_σ(I^n_{i,t} | θ = k) P_σ(θ = k) ] = dP_σ(I^n_{i,t} | θ = 1) / [ Σ_{k=0}^1 dP_σ(I^n_{i,t} | θ = k) ] ≤ 1 − p^{n,∗}_{σ,(i,t)}, (8)

where the second equality follows from our assumption that the two states are a priori equally likely. Conditional on state θ, the private signals of different agents are independent, thus

dP_σ(I^n_{i,t} | θ = k) = dP_σ(s_i | θ = k) Π_{j∈I^n_{i,t}, j≠i} dP_σ(s_j | θ = k). (9)

From relations (8) and (9), we obtain

dP_σ(s_i | θ = 1) Π_{j∈I^n_{i,t}, j≠i} dP_σ(s_j | θ = 1) ≤ [ (1 − p^{n,∗}_{σ,(i,t)}) / p^{n,∗}_{σ,(i,t)} ] dP_σ(s_i | θ = 0) Π_{j∈I^n_{i,t}, j≠i} dP_σ(s_j | θ = 0).

Finally, taking logs on both sides gives the desired result and completes the proof.

In light of this result, we can describe equilibrium actions in the following compact form:

x^n_{i,t}(I^n_{i,t}) = 0, if Σ_{ℓ∈I^n_{i,t}} 1_{s_ℓ=0} − Σ_{ℓ∈I^n_{i,t}} 1_{s_ℓ=1} ≥ log A^{n,∗}_{σ,(i,t)} · ( log(β/(1−β)) )^{−1}; 1, if Σ_{ℓ∈I^n_{i,t}} 1_{s_ℓ=1} − Σ_{ℓ∈I^n_{i,t}} 1_{s_ℓ=0} ≥ log A^{n,∗}_{σ,(i,t)} · ( log(β/(1−β)) )^{−1}; “wait”, otherwise.
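For the symmetric binary signals of Assumption 1, log L(s) = ±log(β/(1−β)), so the compact rule reduces to comparing the difference between the 1-counts and 0-counts with the threshold log A^{n,∗}_{σ,(i,t)} / log(β/(1−β)). A minimal sketch (the function name and interface are ours):

```python
import math

def equilibrium_action(signals, p_star, beta):
    """Equilibrium action in the compact form above, for symmetric
    binary signals.
    signals: list of 0/1 signals observed so far (own signal included).
    p_star:  belief threshold p* from Lemma 1.
    beta:    signal precision, P(s = theta | theta) = beta > 1/2.
    Returns 0, 1, or 'wait'."""
    ones = sum(signals)
    zeros = len(signals) - ones
    # threshold on the signal-count difference: log A / log(beta/(1-beta)),
    # with A = p*/(1 - p*)
    thr = math.log(p_star / (1 - p_star)) / math.log(beta / (1 - beta))
    if zeros - ones >= thr:
        return 0
    if ones - zeros >= thr:
        return 1
    return "wait"

print(equilibrium_action([1, 1, 1, 0], p_star=0.9, beta=0.7))  # prints: wait
```

With β = 0.7 and p* = 0.9, the threshold is log 9 / log(7/3) ≈ 2.59, so a count difference of 2 is not yet conclusive, while three unanimous signals trigger an irreversible action.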

The following algorithm computes the minimum observation radius for agent i in communication network topology G^n. Specifically, the algorithm computes the sequence of optimal decisions of agent i for any realization of the private signals, under the assumption that all agents except i never exit and keep transmitting new information. Assume for simplicity that G^n is connected (otherwise apply


the algorithm to the connected component in which agent i resides). Let t_{end,i} denote the maximum distance between i and an agent in G^n. Note that this implies B^n_{i,t_{end,i}} = N^n. The state at time t, 0 ≤ t ≤ t_{end,i}, is simply the number of 1's and 0's the agent has observed thus far, q_t and s_t respectively. The algorithm computes the optimal decision of agent i for every possible realization of the private signals, starting from the last time period t_{end,i} and working its way back to the beginning (t = 0). In particular (we drop subscript i),

Algorithm 1.

(1) Set

OPTdec(t_end, q_{t_end}, s_{t_end}) = 1 if q_{t_end} > s_{t_end}, and 0 otherwise,

and Pay(t_end, q_{t_end}, s_{t_end}) = max{p_{t_end}, 1 − p_{t_end}}, where p_t = β^{q_t}(1−β)^{s_t} / ( β^{q_t}(1−β)^{s_t} + (1−β)^{q_t}β^{s_t} ).

(2) For t = t_end − 1 down to 0 do:

(i) Preliminaries:

∗ Current belief: p_t = β^{q_t}(1−β)^{s_t} / ( β^{q_t}(1−β)^{s_t} + (1−β)^{q_t}β^{s_t} ).

∗ Number of new observations at time t: k = |B^n_{i,t+1}| − |B^n_{i,t}|.

∗ Payexit(t, q_t, s_t) = δ^t · max{p_t, 1 − p_t}, and initialize Paywait(t, q_t, s_t) = 0.

(ii) For j = 0 to k do

Paywait(t, q_t, s_t) = Paywait(t, q_t, s_t) + (k choose j) · p_t^j (1 − p_t)^{k−j} · Pay(t + 1, q_t + j, s_t + k − j).

(iii) If Payexit > δ · Paywait then

OPTdec(t, q_t, s_t) = 1 if p_t > 1/2, and 0 otherwise,

and Pay(t, q_t, s_t) = Payexit(t, q_t, s_t). Else OPTdec(t, q_t, s_t) = “wait” and Pay(t, q_t, s_t) = δ · Paywait(t, q_t, s_t).
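The backward induction above can be sketched in runnable form (our simplifications: the per-step observation counts are passed in directly, the function name is ours, and the δ^t factor is folded into a one-step exit-vs-wait comparison; the binomial weighting over next-period observations follows step (2)(ii)):

```python
from functools import lru_cache
from math import comb

def min_observation_radius(obs_counts, beta, delta):
    """Backward-induction sketch of Algorithm 1 for a single agent.
    obs_counts[t] = number of new private signals observed at time t
    (obs_counts[0] includes the agent's own signal).  Returns the exit
    time under the all-identical signal realization, which coincides
    with the minimum observation radius."""
    t_end = len(obs_counts) - 1

    def belief(q, s):
        # posterior P(theta = 1 | q ones, s zeros) for precision beta
        a = beta ** q * (1 - beta) ** s
        b = (1 - beta) ** q * beta ** s
        return a / (a + b)

    @lru_cache(maxsize=None)
    def pay(t, q, s):
        """Expected payoff and optimal decision in state (t, q, s)."""
        p = belief(q, s)
        exit_pay = max(p, 1 - p)
        if t == t_end:
            return exit_pay, "exit"        # no further observations
        k = obs_counts[t + 1]
        wait_pay = sum(                    # condition on next observations
            comb(k, j) * p ** j * (1 - p) ** (k - j)
            * pay(t + 1, q + j, s + k - j)[0]
            for j in range(k + 1)
        )
        if exit_pay >= delta * wait_pay:
            return exit_pay, "exit"
        return delta * wait_pay, "wait"

    q = 0
    for t in range(t_end + 1):             # feed the all-ones realization
        q += obs_counts[t]
        if pay(t, q, 0)[1] == "exit":
            return t
    return t_end

# A very impatient agent exits immediately on her own signal:
print(min_observation_radius([1, 2, 4, 8], beta=0.7, delta=0.1))  # prints: 0
```

The memoization over states (t, q_t, s_t) mirrors the O(n²) table the algorithm fills explicitly.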

Correctness: Note that if agent i has not exited by the last time period, then, since there are no more observations to be made in the future, she will exit and choose the action that maximizes her expected payoff (see Lemma 1). For any other time period t, agent i has a belief p_t about the state of the world given her past observations. Her payoff from exiting and taking an action at the current time period is given by Payexit. On the other hand, her expected payoff if she decides to wait is computed in the For loop, where we condition on all possible outcomes of the next period's observations. Finally, the agent decides to “wait” only if this action leads to a higher expected payoff. It is straightforward to see that the computational complexity of the algorithm is O(n²).

To relate the algorithm to computing the minimum observation radius, note that the minimum exit time coincides with the exit time when all private signals are equal. Specifically, to compute the minimum observation radius we fix the private signal realization to be the vector of all 1's or all 0's (the answer will be the same, since the problem is symmetric) and use the algorithm detailed above.

Finally, Lemma 3 states that the error probability of an agent i ∈ V^n_k, i.e., the probability P(M^n_{i,t} = 0) of choosing the wrong action, is uniformly bounded away from 0 for all time periods t.

Lemma 3. Let k > 0 be a constant such that the k-radius set V^n_k is non-empty. Then,

lim_{t→∞} P(M^n_{i,t} = 0) ≥ (1 − β)^k > 0 for all i ∈ V^n_k.

Proof. We assume without loss of generality that the underlying state of the world, θ, is 1. The lemma follows from the observation that if the first k observations that agent i obtains are all equal to 0, then the agent will take action 0, i.e., the wrong action. In particular, i ∈ V^n_k implies that |B^n_{i,d^n_i}| ≤ k. Consider the following event E:

E := {s_j = 0 for all j ∈ B^n_{i,d^n_i}}.

If event E occurs, then x^n_{i,d^n_i} = 0 by the definition of the minimum observation radius d^n_i and by Lemma 2. Finally, note that

P(E occurs) ≥ (1 − β)^k

for agent i ∈ V^n_k. We conclude that

lim_{t→∞} P(M^n_{i,t} = 0) ≥ (1 − β)^k for all i ∈ V^n_k.
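The lower bound of Lemma 3 is easy to sanity-check numerically (an illustration of ours, with β = 0.7 and k = 3, so (1 − β)^k = 0.027): under θ = 1 each signal independently equals 0 with probability 1 − β, and the event that an agent's first k observed signals are all 0 forces the wrong action.

```python
import random

# Monte Carlo check of the Lemma 3 lower bound: under theta = 1 the event
# "first k signals are all 0" has probability (1 - beta)**k.
beta, k, trials = 0.7, 3, 200_000
rng = random.Random(1)
hits = sum(
    all(rng.random() > beta for _ in range(k))  # signal = 0 w.p. 1 - beta
    for _ in range(trials)
)
estimate = hits / trials
print(estimate, (1 - beta) ** k)  # the estimate is close to 0.027
```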

Proof of Theorem 1. Note that, under Assumption 2, an agent's expected payoff depends only on the realization of the private signals and not on the strategy profiles that other agents choose to follow. Therefore, as in the definition of the minimum observation radius, we drop the subscript σ in the proof of Theorem 1. First, we show that learning fails if condition (3) is not satisfied. In particular, suppose that there exist a k > 0 and an ε > 0 such that

lim sup_{n→∞} (1/n) · |V^n_k| ≥ ε. (10)

From condition (10) we obtain that there exists an infinite index set J such that

|V^{n_j}_k| ≥ ε · n_j for j ∈ J.

Now restrict attention to index set J, i.e., consider n = n_j for some j ∈ J and η > 0 (for appropriate η). Then,

lim_{t→∞} P( (1/n) Σ_{i=1}^n M^n_{i,t} > 1 − η )
= lim_{t→∞} P( (1/n) [ Σ_{i∈V^n_k} M^n_{i,t} + Σ_{i∉V^n_k} M^n_{i,t} ] > 1 − η )
≤ lim_{t→∞} P( (1/n) [ Σ_{i∈V^n_k} M^n_{i,t} + n − |V^n_k| ] > 1 − η )
= lim_{t→∞} P( (1/n) Σ_{i∈V^n_k} M^n_{i,t} > |V^n_k|/n − η ),

where the inequality follows since we let M^n_{i,t} = 1 for all i ∉ V^n_k and for all t. Next we use Markov's


inequality to obtain

lim_{t→∞} P( (1/n) Σ_{i∈V^n_k} M^n_{i,t} > |V^n_k|/n − η ) ≤ lim_{t→∞} E[ Σ_{i∈V^n_k} M^n_{i,t} ] / ( n · (|V^n_k|/n − η) ).

We can view each summand above as an independent Bernoulli variable with success probability bounded above by 1 − (1 − β)^k from Lemma 3. Thus,

lim_{t→∞} E[ Σ_{i∈V^n_k} M^n_{i,t} ] / ( n · (|V^n_k|/n − η) ) ≤ lim_{t→∞} |V^n_k| (1 − (1 − β)^k) / ( n · (|V^n_k|/n − η) ) ≤ ( ε/(ε − η) ) (1 − (1 − β)^k) = 1 − ζ < 1,

where the second inequality follows from the fact that n was chosen such that |V^n_k| ≥ ε · n. Finally, the last expression follows if we pick η > 0 and ζ > 0 appropriately. In particular, let η = ε − ε/r, where r = (1 − ζ)/(1 − (1 − β)^k) and 0 < ζ < (1 − β)^k.

We obtain that for all j ∈ J it holds that

lim_{t→∞} P( (1/n_j) Σ_{i=1}^{n_j} (1−M^{n_j}_{i,t}) > η ) ≥ ζ > 0.

Since J is an infinite index set, we conclude that

lim sup_{n→∞} lim_{t→∞} P( (1/n) Σ_{i=1}^n (1−M^n_{i,t}) > η ) ≥ ζ > 0;

thus learning is incomplete and condition (3) is necessary for learning.
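The necessity argument can be illustrated with a small Monte Carlo sketch (β, k, ε and the sample sizes below are illustrative assumptions, not values from the paper): when a constant fraction of agents each independently err with probability at least (1−β)^k, the realized fraction of wrong decisions stays bounded away from zero with probability close to one.

```python
import random

def wrong_fraction(n, frac, err_prob, rng):
    """One society draw: a fraction `frac` of the n agents (the set V_k^n)
    err independently with probability err_prob; everyone else is assumed
    to decide correctly -- the case most favorable to learning."""
    wrong = sum(rng.random() < err_prob for _ in range(int(frac * n)))
    return wrong / n

rng = random.Random(0)
beta, k, eps = 0.7, 2, 0.3       # illustrative values (assumptions)
p = (1 - beta) ** k              # per-agent error lower bound, as in Lemma 3
eta = eps * p / 2                # threshold below the mean wrong fraction eps * p
trials = 2000
hits = sum(wrong_fraction(1000, eps, p, rng) > eta for _ in range(trials))
print(hits / trials)             # stays near one: a positive fraction keeps erring
```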

Next, we prove sufficiency. We assume, without loss of generality, that θ = 1. The information set of agent i at time d^n_i, where recall that d^n_i denotes agent i's minimum observation radius in G^n, is given by

I^n_{i,d^n_i} = {k_{i,1}, k_{i,0}},

where k_{i,1} (k_{i,0}) denotes the number of 1's (0's) agent i has observed up to time d^n_i.

We introduce an additional indicator random variable for agent i, M̄^n_i, which takes value 1 only if agent i takes the correct decision by some time t ≤ d^n_i. Note that M̄^n_i ≤ M^n_{i,t} always. Then, for any fixed n, we obtain

lim_{t→∞} P(1−M^n_{i,t} = 1) = 1 − lim_{t→∞} P(1−M^n_{i,t} = 0) ≤ 1 − P(1−M̄^n_i = 0).

Let z = log( p^{n,∗}_{i,t}/(1−p^{n,∗}_{i,t}) )·( log( β/(1−β) ) )^{−1} and note that k_{i,1} + k_{i,0} = |B^n_{i,d^n_i}|. Then, from Lemma 2 and since we have assumed that θ = 1, we have

P(1−M̄^n_i = 0) = P(k_{i,1} − k_{i,0} ≥ z) = 1 − P( k_{i,0} ≥ ( |B^n_{i,d^n_i}| − z )/2 ). (11)

Note that P(s_i = 0 | θ = 1) = 1−β for all i ∈ N, therefore E[k_{i,0}] = (1−β)·|B^n_{i,d^n_i}|. Then, from Eq. (11), we have

P( k_{i,0} ≥ ( |B^n_{i,d^n_i}| − z )/2 )
  = P( k_{i,0} − (1−β)·|B^n_{i,d^n_i}| ≥ ( 1/2 − (1−β) )·|B^n_{i,d^n_i}| − z/2 )
  = P( k_{i,0} − E[k_{i,0}] ≥ ( 1/2 − (1−β) )·|B^n_{i,d^n_i}| − z/2 ). (12)


Let ε_i = z·( 2·|B^n_{i,d^n_i}| )^{−1}, and note that 0 < ε_i < 1/2, since |B^n_{i,d^n_i}| > z. Then, we obtain from Eq. (12),

P( k_{i,0} − E[k_{i,0}] ≥ ( 1/2 − (1−β) )·|B^n_{i,d^n_i}| − z/2 ) ≤ P( k_{i,0} − E[k_{i,0}] ≥ ( 1/2 − (1−β) − ε_i )·|B^n_{i,d^n_i}| ) ≤ exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ), (13)

where Eq. (13) follows from Hoeffding's inequality. We conclude that the probability that an agent i takes the wrong irreversible action (or no irreversible action) by time d^n_i is upper bounded, i.e.,

P(1−M̄^n_i = 1) ≤ exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ). (14)

Finally, for a given ε > 0 we have

lim_{t→∞} P( (1/n) Σ_{i=1}^n (1−M^n_{i,t}) > ε ) ≤ E[ Σ_{i=1}^n (1−M̄^n_i) ] / (εn), (15)

where Eq. (15) follows from Markov's inequality and the definition of M̄^n_i. Finally, by combining Eqs. (14) and (15),

E[ Σ_{i=1}^n (1−M̄^n_i) ] / (εn) ≤ Σ_{i=1}^n exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) / (εn). (16)

Let ζ > 0. We show that

lim sup_{n→∞} lim_{t→∞} P( (1/n) Σ_{i=1}^n (1−M^n_{i,t}) > ε ) < ζ.

Let K > 0 be such that K = min{ k | n·exp(−k) < (1/2)·ε·ζ·n }. Note that K is a constant, i.e., it does not grow as n goes to infinity. Then,

Σ_{i=1}^n exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) = Σ_{i∈V} exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) + Σ_{i∉V} exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ), (17)

where V = { i | |B^n_{i,d^n_i}| ≤ K·( 2·( 1/2 − (1−β) − ε_i )^2 )^{−1} ≤ K′ } ⊆ V^n_{K′}. Given the set V, we bound the two terms of Eq. (17). Note that exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) ≤ 1 for all i; therefore,

Σ_{i∈V} exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) ≤ |V|. (18)

Moreover, we have exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) ≤ e^{−K′} for all i ∉ V. Therefore,

Σ_{i∉V} exp( −2·( 1/2 − (1−β) − ε_i )^2·|B^n_{i,d^n_i}| ) ≤ n·exp(−K′) < (1/2)·ε·ζ·n, (19)

where the last inequality follows from the definition of K. Furthermore, from the condition for asymptotic learning (cf. Eq. (3)), we have that

lim_{n→∞} (1/n)·|V^n_{K′}| = 0,

which further implies that there exists a large constant N > 0 such that

(1/n)·|V^n_{K′}| < (1/2)·ε·ζ for all n > N. (20)

Combining Eqs. (15), (16), (18), (19) and (20), we obtain

lim_{t→∞} P( (1/n) Σ_{i=1}^n (1−M^n_{i,t}) > ε ) < ζ for all n > N,

which implies that condition (3) is sufficient for asymptotic learning.
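The sufficiency direction can be illustrated numerically. In the following hedged sketch (β and the observation-set sizes are illustrative assumptions), each agent decides by a majority vote over the signals she observes; as observation sets grow, the realized fraction of wrong decisions shrinks toward zero:

```python
import random

def fraction_wrong(n, k, beta, rng):
    """Fraction of n agents whose majority vote over k i.i.d. signals
    (each matching the state with probability beta) is wrong; k odd."""
    wrong = 0
    for _ in range(n):
        correct = sum(rng.random() < beta for _ in range(k))
        if correct <= k - correct:      # majority of the observations is wrong
            wrong += 1
    return wrong / n

rng = random.Random(1)
beta = 0.7                              # illustrative signal precision (assumed)
fracs = [fraction_wrong(2000, k, beta, rng) for k in (1, 9, 25)]
print(fracs)                            # decreasing in the observation-set size
```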

Proof of Theorem 2. We show that although an agent potentially has access to less information under Assumption 3, asymptotic learning occurs whenever asymptotic learning occurs under Assumption 2. Before showing the equivalence, we introduce some additional notation. Given a communication network G^n and an equilibrium of the information exchange game σ^n, let M^n_i(G^n, σ^n) take value 1 if agent i takes the correct action. Note that this is related to the indicator variable M^n_{i,t}, which takes value one if agent i takes the correct decision by time t [cf. Eq. (1)]; i.e., M^n_i = 1 if and only if M^n_{i,t} = 1 for some finite t. For any integer k > 0, let P(M^n_i = 1 | k) denote the probability that agent i makes the correct decision when she has access to a set of k signals; without loss of generality we represent this set by S_k = {s_1, ..., s_k}. The next lemma studies the properties of this probability as a function of k.

Lemma 4. Let G^n be a communication network and σ^n be an equilibrium of the information exchange game. Then, the probability that agent i makes the correct decision when she only observes the set S_k of signals is lower-bounded by

P(M^n_i = 1 | k) ≥ 1 − exp( −2·( 1/2 − (1−β) )^2·k ),

where β = P(s_i = θ) (cf. Assumption 1).

Proof. We can write the probability P(M^n_i = 0 | k) as

P(M^n_i = 0 | k) = P( Σ_{j∈S_k} s_j < k/2 | θ = 1 )·P(θ = 1) + P( Σ_{j∈S_k} s_j > k/2 | θ = 0 )·P(θ = 0). (21)

We establish an upper bound on the second term. Note that Σ_{j∈S_k} s_j is a random variable with expectation (conditional on θ = 0) equal to k(1−β). Hence, we have

P( Σ_{j∈S_k} s_j > k/2 | θ = 0 ) = P( Σ_{j∈S_k} s_j − E[Σ_{j∈S_k} s_j] > k/2 − E[Σ_{j∈S_k} s_j] | θ = 0 ) = P( Σ_{j∈S_k} s_j − k(1−β) > k/2 − k(1−β) | θ = 0 ). (22)

Hoeffding's inequality states that for the sum of n independent random variables X_1, ..., X_n that are almost surely bounded, i.e., P(X_i ∈ [a_i, b_i]) = 1 for all i = 1, ..., n, we have

P(V − E[V] ≥ nt) ≤ exp( −2n^2t^2 / Σ_{i=1}^n (b_i − a_i)^2 ),

where V = X_1 + ... + X_n.

[Figure 8: Proof of Proposition 5. Nodes ℓ, j and i, with dist(ℓ, i) ≤ d^n_i; agent j's observation set B^n_{j,d^n_j} is contained in agent i's observation set B^n_{i,d^n_i}.]

Applying Hoeffding's inequality to the random variables s_j, j ∈ S_k, which are binary and therefore belong to the interval [0, 1], we obtain

P( Σ_{j∈S_k} s_j − k(1−β) > k·( 1/2 − (1−β) ) | θ = 0 ) ≤ exp( −2·( 1/2 − (1−β) )^2·k ).

Combining with Eq. (22) and using a symmetrical argument to show that the same bound applies to the first term in Eq. (21) yields

P(M^n_i = 0 | k) ≤ exp( −2·( 1/2 − (1−β) )^2·k ),

establishing the desired lower bound on P(M^n_i = 1 | k).
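The bound of Lemma 4 can be sanity-checked against the exact binomial error probability of a majority rule; a minimal sketch (β is an illustrative value, and k is taken odd to avoid ties):

```python
import math

def majority_error(k, beta):
    """Exact P(majority vote over k i.i.d. signals is wrong), k odd,
    where each signal matches the state with probability beta."""
    return sum(math.comb(k, s) * beta**s * (1 - beta) ** (k - s)
               for s in range(k // 2 + 1))

beta = 0.7                              # illustrative signal precision (assumed)
for k in (5, 15, 25):
    bound = math.exp(-2 * (0.5 - (1 - beta)) ** 2 * k)
    # The exact error never exceeds the Hoeffding-style bound of Lemma 4:
    assert majority_error(k, beta) <= bound
```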

Proposition 5. Let Assumption 1 hold. If asymptotic learning occurs in society {G^n}_{n=1}^∞ under Assumption 2, then asymptotic learning occurs under Assumption 3 along any equilibrium σ.

Proof. Consider the set U^n_k, where recall that U^n_k is the set of agents that are at a short distance (at most equal to their minimum observation radius) from an agent with in-degree at least k. Define similarly the set Z^n_{k,σ} as the set of agents that, at some equilibrium σ, communicate with an agent with in-degree at least k. Note that under Assumption 2 the sets are equal, i.e., U^n_k = Z^n_{k,σ}. To complete the proof, we show that for k large enough (and consequently n large enough), U^n_k = Z^n_{k,σ} even under Assumption 3.

Consider i ∈ U^n_k and let P^n = {ℓ, i_1, ..., i_K, i} denote the shortest path in communication network G^n between i and any agent ℓ with deg^n_ℓ ≥ k. First we show the following (refer to Figure 8):

i ∈ U^n_k ⇒ j ∈ U^n_k for all j ∈ P^n. (23)

Assume for the sake of contradiction that condition (23) does not hold and consider the simplified environment under Assumption 2 (we are only looking at the set U^n_k, so we can restrict attention to that environment). Then, let

j = argmin_{j′} { dist^n(ℓ, j′) | j′ ∈ P^n and dist^n(ℓ, j′) > d^n_{j′} }.

For agents i, j we have d^n_i > d^n_j and dist(j, i) + d^n_j < dist(ℓ, i) ≤ d^n_i, since otherwise j ∈ U^n_k. This implies that B^n_{j,d^n_j} ⊂ B^n_{i,d^n_i}. Furthermore,

δ^{d^n_j}·P_σ( M^n_j = 1 | |B^n_{j,d^n_j}| same obs. ) > δ^{dist(ℓ,j)}·P_σ( M^n_j = 1 | |B^n_{j,d^n_j}| same obs. + k new ). (24)

In particular, the left-hand side is equal to the expected payoff of agent j if she takes an irreversible action at time d^n_j after receiving |B^n_{j,d^n_j}| agreeing observations (e.g., if all the observations she received are 1's), whereas the right-hand side is a lower bound on the expected payoff if agent j delays taking an action until after she communicates with agent ℓ. The inequality follows from the definition of the minimum observation radius for agent j. On the other hand,

δ^{dist(j,i)+d^n_j}·P( M^n_i = 1 | |B^n_{j,d^n_j}| same obs. ) < δ^{dist(ℓ,i)} − ε, for some ε > 0, (25)

since otherwise agent i would take an action before she communicated with agent ℓ, which contradicts that her minimum observation radius is d^n_i ≥ dist(ℓ, i) (recall that the maximum payoff when agent i takes an action after dist(ℓ, i) time periods is δ^{dist(ℓ,i)}).

From Eq. (25) we obtain

P( M^n_j = 1 | |B^n_{j,d^n_j}| same obs. ) < ( δ^{dist(ℓ,i)} − ε )·δ^{−(dist(j,i)+d^n_j)} < δ^{dist(ℓ,i)−dist(j,i)−d^n_j} − ε.

Moreover, from Eq. (24) and Lemma 4 we have

P( M^n_j = 1 | |B^n_{j,d^n_j}| same obs. ) > δ^{dist(ℓ,j)−d^n_j} − ε,

when the degree k of agent ℓ is larger than some k̄. We conclude that dist(ℓ, j) < dist(ℓ, i) − dist(j, i), which is obviously a contradiction. This implies that (23) holds.

Next we show, by induction on the distance from the agent ℓ with degree ≥ k, that U^n_k = Z^n_{k,σ} for any equilibrium σ. The claim is obviously true for all agents with distance equal to 0 (agent ℓ) and 1 (her neighbors). Assume that the claim holds for all agents with distance at most t from agent ℓ, i.e., if i ∈ U^n_k and dist(ℓ, i) ≤ t, then i ∈ Z^n_{k,σ}. Finally, we show the claim for an agent i such that i ∈ U^n_k and dist(ℓ, i) = t+1. Consider a shortest path P^n from i to ℓ. Condition (23) implies that all agents j on this shortest path are such that j ∈ U^n_k; thus from the induction hypothesis we obtain j ∈ Z^n_{k,σ}. Thus, for k sufficiently large, we obtain that i ∈ Z^n_{k,σ} for any equilibrium σ.

Finally, from Corollary 1 we conclude that asymptotic learning under Assumption 2 implies asymptotic learning under Assumption 3.

The first part of Theorem 2 follows directly from Theorem 1 and Proposition 5. To conclude the proof of Theorem 2, we need to show that if asymptotic learning occurs along some equilibrium σ when condition (3) does not hold, then the society contains a set of leaders. In particular, consider a society {G^n}_{n=1}^∞ in which condition (3) does not hold, i.e., assume as in the proof of Theorem 1 that there exist k > 0, ε > 0 and an infinite index set J such that |V^{n_j}_k| > ε·n_j for j ∈ J. We restrict attention to the index set J and consider an equilibrium σ = {σ^n}_{n=1}^∞ along which asymptotic learning occurs in the society.

Consider a collection of subsets of the possible realizations of the private signals, {Q^n}_{n=1}^∞, and a collection of subsets of agents, {R^n}_{n=1}^∞, such that:

(i) lim_{n→∞} P(Q^n) > ε for some ε > 0, i.e., the subsets of realizations considered have positive measure as the society grows, and if a realization is in Q^n its complement is also in Q^n.

(ii) lim_{n→∞} (1/n)·|R^n| > ε.

(iii) every agent i in R^n exits at time equal to her minimum observation radius, d^n_i.

Such collections should exist, since condition (3) fails to hold in the society. Consider next the equilibrium σ and assume that the realization of the private signals belongs to the subset Q^n (for the appropriate n). Since asymptotic learning occurs along equilibrium σ, we have

lim_{n→∞} (1/n)·|R^n_σ| = 0,

where R^n_σ = { i ∈ R^n | σ^n_{i,d^n_i} ∈ {0,1} }. However, this implies that there should exist a collection of subsets of agents, {S^n}_{n=1}^∞, such that:

(i) R^{n,c}_σ ⊆ S^n, where R^{n,c}_σ = { i ∈ R^n | σ^n_{i,d^n_i} = “wait” }.

(ii) σ^n_{j,τ} ∈ {0,1} for some τ < d^n_i − dist(i, j) and i such that j ∈ B^n_{i,d^n_i}, i ∈ R^n_σ.

(iii) lim_{n→∞} (1/n)·|S^n| = 0, since otherwise asymptotic learning would not occur along equilibrium σ.

Note that the collection {S^n}_{n=1}^∞ satisfies the definition of a set of leaders [cf. Definition 7], since

lim sup_{n→∞} (1/n)·|R^{n,c}| > ε,

and Theorem 2(ii) follows.

Proof of Theorem 3. The proof of the theorem relies heavily on the next proposition, which intuitively states that there is little incentive to lie to an agent with a large number of neighbors.

Proposition 6 (Truthful Communication to a High-Degree Agent). Let Assumption 1 hold. Then, for every ε̄ > 0, there exists a scalar k(ε̄) > 0 such that truth-telling to agent i, with indeg^n_i ≥ k(ε̄), in the first time period is an ε-equilibrium of INFO(G^n) for all ε ≥ ε̄. Formally,

(σ^{n,truth}, m^{n,truth}) ∈ ε-INFO(G^n),

where m^{n,truth}_{ji,0} = s_j for j ∈ B^n_{i,1}.

Proof. The proof is based on the following argument. Suppose that all agents in B^n_{i,1} except j report their signals truthfully to i. Moreover, let |B^n_{i,1}| ≥ k, where k is a large constant. Then it is an (approximately) weakly dominant strategy for j to report her signal truthfully to i, since with high probability j's message will not be pivotal for agent i, i.e., i will take the same irreversible action (0 or 1) after communication in the first time period no matter what j reports. The remainder of the proof formalizes this argument.
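The pivotality argument can be illustrated numerically: the probability that the margin of agreeing reports among k−1 truthful neighbors exceeds any fixed constant m (so that a single extra report cannot change agent i's action) approaches one as k grows. A sketch with illustrative values of β and m (the constant m stands in for the threshold defining event A):

```python
import math

def margin_at_least(n, beta, m):
    """P(|#1s - #0s| >= m) for n i.i.d. Bernoulli(beta) reports,
    computed from the exact binomial distribution."""
    total = 0.0
    for s in range(n + 1):
        if abs(2 * s - n) >= m:
            total += math.comb(n, s) * beta**s * (1 - beta) ** (n - s)
    return total

beta, m = 0.7, 5                        # illustrative values (assumptions)
probs = [margin_at_least(n, beta, m) for n in (10, 50, 200)]
print(probs)                            # approaches one: a single report is rarely pivotal
```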

Fix ε > 0 and recall from Lemma 2 that agent i will choose to take an irreversible action at time period t = 1 and not delay if

| Σ_{ℓ∈B^n_{i,1}} 1{s_ℓ=1} − Σ_{ℓ∈B^n_{i,1}} 1{s_ℓ=0} | ≥ log A^{n,∗}_{i,1}·( log( β/(1−β) ) )^{−1}, (26)


assuming that agent i obtains the true vector of private signals. Consider the case when all agents ℓ ∈ B^n_{i,1}, except possibly agent j, report their private signal truthfully to i. Let A be the following event:

A := { | Σ_{ℓ∈B^n_{i,1}, ℓ≠j} 1{s_ℓ=1} − Σ_{ℓ∈B^n_{i,1}, ℓ≠j} 1{s_ℓ=0} | ≥ 2·log( δ/(1−δ) )·( log( β/(1−β) ) )^{−1} },

where log( δ/(1−δ) )·( log( β/(1−β) ) )^{−1} is a (constant) upper bound for the right-hand side of Eq. (26). By Lemma 4, we have P(A) ≥ 1 − exp( −c·( |B^n_{i,1}| − 1 ) ), where c is a constant. Consider k such that k > (1/c)·log(1/ε), which implies that if deg^n_i ≥ k then P(A) > 1 − ε/(1+ε). Note that when event A occurs, agent i takes an irreversible action at time t = 1 no matter what message agent j sends. Thus,

E_{(σ^{n,truth},(m^n_j, m^{n,truth}_{−j}))}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A ] = E_{(σ^{n,truth}, m^{n,truth})}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A ]. (27)

Finally,

E_{(σ^{n,truth},(m^n_j, m^{n,truth}_{−j}))}[ π^n_j | I^n_{j,t}, σ^{n,truth} ] − E_{(σ^{n,truth}, m^{n,truth})}[ π^n_j | I^n_{j,t}, σ^{n,truth} ]
  = P(A)·( E_{(σ^{n,truth},(m^n_j, m^{n,truth}_{−j}))}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A ] − E_{(σ^{n,truth}, m^{n,truth})}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A ] )
  + P(A^c)·( E_{(σ^{n,truth},(m^n_j, m^{n,truth}_{−j}))}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A^c ] − E_{(σ^{n,truth}, m^{n,truth})}[ π^n_j | I^n_{j,t}, σ^{n,truth}, A^c ] )
  ≤ P(A)·0 + (1 − P(A))·1 ≤ ε, (28)

where Eq. (28) follows from Eq. (27) and P(A) > 1 − ε. From Definition 10, Eq. (28) implies that (σ^{n,truth}, m^{n,truth}) is an ε-equilibrium of the information exchange game for all ε ≥ ε̄ > 0.

Theorem 3 follows directly from Proposition 6 and Theorem 2.

Proof of Theorem 4. We show that when condition (3) holds for society {G^n}_{n=1}^∞, then for every ε > 0 there exists an N_ε such that

( Σ_{i∈N^n} E^{sp}[π^n_i] − Σ_{i∈N^n} E_{σ^n}[π^n_i] ) / n < ε, for every n > N_ε.

Let k be large enough that P(M^n_i = 1 | k) > 1 − ε/2 (such a k exists from Lemma 4). From our assumption that the society is rapidly informed, we obtain that there exists an N such that

|U^n_k| / n > 1 − ε/2, (29)

for all n > N, where recall that U^n_k denotes the set of agents that are close to an agent with degree at least k. We consider the following two cases:

(i) i ∉ U^n_k, in which case we assume that the increase in the expected welfare of agent i under the allocation implemented by the social planner is the maximum possible, i.e.,

E^{sp}[π^n_i] − E_{σ^n}[π^n_i] = 1, (30)

since recall that the maximum payoff for an agent is normalized to one.

(ii) i ∈ U^n_k. Consider the environment in which information spreading never stops (cf. Assumption 2). Note that agent i's expected utility is maximized in this environment. From the definition of the minimum observation radius, we obtain that agent i exits (takes an irreversible decision) only after time period d^n_i; thus her expected utility is upper bounded by δ^{d^n_i}. Now consider the equilibrium σ^n and the environment under Assumption 3. Recall that Z^n_{k,σ^n} denotes the set of agents that communicate with an agent with degree at least k under equilibrium profile σ^n. If i ∈ Z^n_{k,σ^n}, then we obtain that her expected utility under σ^n is lower bounded by δ^{d^n_i}·P(M^n_i = 1 | k) > δ^{d^n_i}·(1 − ε/2) > δ^{d^n_i} − ε/2. Finally, if i ∉ Z^n_{k,σ^n}, then there exists an agent j ∈ P^n, where P^n denotes the shortest path from an agent with degree ≥ k to i, such that j ∉ U^n_k. However, consider the (feasible) strategy for agent i of always copying agent j's decision. Then agent i's expected utility is given by E[π^n_i] ≥ δ^{dist(j,i)}·E[π^n_j] > δ^{dist(j,i)}·δ^{dist(ℓ,j)}·(1 − ε/2) > δ^{d^n_i} − ε/2, where ℓ is the agent with degree ≥ k, since otherwise j would be better off delaying her exit. Again, we obtain E[π^n_i] > δ^{d^n_i} − ε/2. The above discussion implies that

E_{σ^n}[π^n_i] > E^{sp}[π^n_i] − ε/2, for all i ∈ U^n_k. (31)

Combining Eqs. (30) and (31), we obtain

( Σ_{i∈N^n} E^{sp}[π^n_i] − Σ_{i∈N^n} E_{σ^n}[π^n_i] ) / n = ( Σ_{i∈N^n∩U^n_k} ( E^{sp}[π^n_i] − E_{σ^n}[π^n_i] ) + Σ_{i∈N^n∩(U^n_k)^c} ( E^{sp}[π^n_i] − E_{σ^n}[π^n_i] ) ) / n < (1 − ε/2)·ε/2 + ε/2·1 < ε,

which concludes the proof.

Proof of Proposition 2 (Sketch). The proposition follows by noting that the social planner can induce a strategy profile in which all agents in the information bottleneck delay their exit decision for one period under any private signal realization. The aggregate loss from such a delay is not significant, since the information bottleneck contains only a negligible fraction of the agents as the society grows. On the other hand, the followers in the collection {S^n_f}_{n=1}^∞ gain in expectation by a positive (and uniformly lower-bounded) amount from the additional information they obtain. Finally, a non-negligible fraction of the agents belongs to {S^n_f}_{n=1}^∞, which concludes the proof.

Appendix B

Proof of Theorem 5.

Given a communication network G^n and an equilibrium of the information exchange game σ^n, let

M^n_i(G^n, σ^n) = 1 if agent i takes the correct decision, and 0 otherwise. (32)

Note that this is related to the indicator variable M^n_{i,t}, which takes value one if agent i takes the correct decision by time t [cf. Eq. (1)]; i.e., M^n_i = 1 if and only if M^n_{i,t} = 1 for some finite t. For any integer k > 0, let P(M^n_i = 1 | k) denote the probability that agent i makes the correct decision when she has access to a set of k signals; without loss of generality we represent this set by S_k = {s_1, ..., s_k}. The next lemma studies the properties of this probability as a function of k.

Lemma 5. Let G^n be a communication network and σ^n be an equilibrium of the information exchange game. Then, there exists an integer k > 1 such that for any scalar 0 < c < 1−β and discount factor δ < 1 that satisfy δ − c/δ > β, we have

P(M^n_i = 1 | k) ≥ δ − c/δ and P(M^n_i = 1 | k−1) < δ − c/δ.

Proof. We first note that the probability P(M^n_i = 1 | k) is nondecreasing in k, the number of signals that agent i has access to. This holds since a (suboptimal) strategy for agent i is to discard some of the signals in making her decision. We have P(M^n_i = 1 | 1) = β (since β = P(s_i = 1 | θ = 1) = P(s_i = 0 | θ = 0)). Choosing

k > ( 2·( 1/2 − (1−β) )^2 )^{−1}·log( 1/(1 + c/δ − δ) )

and using Lemma 4, we obtain

P(M^n_i = 1 | k) ≥ δ − c/δ. (33)

In view of the assumption β < δ − c/δ and the monotonicity of P(M^n_i = 1 | k) in k, this shows the existence of an integer k such that Eq. (33) is satisfied.
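The integer threshold used in the proof can be computed directly. The following sketch (with illustrative β, c and δ satisfying δ − c/δ > β) checks where the Lemma 4 lower bound crosses δ − c/δ; note this verifies the crossing of the bound, not of the exact probability:

```python
import math

def k_threshold(beta, c, delta):
    """Smallest integer k with k > log(1/(1 + c/delta - delta)) / (2(1/2-(1-beta))^2),
    following the choice of k in the proof of Lemma 5."""
    a = 2 * (0.5 - (1 - beta)) ** 2
    return math.floor(math.log(1 / (1 + c / delta - delta)) / a) + 1

def lemma4_lower(k, beta):
    """Lemma 4's lower bound on P(M = 1 | k)."""
    return 1 - math.exp(-2 * (0.5 - (1 - beta)) ** 2 * k)

beta, c, delta = 0.7, 0.05, 0.9         # illustrative values with delta - c/delta > beta
k = k_threshold(beta, c, delta)
# The bound first exceeds delta - c/delta exactly at this threshold:
print(k, lemma4_lower(k, beta) >= delta - c / delta)
```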

Next, we make an important observation which will be used frequently in the subsequent analysis. Consider an agent i such that H^n_{sc(i)} ∈ ℋ^n_k, where k is the integer defined in Lemma 5; i.e., the size of the social clique of agent i is greater than or equal to k, |H^n_{sc(i)}| ≥ k. Suppose agent i does not form a link with cost c with any agent outside her social clique. If she makes a decision at time t = 0 based on her signal only, her expected payoff will be β. If she waits for one period, she has access to the signals of all the agents in her social clique (i.e., she has access to at least k signals), implying by Lemma 5 that her expected payoff would be bounded from below by δ(δ − c/δ) = δ^2 − c. Hence, her expected payoff E[Π_i(g^n)] satisfies

E[Π_i(g^n)] ≥ max{ β, δ^2 − c },

for any link formation strategy g^n and along any σ ∈ INFO(G^n) (where G^n is the communication network induced by g^n).

Suppose now that agent i forms a link with cost c with an agent outside her social clique. Then her expected payoff will be bounded from above by

E[Π_i(g^n)] < max{ β, δ^2 − c },

where the second term in the maximum is an upper bound on the payoff she could get by having access to the signals of all agents she is connected to within two time steps (i.e., the signals of the agents in her social clique and in the social clique she is connected to). Combining the preceding two relations, we see that an agent i with H^n_{sc(i)} ∈ ℋ^n_k will not form any costly links in any network equilibrium, i.e.,

g^n_{ij} = 1 if and only if sc(j) = sc(i), for all i such that |H^n_{sc(i)}| ≥ k. (34)

(a) Condition (5) implies that for all sufficiently large n, we have

|ℋ^n_k| ≥ ξn, (35)

where ξ is a constant that satisfies 0 < ξ < ε. For any ε′ with 0 < ε′ < ξ, we can express the probability that a non-negligible fraction of agents take the wrong action as follows. For some t > 0, let M^n_{i,t} and M^n_i be the indicator variables defined in Eqs. (1) and (32). Since M^n_{i,t} ≤ M^n_i for all t, we have

P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε′ ) ≥ P( Σ_{i=1}^n (1−M^n_i)/n > ε′ )
  = P( Σ_{i: |H^n_{sc(i)}|<k} (1−M^n_i)/n + Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n > ε′ )
  ≥ P( Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n > ε′ ). (36)

The right-hand side of the preceding inequality can be rewritten as

P( Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n > ε′ ) = 1 − P( Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n ≤ ε′ ) = 1 − P( Σ_{i: |H^n_{sc(i)}|≥k} M^n_i/n ≥ r − ε′ ),

where r = Σ_{i: |H^n_{sc(i)}|≥k} 1/n. By Eq. (35), it follows that for n sufficiently large, we have r ≥ ξ. Using Markov's inequality, the preceding relation implies

P( Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n > ε′ ) ≥ 1 − ( Σ_{i: |H^n_{sc(i)}|≥k} E[M^n_i]/n )·( 1/(r − ε′) ). (37)

We next provide an upper bound on E[M^n_i] for an agent i with |H^n_{sc(i)}| ≥ k. By observation (34), no agent in i's social clique, sc(i), will form a link with agents outside sc(i), implying that agent i will communicate with at most |H^n_{sc(i)}| agents. Therefore, the probability that agent i will choose the wrong action is bounded from below by the probability of the event that all agents in her social clique receive the wrong signal, i.e., P( s_j = 1−θ for all j ∈ H^n_{sc(i)} ), implying that

P(M^n_i = 0) ≥ (1−β)^{|H^n_{sc(i)}|},

and therefore

E[M^n_i] ≤ 1 − (1−β)^{|H^n_{sc(i)}|}.

Using this bound and assuming, without loss of generality, that social cliques are ordered by size (H^n_1 is the biggest), we can rewrite Eq. (37) as

P( Σ_{i: |H^n_{sc(i)}|≥k} (1−M^n_i)/n > ε′ ) ≥ 1 − ( Σ_{j=1}^{|ℋ^n_k|} |H^n_j|·( 1 − (1−β)^{|H^n_j|} ) / n )·( 1/(r − ε′) )
  ≥ 1 − r·( 1 − (1−β)^{nr/|ℋ^n_k|} )/( r − ε′ )
  ≥ 1 − ξ·( 1 − (1−β)^{1/ξ} )/( ξ − ε′ ) > ζ. (38)

Here, the second inequality is obtained since the largest value of the sum is achieved when all summands are equal. In particular, this is equivalent to the following optimization problem:

max Σ_{j=1}^q k_j·( 1 − (1−β)^{k_j} )  subject to  Σ_{j=1}^q k_j = A,

for which the optimal solution is k_j = A/q for all j. The third inequality holds using the relation r ≥ ξ and choosing appropriate values for ε′ and ζ.
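The equal-split claim can be spot-checked numerically. The summand k(1 − (1−β)^k) is concave in k only above a small threshold that depends on β, so the sketch below (with illustrative values and clique sizes in that concave range) compares the equal split against a few alternative integer splits of the same total:

```python
def total(parts, beta):
    """Objective sum_j k_j (1 - (1-beta)^{k_j}) from the optimization problem."""
    return sum(k * (1 - (1 - beta) ** k) for k in parts)

beta, A, q = 0.7, 30, 3                  # illustrative values (assumptions)
equal = [A // q] * q                     # k_j = A/q for all j
alternatives = [[2, 8, 20], [5, 10, 15], [4, 12, 14], [9, 10, 11]]
best_alt = max(total(p, beta) for p in alternatives)
print(total(equal, beta) >= best_alt)    # equal split attains the largest value
```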

Combining Eqs. (36) and (38) establishes that for all t > 0 and all sufficiently large n, we have

P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε′ ) > ζ > 0,

which implies

lim sup_{n→∞} lim_{t→∞} P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε′ ) > ζ,

and shows that asymptotic learning does not occur in any network equilibrium.

(b) We show that if the communication cost structure satisfies condition (6), then asymptotic learning occurs in all network equilibria (g, σ) = ({g^n, σ^n})_{n=1}^∞. For an illustration of the resulting communication networks when condition (7) holds, refer to Figure 6(a). Let B^n_i(G^n) be the neighborhood of agent i in the communication network G^n (induced by the link formation strategy g^n),

B^n_i(G^n) = { j | there exists a path P in G^n from j to i },

i.e., B^n_i(G^n) is the set of agents in G^n whose information agent i can acquire over a sufficiently large (but finite) period of time.

We first show that for any agent i such that lim sup_{n→∞} |H^n_{sc(i)}| < k, her neighborhood in any network equilibrium satisfies lim_{n→∞} |B^n_i| = ∞. We use the notion of an isolated social clique to show this. For a given n, we say that a social clique H^n_ℓ is isolated (at a network equilibrium (g, σ)) if no agent in H^n_ℓ forms a costly link with an agent outside H^n_ℓ in (g, σ). Equivalently, a social clique H^n_ℓ is not isolated if there exists at least one agent j ∈ H^n_ℓ such that j incurs cost c and forms a link with an agent outside H^n_ℓ.

We show that for an agent i with lim sup_{n→∞} |H^n_{sc(i)}| < k, the social clique H^n_{sc(i)} is not isolated in any network equilibrium for all sufficiently large n. Using condition (6), we can assume without loss of generality that social cliques are ordered by size from largest to smallest and that lim_{n→∞} |H^n_1| = ∞. Suppose that H^n_{sc(i)} is isolated in a network equilibrium (g, σ). Then the expected payoff of agent i is given by

E[Π_i(g^n)] ≤ max{ β, δ·P( M^n_i = 1 | |H^n_{sc(i)}| ) }.

Recall that P( M^n_i = 1 | |H^n_{sc(i)}| ) denotes the probability that agent i makes the correct decision when she has access to only |H^n_{sc(i)}| signals. The term on the right-hand side is the maximum of the payoff she would get by acting at t = 0 (given by β) and the payoff she would get by waiting one time period, in which case she has access to the set of signals of all agents in H^n_{sc(i)}. Since lim sup_{n→∞} |H^n_{sc(i)}| < k, the preceding relation implies

E[Π_i(g^n)] ≤ max{ β, δ·P( M^n_i = 1 | k−1 ) }.

Using the definition of k, it follows from Lemma 5 that for some ε′ > 0,

E[Π_i(g^n)] ≤ max{ β, δ·( δ − c/δ − ε′ ) } = max{ β, δ^2 − c − δε′ }. (39)

Suppose next that agent i forms a link with an agent j ∈ H^n_1. Her expected payoff E[Π_i(g^n)] satisfies

E[Π_i(g^n)] ≥ δ^2·P( M^n_i = 1 | |H^n_1| ) − c,

since in two time steps she has access to the signals of all agents in the social clique H^n_1. By Lemma 4, we have

P( M^n_i = 1 | |H^n_1| ) ≥ 1 − exp( −2·( 1/2 − (1−β) )^2·|H^n_1| ).

Since lim_{n→∞} |H^n_1| = ∞, there exists some N_1 such that exp( −2·( 1/2 − (1−β) )^2·|H^n_1| ) < ε′/δ for all n > N_1. Combining the preceding two relations, we obtain

E[Π_i(g^n)] > δ^2 − δε′ − c for all n > N_1.

Comparing this relation with Eq. (39), we conclude that under the assumption that δ > √(c+β), the social clique H^n_{sc(i)} is not isolated in any network equilibrium for all n > N_1.

Next, we show that lim_{n→∞} |B^n_i| = ∞ in any network equilibrium. Assume, to arrive at a contradiction, that lim sup_{n→∞} |B^n_i| < ∞ in some network equilibrium. This implies that |B^n_i| < |H^n_1| for all n > N_2 > N_1. Consider some n > N_2. Since H^n_{sc(i)} is not isolated, there exists some j ∈ H^n_{sc(i)} such that j forms a link with an agent h outside H^n_{sc(i)}. Since |B^n_i| < |H^n_1|, agent j can improve her payoff by changing her strategy to g^n_{jh} = 0 and g^n_{jh′} = 1 for h′ ∈ H^n_1, i.e., j is better off deleting her existing costly link and forming one with an agent in social clique H^n_1. Hence, for any network equilibrium, we have

lim_{n→∞} |B^n_i| = ∞ for all i with lim sup_{n→∞} |H^n_{sc(i)}| < k. (40)

We next consider the probability that a non-negligible fraction of agents take the wrong action along a network equilibrium (g, σ). Let M^n_{i,t} and M^n_i be the indicator variables defined in Eqs. (1) and (32). For any n, there exists some t̄ such that M^n_{i,t} = M^n_i for all t > t̄. Therefore, for all t large and some ε > 0, we have

P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε ) = P( Σ_{i=1}^n (1−M^n_i)/n > ε ) ≤ (1/ε)·Σ_{i=1}^n E[1−M^n_i]/n, (41)

where the inequality follows from Markov's inequality. We next provide upper bounds on the individual terms in the sum on the right-hand side. For any agent i, we have

E[1−M^n_i] = P( M^n_i = 0 | |B^n_i| ) ≤ exp( −2·( 1/2 − (1−β) )^2·|B^n_i| ), (42)

where the inequality follows from Lemma 4.

Consider an agent i with lim sup_{n→∞} |H^n_{sc(i)}| < k (i.e., |H^n_{sc(i)}| < k for all n large). By Eq. (40), we have lim_{n→∞} |B^n_i| = ∞. Together with Eq. (42), this implies that for some ζ > 0, there exists some N such that for all n > N, we have

E[1−M^n_i] < εζ/2 for all i with lim sup_{n→∞} |H^n_{sc(i)}| < k. (43)

Consider next an agent i with lim sup_{n→∞} |H^n_{sc(i)}| ≥ k, and for simplicity let us assume that the limit exists, i.e., lim_{n→∞} |H^n_{sc(i)}| ≥ k.⁷ This implies that |H^n_{sc(i)}| ≥ k for all large n, and therefore,

Σ_{i: lim sup_{n→∞}|H^n_{sc(i)}|≥k} E[1−M^n_i]/n ≤ (1/n)·Σ_{j=1}^{|ℋ^n_k|} |H^n_j|·exp( −2·( 1/2 − (1−β) )^2·|H^n_j| )
  ≤ ( |ℋ^n_k|/n )·k·exp( −2·( 1/2 − (1−β) )^2·k ),

where the first inequality follows from Eq. (42) (and the assumption that the social cliques are ordered from largest to smallest), and the second inequality follows since the expression x·exp( −2·( 1/2 − (1−β) )^2·x ) is decreasing in x. Using condition (6), i.e., lim_{n→∞} |ℋ^n_k|/n = 0, this relation implies that there exists some N̄ such that for all n > N̄, we have

Σ_{i: lim sup_{n→∞}|H^n_{sc(i)}|≥k} E[1−M^n_i]/n < εζ/2. (44)

Combining Eqs. (43) and (44) with Eq. (41), we obtain for all n > max{N, N̄} and all t large,

P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε ) < ζ,

where ζ > 0 is an arbitrary scalar. This implies that

lim_{n→∞} lim_{t→∞} P( Σ_{i=1}^n (1−M^n_{i,t})/n > ε ) = 0,

showing that asymptotic learning occurs along every network equilibrium.

(c) The proof proceeds in two parts. First, we show that if condition (7) is satisfied, learning occurs in at least one network equilibrium (g, σ). Then, we show that there exists a c̄ > 0 such that if c < c̄, learning occurs in all network equilibria. We complete the proof by showing that if c > c̄, then there exist network equilibria in which asymptotic learning fails, even when condition (7) holds. We consider the case when agents are patient, i.e., the discount factor δ → 1. We consider k such that P(M^n_i = 1 | k) ≥ 1−c and P(M^n_i = 1 | k−1) < 1−c−ε′ for some ε′ > 0 (such a k exists from Lemma 5).⁸ Finally, we assume that β < 1−c−ε′, since otherwise no agent would have an incentive to form a costly link.

Part 1: We assume, without loss of generality, that social cliques are ordered by size ($H^n_1$ is the smallest). Let $\mathcal{H}^n_{<\bar k}$ denote the set of social cliques of size less than $\bar k$, i.e., $\mathcal{H}^n_{<\bar k} = \{H^n_i,\ i = 1, \ldots, K^n \mid |H^n_i| < \bar k\}$. Finally, let $rec(j)$ and $send(j)$ denote two special nodes of social clique $H^n_j$, the receiver and the sender (they might be the same node). We claim that $(g^n, \sigma^n)$ described below and depicted in Figure 6(b) is an equilibrium of the network learning game $\Gamma(\mathcal{C}^n)$ for $n$ large enough and $\delta$ sufficiently close to one.

$^7$The case when the limit does not exist can be proven by focusing on different subsequences. In particular, along any subsequence $N_i$ such that $\lim_{n\to\infty, n\in N_i} |H^n_{sc(i)}| \ge \bar k$, the same argument holds. Along any subsequence $N_i$ with $\lim_{n\to\infty, n\in N_i} |H^n_{sc(i)}| < \bar k$, we can use an argument similar to the previous case to show that $\lim_{n\to\infty, n\in N_i} |B^n_i| = \infty$, and therefore $E[1-M^n_i] < \varepsilon\zeta/2$ for $n$ large and $n \in N_i$.

$^8$As opposed to parts (a) and (b), where we considered a fixed discount factor $\delta < 1$, in part (c) we assume $\delta \to 1$. Lemma 5 still applies and we obtain that there exists $\bar k$ with the properties stated above.

Page 48: Daron Acemoglu - Communication Dynamics in Endogenous Social Networks (2010)

$$g^n_{ij} = \begin{cases} 1 & \text{if } sc(i) = sc(j), \text{ i.e., } i, j \text{ belong to the same social clique,} \\ 1 & \text{if } i = rec(\ell-1) \text{ and } j = send(\ell) \text{ for } 1 < \ell \le |\mathcal{H}^n_{<\bar k}|, \\ 1 & \text{if } i = rec(|\mathcal{H}^n_{<\bar k}|) \text{ and } j = send(1), \\ 0 & \text{otherwise,} \end{cases}$$

and $\sigma^n \in INFO(G^n)$, where $G^n$ is the communication network induced by $g^n$. In this communication network, social cliques with size less than $\bar k$ are organized in a directed ring, and all agents $i$ such that $|H^n_{sc(i)}| < \bar k$ have the same neighborhood, i.e., $B^n_i = B^n$ for all such agents.
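To make the construction concrete, the link-formation profile above can be sketched in code. This is an illustrative sketch, not part of the paper: the clique sizes, the consecutive agent indexing, and the choice of the first node of each clique as both receiver and sender are all hypothetical.

```python
# Illustrative sketch of the Part 1 link-formation profile g^n: cliques of
# size < k_bar are joined in a directed ring; larger cliques stay isolated.
# Clique sizes and agent indexing below are hypothetical.

def ring_profile(clique_sizes, k_bar):
    """Return the set of directed links (i, j) with g[i][j] = 1."""
    cliques, start = [], 0
    for size in clique_sizes:                 # assign consecutive indices
        cliques.append(list(range(start, start + size)))
        start += size
    links = set()
    for clique in cliques:                    # zero-cost links inside cliques
        links.update((i, j) for i in clique for j in clique if i != j)
    # Costly links: rec(l-1) -> send(l) for the small cliques, closing the
    # ring with rec(last) -> send(first); node 0 of a clique plays both roles.
    small = [c for c in cliques if len(c) < k_bar]
    for prev, cur in zip(small, small[1:] + small[:1]):
        links.add((prev[0], cur[0]))
    return links

# Three cliques of size < 4 form the ring; the size-7 clique stays isolated.
links = ring_profile([2, 2, 3, 7], k_bar=4)
```

With these sizes the ring links are (0, 2), (2, 4) and (4, 0), while agents 7-13 only link among themselves, matching the claim that every agent in a small clique ends up with the same (growing) neighborhood $B^n$.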

Next, we show that the strategy profile $(g^n, \sigma^n)$ described above is indeed an equilibrium of the network learning game $\Gamma(\mathcal{C}^n)$. We restrict attention to large enough $n$. In particular, let $N$ be such that $\sum_{i=1}^{|\mathcal{H}^N_{<\bar k}|} |H^N_i| > \bar k$ and consider any $n > N$ (such an $N$ exists from condition (7)). Moreover, we assume that the discount factor is sufficiently close to one. We consider the following two cases.

Case 1: Agent $i$ is not a connector. Then $g^n_{ij} = 1$ if and only if $sc(j) = sc(i)$. Agent $i$'s neighborhood, as noted above, is the set $B^n$, which is such that $P(M^n_i = 1 \mid |B^n|) > 1 - c$ from the assumption on $n$, i.e., $n > N$. Agent $i$ can communicate with all agents in $B^n$ in at most $|\mathcal{H}^n_{<\bar k}|$ time periods. Therefore, her expected payoff is lower-bounded by
$$E[\Pi_i(g^n)] \ge \delta^{|\mathcal{H}^n_{<\bar k}|} \cdot P(M^n_i = 1 \mid |B^n|) > 1 - c,$$
under any equilibrium $\sigma^n$ for $\delta$ sufficiently close to one. Agent $i$ can deviate by forming a costly link with an agent $m$ such that $sc(m) \ne sc(i)$. However, this is not profitable, since from above her expected payoff under $(g^n, \sigma^n)$ is already at least $1 - c$.

Case 2: Agent $i$ is a connector, i.e., there exists exactly one $j$ such that $sc(j) \ne sc(i)$ and $g^n_{ij} = 1$.

Using a similar argument as above, we can show that it is not profitable for agent $i$ to form an additional costly link with an agent $m$ such that $sc(m) \ne sc(i)$. On the other hand, agent $i$ could deviate by setting $g^n_{ij} = 0$. However, then her expected payoff would be
$$E[\Pi_i(g^n)] = \max\{\beta, \delta P(M^n_i = 1 \mid |H^n_{sc(i)}|)\} \le \max\{\beta, \delta P(M^n_i = 1 \mid \bar k - 1)\} < 1 - c - \varepsilon' < \delta^{|\mathcal{H}^n_{<\bar k}|} P(M^n_i = 1 \mid |B^n|) - c,$$
for discount factor $\delta$ sufficiently close to one and since we assume that $\beta < 1 - c - \varepsilon'$. Therefore, deleting the costly link is not a profitable deviation. Similarly, we can show that it is a (weakly) dominant strategy for the connector not to replace her costly link with another costly link.

We showed that $(g^n, \sigma^n)$ is an equilibrium of the network learning game. Note that we described a link formation strategy in which social cliques connect to each other in a specific order (in increasing size). There is nothing special about this ordering, and any permutation of the first $|\mathcal{H}^n_{<\bar k}|$ cliques is an equilibrium as long as they form a directed ring. Finally, any node in a social clique can be a receiver or a sender.

Figure 9: Communication networks under condition (7). (a) Deviation for $i \in H^n_{\ell_1}$ - property (i). (b) Deviation for $i \in H^n_\ell$ - property (ii).

Next, we argue that asymptotic learning occurs in network equilibria $(g, \sigma) = \{(g^n, \sigma^n)\}_{n=1}^\infty$ in which, for all $n > N$ ($N$ a large constant), $g^n$ has the form described above. As shown above, all agents $i$ for which $|H^n_{sc(i)}| < \bar k$ have the same neighborhood, which we denoted by $B^n$. Moreover, $\lim_{n\to\infty} |B^n| = \infty$, since all social cliques with size less than $\bar k$ are connected to the ring and, by condition (7), $\lim_{n\to\infty} \sum_{i:\,|H^n_i| < \bar k} |H^n_i| = \infty$. For discount factor $\delta$ sufficiently close to one, and from arguments similar to those in the proof of part (b), we conclude that asymptotic learning occurs in such network equilibria $(g, \sigma)$.

Part 2: We have shown a particular form of network equilibria in which asymptotic learning occurs. The following proposition states that, for discount factor $\delta$ sufficiently close to one, network equilibria fall into one of two forms.

Proposition 7. Let Assumptions 1, 3, 5 and condition (7) hold. Then an equilibrium $(g^n, \sigma^n)$ of the network learning game $\Gamma(\mathcal{C}^n)$ can be in one of the following two forms.

(i) (Incomplete) Ring Equilibrium: Social cliques with indices $\{1, \cdots, j\}$, where $j \le |\mathcal{H}^n_{<\bar k}|$, form a directed ring as described in Part 1, and the rest of the social cliques are isolated. We call those equilibria ring equilibria and, in particular, a ring equilibrium is called complete if $j = |\mathcal{H}^n_{<\bar k}|$, i.e., if no social clique with size less than $\bar k$ is isolated.

(ii) Directed Line Equilibrium: Social cliques with indices $\{1, \cdots, j\}$, where $j \le |\mathcal{H}^n_{<\bar k}|$, and the clique $H^n_{K^n}$ (the largest clique) form a directed line, with the latter being the endpoint. The rest of the social cliques are isolated.

Proof. Let $(g^n, \sigma^n)$ be an equilibrium of the network learning game $\Gamma(\mathcal{C}^n)$. Monotonicity of $P(M^n_i = 1 \mid x)$ as a function of $x$ implies that if clique $H^n_\ell$ is not isolated, then no clique with index less than $\ell$ is isolated in the communication network induced by $g^n$. In particular, let $conn(\ell)$ be the connector of social clique $H^n_\ell$ and $E[\Pi_{conn(\ell)}(g^n)]$ be her expected payoff. Consider an agent $i$ such that $sc(i) = \ell' < \ell$ and assume, for the sake of contradiction, that $H^n_{\ell'}$ is isolated in the communication network induced by $g^n$. Social cliques are ordered by size; therefore $|H^n_{\ell'}| \le |H^n_\ell|$. At this point, we use the monotonicity of $P(M^n_i = 1 \mid x)$. Consider the expected payoff of agent $i$:
$$E[\Pi_i(g^n)] = \max\{\beta, \delta P(M^n_i = 1 \mid |H^n_{\ell'}|)\} \le \max\{\beta, \delta P(M^n_i = 1 \mid |H^n_\ell|)\} < E[\Pi_{conn(\ell)}(g^n)], \qquad (45)$$
where the last inequality follows from the fact that agent $conn(\ell)$ formed a costly link. Consider a deviation $g^{n,deviation}_i$ for agent $i$, in which $g^{n,deviation}_{i,conn(\ell)} = 1$ and $g^{n,deviation}_{ij} = g^n_{ij}$ otherwise, i.e., agent $i$ forms a costly link with agent $conn(\ell)$. Then,
$$E[\Pi_i(g^{n,deviation})] \ge \delta\, E[\Pi_{conn(\ell)}(g^n)] > E[\Pi_i(g^n)],$$
from (45) and for discount factor sufficiently close to one. Therefore, social clique $H^n_{\ell'}$ will not be isolated in any network equilibrium $(g^n, \sigma^n)$.

Next, we show two structural properties that all network equilibria $(g^n, \sigma^n)$ should satisfy when the discount factor $\delta$ is sufficiently close to one. We say that there exists a path $P$ between social cliques $H^n_{\ell_1}$ and $H^n_{\ell_2}$ if there exists a path between some $i \in H^n_{\ell_1}$ and $j \in H^n_{\ell_2}$. Also, we say that the in-degree (out-degree) of social clique $H^n_{\ell_1}$ is $k$ if the sum of in-links (out-links) of the nodes in $H^n_{\ell_1}$ is $k$, i.e., $H^n_{\ell_1}$ has in-degree $k$ if $\sum_{i\in H^n_{\ell_1}} \sum_{j\notin H^n_{\ell_1}} g^n_{ij} = k$.

(i) Let $H^n_{\ell_1}$, $H^n_{\ell_2}$ be two social cliques that are not isolated. Then there should exist a directed path $P$ in $G^n$ induced by $g^n$ between the two social cliques.

(ii) The in-degree and out-degree of each social clique is at most one.

Figure 9 provides an illustration of why the properties hold for patient agents. In particular, for property (i), let $i = conn(H^n_{\ell_1})$ and $j = conn(H^n_{\ell_2})$ and assume, without loss of generality, that $|B^n_i| \le |B^n_j|$. Then, for discount factor sufficiently close to one and from monotonicity of $P(M^n_i = 1 \mid x)$, we conclude that $i$ has an incentive to deviate: delete her costly link and form a costly link with agent $j$. Property (ii) follows from similar arguments.

From the above, we conclude that, under the assumptions of the proposition, the only two potential equilibrium topologies are the (incomplete) ring and the directed line with the largest clique being the endpoint.

So far we have shown a particular form of network equilibria that arise under condition (7), in which asymptotic learning occurs. We also argued that under condition (7) only (incomplete) ring or directed line equilibria can arise in the network learning game $\Gamma(\mathcal{C}^n)$. In the remainder we show that there exists a bound $\bar c > 0$ on the common cost $c$ of forming a link between two social cliques, such that if $c < \bar c$, all network equilibria $(g, \sigma)$ that arise satisfy that $g^n$ is a complete ring equilibrium for all $n > N$, where $N$ is a constant. In those network equilibria asymptotic learning occurs, as argued in Part 1. On the other hand, if $c > \bar c$, coordination among the social cliques may fail, and additional equilibria arise in which asymptotic learning does not occur. Let
$$\bar c^n = \min_k \left\{ P\Bigl(M^n_i = 1 \,\Big|\, \sum_{j=1}^k |H^n_j| + |H^n_{k+1}|\Bigr) - P\bigl(M^n_i = 1 \,\big|\, |H^n_{k+1}|\bigr) \right\}, \qquad (46)$$
where $k_1 \le k < |\mathcal{H}^n_{<\bar k}|$ and $\sum_{j=1}^{k_1} |H^n_j| \ge |H^n_{K^n}|$ (the size of the largest social clique). Moreover, let
$$\bar c = \liminf_{n\to\infty} \bar c^n.$$

Then,

Proposition 8. Let Assumptions 1, 3, 5 and condition (7) hold. If $c < \bar c$, asymptotic learning occurs in all network equilibria $(g, \sigma)$. Otherwise, there exist equilibria in which asymptotic learning does not occur.

Proof. Let the common cost $c$ be such that $c < \bar c$, where $\bar c$ is defined as above, and consider a network equilibrium $(g, \sigma)$. Let $N$ be a large enough constant and consider the corresponding $g^n$ for $n > N$. We claim that $g^n$ is a complete ring equilibrium for all such $n$. Assume, for the sake of contradiction, that the claim is not true. Then, from Proposition 7, $g^n$ is either an incomplete ring equilibrium or a directed line equilibrium. We consider the former case (the latter can be handled with similar arguments). There exists an isolated social clique $H^n_\ell$ such that $|H^n_\ell| < \bar k$ and all cliques with index less than $\ell$ are not isolated and belong to the incomplete ring. However, from the definition of $\bar c$ we obtain that an agent $i \in H^n_\ell$ would have an incentive to connect to the incomplete ring, and thus we reach a contradiction. In particular, consider the following link formation strategy for agent $i$: $g^{n,deviation}_{im} = 1$ for agent $m \in H^n_{\ell-1}$ and $g^{n,deviation}_{ij} = g^n_{ij}$ for $j \ne m$. Then,
$$E[\Pi^n_i(g^{n,deviation})] \ge \delta^{|\mathcal{H}^n_{<\bar k}|} P\Bigl(M^n_i = 1 \,\Big|\, |H^n_\ell| + \sum_{j=1}^{\ell-1} |H^n_j|\Bigr) - c > \max\{\beta, \delta P(M^n_i = 1 \mid |H^n_\ell|)\} = E[\Pi^n_i(g^n)],$$
where the strict inequality follows from the definition of $\bar c$ for $\delta$ sufficiently close to one. Thus, we conclude that if $c < \bar c$, $g^n$ is a complete ring for all $n > N$, where $N$ is a large constant, and from Part 1 asymptotic learning occurs in all network equilibria $(g, \sigma)$.

On the contrary, if $c > \bar c$, then there exists an infinite index set $W$ such that for all $n$ in the (infinite) subsequence $\{n_w\}_{w\in W}$ there exists a $k$ such that
$$P\Bigl(M^n_i = 1 \,\Big|\, \sum_{j=1}^k |H^n_j| + |H^n_{k+1}|\Bigr) - c < P\bigl(M^n_i = 1 \,\big|\, |H^n_{k+1}|\bigr). \qquad (47)$$
Moreover, $|H^n_{k+1}| < \bar k$ and $\sum_{j=1}^k |H^n_j| \ge |H^n_{K^n}|$. From Lemma 4 we conclude that for (47) to hold it has to be that $\sum_{j=1}^k |H^n_j| < R$, where $R$ is a uniform constant for all $n$ in the subsequence. Consider $(g, \sigma) = \{(g^n, \sigma^n)\}_{n=1}^\infty$ such that, for every $n$ in the subsequence, $g^n$ is such that social cliques with index greater than $k$ (as described above) are isolated and the rest form an incomplete ring or a directed line, and $\sigma^n \in INFO(G^n)$, where $G^n$ is the communication network induced by $g^n$. From the above, we obtain that for $c > \bar c$, $(g^n, \sigma^n)$ is an equilibrium of the network learning game $\Gamma(\mathcal{C}^n)$. Asymptotic learning, however, fails in such an equilibrium, since for every $i \in \mathcal{N}^n$, $|B^n_i| \le R$, where recall that $B^n_i$ denotes the neighborhood of agent $i$.

Proposition 8 concludes the proof of Theorem 5. For a concrete example, consider the following communication cost structure $\{\mathcal{C}^n\}_{n=1}^\infty$:
$$C^n_{ij} = \begin{cases} 0 & \text{for } i = 2\ell+1 \text{ and } j = 2\ell+2,\ 0 \le \ell \le \frac{n-1}{2} - 1, \\ 0 & \text{for } i = 2\ell+2 \text{ and } j = 2\ell+1,\ 0 \le \ell \le \frac{n-1}{2} - 1 \text{ (symmetry)}, \\ c & \text{otherwise}, \end{cases}$$
for every $n$. In other words, social cliques consist of agents with consecutive indices and have size at most two. Let $\beta > 1/2$ and $\delta$ arbitrarily close to one. Then the bound $\bar c$ takes the form $\bar c = P(M_i = 1 \mid 4) - P(M_i = 1 \mid 2)$. In particular, if $c > \bar c$, then the empty communication network, i.e., the one in which all social cliques are isolated, is an equilibrium for every $n$. On the other hand, if $c < \bar c$, then all equilibria of the network learning game $\Gamma(\mathcal{C}^n)$ have the form $(g^n, \sigma^n)$, where $g^n$ is a complete ring.
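The bound $\bar c = P(M_i = 1 \mid 4) - P(M_i = 1 \mid 2)$ can be evaluated numerically once a signal structure is fixed. The sketch below is purely illustrative and assumes a signal model not specified in this excerpt: $k$ i.i.d. binary signals, each matching the state with probability $p > 1/2$, with the agent taking the correct action iff a majority of her signals is correct (ties broken by a fair coin).

```python
from math import comb

# Hypothetical signal model (an assumption, not the paper's general setup):
# k i.i.d. binary signals, each correct with probability p; the agent matches
# the state iff a majority of her signals is correct, ties broken by a coin.
def p_correct(k, p=0.7):
    total = 0.0
    for m in range(k + 1):                      # m = number of correct signals
        weight = comb(k, m) * p**m * (1 - p)**(k - m)
        if 2 * m > k:
            total += weight                     # clear majority is correct
        elif 2 * m == k:
            total += 0.5 * weight               # tie: fair coin
    return total

# Bound from the example (cliques of size two): c_bar = P(M=1|4) - P(M=1|2).
c_bar = p_correct(4) - p_correct(2)             # positive: 4 signals beat 2
```

Under this model `p_correct(2)` equals $p$ exactly (the second signal only matters through the tie), so the gain from joining the ring is the majority-of-four improvement over a single effective signal.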

Appendix C

Proof of Proposition 3. (i) Let the discount factor be $\delta < 1$. Then the minimum observation radius of an agent is bounded by $\log\frac{\delta}{1-\delta}\left(\log\frac{\beta}{1-\beta}\right)^{-1} < Z$, where $Z$ is a constant. Then $|V^n_{k_Z}| = n$ for all $n > k_Z$. Thus, asymptotic learning fails from Theorem 2.

(ii) First, we state a lemma from Bollobas and Riordan (2004) for bounding the probability that there exists a path between two agents in a preferential attachment network. For the remainder of the proof, agent $i$ is simply the agent that was born at time $i$.

Lemma (Bollobas and Riordan (2004)). Let $\{G^n\}_{n=1}^\infty$ be a preferential attachment sequence of communication networks and consider network $G^n$ for $n$ large enough. Let $P$ be a path from agent $i$ to agent $j$ in $G^n$, where $E^n(P)$ denotes the set of edges of path $P$. Then,
$$P(P \subseteq G^n) \le C^{|E^n(P)|} \prod_{(i_1,i_2)\in E^n(P)} \frac{1}{\sqrt{i_1 i_2}},$$
where $C$ is an absolute constant.

This lemma allows us to consider a random communication network in which an edge between agents $i$ and $j$ is present independently with probability $1/\sqrt{ij}$, even though in the original network $G^n$ edges are dependent. The intuition for why such a result holds is that the dependencies between edges introduced by the preferential attachment rule are not too strong as the communication network grows.

We are now ready to proceed with the proof. Let $w^n$ be the average degree of $G^n$, which by the preferential attachment rule remains finite with high probability as $n$ grows, i.e., $\lim_{n\to\infty} w^n < W$. In particular, for sufficiently large $n$,
$$P(\text{at most } n/c \text{ nodes have degree more than } cW) > 1 - \varepsilon/2, \quad \text{for all } c > 0. \qquad (48)$$

Next, we use the above lemma to show that most low-degree agents are not connected by a short path to high-degree agents. In particular, let $S_1$ ($S_2$) denote the set of agents that have degree greater (smaller) than $cW$. We want to bound the probability that there exists a short path (at most $L$ links long) from an agent in $S_2$ to an agent in $S_1$. As mentioned above, with high probability $|S_1| \le n/c$. Then, if $P^\ell_k$ denotes a path of length $\ell$ from agent $k \in S_2$ to any agent in $S_1$, we have
$$\sum_{\ell=1}^L P(P^\ell_k \subseteq G^n) \le \sum_{\ell=1}^L \sum_{j\in S_1} \frac{C^\ell}{\sqrt{k\cdot j}} \sum_{u_1,\cdots,u_{\ell-1}\in S_2}\ \prod_{t=1}^{\ell-1} \frac{1}{u_t} \qquad (49)$$
$$\le \sum_{\ell=1}^L \sum_{j\in S_1} \frac{C^\ell}{\sqrt{k\cdot j}} \Bigl(\sum_{i\in S_2} \frac{1}{i}\Bigr)^{\ell-1} \le \sum_{\ell=1}^L \sum_{j\in\{1,\cdots,n/c\}} \frac{C^\ell}{\sqrt{k\cdot j}} \Bigl(\sum_{i\in\{n/c,\cdots,n\}} \frac{1}{i}\Bigr)^{\ell-1} \qquad (50)$$
$$\le \zeta < 1, \quad \text{for agents with } k = O(n), \qquad (51)$$

where Eq. (49) follows from the lemma of Bollobas and Riordan (2004), and Eq. (50) follows by setting $S_1$ to be the set of the first $n/c$ nodes (lowest indices) and $S_2$ the rest. Finally, Eq. (51) follows for an appropriate choice of the constant $c$ (and consequently of the size of set $S_1$). In particular, as $n$ grows, we obtain $\sum_{i\in\{n/c,\cdots,n\}} \frac{1}{i} \approx \log c$ and $\sum_{j\in\{1,\cdots,n/c\}} \frac{C^\ell}{\sqrt{k\cdot j}} \approx \frac{\text{constant}}{c^{1/2}}$ for $k = O(n)$. Finally, note that with high probability most of the agents with index $O(n)$ belong to set $S_2$, since by the lemma of Bollobas and Riordan (2004) above their expected degree is bounded by a small constant. Combining Eqs. (48), (51) and Corollary 1, we conclude that asymptotic learning fails in a preferential attachment communication network sequence with probability at least $1 - \varepsilon$ for discount factor $\delta < 1$.
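The harmonic-sum approximation used in the step toward Eq. (51) is easy to check numerically. The values of $n$ and $c$ below are hypothetical; the point is only that $\sum_{i=n/c}^{n} 1/i$ approaches $\log c$ as $n$ grows.

```python
from math import log

# Numerical check of the approximation sum_{i=n/c}^{n} 1/i ~ log(c)
# used in the step from Eq. (50) to Eq. (51); n and c are hypothetical.
def harmonic_tail(n, c):
    lo = n // c
    return sum(1.0 / i for i in range(lo, n + 1))

n, c = 1_000_000, 20
approx = harmonic_tail(n, c)        # close to log(20), roughly 3.0
```

For large $n$ the error term is of order $c/n$, so the approximation is already tight at moderate sizes.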

(iii) Follows directly from (ii), since Erdős–Rényi communication networks are even sparser than preferential attachment communication networks.

Proof of Proposition 4. (i)-(ii) The claims follow directly from Theorem 2. In particular, for $\delta > \beta$ (for the complete graph) and $\delta > \sqrt{\beta}$ (for the star) we obtain that
$$\lim_{k\to\infty}\lim_{n\to\infty} \frac{1}{n}\cdot|V^n_k| = 0,$$
therefore asymptotic learning occurs in the respective societies.

(iii) Follows directly from van den Esker, van der Hofstad, Hooghiemstra, and Znamenski (2005).

(iv) Consider the following two events $A$ and $B$.

Event A: Layer 1 (the top layer) has more than $k$ agents, where $k > 0$ is a scalar.

Event B: The total number of layers is more than $k$.

From the definition of a hierarchical sequence of communication networks, we have
$$P(A) = \prod_{i=2}^k \Bigl(1 - \frac{1}{i^{1+\zeta}}\Bigr) < \exp\Bigl(-\sum_{i=2}^k \frac{1}{i^{1+\zeta}}\Bigr). \qquad (52)$$

Also,
$$P(B) \le \frac{E(L)}{k} = \frac{1}{k}\sum_{i=2}^\infty \frac{1}{i^{1+\zeta}}, \qquad (53)$$
from Markov's inequality, where $L$ is a random variable that denotes the number of layers in the hierarchical society. Let $\zeta(\varepsilon)$ be small enough and $k$ (and consequently $n$) large enough such that $\sum_{i=2}^k \frac{1}{i^{1+\zeta}} > \log\frac{4}{\varepsilon}$ and $\sum_{i=2}^\infty \frac{1}{i^{1+\zeta}} < \frac{k\varepsilon}{4}$. For those values of $\zeta$ and $k$ we obtain $P(A) < \varepsilon/4$ and $P(B) < \varepsilon/4$. Next, consider the event $C = A^c \cap B^c$, which from Eqs. (52) and (53) has probability $P(C) > 1 - \varepsilon/2$ for the values of $\zeta$ and $k$ chosen above. Moreover, we consider

Event D: The agents in the top layer are information hubs, i.e.,
$$\lim_{n\to\infty} |B^n_{i,1}| = \infty, \quad \text{for all } i \in \mathcal{N}^n_1.$$

We claim that event $D$ occurs with high probability if event $C$ occurs, i.e., $P(D \mid C) > 1 - \varepsilon/2$, which implies
$$P(C \cap D) = P(D \mid C)\,P(C) > (1 - \varepsilon/2)^2 > 1 - \varepsilon. \qquad (54)$$

In particular, note that conditional on event $C$ occurring, the total number of layers and the total number of agents in the top layer are both at most $k$. From the definition of a hierarchical society, agents in layers with index $\ell > 1$ have an edge to a uniformly chosen agent in a layer with lower index, with probability one. Therefore, if we denote the degree of an agent in the top layer (say agent 1) by $D^n_1$, we have
$$D^n_1 = \sum_{i=1}^{\mathcal{T}^n_2} I^{level_2}_{i,1} + \cdots + \sum_{i=1}^{\mathcal{T}^n_L} I^{level_L}_{i,1}, \qquad (55)$$
where $\mathcal{T}^n_i$ denotes the random number of agents in layer $i$ and $I^{level_j}_{i,1}$ is an indicator variable that takes value one if there is an edge from agent $i$ to agent 1 (here $level_j$ simply denotes that agent $i$ belongs to level $j$). Again from the definition, we have $P(I^{level_j}_{i,1} = 1) = \frac{1}{\sum_{\ell=1}^{j-1}\mathcal{T}^n_\ell}$, where the sum in the denominator is simply the total number of agents that lie in layers with lower index, and, finally, $\mathcal{T}^n_1 + \cdots + \mathcal{T}^n_L = n$.
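Decomposition (55) yields a simple closed form for the expected degree of a top-layer agent: each agent in layer $j > 1$ hits agent 1 with probability $1/(\mathcal{T}^n_1 + \cdots + \mathcal{T}^n_{j-1})$, so $E[D^n_1] = \sum_{j\ge 2} \mathcal{T}^n_j / (\mathcal{T}^n_1 + \cdots + \mathcal{T}^n_{j-1})$. The sketch below evaluates this with hypothetical layer sizes.

```python
# Expected degree of a top-layer agent from Eq. (55): each agent in layer
# j > 1 links to a uniform agent in the lower layers, so
#   E[D_1] = sum_{j>=2} T_j / (T_1 + ... + T_{j-1}).
# The layer sizes below are hypothetical.
def expected_top_degree(layer_sizes):
    total_below, exp_degree = 0, 0.0
    for j, t in enumerate(layer_sizes):
        if j > 0:
            exp_degree += t / total_below   # layer j's contribution
        total_below += t
    return exp_degree

# Geometrically growing layers keep every ratio T_j / (T_1+...+T_{j-1})
# large, so the top-layer degree grows with n, as the hub argument requires.
degree = expected_top_degree([3, 30, 300, 3000])
```

With a fixed number of layers $k$ and $n$ agents in total, this is exactly the objective the optimization problem below minimizes.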

We can obtain a lower bound on the expected degree of an agent in the top layer conditional on event $C$ by viewing (55) as the following optimization problem:
$$\begin{aligned} \min \quad & \frac{x_2}{x_1} + \cdots + \frac{x_k}{x_1 + \cdots + x_{k-1}} \\ \text{s.t.} \quad & \textstyle\sum_{j=1}^k x_j = n, \\ & 0 \le x_1 \le k, \\ & 0 \le x_2, \cdots, x_{k-1}, \end{aligned}$$
where we make use of the fact that the total number of layers is bounded by $k$, since we condition on event $C$. By solving the problem we obtain that the objective function is lower bounded by $\phi(n)$, where $\phi(n) = O(n^{1/k})$ for every $n$. Then,

$$\begin{aligned} E[D^n_1 \mid C] &= \sum_{\ell=2}^k \sum_{\substack{k_1\le k,\cdots,k_\ell \\ k_1+\cdots+k_\ell=n}} P(L=\ell, \mathcal{T}^n_1=k_1,\cdots,\mathcal{T}^n_\ell=k_\ell \mid C)\cdot E[D^n_1 \mid C, L=\ell, \mathcal{T}^n_1=k_1,\cdots,\mathcal{T}^n_\ell=k_\ell] \\ &\ge \sum_{\ell=2}^k \sum_{\substack{k_1\le k,\cdots,k_\ell \\ k_1+\cdots+k_\ell=n}} P(L=\ell, \mathcal{T}^n_1=k_1,\cdots,\mathcal{T}^n_\ell=k_\ell \mid C)\cdot \phi(n) = \phi(n), \qquad (56) \end{aligned}$$
where Eq. (56) follows since $E[D^n_1 \mid C, L=\ell, \mathcal{T}^n_1=k_1,\cdots,\mathcal{T}^n_\ell=k_\ell] \ge \phi(n)$ for all values of $\ell$ ($2 \le \ell \le k$)

54

Page 55: Daron Acemoglu - Communication Dynamics in Endogenous Social Networks (2010)

and $k_1,\cdots,k_\ell$ ($k_1 \le k$, $k_1+\cdots+k_\ell = n$) from the optimal solution of the optimization problem. The same lower bound applies to all agents in the top layer. Similarly, for the variance of the degree of an agent in the top layer (we use $\ell, k_1,\cdots,k_\ell$ as shorthand for $L=\ell, \mathcal{T}^n_1=k_1,\cdots,\mathcal{T}^n_\ell=k_\ell$) we have
$$\begin{aligned} Var[D^n_1 \mid C] &= \sum_{\ell=2}^k \sum_{\substack{k_1\le k,\cdots,k_\ell \\ k_1+\cdots+k_\ell=n}} P(\ell,k_1,\cdots,k_\ell \mid C)\cdot Var[D^n_1 \mid C,\ell,k_1,\cdots,k_\ell] \\ &= \sum_{\ell=2}^k \sum_{\substack{k_1\le k,\cdots,k_\ell \\ k_1+\cdots+k_\ell=n}} P(\ell,k_1,\cdots,k_\ell \mid C)\cdot \bigl(k_2\, Var(I^{level_2}_{i,1}) + \cdots + k_\ell\, Var(I^{level_\ell}_{i,1})\bigr) \qquad (57) \\ &\le \sum_{\ell=2}^k \sum_{\substack{k_1\le k,\cdots,k_\ell \\ k_1+\cdots+k_\ell=n}} P(\ell,k_1,\cdots,k_\ell \mid C)\cdot \bigl(k_2\, E(I^{level_2}_{i,1}) + \cdots + k_\ell\, E(I^{level_\ell}_{i,1})\bigr) = E[D^n_1 \mid C], \qquad (58) \end{aligned}$$
where Eq. (57) follows by noting that conditional on event $C$, and on the number of layers and the number of agents in each layer being fixed, the indicator variables (defined above) are independent, and Eq. (58) follows since the variance of an indicator variable is smaller than its expectation. We conclude that the variance of the degree is smaller than its expected value, and from Chebyshev's inequality we obtain
$$P(D) \ge P\Bigl(\bigcap_{i\in\mathcal{N}^n_1} \Bigl\{\frac{D^n_i}{\phi(n)} > \zeta\Bigr\}\Bigr) > 1 - \varepsilon/2,$$
where $\zeta > 0$; i.e., with high probability all agents in the top layer are information hubs (recall that $\lim_{n\to\infty}\phi(n) = \infty$).

We have shown that when event $C \cap D$ occurs, with high probability there is a path of length at most $k$ (the total number of layers) from each agent to an agent in the top layer, i.e., to an information hub. Therefore, if the discount factor $\delta$ is greater than some lower bound ($\delta > \delta_4$), then asymptotic learning occurs. Finally, we complete the proof by noting that $P(C \cap D) > (1-\varepsilon/2)^2 > 1 - \varepsilon$.

References

Acemoglu, D., M. Dahleh, I. Lobel, and A. Ozdaglar (2008): “Bayesian learning in social

networks,” Preprint.

Albert, R., and A. Barabasi (2002): “Statistical mechanics of complex networks,” Reviews of

Modern Physics, 74, 47–97.

Bala, V., and S. Goyal (1998): “Learning from neighbours,” Review of Economic Studies, 65(3),

595–621.

(2000): “A noncooperative model of network formation,” Econometrica, 68(5), 1181–1229.

Banerjee, A. (1992): “A simple model of herd behavior,” Quarterly Journal of Economics, 107,

797–817.

Banerjee, A., and D. Fudenberg (2004): “Word-of-mouth learning,” Games and Economic

Behavior, 46, 1–22.

Barabasi, A., and R. Albert (1999): "Emergence of scaling in random networks," Science, 286(5439), 509–512.

Bikhchandani, S., D. Hirshleifer, and I. Welch (1992): "A theory of fads, fashion, custom, and cultural change as informational cascades," Journal of Political Economy, 100, 992–1026.

Bollobas, B., and O. Riordan (2004): “The diameter of a scale-free graph,” Combinatorica,

24(1), 5–34.

Crawford, V., and J. Sobel (1982): “Strategic information transmission,” Econometrica, 50(6),

1431–1451.

DeMarzo, P., D. Vayanos, and J. Zwiebel (2003): “Persuasion bias, social influence, and

unidimensional opinions,” The Quarterly Journal of Economics, 118(3), 909–968.

Durrett, R. (2007): Random graph dynamics. Cambridge University Press.

Farrell, J., and R. Gibbons (1989): “Cheap talk with two audiences,” American Economic

Review, 79, 1214–1223.

Gale, D., and S. Kariv (2003): “Bayesian learning in social networks,” Games and Economic

Behavior, 45(2), 329–346.

Galeotti, A., C. Ghiglino, and F. Squintani (2009): “Strategic information transmission in

networks,” Preprint.

Gladwell, M. (2000): The Tipping Point: How little things can make a big difference. Little Brown.

Golub, B., and M. Jackson (2008): “Naive learning in social networks and the wisdom of crowds,”

forthcoming in American Economic Journal: Microeconomics.

Goyal, S. (2007): Connections: an introduction to the economics of networks. Princeton University

Press, Princeton, New Jersey.

Seyed-allaei, H., G. Bianconi, and M. Marsili (2006): "Scale-free networks with an exponent less than two," Physical Review E, 73(4).

Hagenbach, J., and F. Koessler (2009): “Strategic communication networks,” Preprint.

Jackson, M. (2004): "A survey of models of network formation: stability and efficiency," in Group Formation in Economics: Networks, Clubs and Coalitions, ed. by G. Demange and M. Wooders. Cambridge University Press.

(2008): Social and economic networks. Princeton University Press, Princeton, New Jersey.

Jackson, M., and A. Wolinsky (1996): “A strategic model of social and economic networks,”

Journal of Economic Theory, 71(1), 44–74.

Jovanovic, M., F. Annexstein, and K. Berman (2001): “Modeling peer-to-peer network topolo-

gies through small-world models and power laws,” TELFOR 2001.

Mitzenmacher, M. (2004): “A brief history of generative models for power law and lognormal

distributions,” Internet Mathematics, 1(2), 226–251.

Morgan, J., and P. C. Stocken (2008): "Information aggregation in polls," American Economic Review, 98(3), 864–896.


Newman, M. E. J. (2001): “Scientific collaboration networks: network construction and fundamen-

tal results,” Phys. Rev. E, 64(1), 016131.

Smith, L., and P. Sorensen (1998): “Rational social learning with random sampling,” preprint.

(2000): “Pathological outcomes of observational learning,” Econometrica, 68(2), 371–398.

Toroczkai, Z., and K. E. Bassler (2004): "Network dynamics: Jamming is limited in scale-free systems," Nature, 428, 716.

van den Esker, H., R. van der Hofstad, G. Hooghiemstra, and D. Znamenski (2005):

“Distances in random graphs with infinite mean degrees,” Extremes, 8(3), 111–141.
