Erosion of meaning - an experiment
Andreas Blume∗ Charles N. Noussair† Bohan Ye‡
May 16, 2017
Abstract
The experiment reported in this paper investigates communication between a sender and a receiver who have a language that is only imperfectly shared between them. The players agree on the best action conditional on the payoff state. Incomplete information about language, however, leads to incentive problems. Senders who lack the messages to describe the true payoff state have an incentive to indicate other payoff states that they are able to describe. Our central behavioral hypothesis is that initially experimental subjects will use messages consistent with their focal meaning, but behavior will eventually gravitate to a pooling equilibrium, in which messages are not believed and outcomes are inefficient. We refer to this dynamic process as an Erosion of Meaning. The experimental results support our central hypothesis. Receivers learn to ignore the messages they receive, and choose a pooling action that delivers a safe payoff. The strength of this effect depends on the size of the basin of attraction of the pooling equilibrium outcome.
∗ Department of Economics, The University of Arizona, [email protected]
† Department of Economics, The University of Arizona, [email protected]
‡ Department of Economics, The University of Arizona, [email protected]
1 Introduction
A lack of mutually understood terms to describe particular ideas can introduce frictions in
the ability of two parties to communicate. This is a common occurrence, for example, when
experts use jargon with individuals who only understand lay terms. Experts have a more
precise vocabulary to describe concepts in their field of specialty than do non-experts. For
example, a car mechanic might recommend that a car's regulator be replaced if
the car is stalling frequently. The car owner may not be aware of what a regulator is, so
the term is meaningless to him. In such circumstances, the car owner’s decision amounts
to whether to believe and trust the suggestion from the car mechanic or not. As another
illustration, consider a doctor who informs a patient that he has a condition for which a new
experimental treatment is optimal, and for which the conventional treatment would be less
effective. The patient, unable to translate the data from medical tests into a diagnosis, has to
decide whether or not to agree to the recommended treatment. If the patient suspects
that the doctor might be trying to mislead him into a more expensive treatment that is
no more effective than the conventional one, he might opt for the conventional treatment
instead. The costs of such instances of language incompleteness can be substantial, given the
economic value of situations in which there is interaction between experts and non-experts.
For example, 99.7 billion dollars were spent on auto repair in the US alone in 2008.
In 2014, 378 billion dollars were spent on prescription drugs.
The abstract, canonical, situation we have in mind is that of an expert (sender) advising
a decision maker (receiver) on the choice of a project to adopt. The project that is optimal
depends on the state of the world. Suppose that both sender and receiver have common
interests, in the sense that in each state of the world they agree on which project is optimal
given that state. Thus, if they have a common language in which they can describe each
state of the world, there are no incentive problems. It is an equilibrium for the sender to be
truthful and the receiver to believe the sender. Coordination on the optimal project results.
Suppose, however, that with positive probability the sender is language constrained and
that it is the sender’s private information as to whether or not this is the case. If she is
constrained, there are states of the world she cannot describe with her available messages in
a way that the receiver would understand. She may then be tempted to indicate a different
state, for which she has the appropriate message. She will do so, for example, if having some
project adopted is better for her than having no project adopted. Given that the language
constraint is private information, the receiver cannot be selectively trusting, and is better off
opting out of any project. Hence, communication may fail even when there is no language
constraint.
The key incentive problem is that a sender who lacks the proper language to describe the
optimal project may instead want to propose an inferior project that she can clearly describe.
If the receiver realizes this, and the payoff from the inferior project is less than from adopting
no project, he may be reluctant to adopt any project. This can make it impossible to get
the right project adopted even if the sender has the right message and uses it to propose
the right project. Indeed, as we show below, in the only plausible equilibria of the game, no
project is adopted, and any proposal of the sender is ignored. If it were common knowledge
that the language needed to describe all states were available, successful coordination on
the project appropriate for the actual state would be an equilibrium outcome. Thus, the
possibility of language incompleteness can lead to incentive problems, where there would be
none if it were common knowledge that such incompleteness were impossible.
In this paper, we report an experiment in which we consider whether the meanings of
expressions in a language erode in the face of such incentives. In our experiment, there is a
sender and a receiver. All plausible equilibria are inefficient and involve babbling on the part
of the sender. Thus, expressions of the language lose their meaning. However, substantial
gains can arise to all individuals if they are able to preserve the meaning of expressions
and link messages with their natural or focal meanings. Our experiment considers the
circumstances under which players can avoid such erosion of meaning. The experiment
consists of four treatments. Three of these have language incompleteness, and vary the
complexity of the environment. The measure of complexity is the number of states that the
language must try to describe. The fourth treatment has no language incompleteness. Unlike
in the seminal work of Crawford and Sobel [14], in which efficiency losses arise because expert
and decision maker disagree about the optimal action in each state of the world and therefore
the expert has an incentive to misrepresent the state of the world, in our experiment it is
only through language constraints themselves that incentive problems arise.
Our principal behavioral hypothesis is that initially, receivers will believe the messages
that they receive. However, over time behavior will converge to a pooling equilibrium in
which senders’ messages are not believed and receivers choose a safe, though inefficient
action. The results show that outcomes tend to converge toward a pooling equilibrium in
the relatively complex environment over time, supporting the hypothesis. On the other
hand, in our simplest environment, expressions of the language retain their meaning and
both senders and receivers are able to attain earnings in excess of the pooling equilibrium
level. Finally, when it is common knowledge that the language is complete, full efficiency is
achieved.
A closer look at the data indicates the presence of some senders who are willing to behave
honestly at some cost to themselves. In the simplest environment, only a small fraction of
honest senders is required to make it optimal for receivers to believe the messages they
receive, while in the more complex treatments, a larger fraction must be honest. The actual
fraction of honest senders exceeds the threshold level in the simple environment but does
not exceed the critical percentage in the more complex environment. The willingness of
receivers to believe senders’ messages, coupled with the presence of a sufficient number of
honest players makes it possible for expressions of the language to retain their meaning and
for payoff levels that are greater than pooling equilibrium levels to be realized.
2 Background
A common language is an important driver of coordination and information processing in
organizations. While specialized “organizational codes” (Arrow [1]) facilitate communication
within the organization, they may create barriers to communication with the outside world.
Within organizations, “language compatibility” (March and Simon [24]) has an impact on
which communication channels are activated. If different areas within an organization have
their own specialized codes, it affects the patterns of communication and thereby the overall
performance of the organization. Therefore, it is important to understand how agents handle
communication tasks when they face language constraints. We implement such constraints
in the lab by endowing agents with codes that are individually rich enough to code every
element in the state space, but diverge with positive probability.
There is a wealth of evidence from the field on how language constraints inhibit communication and shape communication patterns. Tushman [29] studies communication patterns
in R&D settings and observes that “oral communication is effective only where a shared
coding scheme or language exists or where the actors are sufficiently alike in background
or perspective that they can rapidly develop a common language . . . Communication with
areas having different coding schemes may be less efficient, since it will have to overcome
an impedance or mismatch between the communicating areas.” Zenger and Lawrence [32]
find that age and tenure distributions in a firm affect communication patterns. Bechky [2]
documents misunderstandings due to differing occupational codes in manufacturing. Galison [18] describes how scientific disciplines create trading zones to mediate communication
across subcultures within the disciplines. Our experiment can be viewed as capturing the
“impedance or mismatch” pointed out by Tushman.
Language constraints also reduce the effectiveness of expert advice. Experts using jargon
when communicating with non-experts is a common source of misunderstandings and missed
opportunities. Doctor-patient communication suffers from gaps between medical language
and everyday language. Doctors frequently fail to recognize patients’ attempts to switch
to medical language and doctors and patients have substantially different interpretations of
widely used psychological terms like “depression,” “migraine,” and “eating disorder” (Ong,
de Haes, Hoos and Lammes [25]). Williams, Davis, Parker, and Weiss [31] document the
limited “health literacy” of patients. They cite one study of hospital patients in which only
35% understood the word “orally,” 22% “nerve”, 18% “malignant,” and only 13% understood
“terminal.” According to Williams et al., such deficiencies in health literacy have dramatic
impacts on health care costs. Our experiment embraces this expert-decision maker paradigm.
The expert has sufficiently many messages to label all states, as does the decision maker.
These labels, however, need not coincide, capturing the gap between the language of the
expert and that of the decision maker.
There are two types of formal models of language constraints, those with availability
constraints and others with symmetry constraints. Cremer, Garicano and Prat [16], Jager,
Metzger, and Riedel [20], and Sobel [28] study availability constraints, which consist of
restricted vocabularies. They consider common-interest games, so that it is meaningful to
study optimal strategy profiles. Cremer, Garicano and Prat are concerned with the design
of an optimal organizational code. The code is used to describe states that are drawn from
a known distribution. An optimal code is an optimal assignment of words to subsets of the
state space. The choice of an organizational code interacts with organizational structure
because there is a trade-off between the quality of communication within and across groups
in an organization. Blume [4] [6], extending ideas from Crawford and Haller [12], uses
symmetry constraints to describe situations in which some messages may be meaningless
to some of the agents. One use of this approach is to show how grammar can facilitate
language learning, the association of meaning to a priori meaningless messages (for a related
approach to thinking about structure in language see Rubinstein [26]). In our experiment
we implement both availability constraints, by having each language type only have access
to a subset of the message space, and symmetry constraints, by employing (at least initially)
meaningless messages in addition to focal messages. Focal messages, in the sense of Schelling
[27], provide a clue that makes it possible to agree on their interpretation. The co-occurrence
of both focal and meaningless messages is a novel feature of our experiment.
Blume and Board [7] introduce private information about language constraints, which
take the form of availability constraints: an agent’s language type is the set of messages
available to the agent. Each player’s private information is two-dimensional, described by
her payoff type and her language type. In sender-receiver games in which senders have
private information about which messages are available to them, receivers, who only care
about payoff relevant information, face a signal extraction problem. Receivers do not know
to what extent the message they observe is determined by the sender’s language type. This
produces a sense in which misunderstandings can arise in equilibrium, since payoff relevant
information is confounded by information about language competence. Blume and Board
show that this confounding of information about payoffs with information about language
is optimal in common-interest games (in contrast, say, to only using messages which are
commonly known to always be available). This line of work has recently been extended by
Giovannoni and Xiong [19]. Our experiment explicitly relies on this language-type paradigm.
Unlike in Blume and Board, in our experiment there is a distinction between focal and
meaningless messages, and a tension between efficient and focal message use.
Extant experiments involving language constraints focus either on the emergence of meaning from initially meaningless messages or on the process of adapting natural language to
novel situations. Blume, DeJong, Kim and Sprinkle [3] (BDKS98) investigate the emergence
of meaning of messages in sender-receiver games inspired by Lewis [23]. They look at games
with two payoff states and enforce initial absence of meaning by randomizing over private
representations of messages. They find that in simple common-interest games with feedback
about population behavior, message meaning gradually emerges to the point where after 20
periods players are fully coordinated. Bruner, O’Connor, Rubin and Huttegger [8] (BORH)
obtain similar results without feedback about population behavior, having subjects interact
for more periods. Our experiment also uses meaningless messages, except that they are available concurrently with focal messages and that there is a temptation to use focal messages.
This temptation is a result of players’ agreeing only on the most preferred action in each
state, and not the entire ranking of actions, as in BDKS98 and BORH.
Krauss and Weinheimer [21] [22] experimentally study the evolution of reference phrases
for novel items (non-standard geometric objects). Subjects use free-form communication with
natural language as opposed to a restricted set of pre-fabricated messages. They find that
the length of a reference phrase used to describe a figure is inversely related to the frequency
with which it is mentioned and decreases over time. Reference phrases are shorter with
two-way than one-way communication. Weber and Camerer [30] use a picture identification
task to study cultural conflict and mergers in organizations. Senders describe office scenes to
receivers, using free-form communication. The sender is given pictures in a particular order
and describes them to the receiver. Payoffs depend on how quickly the receiver identifies
the correct order. Over time teams develop succinct descriptions for each figure and these
descriptions differ across teams. These conflicting languages create problems when teams
are merged. Post-merger completion times at first significantly increase, and while they drop
subsequently, they never achieve pre-merger levels. Weber and Camerer achieve mismatch of
codes, and as a result efficiency losses, by merging teams with team specific languages. We
impose mismatches by fixing the receiver language while privately randomizing the sender
language. Weber and Camerer document dramatic reductions in efficiency right after a
merger and a partial recovery thereafter. In our experiment, as reported in Section 5, we
observe a gradual decline in efficiency, as receivers learn to doubt the meaning of messages.
3 Theoretical predictions
In our experiment, we implemented three sender-receiver games, which differed in terms of
the number of payoff types of the sender, the number of actions available to the receiver,
and the distributions over sets of messages available to the two parties. There was also a
control treatment, in which the complete language was always available.
3.1 The game with three payoff states
One of the conditions we implemented used the payoff table shown in Figure 1. Each row
corresponds to one of three possible (payoff) states, A, B, and C. Each column represents
one of the four possible actions of the receiver. The actions W, X, and Y can be thought of as
the adoption of one of three possible projects, and the action Z as not adopting any project.
The actions W, X, and Y are optimal for both parties in states A, B, and C, respectively,
yielding a payoff of 10 to both players. The action Z represents choosing not to undertake
a project, and results in a sure payoff of 7 to both sender and receiver. If a project that
is inappropriate for the current state is undertaken, then the sender receives a relatively
attractive payoff of 9 while the receiver receives 0.
The timing of the game is as follows. The state is drawn from a uniform distribution.
The sender privately learns the true state (her payoff type). The sender then sends a message
to the receiver. After observing the sender’s message the receiver chooses an action. After
the choice is made, the receiver learns the state and both parties realize their payoff.
        W       X       Y       Z
A     10,10    9,0     9,0     7,7
B      9,0    10,10    9,0     7,7
C      9,0     9,0    10,10    7,7
Figure 1: The game with three payoff states
The prior probability of each payoff type is π(A) = π(B) = π(C) = 1/3. The message
space is M = {∗, #, a, b, c}. Using semantic evaluation brackets J·K, messages a, b, and c
have focal meanings JaK = “The state is A,” JbK = “The state is B,” and JcK = “The state is C,”
respectively. There is no prior specification of the meanings of ∗ and #, which we express
via J∗K = J#K = ?.
In addition to her payoff type the sender privately learns her language type, the subset
of messages available to her. The language type space is Λ = {λa, λb, λc, λabc}, where λa =
{a, ∗,#}, λb = {b, ∗,#}, λc = {c, ∗,#} and λabc = {a, b, c}. Note that a sender always has
three messages and therefore as many messages as there are payoff states. The probability
of each language type is q(λa) = q(λb) = q(λc) = q(λabc) = 1/4, i.e., language types are equally
likely. Language types and states are drawn independently. The sender privately learns her
payoff and language type and sends a message from her language type to the receiver. After
seeing the sender’s message, the receiver takes an action W, X, Y, or Z.
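The primitives above are compact enough to encode directly; the following sketch (with hypothetical names, not taken from the paper's materials) writes down the Figure 1 payoffs and the four language types:

```python
from fractions import Fraction

# States, actions and payoffs of the three-state game (Figure 1).
STATES = ["A", "B", "C"]
OPTIMAL = {"A": "W", "B": "X", "C": "Y"}  # the action both players prefer in each state

def payoffs(state, action):
    """Return the (sender, receiver) payoff pair from Figure 1."""
    if action == "Z":
        return (7, 7)            # no project adopted: safe payoff for both
    if action == OPTIMAL[state]:
        return (10, 10)          # the right project is adopted
    return (9, 0)                # wrong project: sender still does well, receiver gets 0

# Language types: three constrained types and one complete type, each
# holding exactly as many messages as there are payoff states.
LANGUAGE_TYPES = {
    "lam_a": {"a", "*", "#"},
    "lam_b": {"b", "*", "#"},
    "lam_c": {"c", "*", "#"},
    "lam_abc": {"a", "b", "c"},
}
assert all(len(msgs) == len(STATES) for msgs in LANGUAGE_TYPES.values())

# States and language types are uniform and independent.
P_STATE = Fraction(1, 3)
P_LANG = Fraction(1, 4)
```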
3.2 Six payoff states and two payoff states
Another condition we implement has 6 payoff states. The payoff table is shown in Figure 2
below. As in the three-state setting, both players receive a payoff of 7 if action Z is chosen.
If the receiver makes an optimal choice given the actual state, both players earn 10. If the
receiver chooses an action other than Z that is not optimal in the current state, the sender
receives a payoff of 9 and the receiver receives 0. There are 7 language types, one containing
the full set of messages with focal meaning, and 6 others with one focal message and five that
are devoid of meaning.
        T       U       V       W       X       Y       Z
A     10,10    9,0     9,0     9,0     9,0     9,0     7,7
B      9,0    10,10    9,0     9,0     9,0     9,0     7,7
C      9,0     9,0    10,10    9,0     9,0     9,0     7,7
D      9,0     9,0     9,0    10,10    9,0     9,0     7,7
E      9,0     9,0     9,0     9,0    10,10    9,0     7,7
F      9,0     9,0     9,0     9,0     9,0    10,10    7,7
Figure 2: The game with six payoff states
The possible states are A, B, C, D, E, and F, and the available actions of the receiver
are T, U, V, W, X, Y, and Z. Payoff types are equally likely: π(A) = π(B) = π(C) = π(D) =
π(E) = π(F) = 1/6. The message space is M = {∗, #, %, @, ∧, a, b, c, d, e, f}, with focal meanings
JaK = “I am type A,” JbK = “I am type B,” etc., and no prior specification of the meanings of
∗, #, %, @, and ∧. The language type space is Λ = {λa, λb, λc, λd, λe, λf, λabcdef}, where λa =
{a, ∗, #, %, @, ∧}, λb = {b, ∗, #, %, @, ∧}, λc = {c, ∗, #, %, @, ∧}, λd = {d, ∗, #, %, @, ∧},
λe = {e, ∗, #, %, @, ∧}, λf = {f, ∗, #, %, @, ∧}, and λabcdef = {a, b, c, d, e, f}. The language
type distribution assigns equal probability to all language types: q(λa) = q(λb) = q(λc) =
q(λd) = q(λe) = q(λf) = q(λabcdef) = 1/7. As in the three-payoff-state case, language types and
payoff types are drawn independently. The sender privately learns her payoff and language
type and sends a message from her language type to the receiver. After seeing the sender’s
message, the receiver takes an action.
The third parameterization we study has two payoff states, A and B. As before, both
players receive a payoff of 7 if action Z is chosen. If the receiver chooses an action other
than Z that is not optimal given the state, the sender receives a payoff of 9 and the receiver
receives 0 as shown in Figure 3.
        X       Y       Z
A     10,10    9,0     7,7
B      9,0    10,10    7,7
Figure 3: The game with two payoff states
The two states are equally likely. The message space includes two messages with focal
meanings, a and b, and one message ∗ without a focal meaning. The language type space
is Λ = {λa, λb, λab} with λa = {a, ∗}, λb = {b, ∗}, and λab = {a, b}.
3.3 Equilibrium Analysis
The sender-receiver games we consider have a finite set of payoff types Θ for the sender, a
finite set R of actions for the receiver, and a finite set of messages, M. There is a unique
action Z ∈ R that is the receiver’s best reply to beliefs concentrated on the (uniform) prior
π over the set of payoff types. There are two kinds of messages, messages whose meaning is
a (single) type and messages that are meaningless. We use semantic evaluation brackets, J·K, to evaluate the meaning of each message. Let J·K : M → Θ ∪ {?} be a mapping that assigns
either a meaning θ ∈ Θ to a message or declares the message meaningless. For each θ ∈ Θ,
let mθ be the message with meaning JmθK = θ. If a message m ∈ M is meaningless, then
JmK =?. We assume that for each θ ∈ Θ there is a message mθ in M. Thus, for each payoff
type θ in Θ there is a message mθ in the universe of all messages, M , whose meaning is θ.
In addition to her payoff type θ the sender privately learns her language type λ ∈ Λ =
2M \∅. Language types are drawn from a commonly known distribution q. The sender chooses
a message given the language she has available and the state she has observed. The sender
strategy σ : Θ × Λ → ∆(M) must satisfy the constraint σ(θ, λ) ∈ ∆(λ) for all θ ∈ Θ and
λ ∈ Λ. Given her language type λ and her payoff type θ, her strategy σ assigns probability
σ(m|θ, λ) to message m ∈ λ.
We say that σ is indicative focal if mθ ∈ λ ⇒ σ(mθ|θ, λ) = 1, and σ(mθ′|θ, λ) = 0 for
all θ′ ≠ θ and all λ ∈ Λ. This is the property that the sender is as truthful as possible. She
will indicate her type if her language allows her to do so and never pretend to be another
type. If her payoff type is θ and the message mθ whose meaning is θ, JmθK = θ, is available,
she will send message mθ; if her payoff type is θ and the message mθ is not available, she
will not pretend that her payoff type is θ′ by sending message mθ′ . In our experimental
condition with three states, for example, a plausible indicative focal strategy would be to
send messages a, b, and c, when the true state is A,B, and C respectively whenever the
message corresponding to the true state is available, and to send ∗ when the appropriate
message is not available.
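An indicative focal sender strategy for this three-state condition can be sketched as a function of the payoff state and the language type (a hypothetical encoding; the lowercase letters are the focal messages of Section 3.1):

```python
import random

MEANINGLESS = {"*", "#"}  # messages with no prior meaning in the three-state game

def indicative_focal(state, language):
    """Send the focal message for the true state when it is available;
    otherwise send a (here: randomly chosen) meaningless message, and
    never the focal message of another state."""
    focal = state.lower()  # the focal message for state "A" is "a", etc.
    if focal in language:
        return focal
    # Constrained language types always hold both meaningless messages.
    return random.choice(sorted(language & MEANINGLESS))

# Truth available: report it. Truth unavailable: stay meaningless.
assert indicative_focal("A", {"a", "*", "#"}) == "a"
assert indicative_focal("C", {"a", "b", "c"}) == "c"
assert indicative_focal("B", {"a", "*", "#"}) in MEANINGLESS
```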
The receiver’s strategy ρ : M → ∆(R) prescribes a probability distribution over actions
conditional on the message he has received. Thus, given any message m ∈ M , ρ(r|m) is
the probability that the receiver assigns to action r ∈ R. We restrict attention to games in
which the receiver has a unique best reply, rθ, whenever his beliefs are concentrated on θ,
and rθ ≠ rθ′ for θ ≠ θ′.
We say that the receiver strategy ρ is imperative focal if ρ(rθ|mθ) = 1 for all mθ. That
is, a receiver strategy is imperative focal if the receiver acts as though he is credulous in
response to meaningful messages. In response to any meaningful message mθ it assigns the
unique best reply to believing that the payoff type is θ.
The receiver’s strategy is said to be neutral if ρ(Z|m) = 1 for all m with JmK =?.
Under a neutral strategy, the receiver chooses Z in response to meaningless messages that
he receives. A strategy profile (σ, ρ) is neutral if ρ is neutral. Similarly, a (perfect Bayesian)
equilibrium (σ, ρ, µ), where µ denotes the belief system, is neutral if ρ is neutral.
A strategy profile (σ, ρ) is weakly focal if σ is indicative focal and ρ is imperative
focal. An equilibrium (σ, ρ, µ) is weakly focal if the corresponding strategy profile is weakly
focal.1 A strategy profile is focal if it is weakly focal and neutral. Thus, under a focal
strategy profile, the sender is as truthful as possible, the receiver is credulous after meaningful
messages and takes the pooling action after meaningless messages. Therefore, under a focal
strategy profile the sender always sends a message corresponding to the true state if it is
available and sends a meaningless message otherwise; the receiver responds to meaningful
messages with the unique best reply given the meaning of that message and responds to
meaningless messages with the pooling action.
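The common expected payoff from a focal strategy profile in each game can be checked by conditioning on the sender's language type; a minimal sketch (payoffs 10 and 7 as in the games of Section 3):

```python
from fractions import Fraction

def focal_payoff(s):
    """Expected common payoff from a focal strategy profile in the s-state game.

    With probability 1/(s+1) the sender has the complete language, names the
    true state, and the optimal project yields 10. Each constrained type can
    name its true state with probability 1/s (payoff 10); otherwise it sends
    a meaningless message, after which the neutral receiver plays Z (payoff 7).
    """
    complete = Fraction(1, s + 1) * 10
    constrained = Fraction(s, s + 1) * (Fraction(1, s) * 10 + Fraction(s - 1, s) * 7)
    return complete + constrained

# 9 in the two-state game, 8.5 in the three-state game, 55/7 ≈ 7.86 in the
# six-state game: all above the pooling payoff of 7.
assert focal_payoff(2) == 9
assert focal_payoff(3) == Fraction(17, 2)
assert focal_payoff(6) == Fraction(55, 7)
```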
Finally, an equilibrium (σ, ρ, µ) is a pooling equilibrium if ρ(Z|m) = 1 for all m that
are sent with positive probability in equilibrium.
We believe that neutral equilibria are the most plausible ones for the three experimental
conditions. The following result shows that all neutral equilibria must be pooling equilibria,
establishes that there are no focal equilibria, shows that the payoff from focal strategies
exceeds that from pooling equilibria and shows that there are non-focal equilibria with
significantly higher payoffs to both players than the pooling payoff. The bottom line is that
we predict pooling, even though both naive focal strategies and non-focal equilibria yield
higher payoffs than pooling.
1Note that an indicative focal equilibrium is imperative focal.
Proposition 1 In all three games:
1. All neutral equilibria are pooling equilibria.
2. The (common) payoffs from focal strategy profiles (9 in the two-state game, 8.5 in the
three-state game and 7.86 in the six-state game) exceed those from pooling equilibria (7
in all three games).
3. There are no focal equilibria.
4. There are equilibria in which the pooling action Z is not taken and payoffs exceed those
from pooling equilibria: Both the two-state and the three-state game have an equilibrium
with sender payoff 9.83 and receiver payoff 8.3. The six-state game has an equilibrium
with sender payoff 9.88 and receiver payoff 8.81.
Proof:
Part 1: In the two-payoff type game, suppose the receiver employs a neutral strategy, but
there is a message after which the receiver responds with an action other than action Z.
Without loss of generality, let that message be a. Then language type λa will send a regardless of the payoff type, since the sender is always worst off if the receiver chooses Z.
The posterior probability on payoff type θ following message a is maximized if payoff type
θ sends message a when all messages are available, and no other payoff type ever sends a.
Suppose, again without loss of generality, that only payoff type A sends message a when all
messages are available. Then the posterior probability of the sender having payoff type A
following message a is 2/3 < 7/10, and hence the unique best reply to message a is action Z. A
similar argument holds for message b. The best response to any focal message is to play Z.
The argument for the three- and six-payoff-type games is nearly identical, except that the
maximal posterior probability on any payoff type is even lower than 2/3.
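The bound used in Part 1 can be verified numerically for the two-state game; a sketch of the case that maximizes the posterior (only payoff type A sends a when all messages are available):

```python
from fractions import Fraction

# Two-state game: language types lam_a, lam_b, lam_ab, each with probability 1/3;
# payoff states A and B each with probability 1/2.
p_lang = Fraction(1, 3)
p_state = Fraction(1, 2)

# lam_a sends a in both states (to avoid Z); among complete-language senders,
# only payoff type A sends a. This maximizes the posterior on A after a.
p_a = p_lang * 1 + p_lang * p_state              # total probability that a is sent
p_A_and_a = p_lang * p_state + p_lang * p_state  # a sent jointly with state A

posterior_A = p_A_and_a / p_a
assert posterior_A == Fraction(2, 3)

# Believing a pays 10 * 2/3 < 7, so the receiver's best reply to a is still Z.
assert 10 * posterior_A < 7
```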
Part 2: Consider state θ. For any language type λ with mθ ∈ λ, the common payoff from a
focal strategy is 10. Otherwise, the common payoff from a focal strategy is 7. Since
there is positive probability that mθ ∈ λ, the expected payoff exceeds the pooling equilibrium
payoff 7, which we know from Part 1 is the neutral equilibrium payoff. The common focal
strategy payoff is 9 in the two-state game, 8.5 in the three-state game and 7.86 in the six-state
game.
Part 3: Every focal equilibrium is neutral by definition. Therefore, it follows from Part
1 that a focal equilibrium has to be a pooling equilibrium. This is incompatible with the
equilibrium being imperative focal.
Part 4: We provide an example of a non-pooling equilibrium for each of the three games.
For the two-state game: Consider the following strategy profile : σ(a|A, λa) = σ(b|A, λb) =
if it were true, and the choice resulted in payoff of 0.
The estimates show that receiving a focal message makes it less likely that a receiver
chooses Z, indicating that messages are believed at least to some extent. Receiving a focal
message makes the play of Z 57% less likely in 2-state, 24% less likely in 3-state,
and 26% less likely in 6-state. Being burned in the last ten periods has no effect. This means that
the increasing frequency of the play of Z over time is a general trend, rather than a series of
short-term changes as a result of receiving a recent payoff of 0. The coefficient on Period is
significantly positive in all conditions, revealing a trend toward greater play of Z. The trend
is slower in 2-state than in the other treatments. The estimates confirm the impression
from the figures that 2-state is more conducive to successful coordination than the other two
conditions.
Table 4: Fraction of time lying when truth unavailable

State   Lying ratio when truth unavailable
2       0.483
3       0.405
6       0.315
5.3 Explaining the patterns in the data
As we have seen, there is more honest behavior on the part of senders than would be con-
sistent with our level-k analysis, or than would be optimal given receiver behavior. The latter
is the case because both sender levels above level-0 should lie, and it is not reasonable
to believe that many senders can still be considered level-0 after playing this relatively simple
game for many periods. Moreover, there is a widespread tendency for receivers to follow
senders’ recommendations. In the two-state treatment, this combination of sender honesty
and receiver credulousness leads to payoffs higher than in equilibrium.
To account for this pattern, suppose that some senders prefer being truthful to lying,
in line with Crawford [15]. Define an honest sender as one who uses an indicative focal
strategy. Honest senders are truthful when truth is available and send a meaningless message
otherwise. In contrast, a strategic sender will send a message corresponding to the true state
if it is available and a focal message corresponding to another state when the true message
is not available.
The payoff to a receiver from playing action Z is equal to 7 in all games. In each
of the conditions, there are s + 1 language types, where s is the number of states.
2 Using the number of times burned in the last 10 periods in the specification, rather than a dummy variable for whether one has been burned or not, generates the same results, in that the same variables are significant at the same thresholds of significance.
The true message is available in 2 of the s + 1 language profiles. Consider a situation in
which all senders are either honest or strategic. Denote the proportion of honest senders as
h, 0 ≤ h ≤ 1.
Figure 10 shows that a substantial fraction of senders are consistently honest or strategic.
The figure shows the percentage of instances, in the last 30 periods of each of the two 60-
period segments of the sessions, in which individuals lied and did not lie when the truth was
unavailable. The data are clustered at the boundaries, indicating that a considerable share
of individuals behaved consistently.
Given the presence of both honest and dishonest types, the expected payoff to a receiver
of following the message equals

10 · 2/(s+1) + 7 · (s−1)/(s+1) · h + 0 · (s−1)/(s+1) · (1−h)    (1)
The first term is the payoff to the receiver from successful coordination, 10, multiplied by
the probability that the sender has the true message available in their language profile. The
second term is the payoff from the receiver playing Z in response to a message devoid of
meaning, if the sender is of an honest type. The third term is the payoff from following
the sender’s message, if it has focal meaning, the sender is dishonest, and the sender does
not have the true message available. This is compared to the payoff from not following the
message and playing it safe instead, which equals 7. The payoff from following the sender’s
recommendation is greater than that of playing Z if

h ≥ (7 − 20/(s+1)) / (7 · (s−1)/(s+1))    (2)
In the three conditions of our experiment, s = 2, 3, and 6. Thus, the fraction of senders
h that must behave honestly to make it optimal for the receiver to believe the message is 14.3%
in the two-state condition, 57.1% in the three-state condition, and 82.9% in the six-state
case. Table 4 indicates that in all three conditions, the percentage of instances in which the
sender lies is between 30% and 50%. This means that it is optimal for the receiver to believe
the sender in the two-state case, but not in the three-state and six-state cases, which is
consistent with the data.3
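These percentages follow mechanically from rearranging inequality (2); a small check (the function name is ours):

```python
# Minimum share h* of honest senders that makes following a focal message
# at least as good as the safe action Z, rearranging inequality (2):
# h* = (7*(s+1) - 20) / (7*(s-1))
def honest_threshold(s: int) -> float:
    return (7 * (s + 1) - 20) / (7 * (s - 1))

for s in (2, 3, 6):
    print(f"{s}-state: {honest_threshold(s):.1%}")
# 2-state: 14.3%, 3-state: 57.1%, 6-state: 82.9%
```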
Another pattern that is apparent in Figure 10 is the greater incidence of lying in the two-
state case than in the other two environments. This is consistent with a greater benefit from
lying in the two-state case than in the other two. As Figure 4 shows, a greater proportion
of receivers believe the sender’s message in two-state than in the other environments. This
affects the relative expected payoffs from lying and from being honest. The proportion
3 Figure 3 shows that there is some endogeneity in the extent of lying aversion. This is consistent with individuals being more willing to lie if the private benefits from lying are greater.
of receivers whose choices are consistent with believing the message is 0.70, 0.48, and 0.38,
respectively, for 2-state, 3-state, and 6-state.
Figure 10: Ratio of sending messages honestly
We also calculate the potential lying cost for senders that would make them indifferent
between lying and being honest based on the proportion of receivers who chose the safe
option in the last 10 periods. The indifference conditions are as follows:

2-state: 0.70 × 9 + 0.30 × 7 − 7 = 1.40    (3)

3-state: 0.48 × 9 + 0.52 × 7 − 7 = 0.96    (4)

6-state: 0.38 × 9 + 0.62 × 7 − 7 = 0.76    (5)
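The three values can be reproduced from the following rates reported above (0.70, 0.48, and 0.38); the dictionary below is our own bookkeeping:

```python
# Lying cost that leaves a sender indifferent: a lie pays 9 when the
# receiver follows (probability p) and 7 when the receiver plays safe,
# against a sure payoff of 7 from being honest.
follow_rate = {2: 0.70, 3: 0.48, 6: 0.38}   # last 10 periods, by condition

for s, p in follow_rate.items():
    cost = p * 9 + (1 - p) * 7 - 7
    print(f"{s}-state: {cost:.2f}")
# 2-state: 1.40, 3-state: 0.96, 6-state: 0.76
```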
The threshold is lowest in the 6-state case, followed in turn by the 3-state and 2-state
conditions. This means that the cost that would make a sender indifferent between lying
and telling the truth is highest in the 2-state case and lowest under 6-state. Thus, if the
distribution of lying costs is similar across the three treatments, there should be more lying
in the 2-state treatment than in the 3-state treatment, and the least lying should occur in the
6-state case. This pattern is consistent with the data and with the notion that the percentage
of players who are honest is endogenous.
6 Conclusion
In this paper, we have considered a setting in which language may be degraded over time.
The meaning of messages may erode. The potential opportunistic use of language on the
part of senders can create risks for receivers. If they naively follow the recommendations in
the focal language and the sender tries to mislead them, receivers earn low payoffs. In
response, they ignore messages from senders, removing the incentive for senders to use the
language in a meaningful way. This equilibrium outcome comes at a cost to both players,
who would benefit if they could find a way to guarantee the truthfulness of the messages
that are sent.
We find that the complexity of the environment, measured in terms of the number of
possible states, is an important factor determining the extent to which language degrades
over time. In the six-state case, our most complex condition, behavior converged to close
to the pooling equilibrium outcome, as predicted. In the two-state case, messages largely
retained their meaning in the sense that opportunistic behavior was sufficiently limited to
make it profitable for receivers to believe the senders’ messages and respond accordingly.
While, at least for receivers, the data, at first glance, are consistent with a level-k model
in which there are two-thirds level-1 players and one-third level-2 players (consistent with
prior findings in the literature), we believe that there is more at work than heterogeneity in
rationality and beliefs. The data are also consistent with a fraction of individuals who have a
preference for behaving honestly. The game can be viewed as one in which there is an honest
type and a strategic type of the sender. The critical percentage of honest senders required to
make it optimal for receivers to follow the messages they receive is far lower in the two-state
case than the six-state. The data show that the critical value is exceeded in the two-state
case but not in the six-state case. Thus, the presence of a fraction of individuals who are
honest, coupled with a relatively small number of instances in which lying is profitable, leads
to a level of coordination considerably greater than would occur in equilibrium. This occurs
in the two-state case, and the expressions in the language retain their meaning. On the other
hand, the percentage of senders who are honest does not exceed the threshold in the three-state
and six-state cases and, as a consequence, the meaning of expressions in the language does degrade.
Figure 11: Screen shot for sender in 2-state condition
Figure 12: Screen shot for receiver in 2-state condition
References

[1] Arrow, Kenneth J. [1974], The Limits of Organization, Norton, New York, NY.

[2] Bechky, Beth [2003], “Sharing Meaning Across Occupational Communities: The Transformation of Understanding on the Production Floor,” Organization Science 14, 312–330.

[3] Blume, Andreas, Douglas V. DeJong, Yong-Gwan Kim and Geoffrey B. Sprinkle [1998], “Experimental Evidence on the Evolution of Meaning of Messages in Sender-Receiver Games,” American Economic Review 88, 1323–1340.

[4] Blume, Andreas [2000], “Coordination and Learning with a Partial Language,” Journal of Economic Theory 95, 1–36.

[5] Blume, Andreas, Douglas V. DeJong, Yong-Gwan Kim and Geoffrey B. Sprinkle [2001], “Evolution of Communication with Partial Common Interest,” Games and Economic Behavior 37, 79–120.

[6] Blume, Andreas [2004], “A Learning-Efficiency Explanation of Structure in Language,” Theory and Decision 57, 265–285.

[7] Blume, Andreas and Oliver J. Board [2013], “Language Barriers,” Econometrica 81, 781–812.

[8] Bruner, Justin, Cailin O’Connor, Hannah Rubin, and Simon M. Huttegger [2014], “David Lewis in the Lab: Experimental Results on the Emergence of Meaning,” Synthese 88, 1–19.

[9] Cai, Hongbin, and Joseph Tao-Yi Wang [2006], “Over-Communication in Strategic Information Transmission Games,” Games and Economic Behavior 56, 7–36.

[10] Camerer, Colin F., Teck-Hua Ho, and Juin-Kuan Chong [2004], “A Cognitive Hierarchy Model of Games,” The Quarterly Journal of Economics 119, 861–898.

[11] Costa-Gomes, Miguel A., and Vincent P. Crawford [2006], “Cognition and Behavior in Two-Person Guessing Games: An Experimental Study,” American Economic Review 96, 1737–1768.

[12] Crawford, Vincent P. and Hans Haller [1990], “Learning How to Cooperate: Optimal Play in Repeated Coordination Games,” Econometrica 58, 571–595.

[13] Crawford, Vincent P., and Nagore Iriberri [2007], “Fatal Attraction: Salience, Naivete, and Sophistication in Experimental Hide-and-Seek Games,” American Economic Review 97, 1731–1750.

[14] Crawford, Vincent P. and Joel Sobel [1982], “Strategic Information Transmission,” Econometrica 50, 1431–1451.

[15] Crawford, Vincent P. [2003], “Lying for Strategic Advantage: Rational and Boundedly Rational Misrepresentation of Intentions,” American Economic Review 93, 133–149.

[16] Cremer, Jacques, Luis Garicano and Andrea Prat [2007], “Language and the Theory of the Firm,” Quarterly Journal of Economics 122, 373–407.