
Now that you mention it: Awareness Dynamics in Discourse and Decisions

Michael Franke & Tikitu de Jager∗

ILLC, Universiteit van Amsterdam
[email protected], [email protected]

Working draft, July 2008

Please do not cite or quote without permission. All comments gratefully received.

Abstract

We model unawareness of possibilities in decision-making and (linguistic) pragmatic reasoning. A background model is filtered through a state of limited awareness to provide the epistemic state of an agent who is not attending to all possibilities. We extend the standard notion of awareness with assumptions (implicit beliefs about propositions the agent is unaware of) and define a dynamic update for ‘becoming aware.’ We give a propositional model and a decision-theoretic model, and suggest that decision problems should in general be seen as filtered models in this sense, describing only those features of the situation which the modeller considers relevant and the agent is aware of. We show how pragmatic relevance reasoning can be described in this framework, extending a standard definition to the case of awareness updates. An utterance can be relevant even if semantically uninformative, if it brings relevant alternatives to awareness. This gives an explanation for the use of possibility modals and questions as hedged suggestions, bringing possibilities to awareness but only implicating their degree of desirability or probability.

∗The authors are listed in alphabetical order.


Contents

1 Introduction
  1.1 Pragmatic considerations (Farmer Pickles bakes a cake)
  1.2 Paper overview

2 Formalising unawareness
  2.1 The propositional case
  2.2 Decision problems and awareness dynamics
    2.2.1 The basic picture
    2.2.2 Refinement 1: Individuating states, unawareness without assumptions
    2.2.3 Refinement 2: Unawareness of outcomes
    2.2.4 The final model
    2.2.5 Example: Branestawm’s allergies come to awareness
  2.3 Assumptions and associations

3 Information dynamics under awareness
  3.1 The propositional case
  3.2 Updates for decision problems
  3.3 Old information in the light of new awareness

4 Awareness dynamics, decisions, and pragmatics
  4.1 Bob & Pickles revisited
  4.2 Decision-theoretic relevance
  4.3 Bob & Pickles made formal (at last)

5 Related work
  5.1 Formal awareness models
  5.2 Unawareness in linguistics

6 Conclusion


1 Introduction

Example 1: Little Bo Peep. Little Bo Peep has lost her keys and doesn’t know where to find them. She’s checking her pockets, the nail behind the door, beside the telephone, and so on, but the keys are nowhere to be found. Frustrated and pouting, Bo slams onto the sofa. From his corner Little Jack Horner helps her out:

jack: Did you leave them in the car when you came in drunk last night?

Little Bo slaps her forehead (and his, impudent little scamp) and goes out to the car. At this point of the story, it is equally reasonable to imagine that the search ends successfully, or that Bo’s frustration continues until she finds her keys several days later in the sugar jar.

Nobody should have trouble understanding at an intuitive level the kinds of changes Little Bo Peep’s epistemic state undergoes in this example. Bo didn’t think of the car as a possible place to look for the keys, but when Jack mentioned it to her, her oversight struck her as foolish and she promptly took the proper action. Should anybody be worried about a situation as commonplace as this?

Peculiarities show up, though, once we take a closer look. Most importantly, Jack manages to change Bo’s epistemic state in quite a significant way, simply by asking her a question. Since under any standard semantic analysis questions are uninformative, there is some explaining to be done here. Before we get sidetracked by considering rhetorical questioning or possible pragmatic analyses, though, consider the following alternatives to Jack’s helpful observation:

(1) Jack: Do you think it’s possible the keys are in the car?

(2) Jack: The keys might be in the car.

(3) Jack: [Not paying attention] Hey, this tv show is really funny, this guy is looking everywhere for his keys and they were in his car the whole time!

(4) Advertiser on tv: Do you forget your keys in the car? You need the ExtendaWristLock KeyChain! (Patent pending.) Order now and pay just $19.99!

(5) Passing motorist: Honk honk!

Bo’s response to any of these might quite naturally be the same: she slaps her forehead at her foolishness and goes immediately to check the car. While the first two should be amenable to pragmatic explanation, this clearly won’t do for the others.

Intuitively what’s going on here is that Bo is failing to consider a possibility, which when brought to her attention she realises should not be ruled out. We will say that Bo is unaware of the possibility that the keys might be in the car, and we will investigate this kind of epistemic attitude both formally and in linguistic applications in this paper. Indeed, something like our notion of awareness has been approached sidelong in the linguistic literature, but until quite recently has received no thorough treatment. The recent exception is work by Eric Swanson on the meaning and use of the epistemic modal ‘might’ ([Swa06b], extended in [Swa06a]), which is very close in spirit, but not in specifics, to our approach. Outside linguistics, however, a formal notion of unawareness of possibilities has been explored very fruitfully in theoretical computer science (the standard reference for the origin of the field is [FH88]) and has recently been applied variously in rational choice theory, i.e., game and decision theory ([Fei04; Fei05; HR06]; we describe other related work in Section 5). We aim to combine these insights with a linguistic treatment, and show the surprising range of outstanding linguistic puzzles that the notion is applicable to.

So what is unawareness of a contingency? Here are three basic interrelated properties of the notion in question.

Slogan 1: Unawareness is not uncertainty. Little Bo Peep’s behaviour (pouting, sitting on the couch) does not indicate uncertainty about whether the keys are in the car. At the point of the story where Bo gives up the search, it is implausible to assume that she puts any credence, however small, on the possibility that the keys are in the car; if she did, she would surely have gone and checked. Judging from her behavior, if anything, it seems as if Bo believes that the keys are not in the car. We will say that Bo has an implicit belief that the keys are not in the car and argue that, firstly, implicit beliefs are different from the more familiar explicit beliefs (see the next two slogans) and that, secondly, unawareness of a possibility typically goes together with an implicit belief we call an assumption.1 Implicit beliefs are typically assumptions of ‘normality’ (we’ll discuss them in more detail, including the connection to closed-world reasoning, in Section 2.3).

Slogan 2: Unawareness is not introspective. Although Bo’s behavior indicates an implicit belief, she does not explicitly (or consciously) believe that the keys are not in the car. In fact, she holds no explicit beliefs about the car, not even the tautological belief that the keys are either in the car or elsewhere. A self-referential way to get to grips with this fundamental intuition is to say that she is unaware of her own implicit beliefs. This failure of (negative) introspection2 leads us to a definition of awareness in terms of the language Bo uses: if she were asked to describe all her beliefs about the keys she would not mention the car at all. (If prompted she might start enumerating tautologies: “The keys aren’t on the table, so I suppose logically speaking they’re either on the table or not on the table”; she would never, however, extend this sequence with “The keys are either in the car or not in the car”.) The formal model we give in Section 2 will distinguish syntactically between the agent language Bo would use spontaneously to describe her explicit beliefs and the language we as modellers use to describe her implicit beliefs.

1 Although the point will not be appreciated until we have presented our model, in the interests of fairness we should point out that the slogan “Unawareness is not uncertainty” has a slightly different connotation in the rational choice literature. We describe the differences between the two notions in Section 5.

2 She does not know, but does not know that she does not know. [MR94] and [DLR98] show that unawareness must be more than simply failure of negative introspection, if we are to capture the properties we want. Still, starting with this notion is a good way to get the intuitive juices flowing in the right direction.


Slogan 3: Unawareness is easily overturned. Bo’s implicit belief is very fragile, in a way that her explicit beliefs are generally not: it does not take any particularly convincing argument to overturn it. The example shares with other crucial examples in this paper what we might call the ‘forehead-slap property’: in becoming aware of her implicitly held belief, Bo realises the mistake she made in overlooking a certain (intuitively: relevant) possibility. We will concern ourselves exclusively with unawareness of this kind in this paper: unawareness through inattentiveness or absent-mindedness.3 Indeed, as the alternative (5) in Example 1 shows, overturning this kind of unawareness need not even involve anything linguistic or intentional. But where language is concerned, we argue that the mere mentioning of some possibility that an agent is unaware of is sufficient for this unawareness to be overturned. This is why it is not possible to talk to Bo about her implicit beliefs: if Jack were to ask her the question in (1), we might imagine her answer to be something like: “Well, now that you ask me I do, but I wouldn’t have if you hadn’t.”

1.1 Pragmatic considerations (Farmer Pickles bakes a cake)

Not being attentive to all possibly relevant factors in, say, a decision-making situation such as Bo’s key-search is anything but unusual. Similarly, it is perfectly natural for conversationalists to attend to possible unawareness. The most interesting cases seem to occur when a speaker has enough information to motivate an awareness update, but not enough to say anything stronger (such as “I saw your keys in the car,” “You always leave your keys in the car,” or similar). Since questions usually don’t carry assertive force, they are often a good way to produce only an awareness update, with no corresponding speaker commitments. (Other possibilities include possibility statements with “might” or “could”, explicit epistemic hedging as in “They’re not in the car, I presume”, and so on.)

It is because Jack finds it likely that Bo is unaware of the car as a possible hide-out for the keys that he asks his question; we feel, even more strongly, that Jack first and foremost intends to make Bo aware with his question (he does not want an answer in the first place; he is not himself interested in the information he is ‘officially’ asking for). Moreover, it is natural for Jack to assume that thus making Bo aware will have a relevant impact on her decisions to act. And Bo, in turn, might recognize that Jack is making her aware of something which he deems relevant to the case at hand. This recognition of the nature of Jack’s conversational move might, of course, trigger (or at least license) further pragmatic reasoning on Bo’s side: for instance, Bo might conclude that, since Jack is obviously a helpful (though impudent) little scamp, he himself must deem it sufficiently likely that the keys might be in the car.

In Bo’s case the awareness update produces the desired effect without needing any pragmatic reasoning (this is why a passing motorist can unintentionally trigger exactly the same update for Bo). However this need not be the case, as shown in the following example.

Example 2: Bob the Baker. Bob (who is an expert baker) is visiting his friend Farmer Pickles (who isn’t).

3 Again, this distinguishes our approach from the rational choice literature; see Section 5.


pickles: I was going to bake a cake but I haven’t got any eggs!
bob: Did you think of making shortbread instead?
pickles: I didn’t, in fact I didn’t even know that you don’t need eggs to make shortbread! Thanks, Bob!

Since Pickles isn’t an experienced baker, his unawareness of shortbread as an option conceals a real uncertainty about whether the recipe requires eggs or not. Simply overturning this unawareness by accident (as the passing motorist might have done for Bo) would not produce the effect we see here: flipping through his cookbook he might see a photograph of shortbread, but he would have to check the recipe to see whether there are eggs in the ingredients list or not. Assuming that Bob is being helpful, though, Pickles can reason as follows: “Bob is deliberately bringing up a possibility because he thinks I’m overlooking it; that’s only helpful if I should in fact consider it; then he should at least hold it possible that shortbread doesn’t require eggs; but Bob is an expert baker, so he wouldn’t be uncertain about such things; so he must believe that you don’t need eggs to make shortbread.” Pickles’ response, showing that he makes this inference, seems perfectly natural; but it can only be justified (that crucial first step) by taking the sort of unawareness perspective we argue for.

1.2 Paper overview

We will be concerned with the dynamics of an agent’s awareness and its role in conversation in this paper. Our main aim is to apply insights from the study of unawareness in rational choice theory to linguistics. In particular, we would like to show how awareness dynamics are applicable to a surprising range of outstanding linguistic puzzles. Towards this end, we seek to represent unawareness in (multi-speaker) discourse and investigate how and to what effect conversation changes the epistemic states of conversationalists.

In the next section we will introduce a formal representation of unawareness and implicit belief. Section 3 describes the dynamics of awareness updates and their interaction with factual information growth, while Section 4 works out in detail an example of pragmatic reasoning based on awareness dynamics. Finally, we situate our work in the intersection between linguistics and rational choice in Section 5, before Section 6 concludes.

2 Formalising unawareness

We’ll start by presenting a simple propositional model of awareness and its dynamics, just enough to model Little Bo Peep’s predicament, in Section 2.1. We will then extend this basic propositional treatment to decisions under growing awareness in Section 2.2. In both sections, we will start by developing the basic formal notions alongside a few intuitive examples; in the decision-theoretic case in particular the definitions will undergo some revision as we introduce complications. For the reader’s convenience, all final definitions are collected at the end of each section, in Definitions 1 and 2 (for the propositional case) and Definitions 3–7 in Section 2.2.4 (for decision problems in full detail).


2.1 The propositional case

We start with a set P of proposition letters, representing as usual statements about how the world might be. For Bo these express the location of the keys: “(they’re in her) pocket”, “nail”, “phone”, “car”, and “sugar-jar” and so on. A possible world w is associated with a valuation function vw : P → {0, 1} as is usual. Since in fact (and according to Bo’s most fundamental beliefs) the keys can only be in one place at a time, we only need to look at a few of the combinatorially possible worlds, and we can give them the same names as the propositions themselves: “pocket” is the world where the proposition “pocket” is true and none of the others is. We’ll call this full set W: W = {pocket, nail, phone, car, sugar-jar}.

In a standard model, then, Bo’s epistemic state would be the set of worlds she has not ruled out by observation or other trustworthy information sources. We’ll call this her information set σ. We would standardly assume that Bo’s initial information set σ0, before her search starts, is simply W, i.e., her information rules out none of the worlds in question (this may of course change as she learns more). However, according to our observation of Bo’s behaviour, her epistemic state as our story opens seems instead to be {pocket, nail, phone} (these are the places that she goes on to check before giving up in frustration). We’ll capture this by filtering her information set through an awareness state α, which models the proposition letters she is unaware of, and the assumptions she holds about their valuation. (Strictly this might be better named an “unawareness state”; we will use the terms interchangeably.) Formally, an awareness state α is a pair 〈U, v〉 where U ⊆ P is the set of unmentionables (proposition letters the agent is unaware of; we will mention them frequently but the agent herself may not) and v : U → {0, 1} is a valuation function giving the assumptions the agent holds. In Bo’s case, we initially have U = {car, sugar-jar} and v = {car ↦ 0, sugar-jar ↦ 0}, i.e., she assumes (in typical ‘default’ fashion) that the keys are not in the car and not in the sugar jar. Taking a different perspective, an awareness state α specifies a set of worlds Wα = {w ∈ W ; v ⊆ vw}, those worlds in W which agree with the assumptions. This latter, equivalent view of awareness states facilitates the definition of filtering through awareness: we’ll write σ↾α for Bo’s information set filtered through her awareness, and define σ↾α = σ ∩ Wα. Taken together, σ captures the complete factual information an agent like Bo has: σ would be her epistemic state if she were aware of all relevant contingencies; Wα is the set of worlds she entertains given her (possibly limited) awareness; and σ↾α is the subset of these worlds that her information does not rule out, the ones which generate her beliefs.

As our story opens, Bo has no factual information, but she is unaware of some propositions: σ0↾α0 = Wα0 = {pocket, nail, phone}. But if this is Bo’s epistemic state, she should believe that the keys are not in the car. That’s true from the modeller’s perspective (her implicit belief), but her explicit beliefs shouldn’t mention the car at all. As we argued in connection with the slogan “Unawareness is not introspective”, we rely on a syntactic notion to capture this: a belief formula φ can be explicit with respect to an epistemic state under unawareness σ↾α only if φ does not use any proposition letters in U. These are the unmentionables according to Bo’s awareness, and her explicit beliefs must not mention them. (This story is explicitly intensional: what matters is not the extension of the proposition letters, but whether their names appear in U. This is what allows us to exclude a tautology such as “The keys are either in the car or not in the car” from Bo’s explicit beliefs.)

Let’s now try to model the different kinds of updates given in the story: factual and awareness. Given Bo’s initial information set σ0 = W and awareness Wα0 = σ0↾α0 = {pocket, nail, phone}, she begins to systematically investigate the three places she is aware of as possible hide-outs for the keys, and eliminates them one by one. Now σ1 = {car, sugar-jar}, but she has not gained awareness of anything new: α1 = α0, so σ1↾α1 = ∅! This explains Bo’s frustration: as far as she can see, she is in the inconsistent state. However, inconsistency with unawareness is not as destructive as in the standard picture: it’s quite natural for Bo to realise that there is (or that there must be) some possibility she has missed. Her frustration arises because nothing in the situation gives her any guidance as to what this might be, so there’s no reasonable action she can take to get out of the trap she’s entered.4 But then comes Jack’s offhand question from his corner, and the scales fall from Bo’s eyes! That is, on hearing an expression mentioning the proposition letter “car”, Bo becomes aware of it: it disappears from her (un)awareness state. So α2 = 〈{sugar-jar}, {sugar-jar ↦ 0}〉 and Wα2 = {pocket, nail, phone, car}; σ1↾α2 = {car}, and it’s easy to see why Bo immediately runs to check the car.

For convenience we collect here the formal features of this model.

Definition 1: Propositional unawareness. Let P be a set of proposition letters and W a set of worlds, with each world w ∈ W associated with a valuation function vw : P → {0, 1}. An epistemic state for an agent is a pair 〈σ, α〉 with σ an information set (a subset of W representing the worlds that her information has not ruled out) and α an awareness state. The awareness state α = 〈U, v〉 specifies her unmentionables U ⊆ P and assumptions v : U → {0, 1}, that is, the proposition letters she is unaware of and the truth-values she unconsciously assumes they hold. The state α gives rise to a set of worlds Wα = {w ∈ W ; v ⊆ vw}, the worlds entertained by the agent. An information state under unawareness σ↾α (also to be read as σ filtered through α) is simply σ ∩ Wα.

For clarity we define here also the syntactic sublanguages we use, although these will feature only implicitly in the rest of the paper.

Definition 2: Syntax and belief statements. The language we define contains two belief operators for each agent: Bi (for implicit belief) and Be (for explicit belief). An awareness state α = 〈U, v〉 defines an agent language Lα, the language inductively defined using only the mentionable proposition letters P \ U and the explicit belief operator Be. Implicit belief corresponds to belief in a standard model: Bi(φ) holds for an agent in epistemic state σ↾α iff σ↾α supports φ. However, explicit belief has a stronger requirement: Be(φ) holds in σ↾α iff Bi(φ) holds and φ ∈ Lα. (Under this definition all explicit beliefs are implicit; we will often use “implicit belief” loosely where “strictly implicit belief” would be more correct.)

4 Another natural reaction would be to search again the places she has already looked. This shows another way that inconsistency might not be fatal: if some of the information leading to it turns out to be incorrect. However, this perspective requires belief revision as opposed to update, and has little to do with awareness.
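To make the propositional machinery concrete, here is a minimal executable sketch of Definition 1 on Bo’s example (a Python illustration of our own; all names are invented):

    # A sketch (ours, not the authors') of Definition 1: worlds, information
    # sets, awareness states, and filtering, on Bo's key-search example.
    P = ["pocket", "nail", "phone", "car", "sugar-jar"]
    # Each world is named after the single proposition letter true at it.
    W = {name: {p: int(p == name) for p in P} for name in P}

    def entertained(alpha):
        """W_alpha: the worlds compatible with the assumptions v."""
        _U, v = alpha
        return {n for n, vw in W.items()
                if all(vw[p] == t for p, t in v.items())}

    def filtered(sigma, alpha):
        """sigma filtered through alpha, i.e. sigma ∩ W_alpha."""
        return sigma & entertained(alpha)

    # Bo's initial state: no factual information, unaware of car and sugar-jar.
    sigma0 = set(W)
    alpha0 = ({"car", "sugar-jar"}, {"car": 0, "sugar-jar": 0})
    assert filtered(sigma0, alpha0) == {"pocket", "nail", "phone"}

    # After the fruitless search (pocket, nail, phone eliminated by checking):
    sigma1 = {"car", "sugar-jar"}
    assert filtered(sigma1, alpha0) == set()      # the inconsistent state!

    # Jack's question makes "car" mentionable again:
    alpha2 = ({"sugar-jar"}, {"sugar-jar": 0})
    assert filtered(sigma1, alpha2) == {"car"}    # off to the car she runs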


2.2 Decision problems and awareness dynamics

Strictly speaking, the propositional treatment of Bo’s growing awareness is a rather crude oversimplification: names such as “pocket” or “car” could at the same time represent states of the world (“(they’re in her) pocket”) or actions that Bo might wish to execute (“(search in her) pocket”). So, for example, when we concluded that in the epistemic state σ1↾α2 = {car}, where Bo is aware of the car as the only open possibility, she would go check the car, we silently succumbed to this equivocation between states and actions. But when the identification of propositions and actions is unwarranted, an extension of the analysis of awareness dynamics to decision problems is called for; not least because unawareness of propositions shows first and foremost in the agent’s behavior. However, formalizing the dynamics of awareness of decision makers is not a trivial task and the final model is rather complex. In order to keep the exposition perspicuous we will work towards it in stages. We will start with a naïve approach, present a number of problems that arise, and thus hope to motivate the additional complexity of the solutions we’ve chosen to apply at each step. We end with final definitions in Section 2.2.4.

2.2.1 The basic picture

A decision problem is usually conceived as a tuple 〈S, A, P, U〉 where S is a set of relevantly distinct states of the world, A a set of possible actions, P a probability distribution over S, and U : S × A → R a utility function giving a numerical desirability for each action in each state. There is a sense in which this definition already implicitly includes unawareness, in its limited set S and, even more palpably, in the limited actions A under consideration: common sense dictates that when modelling a particular decision problem we do not include in S every potentially relevant distinction of the state of the world that may affect the outcome of the agent’s choice of action, but only certain distinctions that the agent can entertain herself (given her awareness-limited vocabulary); similarly, and even more obviously, we do not want to include in A all conceivably possible actions but only the ones that the agent is aware of as relevant to the task at hand. One of the main ideas we wish to stress in this paper is that a classical decision problem should be seen as an agent’s limited subjective conceptualization of a decision-making situation:

Slogan 4: Decision problems represent subjective awareness. A decision problem, which by definition includes only a small set of states and possible actions and thus restricts attention to only a small facet of reality, represents the agent’s subjective assessment of the relevant factors of the situation, given her state of awareness.

Here is a simple example of the kind of subjective unawareness represented in decision problems. At the beginning of her search, Bo is aware of the nail, the phone and her pocket as places where her keys might be. Her decision problem δ = 〈S, A, P, U〉, which comprises her limited awareness at this point of the story, contains exactly these states:

S = {nail, phone, pocket}.

We assume that the actual state is “sugar-jar”, but this is a state that Bo is neither entertaining nor considering possible at the outset. Instead, Bo considers all and only the states in S possible. This is represented in the decision problem δ by assuming that the probability distribution P, which captures Bo’s beliefs, has full support, i.e., assigns some non-zero probability to all states in S. (By definition it can assign no probability outside the states given by the decision problem.) The actions Bo can take in this key-search scenario correspond one-to-one with the possible states (“(the keys are in her) pocket” and “(search in her) pocket”) and so we can, for modelling purposes, use the same names for states and actions: A = S. (Whence the constant equivocation in the exposition of the propositional case.) And, of course, since we assume that Bo wants to find the keys, her utility function should be something like (formally, some positive linear transformation of):

U(s, a) = 1 if s = a, and 0 otherwise.

Taken as a whole, then, the decision problem δ represents Bo’s own subjective assessment of the decision situation under her own limited awareness.
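For concreteness, here is a small executable sketch (the labels are ours) of Bo’s initial decision problem and the standard expected-utility choice rule:

    # A small sketch (labels ours) of Bo's initial decision problem delta
    # and the standard choice rule: maximise expected utility.
    S = ["nail", "phone", "pocket"]        # the states Bo is aware of
    A = list(S)                            # actions named like the states
    P = {s: 1 / 3 for s in S}              # full support over S

    def U(s, a):
        return 1 if s == a else 0          # finding the keys is all that counts

    def expected_utility(a):
        return sum(P[s] * U(s, a) for s in S)

    best = max(A, key=expected_utility)    # all three actions tie at 1/3,
                                           # so any search order is rational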

It is obvious how this model of Bo’s epistemic state would treat factual information flow. If Bo learns (for instance, by checking) that the keys are not on the nail, she would revise her probabilistic beliefs (by a simple Bayesian update with the proposition “¬nail”). But what about extending Bo’s awareness? Suppose, whatever her probabilistic beliefs P might be, that she becomes aware of the car as a possible hide-out of the keys and of the corresponding action “car”. Most straightforwardly, we would like to update Bo’s decision problem δ so as to include a state and action “car”. This much is easy. But what should Bo’s probabilistic beliefs be after she becomes aware of the new contingency? And what would her utilities be in the new updated decision problem?

Clearly, we would not want to specify these features by hand with every update. We would much prefer a model which fully determines the outcome of an awareness update. This is where the idea of filtering that we used in the propositional case applies: in order to model how a single agent’s epistemic state changes under growing awareness we assume that there is a structure in the background, called a background model, which represents the agent’s epistemic state under full awareness; unawareness is then modelled by an awareness state as a restriction, or filter, on the background model; the outcome of the filtering process is (or gives rise to) a decision problem, which is interpreted as the agent’s assessment under limited awareness, in line with the above slogan. Awareness updates are then fairly simple updates of the awareness state (basically: adding or removing elements from sets), which however may have rather far-reaching repercussions on the agent’s decision problem via the background model and filtering.

Here is a first simplified attempt at implementing this architecture for Bo’s decision problem. We assume in the background another decision problem δ∗ = 〈S∗, A∗, P∗, U∗〉 which represents Bo’s decision problem under full awareness. According to our slogan this should also represent subjective awareness; indeed, it represents the features the modeller is aware of as possibly relevant. So, for this background model in Bo’s case we have chosen

S∗ = A∗ = {nail, phone, pocket, car, sugar-jar}


(taking advantage again of the naming convention conflating states, propositions and actions) and appropriate beliefs P∗ and utilities U∗. We should consider δ∗ the equivalent of the information set σ in the propositional case: δ∗ contains all the factual information that Bo would have under full awareness. This background structure δ∗ is then filtered through an awareness state, as before in the propositional case. Of course, our propositional awareness states had no component to represent awareness of actions, while Bo’s restricted awareness is both a restriction on the set of states S and on the set of possible actions A. Consequently, we need to enrich the notion of an awareness state to include a component A, analogous to U, which represents the actions the agent is unaware of: our new awareness states will be triples 〈U, v, A〉 where A is a subset of A∗ giving the actions the agent does not consider.5

In Bo’s case, it makes sense to assume that U and v are as before: Bo’s initial awareness state α0, before she starts her search, has her aware of “nail”, “phone” and “pocket” as the possible states and possible actions, so that:

U = {car, sugar-jar}
v = {car ↦ 0, sugar-jar ↦ 0}
A = {car, sugar-jar}.

Just as in the propositional case we can define Sα as the set of states from S∗ that are compatible with the assumptions of α. Filtering δ∗ through this awareness state gives us the restricted decision problem δ that we started with. In general, filtering in this case comes down to this: if δ∗ = 〈S∗, A∗, P∗, U∗〉 is a decision problem and α = 〈U, v, A〉, then the filtered decision problem δ↾α is the decision problem 〈S, A, P, U〉 with6

S = S∗α
A = A∗ \ A
P = P∗(· | S∗α)
U = U∗↾(S × A).

The set S∗α of states being entertained drives the agent’s probabilistic beliefs under limited awareness by updating P∗, the agent’s beliefs under full awareness, with all implicit assumptions the agent is making due to her unawareness. This is exactly what the beliefs in P∗(· | S∗α) represent.

Suppose that in Bo’s δ∗ the probabilities are 0.24 for each of “nail”, “phone”, “pocket” and “car”, and 0.04 for “sugar-jar”. In her initial state of unawareness she holds states “nail”, “phone” and “pocket” possible, each with probability 1/3 (because P∗(nail | {nail, phone, pocket}) = 0.24/0.72 = 1/3). If she becomes aware of new possibilities without eliminating the existing ones by searching (if Jack helps her out before her search begins, for instance) these probabilities decrease: to 1/4 (= 0.24/0.96) if she becomes aware of the car, and to the limit value of 0.24 under full awareness.

5 Some readers might wonder at this point whether it would pay to adopt a representation of decision problems in the style of [Jef65], where actions are treated as propositions. After all, this would allow us to treat unawareness of actions on a par with unawareness of propositions and therefore seems prima facie the simpler modelling solution. We reject this alternative for a number of reasons, the most interesting of which in the present context is that we would like to side-step the question which kinds of (implicit) beliefs an agent has about actions that she is unaware of.

6 Strictly speaking, we’d have to define P = P∗(· | S∗α)↾S∗α. However, here and in the following we favour readability over formal precision.

Awareness dynamics are now easy to define. If Bo becomes aware of the proposition and action “car” we simply remove this proposition from U and the corresponding assumption from v (thus enlarging the set of entertained states S∗α), removing also the corresponding action from A. The background model makes sure that utilities and probabilities are defined in the updated decision problem which is retrieved from filtering through this updated awareness state. Effectively, this filtering process allows an easy implementation of deterministic awareness updates on decision problems: we, as modellers, specify the limit-stage of the agent’s growing awareness to the extent that it is important for the modelling purposes. We also have a simple structure that captures which bits and pieces the agent is aware of. Simply adding the parameters that the agent becomes aware of to her awareness state in the most straightforward fashion produces, via the background decision problem, the new and updated decision problem with all (numerical) information specified correctly.
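The following sketch (again a Python rendering of our own, with invented names) implements this simplified filtering and the awareness update on Bo’s numbers:

    # A sketch (ours, names invented) of the simplified filtering:
    # delta* = <S*, A*, P*, U*> filtered through alpha = <U, v, A>. States
    # are named by the single proposition true at them, as in the text.
    S_star = ["nail", "phone", "pocket", "car", "sugar-jar"]
    A_star = list(S_star)
    P_star = {"nail": 0.24, "phone": 0.24, "pocket": 0.24,
              "car": 0.24, "sugar-jar": 0.04}

    def U_star(s, a):
        return 1 if s == a else 0

    def filter_dp(alpha):
        """Return <S, A, P, U>: delta* filtered through alpha."""
        _U, v, A_unaware = alpha
        # S*_alpha: states compatible with the assumptions v
        S = [s for s in S_star
             if all(int(s == p) == t for p, t in v.items())]
        A = [a for a in A_star if a not in A_unaware]
        Z = sum(P_star[s] for s in S)
        P = {s: P_star[s] / Z for s in S}   # P*(. | S*_alpha)
        return S, A, P, U_star              # U* implicitly restricted to S x A

    alpha0 = ({"car", "sugar-jar"}, {"car": 0, "sugar-jar": 0},
              {"car", "sugar-jar"})
    _, _, P0, _ = filter_dp(alpha0)
    print(P0["nail"])                       # 0.24 / 0.72 = 1/3

    # Becoming aware of "car": drop it from U, v and A, then refilter.
    alpha1 = ({"sugar-jar"}, {"sugar-jar": 0}, {"sugar-jar"})
    _, _, P1, _ = filter_dp(alpha1)
    print(P1["nail"])                       # 0.24 / 0.96 = 1/4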

2.2.2 Refinement 1: Individuating states, unawareness without assumptions

Bo’s key-search example is fairly simple because propositions, states and actions correspond one-to-one, and so the awareness update involved a nearly trivial extension of the idea of filtering from the propositional case to a richer decision-theoretic structure. Bob’s shortbread suggestion in Example 2, on the other hand, requires further scrutiny of the notion of a state and another revision to the notion of an awareness state. Here is why.

Let’s first consider the most intuitive background decision model for Farmer Pickles’ epistemic state in Example 2. What would his decision problem look like if he were aware of all contingencies that we as modellers are interested in? First of all, Pickles considered baking a cake a possible action and he is made aware of a further possible action, namely baking shortbread. We should maybe allow Pickles to abstain from all baking, but further actions clearly do not play a role, so that for the background decision model δ∗ = 〈S∗, A∗, P∗, U∗〉 we should assume that

A∗ = {cake, shortbread, abstain}.

But what should Pickles’ assessment of the relevant states be (from the modeller’s perspective)? Pickles knows that there are no eggs available, so this is not something that the model needs to distinguish. But there is a relevant piece of subjective uncertainty that we would like to model, and that is whether the recipe for shortbread contains eggs or not. So, when fully aware, Pickles would make a distinction between two possible relevant states of affairs, one in which baking shortbread requires eggs and another one in which it does not:

S∗ = {sb-req-eggs, sb-req-no-eggs}.

It is not significant at the moment whether Pickles has any beliefs as to which state is more likely, but we should assume that he does not rule out any state completely. So again we assume that P∗ has full support on S∗. As for utilities, it is natural to assume that U∗ is a function that orders state-action pairs as follows:

〈·, cake〉, 〈sb-req-eggs, shortbread〉 ≺ 〈·, abstain〉 ≺ 〈sb-req-no-eggs, shortbread〉

In words: since there are no eggs, baking a cake is as bad as baking shortbread if this does indeed require eggs; it’s better not to bake anything; but most preferred, of course, is baking shortbread when it in fact does not require eggs. (We might count baking failed cakes simply as a waste of time, or go further and imagine the expressions of the two friends when they bite into a floury mess, as motivation for this utility ordering.)

So, how do we represent Pickles’ epistemic state as a decision problem when he is unaware of shortbread as an option for baking? The obvious answer, to simply leave out the action “shortbread” in his representation of the situation, leaves us puzzling why, if Pickles is unaware of shortbread as an action alternative, his decision problem would nevertheless distinguish the state where the shortbread recipe specifies eggs from the one where it does not. Rather, an intuitively attractive representation of Pickles’ decision situation before becoming aware of shortbread should only have one state: there is indeed no subjective epistemic uncertainty about whether baking shortbread requires eggs. But it is also not the case that we should simply leave out either one of the two states in S∗ in the representation of Pickles’ initial state. For, unlike in Bo’s example, it does not seem defensible that Pickles holds any assumption about whether shortbread requires eggs. His unawareness of baking shortbread as an action shows in his behavior: he does not attempt to bake shortbread, does not mention it, etc. But his behavior, in particular his answer to Bob’s suggestion, also shows that he is uncertain about the ingredients of shortbread after becoming aware of this alternative action and before further pragmatic considerations. Of course, we could assume that Pickles indeed first has an implicit belief (say: shortbread requires eggs), which he then loses as soon as he becomes aware of shortbread baking (leaving him uncertain whether shortbread requires eggs). But this is at best a dirty hack. And it is also not necessary. In fact, we should improve the modelling attempt of the previous section based on this example in two respects: firstly, we should allow for unawareness of relevant propositions that does not go along with an assumption, and, secondly, we should consider the states in a decision problem as conglomerates of states (or possible worlds, as we will call them in the final model) that the agent does not distinguish as potentially distinct.

To implement these amendments, we should firstly alter the definition of an awareness state 〈U, v, A〉 to allow v to be a partial function from U to truth-values. This way we can represent which unmentionables an agent holds assumptions about, as well as what the assumptions are. We should also define a reasonable grouping mechanism that specifies which states (or worlds) of the background model together form a state in the decision problem under an agent’s limited awareness. We will execute these ideas in Section 2.2.4, after motivating a further slight but necessary extension to the model in the next section.


2.2.3 Refinement 2: Unawareness of outcomes

Example 3: Professor Branestawm. The scene is the banquet of this year’s prestigious pepper conference.7 First to the buffet is Professor Branestawm, with his friend Professor Calculus alongside. Branestawm is helping himself to a big bowl of fruit salad, when Calculus is suddenly taken by fear:

calculus: Hey hold on a second! Won’t fruit salad set off one of your allergies?

branestawm: [After some thought] Ah, no, I don’t think so. I haven’t had an allergic reaction in months.

calculus: [Obviously still shaken] Well, we wouldn’t want to repeat the disaster of last year, would we?

Not only would we not want to repeat the disaster of last year, we would also not like to represent the possibility that Branestawm has an allergic reaction as a state that Branestawm is uncertain about, in the same way that Pickles in the previous example is uncertain about whether shortbread requires eggs or not. Of course, this is technically possible. We could represent Branestawm’s fully aware decision situation with states and actions

S∗ = {allergy, no-allergy}
A∗ = {eat, abstain}.

But there is something decidedly odd about this representation: unlike in the shortbread-recipe example of the previous section, in which states were distinguished by some describable parameter (whether shortbread takes eggs or not), the only reason we can name for distinguishing states in the Branestawm example is the outcome of performing the action “eat”. In other words, the only reason for wanting to distinguish states is in order to distinguish different possible outcomes of one of our actions. We prefer instead to add these outcomes explicitly, and to distinguish between two kinds of uncertainty: epistemic uncertainty (about what is currently true in the world) and metaphysical uncertainty (about the outcome of inherently unpredictable events).8

Consequently, we take it to be much more natural to represent Branestawm’s epistemic state under full awareness as a decision problem δ∗ that distinguishes states before performance of actions from outcome states. In particular, δ∗ has only one current state, two actions as before, and three outcome states (of which one is the current state, because it is the outcome of “abstaining”, i.e. an empty action that does not change any relevant parameter):

S∗now = {current}
A∗ = {eat, abstain}
S∗fut = {allergy, no-allergy, current}

7 The initial P most likely stands for ‘Pragmatics’.
8 Of course, a determinist may wish to defend that the outcome of eating fruit salad is indeed fixed by the true state of affairs, just as a die-hard determinist might even argue that the outcome of a fair coin toss is fixed by the material facts just prior to the toss. Even so the point about naming remains: no determinist will be able to find a name for the state leading to a coin-flip landing heads except for some variation on “pre-heads”. A determinist not willing to give these arguments the right of way will probably also not be impressed by the argument from probabilistic independence, so we will omit it, except to remark that it is also a claim about relative ease of modelling rather than a statement of impossibility.

We will model the agent’s metaphysical uncertainty about the outcomes of actions, if performed in a given state, as a function giving result distributions Π : S∗now × A∗ → ∆(S∗fut) that maps each current state and action to a probability distribution on the set of outcome states. (As is standard, we write ∆(X) for the set of all probability distributions on some set X.) In Branestawm’s case, Π would be a function that might reasonably look like this:

〈current, abstain〉 ↦ { allergy ↦ 0, no-allergy ↦ 0, current ↦ 1 }
〈current, eat〉 ↦ { allergy ↦ 0.001, no-allergy ↦ 0.999, current ↦ 0 }

On this model utilities can be defined as a function from outcome states to reals, u : Sfut → R, and the utility of an action in a (current) state incorporates an expectation calculation over all possible outcomes.

After these conceptual considerations, it is high time to take stock. We still owe the reader a rigorous presentation of the final model, in particular (i) the background model, (ii) the awareness state and (iii) the filtering mechanism which produces a decision problem as a representation of the agent’s epistemic state given the first two components.

2.2.4 The final model

As in the propositional case of Section 2.1, we will model an agent’s unawareness via possible restrictions in the language that she would use to describe her situation. Towards this end, we assume, as before, a set P of proposition letters that capture the model-relevant distinctions before and after the agent performs an action. Where before we had only one, we now consider two sets of model-relevant possible worlds: a set W of present worlds, before the agent performs an action; and a set O (for ‘outcome worlds’) for the state of the world after the agent has performed an action. Here, W and O need not have an empty intersection. Again, we associate with each world w ∈ W ∪ O a valuation function vw : P → {0, 1}.9

We define the background model in terms of these worlds. This keeps worlds, as the minimal modelling units in the background model, conceptually distinct from states as they occur in decision problems, as representations of the agent’s transient subjective epistemic state, even when worlds and states are to be identified, e.g. under an empty or otherwise trivial awareness state. A background model captures the agent’s epistemic state under full awareness, just as an information state did in the propositional case.

9 The division of worlds into current and future is natural in a decision-theoretic setting, where we only consider ‘single-step’ actions. In a full planning setting the underlying model should be some sort of extended temporal structure allowing for sequences of actions. This simplification will be harmless, so long as we are careful about distinguishing propositions that apply now or in the future ‘by hand’.

Definition 3: Background Models. A background model is a structure with six components 〈W, O, A, P, U, Π〉 where

• W and O are sets of current and outcome worlds;

• A is a set of actions;

• P ∈ ∆(W) is a probability distribution on current worlds;

• U : O → R is a utility function giving the desirability of future worlds;

• Π : W × A → ∆(O) is a function giving result distributions: for each w ∈ W, a ∈ A, and o ∈ O, Πw(a)(o) gives the probability of outcome o in world w by performing the action a.

Definition 4: Awareness States. An awareness state α is a triple 〈U, v, A〉 such that U ⊆ P is a set of unmentionables, v : U → {0, 1} is a (possibly partial) valuation function on the set of unmentionables, and A ⊆ A is a set of actions. The unmentionables U are propositions that the agent is unaware of; the assumptions v capture her implicit beliefs or assumptions (where an agent need not hold assumptions about all unmentionables); the actions A are likewise those she is unaware of.

Based on an agent’s awareness state we can define for future use the set of worlds and outcomes that the agent entertains, i.e., the set of worlds or outcomes not ruled out by her assumptions. Since according to our revised notion of awareness states the assumption function v may be partial, the set of entertained worlds or outcomes is no longer necessarily the set of worlds the agent can distinguish given her awareness-language. In particular, she may entertain possibilities, because she does not hold any assumption that would rule them out, but still not be able to distinguish these possibilities in her limited vocabulary. (Think of Pickles in Example 2, who could not distinguish a state where shortbread requires eggs from one where it does not, because he is unaware of this distinction, but nevertheless held no assumptions about the recipe for shortbread.) We therefore also define how an agent’s limited awareness aggregates worlds into states: here we should consider as a single state all those entertained worlds that agree on everything the agent can distinguish in her language. (The aggregation relation will define states in the agent’s decision problem, see below.)

Definition 5: Entertaining and Aggregation. Let α = 〈U, v, A〉 be an awareness state. The worlds and outcomes that an agent in α entertains, i.e. the worlds the agent does not rule out by an assumption, are the sets

Wα = {w ∈ W ; v ⊆ vw}
Oα = {w ∈ O ; v ⊆ vw}.


Furthermore, the agent considers two worlds w, w′ ∈ Wα equivalent by reason of unawareness iff

vw(p) = vw′(p) for all p ∈ P \ U.

Obviously this is an equivalence relation on Wα, which we write ≡α; the intuition is however very different from that of the epistemic accessibility relation. Two worlds are equivalent in this sense if the agent is not aware of anything that would distinguish them. We will define the states in a decision problem by aggregation using this relation: a state is simply an equivalence class under ≡α. (See below for the details.)

A background structure and an awareness state together give us the agent’s subjective assessment of her situation. This includes limited awareness and possible implicit beliefs. We capture this in the notion of a filtered model.

Definition 6: Filtered Models. Given a background model M = 〈W, O, A, P, U, Π〉 and an awareness state α = 〈U, v, A〉, the filtered model M↾α is the structure (of the same type as the background model) 〈W′, O′, A′, P′, U′, Π′〉 where:

W′ = Wα
O′ = Oα
A′ = A \ A
P′ = P(· | Wα)
U′ = U↾Oα
Π′ : W′ × A′ → ∆(O′) is such that Π′(w, a) = Πw(a)(· | Oα)

A filtered model is the same kind of object as a background model; the only direct effect of filtering is to restrict attention to a sub-part of the background model. Both filtered and background models represent an agent’s epistemic state (possibly given awareness restrictions) in a decision-making situation. These models are, in a sense, decision problems that just contain more information than the classical variety. We can obviously read off a decision problem in its classical guise from any such model, be that filtered or background. The only noteworthy elements in the following construction are the formation of states by aggregation, and the definition of the utilities: here we need to compute expected utilities where expectations are a mixture of epistemic uncertainty (which world in which state am I in?) and metaphysical uncertainty (what might happen if I do such and such?).

Definition 7: Decision Problem. Let M be a background model 〈W, O, A, P, U, Π〉 and α an awareness state 〈U, v, A〉. As above, call the elements of M↾α (the filtered model) 〈W′, O′, A′, P′, U′, Π′〉. The agent’s decision problem δ(M↾α), defined on the filtered model, is of the classical form 〈S, A, P, U〉 where:

S = W′/≡α
A = A′ = A \ A
P(s) = ∑w∈s P′(w | W′)
U(s, a) = ∑w∈s P′(w | s) ∑o∈O′ Π′(w, a)(o) · U′(o).


In words: S is the set of equivalence classes on W′ given by the aggregation relation; A is simply the actions being entertained; P is in fact the same filtered probability distribution but interpreted on states (that is, on sets of worlds); and U gives the expected utility of a in s, under (epistemic) uncertainty about which world w from s obtains, but also (metaphysical) uncertainty about which outcome o will result from doing a in w.
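The following compact sketch (a Python rendering of our own of Definitions 5–7; all function and variable names are invented, and the paper itself gives no implementation) shows how filtering and aggregation yield a classical decision problem from a background model:

    # A sketch (ours) of Definitions 5-7: filter a background model through
    # an awareness state and read off the classical decision problem.
    # A world is the frozenset of proposition letters true at it; P, U and
    # Pi are dicts supplied by the caller.

    def entertained(worlds, v):
        # Definition 5: worlds not ruled out by the partial assumptions v
        return [w for w in worlds
                if all((p in w) == bool(t) for p, t in v.items())]

    def decision_problem(letters, model, alpha):
        W, O, A, P, U, Pi = model          # Definition 3 components
        unment, v, A_un = alpha            # Definition 4 components

        # Definition 6: the filtered model (restrict and renormalise)
        W1 = entertained(W, v)
        O1 = set(entertained(O, v))
        A1 = [a for a in A if a not in A_un]
        zW = sum(P[w] for w in W1)
        P1 = {w: P[w] / zW for w in W1}    # P(. | W_alpha)

        def Pi1(w, a):                     # result distribution given O_alpha
            d = {o: p for o, p in Pi[(w, a)].items() if o in O1}
            z = sum(d.values())            # assumes positive mass survives
            return {o: p / z for o, p in d.items()}

        # Definition 7: aggregate worlds agreeing on all mentionable letters
        mention = set(letters) - set(unment)
        states = {}
        for w in W1:
            states.setdefault(frozenset(w & mention), []).append(w)

        P_S = {s: sum(P1[w] for w in ws) for s, ws in states.items()}

        def EU(s, a):                      # epistemic + metaphysical mixture
            ws, p_s = states[s], P_S[s]
            return sum((P1[w] / p_s) *
                       sum(p * U[o] for o, p in Pi1(w, a).items())
                       for w in ws)

        return list(states), A1, P_S, EU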

2.2.5 Example: Branestawm’s allergies come to awareness

An example will help make clear the formal model given in the last section. Branestawm’s allergy case in Example 3 makes for a fairly simple case of awareness dynamics involving decision problems. For the sake of a simple example, we merely want to model Branestawm’s epistemic state before and after becoming aware of his allergies, which might be set off by the fruit salad. In order to do so we will have to specify a reasonable language (for propositions and actions), restrictions of which will capture Branestawm’s initial unawareness (indeed, the choice of a language thus becomes the initial modelling step), and subsequently define a background model based on that language. In Branestawm’s case, this is rather easy. We get by perfectly with just two propositions, namely

“fruit”: Branestawm has enjoyed some fruit salad;
“allergy”: Branestawm’s allergies go rock-a-doodle.

So, let’s fix P = {fruit, allergy}. Similarly straightforward is the choice of actions as A = {eat, abstain}, containing actions representing Branestawm eating or not eating fruit salad. Next, consider the worlds and outcomes that should enter our background model. For the sake of simplicity, we identify worlds and outcomes with their valuation functions. So we will assume that W = {w} and O = {w, o1, o2} with the following valuation functions and utilities:

       fruit   allergy     U
w        0        0        0
o1       1        1      -10
o2       1        0        1

Finally, we specify Branestawm’s beliefs about the results of his actions, if he were fully aware of all propositions, in the result distribution Π as follows:

〈w, abstain〉 ↦ w
〈w, eat〉 ↦ { o1 ↦ 0.001, o2 ↦ 0.999 }

That is, all the probability mass of Πw(abstain) is placed on w (abstaining does not change either of the proposition parameters we are concerned with); eating, on the other hand, leads to an allergic reaction (o1) with very low probability, and otherwise to o2.

This completely specifies Branestawm’s background model M. To fully specify his epistemic state before Calculus makes him aware of his allergies, we need to specify in addition an appropriate awareness state α0. In the case at hand this is the triple:

U = {allergy}
v = {allergy ↦ 0}
A = ∅

This yields Branestawm’s limited awareness of the decision situation as a filtered model M↾α0 = 〈W′, O′, A′, P′, U′, Π′〉 which comes out as:

W′ = {w}
O′ = {w, o2}
A′ = A
P′ = P
U′ = U↾O′
Π′ : W′ × A′ → ∆(O′) is such that
  〈w, abstain〉 ↦ w
  〈w, eat〉 ↦ o2

In words, in epistemic state M↾α0 Branestawm believes that eating fruit salad will not trigger his allergies. This is modelled by the result distribution in his filtered model, which assigns probability 1 to the outcome o2, the outcome world where Branestawm does not have an allergic reaction.

The awareness dynamics in this example are fairly simple. Calculus’s question simply has Branestawm become aware of the proposition “allergy”. This is modelled by updating Branestawm’s awareness state α0 by removing this proposition from the set of unmentionables and assumptions. The resulting awareness state α1 is trivial, and the filtered model M↾α1 in this simple example is the background model M itself.

One interesting point should be noted about this example: when Branestawm becomes aware of the possibility of an allergic reaction, his beliefs effectively do not change. That is, he gains a 0.1% uncertainty about the matter, but this is nowhere near enough to alter his choice of action. If Calculus knew this in advance he would have no reason to bring up the possibility; however, because he himself is aware of the possibility of an allergy but uncertain about its relative probability, he feels compelled to mention the possibility. Branestawm, on the other hand, we take to be expert in the matter of his own allergies (at least when he is actively considering them). This is then an example of a pragmatically well-motivated ‘awareness move’ by Calculus, which nonetheless does not affect Branestawm’s actions. If we were to describe his beliefs quantitatively, as the propositions his filtered model gives overwhelming probability mass to, we would say that the awareness update does not overturn his implicit belief due to assumption, but ratifies it.10
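A quick calculation (our own arithmetic, using the utilities and result distribution specified above) confirms this:

    # Expected utilities in the full-awareness model M (our own check):
    EU_eat     = 0.001 * (-10) + 0.999 * 1   # = 0.989
    EU_abstain = 1.0 * 0                     # = 0.0
    # "eat" stays optimal after the update: becoming aware of "allergy"
    # ratifies, rather than overturns, Branestawm's implicit belief.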

This is, perhaps, the simplest possible example of awareness dynamics, but, of course, there is much more to say about the changes in epistemic states due to awareness and about the interaction of awareness dynamics with factual information dynamics. Section 3 is devoted to these questions, but first we owe the reader a more explicit explanation of just where the convenient assumptions we have been making use of are supposed to come from.

10 Apart from this mention we stick to a purely qualitative view of beliefs, that is, in terms of the worlds given non-zero probability mass, purely for convenience. The extension may be straightforward but we have not looked at it in detail.

2.3 Assumptions and associations

The first thing to notice about assumptions is that not just anything goes. A forgiving reader might not complain that we haven't sufficiently motivated Bo's assumption that the keys are not in her car as a cognitive reality, but should certainly object if in explaining a different scenario we had her unconsciously assuming that they were in fact hiding in the sugar-jar. We have appealed to intuitions of normality, without really making precise what we mean by this. Clearly 'normality' is sensitive to the details of the decision-making context; it is probably normal to assume the library is open when checking the remaining to-do list for an almost-finished essay, and equally reasonable to assume it is closed when plotting to break in at midnight for some clandestine reading.

The library example is not chosen at random: law-like conditionals such as "If she has an essay to write she studies late in the library" were used in a now classic experiment in the psychology of reasoning, the 'suppression task' [Byr89], which shares many characteristics with our notion of awareness. The basic observation is that subjects asked to accept the truth of the conditional (as a premise in a logical argument) seem to implicitly hedge it with a normality assumption: "If she has an essay to write (and nothing unexpected happens) she studies late in the library". [SL08] gives an explanation of the data in terms of closed-world reasoning: Stenning and Van Lambalgen represent the implicit hedge as a 'dummy' proposition which is assumed false if there is no evidence that it is true. While the details do not concern us here (the parallel with unawareness is incomplete, although provocative), the closed-world reasoning is a perfect fit for our notion of assumptions.

That is, if our examples are to be intuitively satisfactory, assumptions should have a closed-world flavour: unusual events do not occur and the status quo is maintained, unless explicit reason is given to believe otherwise.

This formulation in turn suggests a loose probabilistic constraint on our assumptions due to unawareness. That is, it should generally be the case that the probability mass hidden by a particular assumption (an 'unusual event') is relatively small compared to the probability mass on the worlds being entertained (including, although not limited to, the 'status quo'). In other words, while becoming aware may qualitatively overturn an assumption, it should generally replace certainty that p only with uncertainty, not with near-certainty that ¬p.

We do not believe that this is a 'hard' semantic (or even pragmatic) constraint on acceptable states of awareness. However, if we recall that our notion of unawareness is linked to absent-mindedness and cognitive limitations of attentiveness, it seems that we should expect our cognitive apparatus (superbly evolved as it seems to be for problem-solving) to be reasonably good at prioritising attention, keeping focussed on the most probable and most utility-relevant contingencies and letting only the marginal ones slip beneath the surface.


Taking this cognitive perspective also solves a formal problem that we have so far managed to side-step by choice of easy examples. But consider again the case of Bob and Pickles. If Bob tells Pickles he could bake shortbread (making Pickles aware of a possible action), nothing in the formal setup we've given so far explains how Pickles gets to entertain new outcomes as well. Still, intuitively he should: when becoming aware of the action "baking shortbread" he should also become aware of certain natural outcomes of that action.11

Although clearest in this case, the problem is not confined to actions and outcomes. The reality is that some possibilities are cognitively closely associated, so that becoming aware of one may bring on awareness of the other. However, very little that is formal or precise can be said in the present framework about this process of association in its full complexity. Hearing a possibility mentioned at least brings the possibility itself to awareness, and mentioning a possible action certainly calls to mind stereotypical outcomes of the action. But beyond this we cannot say much more. That is why in this paper we've been careful not to make associations do any explanatory work. However, for the Pickles example discussed in detail in Section 4.1 we must at least rely on the association between the action "bake shortbread" and the shortbread-related propositions such as "shortbread has been baked", "the shortbread tastes awful (because of the lack of eggs)" and so on.

As in the case of assumptions, we may gesture at the adaptive nature of our cognitive capabilities in support of the idea that the right associations will spring to mind when they are needed. Apart from the formal definition of bringing to awareness propositions that are explicitly mentioned, however, the details of this association process must remain somewhat vague. In the following section we assume a mechanism giving associations and define the dynamic updates to awareness states it gives rise to, and the resulting updates to filtered information states and decision problems. As before, the strength of the account is that relatively simple changes in awareness can give rise to radical belief changes through the filtering process.

3 Information dynamics under awareness

The previous section laid out two closely related formal models for representing a single agent's unawareness, once for a mere propositional setting and once for a richer decision-theoretic structure. The models contained the basic ingredients for awareness dynamics: removing unmentionables from and adding actions to awareness states. We have not yet addressed the relation between awareness updates and the uptake of factual information. This is what we will do presently. Again we proceed in stages from simple to complex, starting with the propositional case, where we can focus on the main ideas that then carry over to the structurally more complex case of decision problems under growing awareness.

11 The formal distinction between actions and propositions is of course a theoretical fiction, which a shift to a first-order model (with the possibility of defining unawareness of terms such as "shortbread", be that in descriptions of actions or of states of affairs) could alleviate. A first-order unawareness model has recently come on the market [BC07]; however, it is not yet clear how to combine this approach with implicit beliefs based on (possibly false) assumptions.


3.1 The propositional case

There are two fundamental ideas in the treatment of information dynamics under awareness. Firstly, we have already argued in the introduction, in particular with the slogan "Unawareness is easily overturned", that unawareness from inattentiveness is lifted spontaneously whenever agents process linguistic information that mentions a contingency they are unaware of. That is why we will assume that an agent who processes an utterance of some natural language sentence φ, be that for information uptake or anything else, will involuntarily become aware of all linguistic elements (proposition letters and actions) used in φ (or rather: in a formal representation thereof in propositional logic) even before she can engage in any further processing.

A second key feature of information dynamics under unawareness is that information uptake can only take place, so to speak, within the window of awareness. More formally, if, in the propositional case, an agent in the epistemic state 〈σ, α〉 is aware of all proposition letters in (the formula) φ, an informative update with the (propositional) information in φ will be an update on the filtered state σ↾α only (that is, only worlds being entertained are eliminated, not worlds from the background information set that are excluded by assumptions). This is fairly natural once appreciated: an agent who learns factual information can process this information only in the light of her (possibly limited) awareness. (Things become more complicated when the awareness state itself changes, a complication taken up in Section 3.3.)

These considerations lead to the following treatment of information updates for the propositional case. We will write 〈σ, α〉[φ] for updating an epistemic state 〈σ, α〉 with a propositional formula φ. This update can be considered a sequential update, first of the awareness state, for which we will write α[φ], and subsequently of σ with φ under the agent's updated awareness α[φ]. If φ is a propositional formula (representing an utterance), write P(φ) for the proposition letters occurring in φ and [[φ]] for the set of worlds where φ is true. Then we define propositional update with awareness as follows:

Definition 8: Epistemic update with (propositional) awareness. Let 〈σ0, α0〉 be an epistemic state. Then σ0 ⊆ W is an information set (the worlds not excluded by the agent's information) and α0 is, as always, an awareness state. Let φ be an utterance. Then

〈σ0, α0〉 [φ] def= 〈σ1, α1〉

where α1 = α0[φ] is given by

〈U, v〉 [φ] def= 〈U \ P(φ), v↾(U \ P(φ))〉 ,

and σ1 is given by

σ0 \ ((σ0↾α1) ∩ [[¬φ]]).

For emphasis: updating σ0 to σ1 uses the new awareness state α1, rather than the old one; first we make all proposition letters in φ mentionable, and then we eliminate all entertainable worlds that are incompatible with φ.
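For readers who prefer a computational rendering, here is a minimal sketch of Definition 8 under assumptions of our own making: worlds are frozensets of the proposition letters true in them, and the utterance φ is passed in as its letter set P(φ) together with a truth function standing in for [[φ]].

```python
def update(sigma0, alpha0, letters_in_phi, holds_in):
    """Update the epistemic state <sigma0, alpha0> with an utterance phi."""
    U0, v0 = alpha0
    # Awareness update alpha0[phi]: mentioned letters become mentionable;
    # the assumed valuation is restricted to the letters still unmentionable.
    U1 = U0 - letters_in_phi
    v1 = {p: b for p, b in v0.items() if p in U1}

    # Filtering through alpha1: a world is entertained iff it agrees with
    # every remaining assumption.
    def entertained(w):
        return all((p in w) == bool(b) for p, b in v1.items())

    # sigma1 = sigma0 minus the entertained worlds where phi is false.
    sigma1 = {w for w in sigma0 if not (entertained(w) and not holds_in(w))}
    return sigma1, (U1, v1)
```

Note that, as the definition demands, elimination happens only inside the window of the updated awareness state.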


3.2 Updates for decision problems

The main features of information dynamics under awareness carry over from the basic propositional case to the richer decision-theoretic models fairly straightforwardly. An epistemic state is now a pair 〈M, α〉 where M is a background model and α is an awareness state. Updating an epistemic state with (a formal representation of) an utterance φ proceeds analogously to the propositional case: first the agent becomes aware of all linguistic elements featured in φ, which might now include actions as well; subsequently the background model is updated through 'the awareness window' of the filtered model M↾α[φ] with the information [[φ]]. This boils down to eliminating from the background model all worlds and outcomes where φ is not true that are visible in the awareness window after the agent became aware of all contingencies mentioned in φ. Let A(φ) be the actions mentioned in φ and define:

Definition 9: Epistemic update with (decision-theoretic) awareness. Let 〈M0, α0〉 be the epistemic state of some agent, where now α0 = 〈U, v, A〉. Let φ be an utterance. Then

〈M0, α0〉 [φ] def= 〈M1, α1〉

where α1 = α0[φ] is given by

〈U, v, A〉 [φ] def= 〈U \ P(φ), v↾(U \ P(φ)), A \ A(φ)〉 ,

and M1 is derived from M0 = 〈W, O, A, P, U, Π〉 (indices omitted for readability) as follows:12

W1 = W \ (Wα1 ∩ [[¬φ]])   (but see footnote 12)
O1 = O \ (Oα1 ∩ [[¬φ]])   (but see again footnote 12)
A1 = A
P1 = P(· | W1)
U1 = U↾O1
Π1 : W1 × A1 → ∆(O1) is such that Π1(w, a) = Πw(a)(· | O1)

For clarity: the only non-trivial updates of the background model are the eliminations of worlds and outcomes, which follow exactly the same procedure as in the propositional case. The restrictions of the probabilities and utilities are simply required to keep the structure well-defined.13
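A computational sketch may again be helpful. The encoding is once more our own: probability distributions are dictionaries, and a predicate visible stands in for membership in the awareness window after the update to α1; only the non-trivial clauses of Definition 9 are rendered.

```python
def conditionalise(dist, keep):
    """Restrict a probability dictionary to `keep` and renormalise."""
    kept = {x: pr for x, pr in dist.items() if x in keep}
    total = sum(kept.values())
    return {x: pr / total for x, pr in kept.items()}

def decision_update(W, O, P, Pi, visible, holds_in):
    """Eliminate visible worlds/outcomes where phi fails; condition P and Pi."""
    W1 = {w for w in W if not (visible(w) and not holds_in(w))}
    O1 = {o for o in O if not (visible(o) and not holds_in(o))}
    P1 = conditionalise(P, W1)                    # P1 = P(. | W1)
    Pi1 = {(w, a): conditionalise(dist, O1)       # Pi1(w, a) = Pi_w(a)(. | O1)
           for (w, a), dist in Pi.items() if w in W1}
    return W1, O1, P1, Pi1
```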

12 Note that we have to be a little careful interpreting the tense of the expression φ correctly: updating with the (true) information "Branestawm has not eaten any fruit salad" should not remove outcomes in which he does. On the other hand, if his doctor declares "Branestawm will not suffer from an allergy", it is exactly the outcome worlds that should be removed. If the background model were a fully-fledged temporal model this difficulty would be avoided, but the construction of a decision problem would become much more complex. We prefer to stick to the simpler approximation, and apply common sense to the updates.

13 A different route could also be taken: instead of removing worlds from the information set entirely, one could simply adjust their degree of credence, assigned by P, to zero. Which is appropriate depends on whether you think possibilities ruled out by information are still entertained or not, which might even vary depending on the application under consideration.


3.3 Old information in the light of new awareness

The perhaps most fundamental idea behind our treatment of updates with factual information by agents with limited awareness is that factual information can only be evaluated (at the time it is observed) within the 'window of awareness' of the agent. But that may mean that assumptions can block the elimination of worlds which, when the implicit belief is given up by growing awareness, the agent might or might not want to rule out as well. Here is a simple example to illustrate the sequential interaction of awareness and information updates.

Suppose for simplicity that P = {p, q} and that our agent is unaware of p, assuming it to be true, and aware of q but uncertain about it; this gives us four possible worlds W = {pq, pq̄, p̄q, p̄q̄} (identifying them sloppily with their valuations). Then σ0↾α0 = {pq, pq̄}; if the agent now learns that q is true, she will erase the world pq̄ and her information set will become (according to the definitions we've given) σ1 = {pq, p̄q, p̄q̄}.

Now this means that within her awareness window she has come to believe q, because σ1↾α0 = {pq}; this is an explicit belief by our definition, but, surprisingly, one that is not necessarily stable under awareness updates: when the agent becomes aware of her implicit assumption about p, a mere awareness update that removes p from the set of unmentionables brings with it the world p̄q̄, which has not been ruled out by the previous information update. So, taken together, when an agent processes factual information her implicit beliefs might in fact block correct information uptake. In order to rule out worlds that were not ruled out by an informative update because they were hidden behind an implicit belief, the agent has to, in our system, reprocess or reconsider the previous factual information in the light of her extended awareness.
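The example can be replayed mechanically with the hypothetical update sketch from Section 3.1 above (the variable names are ours):

```python
pq   = frozenset({"p", "q"})
pq_  = frozenset({"p"})        # p true, q false
_pq  = frozenset({"q"})        # p false, q true
_pq_ = frozenset()             # both false

sigma0 = {pq, pq_, _pq, _pq_}
alpha0 = ({"p"}, {"p": 1})     # unaware of p, implicitly assuming it true

# Learning q eliminates only the entertained not-q world:
sigma1, alpha1 = update(sigma0, alpha0, {"q"}, lambda w: "q" in w)
assert sigma1 == {pq, _pq, _pq_}

# A bare mention of p (no factual content) widens the window; the hidden
# not-q world reappears among the entertained worlds:
sigma2, alpha2 = update(sigma1, alpha1, {"p"}, lambda w: True)
assert _pq_ in sigma2 and alpha2 == (set(), {})
```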

The reader's response at this point may be: "But then you have defined information updates in the wrong way." Indeed, it is tempting to give up the idea that information is processed only in the light of awareness and instead assume that information percolates, perhaps secretly, all the way up through to the background model. This would save us quite some trouble, not only in the definition of information uptake, but also in dispensing with the "reprocessing" of factual information.

However, there is an important distinction between observing that q holds and merely hearing it reported that q holds, and one that turns on unawareness. If our agent assumes p holds, she does not think to check whether a report of q is conditional on this assumption or not. The speaker, in turn, might hold the same assumption and might themselves not be willing to commit to the truth of q if they are made aware of p. The point is clearest in the case of lawlike conditionals discussed briefly in Section 2.3. If I hear "If she has an essay to write she will study late in the library" and I am assuming the library is open, it is simply unclear whether the speaker makes the same assumption or is trying to tell me something stronger (that the student is so fanatical she will find a way to sneak in anyway, for instance). Were this not a case of unawareness I could always ask the speaker for clarification, but the distinction hinges on possibilities I am not yet entertaining; it is only in retrospect, when they have been brought to my attention, that I realise the potential ambiguity of the speaker's intent.


This complicates the picture of epistemic update in conversation considerably. Rather than simply carrying around an epistemic state, agents must carry at least a rough memory of the updates that brought them to that state, in order to be able to reinterrogate that memory in the light of new possibilities. Of course this is a more realistic picture of real conversation, but it is a significantly less tractable one. However, it raises one very interesting possibility: that a speaker might come to repudiate a statement she has previously accepted, or even made herself, without having in the strict sense learned anything new in the interim.

In general, these considerations play on the dynamics of awareness in conversation and open up the possibility of quite complicated pragmatic reasoning of various sorts. In the next section we give a concrete example, in (perhaps excruciating) detail, in order to show the power of the formal machinery we have defined.

4 Awareness dynamics, decisions, and pragmatics

So far we have seen how limited awareness and awareness growth can influence a single agent's decision to act. Our conjecture in this paper is that unawareness from inattentiveness is fairly natural and widespread. It is therefore not surprising to find that the notion of awareness also plays a role in a variety of pragmatic phenomena that arise in conversation. In this section we will revisit the introductory Example 2, where Bob, an expert baker, deliberately makes his friend Pickles aware of a contingency that he had overlooked. Adding to the brief informal discussion in Section 1, we will spell out formally the kind of pragmatic reasoning that revolves around the concept of awareness in this little dialogue. (We do not believe that this simple example covers all, or even necessarily the most important, aspects of pragmatic reasoning about awareness in decision-relevant conversations. We merely believe that it is indicative enough of the kind of reasoning we have in mind, and of its possible formalisation.)

4.1 Bob & Pickles revisited

In the Pickles dialogue (Example 2) we would like to model Pickles becoming aware of baking shortbread as a possible action. We will first spell out the background model in order to discuss a simple awareness update. Towards this end, let us first of all fix what Pickles' language should be able to distinguish when he is fully aware of all model-relevant contingencies. With quite some redundancy, we use the following set of proposition letters P:

"eggs": the recipe for shortbread requires eggs;
"yum-cake": Pickles has baked a tasty cake;
"yuck-cake": Pickles has baked a disgusting cake;
"yum-sb": Pickles has baked tasty shortbread;
"yuck-sb": Pickles has baked disgusting shortbread.

Given these propositions we should distinguish in Pickles' background model certain worlds and outcomes, according to their associated valuation functions based on P. But it should be clear that we do not have to consider all possible valuations, because two assumptions rule out quite a number of combinations: firstly, we assume that Pickles can only bake one item, so that there will be no world where Pickles has baked both a cake and shortbread; secondly, certain natural meaning postulates apply (there can be no tasty cake if no cake has been baked, and so on).

Again for simplicity we identify worlds and outcomes with their associated valuation functions; the background model M for Pickles' case contains worlds W = {w1, w2} and outcomes O = {w1, w2, o1, . . . , o4} with the following valuations and utilities (recall that "eggs" means that the recipe for shortbread requires eggs; it is common knowledge that Pickles has no eggs, so we don't bother including any uncertainty about that fact):

      eggs   yum-cake   yuck-cake   yum-sb   yuck-sb     U
w1      0       0           0         0         0        0
w2      1       0           0         0         0        0
o1      0       0           1         0         0       -1
o2      1       0           1         0         0       -1
o3      0       0           0         1         0        1
o4      1       0           0         0         1       -1

The actions Pickles is aware of in the limit are baking cake, baking shortbread, and abstaining from all baking:

A = {cake, sb, abstain}.

Pickles' beliefs about the true state of affairs are represented in his probability distribution P. We leave this parametrised for the sake of the discussion below and set P(eggs) = p. In turn, Pickles' beliefs about the outcomes of his actions are represented in the result distribution Π as follows (all result distributions put probability mass 1 on exactly one outcome in this example, so we only need to specify which outcome that is):

〈wi, abstain〉 ↦ wi
〈wi, cake〉 ↦ oi
〈w1, sb〉 ↦ o3
〈w2, sb〉 ↦ o4

So much for Pickles' epistemic state under full awareness. Let us now model his unawareness of the action "sb" and investigate in more detail the awareness update and its impact on Pickles' decision problem. Since Pickles is unaware of the action "sb", his initial awareness state α0 records this fact: A = {sb}. But Pickles is also unaware of the shortbread-related propositions, so his set of unmentionables U is {eggs, yum-sb, yuck-sb}. Since, as we argued above, Pickles does not have any implicit beliefs about any of these, his assumptions v form a trivial (empty) valuation function. For perspicuity: α0 is given by

U = {eggs, yum-sb, yuck-sb}
v = ∅
A = {sb}


When Bob mentions shortbread, we assume that Pickles becomes aware not only of the action "sb", but also of the naturally associated propositions, that is, the shortbread-related propositions "yum-sb" and "yuck-sb". The result of this awareness update is the trivial awareness state α1 (full awareness and no assumptions): after Bob's remark about shortbread, Pickles represents his decision problem as M↾α1, which is identical to M.

But the decision problem represented in M has Pickles uncertain whether the recipe for shortbread contains eggs or not. This is, for the time being, as it should be. We have assumed that he himself does not know, and that his initial unawareness of shortbread only concealed this true uncertainty. This uncertainty, however, can be overturned by a pragmatic inference that crucially relies on the idea that Bob, an expert baker who knows whether shortbread requires eggs, is cooperative much in the sense of [Gri89] and (enter awareness) has ostensibly made Pickles aware of shortbread. Here is a semi-formal account of this pragmatic reasoning.

Let us first of all ask ourselves what Pickles would do in the initial decision problem, when he was still unaware of shortbread. Since eggs are unavailable, baking a cake would seem stupid. In formal terms, it has an expected utility of -1 in his decision problem δ0 = δ(M↾α0), where the expected utility of an action (in a classical decision problem δ) is defined as:

EUδ(a) def= ∑s∈S P(s) × U(s, a)

In contrast, abstaining from all baking has expected utility 0, so that in δ0 this is clearly the preferred option. But now compare this with Pickles' decision problem under full awareness, δ1. Clearly, baking a cake and abstaining from baking altogether keep the same expected utilities. But we have a new player in the race: baking shortbread. The expected utility of baking shortbread under full awareness is:

EUδ1(sb) = −1× p + 1× (1− p) = 1− 2p

This means that under full awareness baking shortbread will be preferred to abstaining from baking (with expected utility 0) iff p < 0.5. In other words, barring pragmatic considerations, Pickles will bake shortbread only if he thinks it is more likely that the recipe for shortbread does not require eggs. (Naturally, the specific utilities chosen don't matter for the general point that there is some threshold below which p is small enough to justify baking.) But even if his subjective probability favoured the possibility that shortbread does require eggs (p > 0.5), he could still revise these beliefs based on the following pragmatic reasoning: if Bob knows that Pickles faces the decision problem in question (including his unawareness of shortbread), and if furthermore Bob is helpful and cooperative, then Bob's conversational move (deliberately bringing shortbread to awareness) can only be motivated if p < 0.5, for otherwise it would be futile, or would lead Pickles to choose an even worse action. If furthermore Bob is an expert baker who knows for sure whether w1 or w2 is the true state of affairs, Pickles is safe in concluding that the true state of affairs is w1.
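The threshold is easy to check numerically. The following snippet (our own encoding, using the utilities from the table above and P(eggs) = p) tabulates Pickles' best action for a few values of p:

```python
def eu(action, p):
    """Expected utility in delta_1, Pickles' problem under full awareness."""
    if action == "abstain":
        return 0.0
    if action == "cake":
        return -1.0                        # without eggs the cake is awful
    return p * (-1.0) + (1.0 - p) * 1.0    # sb: 1 - 2p

for p in (0.3, 0.5, 0.7):
    eus = {a: eu(a, p) for a in ("abstain", "cake", "sb")}
    print(p, max(eus, key=eus.get))        # sb for p = 0.3, abstain otherwise
```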


4.2 Decision-theoretic relevance

This intuitive formulation points towards the assumption of relevance as driving Pickles' inference. Pickles must be able to explain Bob's question as relevant to the purposes of the conversation; otherwise he would have to conclude that Bob was not being cooperative. If those purposes are, roughly, getting something tasty cooked, it is hard to imagine how Bob could have relevantly intended his question as a literal request for information. However, if it is given the natural interpretation of deliberately and ostensibly bringing a possibility to awareness, the prospect looks much better.

If our promise of a formal solution is to be fulfilled, though, we need a formal notion of relevance that is appropriate for the decision-theoretic setting. As it happens, one already exists for purely informational updates, which we can adapt with very minor changes to the current setting. The measure in question is called the value of sample information (we use a variant of that defined in [RS61]).

The intuition is as follows: the relevance of (true) information in a decision problem can be measured as the change in the agent's expected utility with the information, compared to without it. If we assume the agent is a utility maximiser, she will choose one of the actions that appear best (in terms of expected utility) according to her information; comparing the actual value (given the new information) of these apparently best choices with the true best-action payoffs (again given the new information) gives the amount by which the agent believes her fortunes have improved in light of the information. Stated negatively, information that does not cause the agent to change her mind about her best action is irrelevant.14

We will first define the value of sample information for factual information uptake not involving expanding awareness. Towards this end, we extend the definition of the expected utility of an action in a (classical) decision problem to a set B ⊆ A of actions by taking the average (we will use this for the agent's set of perceived best actions):

EUδ(B) def= (1/|B|) ∑a∈B EUδ(a).

We write BA(δ) for the set of actions with maximal expected utility in δ:

BA(δ) def= {a ∈ A ; ∀a′ ∈ A : EUδ(a′) ≤ EUδ(a)}.

Now suppose that δ is a decision problem representing the actual state of affairs (metaphysical uncertainty may still keep this nontrivial), while γ is the agent's conception of the decision problem she faces (limited by unawareness and complicated by epistemic uncertainty as usual). Then EUδ(BA(γ)) is perfectly well-defined (so long as δ and γ are 'compatible' in the obvious ways) and gives the actual expected utility of the actions the agent believes are best.

14 This can easily be rejected as overly simplistic, since information making an agent more certain of a choice she has already made can intuitively be highly relevant. We do not represent anywhere the higher-order notions of uncertainty that doing this intuition justice would require; however, we also feel that this omission is harmless for the paradigm cases of unawareness that we treat.

Now let δ represent some concrete decision problem, and write δ[φ] for the same problem updated with some (true) factual information φ that does not involve awareness updates. The value of sample information φ in the original decision problem δ, written VSIδ(φ), is given by

VSIδ(φ) def= EUδ[φ](BA(δ[φ]))− EUδ[φ](BA(δ)).

That is, we compare the actual expected utilities (given the information φ) of two sets of actions: those the agent considers best before she learns φ, and those she prefers after she learns φ.

One convenient feature of this definition is that information never has negative value. Another simple point to note is that information has strictly positive value only if it reveals some apparently optimal action to not in fact be optimal; that is, if it removes something from the set of best actions. (To see this, consider the alternatives: (i) the set of best actions stays the same, in which case its value does not change, or (ii) something is added to the set, in which case the new action must have the same expected utility as the previous elements, otherwise those would be removed, so the average does not change.)
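In computational terms, and again in an encoding of our own devising (the prior P0 and posterior P1 are probability dictionaries over states, U maps state-action pairs to utilities), the definition reads:

```python
def expected_utility(P, U, a):
    return sum(P[s] * U[(s, a)] for s in P)

def best_actions(P, U, actions):
    """BA: the actions with maximal expected utility."""
    eus = {a: expected_utility(P, U, a) for a in actions}
    top = max(eus.values())
    return {a for a in actions if eus[a] == top}

def avg_eu(P, U, B):
    """EU of a set of actions, defined as the average over its members."""
    return sum(expected_utility(P, U, a) for a in B) / len(B)

def vsi(P0, P1, U, actions):
    """Posterior value of the new best actions minus that of the old ones."""
    return (avg_eu(P1, U, best_actions(P1, U, actions))
            - avg_eu(P1, U, best_actions(P0, U, actions)))
```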

Turning now to awareness dynamics, we take over this definition exactly as it stands, except for the update: we consider also the value of changes to an epistemic state brought about by updating awareness, rather than by incorporating information. The only difference, then, is in the definition of expected utility: rather than taking this relative to a decision problem, we define it relative to a filtered information state. If 〈M, α〉 is a decision-theoretic epistemic state (a background model and an awareness state) then we define the expected utility EU〈M,α〉(a) simply by taking EUδ(a), where δ is the classical decision problem 'read off' from M↾α according to Definition 7; the definition of BA(·) is extended in the same way. Entirely analogously to the definition given above, we can write EU〈M,α〉[φ](BA(〈M, α〉)) for the consequences (judged in terms of awareness of φ) of the actions the agent considered best before φ was brought to her attention.

We might dub the new definition the "Value of Epistemic Change"; not that it is formally so different from the old, but the supporting intuitions certainly are. We are no longer dealing simply with sample information (intuitively obtained by direct observation) but with changes to the epistemic state of the agent herself (most likely obtained from conversation), albeit ones that we will assume bring her asymptotically closer to the truth. While her updated state may be closer to reality (if she learns true information or becomes aware of contingencies she was unwarrantedly excluding), it may still be far from the whole truth. These necessary caveats given, here is the definition:

VEC〈M,α〉(φ) def= EU〈M,α〉[φ](BA(〈M, α〉 [φ]))− EU〈M,α〉[φ](BA(〈M, α〉)).

4.3 Bob & Pickles made formal (at last)

Armed with this definition we can at last formalise the pragmatic reasoning we attribute to Pickles when he concludes that the recipe for shortbread should not require eggs. The formalisation rests on the assumption that although Pickles does not know which of the two possible worlds is actual (w1, in which shortbread does not require eggs, or w2, in which it does), he knows that Bob the Baker does know which world obtains.


Pickles began in state 〈σ0, α0〉, and Bob's suggestion has brought him (before any pragmatic reasoning takes place) to 〈σ0, α0〉[φ] = 〈σ0, α1〉 (we assume that, whatever logical form φ takes, (a) it confers no information directly, that is, it does not eliminate worlds or change probabilities, and (b) it mentions shortbread and thus induces the awareness updates we need).

Pickles can now imagine two possibilities for the decision problem as Bob sees it: if w1 is the actual world, Bob sees 〈σ0[¬eggs], α1〉 (that is, the same decision problem Pickles does, but updated with the information that the eggs are not needed); otherwise he sees 〈σ0[eggs], α1〉. Call the first possibility δ1 and the second δ2 (conflating the epistemic state with the corresponding decision problem, since these states occur under full awareness anyway).

Now Pickles can compute the value of his epistemic change for each of these possibilities, supposing it to be the actual one:

VECδ1(φ) = EUδ1(BA(〈σ0, α0〉 [φ]))− EUδ1(BA(〈σ0, α0〉))

and likewise for the alternative δ2. This computation rests, of course, on the choice of best action Pickles would make. Let us assume the worst: he is uncertain enough about the recipe that he is unwilling to take the risk, and prefers to abstain from cooking. Then BA(〈σ0, α0〉) = {abstain}.

But then VECδ1(φ) = VECδ2(φ) = 0: for before Bob made his suggestion, Pickles had already decided not to cook anything.

If Pickles is to make sense of Bob's suggestion as relevant, he will have to conclude that it communicates something beyond its ostensive awareness update. It is easy to see that this extra content, if assumed to be true, could only be "¬eggs" if w1 is the actual world, and "eggs" if w2 is actual. So let us compare these possibilities:

VECδ1(φ; ¬eggs) = EUδ1(BA(〈σ0, α0〉[φ; ¬eggs])) − EUδ1(BA(〈σ0, α0〉))
                = EUδ1(sb) − EUδ1(abstain)
                = 1

VECδ2(φ; eggs) = EUδ2(BA(〈σ0, α0〉[φ; eggs])) − EUδ2(BA(〈σ0, α0〉))
               = EUδ2(abstain) − EUδ2(abstain)
               = 0

That is, correctly assuming shortbread not to require eggs would lead Pickles to a positive expected utility gain, while correctly assuming that it does require eggs leads to no gain at all. Since Bob knows which of δ1 and δ2 actually obtains, and since his update (by the relevance requirement) should lead to a positive expected value change, he must be in δ1 and must have intended to convey "¬eggs".15
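The table can be checked mechanically with the helpers from the VSI sketch in Section 4.2 (the encoding, as before, is ours and merely illustrative):

```python
actions = ("abstain", "cake", "sb")
U = {("w1", "abstain"): 0, ("w1", "cake"): -1, ("w1", "sb"): 1,
     ("w2", "abstain"): 0, ("w2", "cake"): -1, ("w2", "sb"): -1}
delta1 = {"w1": 1.0, "w2": 0.0}   # shortbread needs no eggs
delta2 = {"w1": 0.0, "w2": 1.0}   # shortbread needs eggs

before = {"abstain"}              # BA(<sigma0, alpha0>): Pickles had chosen to abstain

for name, P in (("delta1", delta1), ("delta2", delta2)):
    vec = avg_eu(P, U, best_actions(P, U, actions)) - avg_eu(P, U, before)
    print(name, vec)              # delta1 1.0, delta2 0.0
```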

The reader might be forgiven for thinking that this is making a very complicated mountain out of a very simple molehill. We give the derivation in all its considerable detail in order to make clear that this framework fulfills the promises we have made on its behalf: awareness-related relevance reasoning can be given a fully formalised representation. That is not to say that the story we have given for Bob and Pickles necessarily covers all the possibilities, and it is also not to claim that the numerical details of this sort of reasoning are perspicuous for the modeller. We would be remiss in pushing the structural capabilities of the system, however, without substantiating our claims with at least one fully worked-out example.

15 In fact something stronger can be said: if δ2 is the actual state of affairs (and Bob knows this) then no action (that Pickles is aware of) does better than the one he has already chosen, so no reasoning he can conceive of would make Bob's suggestion relevant. This reasoning comes closer to the intuitively natural question "If you know shortbread needs eggs and we don't have any, why would you bring it up at all?"

5 Related work

We have tried to develop a formal notion of unawareness from inattentiveness in this paper, and also to show that such a notion can be useful for accounts of natural language meaning. In other words, the purpose of this paper is explicitly a dual one: to propose a thoroughly grounded and explicit model of a sort of unawareness that has not yet been closely investigated in the rational choice literature, and also to make linguists aware of the possibility of awareness as a significant feature of semantics and pragmatics (to run the awareness flag up the mast, as it were). This section situates our contribution with respect to current models of unawareness and to linguistic theory; we show the contribution of our work and give some suggestions for further applications in linguistics.

5.1 Formal awareness models

The classical reference for the notion of unawareness is [FH88], whose original motivation was developing inference systems that do not suffer from the problem of logical omniscience (that an agent knows all logical consequences of the facts that she knows). Fagin and Halpern point out that there are several distinct reasons to want to do away with logical omniscience, such as 'strict' unawareness of possibilities, computational limitations and resource bounds, lack of knowledge of rules of inference, or issues of attention and focus. Different modelling choices result from different conceptualizations of unawareness, which in turn depend on the intended application of the unawareness model.

This is then also the primary difference between the models presented here and the majority of the unawareness models that have recently sprouted in rational choice theory.16 In rational choice theory, apart from a general interest in modelling reasoning about this notion (see for instance [DLR98; Hal01; HMS06; HMS08a; MR94; MR99]) and in including unawareness in game-theoretic solution concepts (see for instance [Fei04; Fei05; HR06; HMS07; Ozb07]), most applications have focused on reanalyzing, in the light of possible unawareness, certain fairly strong game-theoretic predictions about rational behavior: [Fei04], for instance, shows how the possibility of unawareness helps establish cooperation as a rational solution in the prisoner's dilemma; [HMS06] shows how possible unawareness has otherwise rational agents wholeheartedly engage in speculative trade despite the well-known class of "No-Trade Theorems" (for example [MS82]).17

16 The online unawareness bibliography maintained by Burkhard Schipper (http://www.econ.ucdavis.edu/faculty/schipper/unaw.htm) is a good starting point for readers interested in exploring the rational choice literature further.

17 Very roughly, a "No-Trade Theorem" shows that speculative trade should never take place. For an intuitive basis to this theorem, imagine you are bargaining at a street market for some item of jewellery; you offer what you consider an outrageously low price, and the seller immediately spits in his palm, shakes your hand and shouts "Done!". Your immediate thought is likely to be "If he's so happy with the deal, I've been had", and if given the choice you would prefer to revoke the offer. The unawareness perspective suggests how a more careful buyer and seller can still both go away convinced that they have struck an advantageous deal: you are unaware that the necklace is stolen and thus a risky purchase, and the seller is unaware that you are leaving the country in the morning and thus have nothing to fear.

The source of unawareness that we have been concerned with in this paper is inattentiveness. This is because we believe that it is this kind of unawareness that plays a key role in certain aspects of conversation (see also the next section). The crucial feature of unawareness from inattentiveness is the ease with which it is overturned. To appreciate the difference between unawareness from inattentiveness and that resulting from a lack of conceptual grasp, suppose a teenager is presenting a poorly-reasoned argument in favour of unprotected sex, and you mention the possibility of AIDS; the instant awareness update along the lines we have described is easy to imagine. Treating unawareness from lack of conceptual grasp is like imagining the same conversation as if it were held in the '70s, when the disease was unidentified and the acronym not yet invented: the new possibility being brought to painful awareness was not forgotten but simply had not yet been imagined. It should be clear that where linguistic generalizations about extending awareness through dialogue are concerned it is the former, not the latter, type of awareness dynamics that we should focus on. This is then the main difference in conceptual interpretation of unawareness between our models and the collection of models entertained in economic theory. The notional difference further cashes out in two major differences in the modeling.

The first difference between our linguistically-inspired models and the ones studied for economic applications is that the latter do not consider and spell out assumptions. Recall that in introducing the notion of assumptions as implicit beliefs, we referred to the intuition that in the initial example unaware Bo Peep behaves as if she believes the keys are not in the car. Interestingly, it seems to us that the motivation for explicit modelling of agents' assumptions is not exclusively linguistic. For instance, when Heifetz, Meier, and Schipper seek to explain how unawareness overturns the "No-Trade Theorem" [HMS07], they also need to assume (implicitly) a particular "as-if" behavior of unaware sellers and buyers, namely behavior as if certain favorable or unfavorable contingencies were believed to be true or false. We suggest that the notion of an assumption might be an interesting enrichment of existing unawareness models.

The second major difference between the two systems (or system types) also stems from our goal of applying an unawareness model to (generalizations about) cooperative conversation. To this end, we are interested in describing systematically the effects of awareness updates on decision problems, which requires specifying numerical probabilities and utilities for the newly-introduced possibilities. The main idea for achieving this is filtering through an awareness state.18 The problem of changing awareness has also been addressed, typically in game-theoretic settings, where it is natural to assume that observing a player make a move you were unaware they could make overturns this unawareness. [HMS08b] gives a game-theoretic model and a variant of rationalizability for games with possibly unaware players, and [Fei04; Fei05; HR06] have taken similar equilibrium-based approaches. However, the emphasis in these efforts is on non-cooperative game theory, whose solution concepts do not, strictly speaking, supply vanilla awareness updates irrespective of rationality considerations. The demands of linguistic pragmatics, based as it is on a fundamentally cooperative notion of interaction, are quite different: we would like to pin down pure awareness dynamics first and show how pragmatic reasoning can take place on top of them.

18 We introduced this approach in [FJ07], in a preliminary and in many ways unsatisfactory model which nonetheless contains the seeds of the present account.

Seen in this light, the model of [Ozb07] deserves special mention. Ozbay gives, in a non-cooperative setting, a signalling games model with an equilibrium refinement somewhat similar to our notion of relevance as VSI or VEC. In the model, an aware sender can make an unaware receiver aware of certain contingencies by her choice of signal, but the beliefs the receiver adopts when becoming aware are not determined; rather, they are subject to strategic considerations. Ozbay offers a refined equilibrium notion according to which the receiver should adopt beliefs under extended awareness that prompt him to choose a different action from the one that he had chosen under his initial unawareness. While this kind of constraint on belief formation seems to be what pragmatic reasoning based on a notion of relevance as VSI or VEC provides, it is unclear whether it should apply in all cases of (possibly) conflicting interests. It should, to our mind, apply in the cooperative case, and we have spelled out this kind of reasoning based on the example of Bob and Pickles.

In short, although our work is based on the standard models in the rational choice literature, our notion of unawareness is not quite the same. The linguistic application, and in particular the structural requirements imposed by decision-problem representations, have led us to develop a significantly different model based on similar, but subtly different, intuitions.

5.2 Unawareness in linguistics

Turning to the linguistic literature, the picture is quite different. The notions and intuitions have apparently been present since the work of Lewis, and probably before, but have never been treated as a distinct phenomenon amenable to a unified formal treatment.

In his seminal paper [Lew79], David Lewis gave a unifying account of a wide range of accommodation effects in terms of an evolving "conversational score". Awareness effects as we have described them make a somewhat uncomfortable fit with this picture, since unawareness updates (if we are correct) proceed not by accommodation but by something akin to inherent salience or attention-focussing effects. However, one class of observations given by Lewis fits the awareness story very comfortably: his Example 6, on relative modality.

Lewis is concerned here with modals such as "can" and "must", and their apparent restriction, in normal usage, to a subset of all 'metaphysical' possibilities. There is a large literature on this subject, of course, but certain features recur again and again: a restricted set of possibilities that are 'in play' at any given moment, against which modal statements should be evaluated, and the possibility of adding hitherto unconsidered possibilities into this set as a conversation progresses.

The similarity to the unawareness picture is clear, so we should say something instead about the differences. It might be thought that our 'worlds being entertained' correspond directly to a Stalnakerian context set [Sta78]: the possibilities not ruled out by presuppositions in force. However, there is a crucial difference between our assumptions and the presuppositions that this approach would conflate them with: an assumption is typically something the agent would repudiate if she were made aware of it. This is implicit in our slogan "Unawareness is easily overturned": it is only when overturning unawareness also overturns an implicit belief (that is, when an assumption is given up as unfounded) that the epistemic update is, as it were, visible to the observer, since an awareness update that simply ratifies an implicit belief does not result in a change of behaviour.

Nevertheless the notions of assumption and presupposition are closely linked, and the exact relation between them remains a problem for further study. It seems, for instance, that assumptions can sometimes be 'converted into' presuppositions. Suppose you make a naïve statement due to unawareness of some contingency p. I am aware of p, and see that you seem to have neglected it, but even so I agree with your statement (suppose for example that I explicitly assign very low probability to p). If I choose not to object, it seems that all assertions in our further conversation are contingent on p being false, but in two quite different ways: we might say that you are assuming, while I am presupposing. Whether this is in fact the right distinction is unclear (the possibility of uncertainty about the awareness basis from which a speaker makes assertions complicates matters), but certainly the issue deserves further investigation.

Another difference from the standard approach is the inadvertency of an awareness update: the agent who undergoes such an update cannot choose to remain unaware instead, and no pragmatic reasoning can undo the immediate effects it produces. An interesting topic for further research is the interplay between such automatic updates and the explicit negotiation about which possibilities are 'on the table' displayed in sentences like "Let's leave that possibility out of the picture for the moment".

The closest account to ours that we are aware of is Eric Swanson's treatment of the language of subjective uncertainty [Swa06b], elaborated in [Swa06a]; his "coarse credal spaces" are very closely analogous to the aggregated states and outcomes in our decision-theoretic formulation. We agree wholeheartedly with his insight that "might"-statements can be appropriately used without any expectation that they will be informative in the usual propositional sense of removing possibilities from play, and that such use is in fact based on the hope that adding possibilities will be helpful to the addressee. What awareness brings to the party is a notion of excluded possibility that is both absolute and easily overturned.

Another area where notions of considering or ignoring possibilities appear very natural is in assessing the truth of conditionals in discourse. There has recently been some interest in the question how the acceptability of conditional sentences depends on their sequential presentation [F01; Gil07; Wil08]. Sensitivity to ordering implies some sort of dynamic effect, and awareness dynamics indeed seem a fairly good intuition with which to explain the observed order sensitivity (for a proposal closely related to our views on the matter see [Mos07]).

A final, and much more speculative, area of potential linguistic application is vagueness. We can relate this again to [Lew79]; there Lewis gave an account in terms of changing "standards of precision" that can make an utterance of "France is hexagonal" true (or acceptable) in one context and untrue (or unacceptable) in another. A hint in the direction of unawareness is given by Lewis's observation that, as in the case of possibility modals, 'accommodation' proceeds much more smoothly in the direction of increasing standards of precision than of decreasing them. If a standard is defined in terms of a set of alternatives (for instance "square", "boot-shaped", and "octagonal") and the best alternative from the set is considered "true enough" according to the standard, then introducing new alternatives via awareness can raise the standards of precision but never lower them. One can think of possibilities here as literally the measurement markers on some device: if we include only the 20-mile markers then it's 100 miles to Chicago, but add some more measurement possibilities and all of a sudden it's 106. As we've defined aggregation this won't work, since the equivalence relation gives rise to a sorites paradox; the open question is whether unawareness might add anything to your favourite sorites-proof account. At least this account gives an easy explanation for the asymmetry that Lewis noted.

6 Conclusion

We have tried to cover a lot of ground in this paper; it's quite likely that we haven't succeeded in convincing the reader of everything. This is a good point to take stock, and to state clearly which notions we think are central and which can be discarded while still agreeing with the endeavour in general.

We’ve used three slogans to give intuitions about unawareness:

1. Unawareness is not uncertainty (it cannot be represented formally by uncertainty; it typically takes the form of implicit beliefs).

2. Unawareness is not introspective (it must be represented intensionally; the modeller's language is not the agent's language).

3. Unawareness is easily overturned (it stems from absent-mindedness or inattentiveness; mere mention of possibilities, whatever the linguistic setting, suffices).

In particular the third slogan shows how our notion differs from the version common in the rational choice literature; as far as we can see, this characteristic is key for a linguistic application of the idea.

We've modelled unawareness in terms of filtering a background model through a set of unmentionables (which define a limited agent language) and assumptions, and distinguished between implicit and explicit beliefs. These are key concepts we would like to see generally adopted, whatever the specific implementation.

In decision theory we've made a more specific suggestion: that decision problems be considered a subjective representation of the relevant features of the situation, and that unawareness models be used whenever that subjective notion of relevance may undergo revision over time. The technical details of our model produce numerically precise and potentially quite complex revisions of decision problems by way of simple updates to awareness structures.


In linguistics we've argued that awareness dynamics are a natural feature in conversation. We have offered a toy example of pragmatic reasoning centered on a conversational move intended first and foremost to bring a possibility to awareness. A further example is offered by the speculative suggestions of the previous section: they are intended not as assertions but simply to make the reader aware of the impressive range of possibilities this notion might fruitfully be applied to.

References

[BC07] Oliver Board and Kim-Sau Chung. "Object-Based Unawareness". In: Logic and the Foundations of Game and Decision Theory, Proceedings of the Seventh Conference. Ed. by G. Bonanno, W. van der Hoek, and M. Woolridge. 2007.

[Byr89] Ruth M. J. Byrne. "Suppressing Valid Inferences With Conditionals". In: Cognition 31.1 (1989), pp. 61–83.

[DLR98] Eddie Dekel, Barton L. Lipman, and Aldo Rustichini. "Standard State-Space Models Preclude Unawareness". In: Econometrica 66.1 (1998), pp. 159–173.

[Fei04] Yossi Feinberg. "Subjective Reasoning - Games with Unawareness". Research Paper No. 1875, Stanford University. Nov. 2004. URL: https://gsbapps.stanford.edu/researchpapers/library/RP1875.pdf.

[Fei05] Yossi Feinberg. "Games with Incomplete Awareness". Research Paper No. 1894, Stanford University. May 2005. URL: http://www.stanford.edu/%7Eyossi/Files/Games%20Incomplete%20DP.pdf.

[FH88] Ronald Fagin and Joseph Y. Halpern. "Belief, Awareness and Limited Reasoning". In: Artificial Intelligence 34 (1988), pp. 39–76.

[F01] Kai von Fintel. "Counterfactuals in a Dynamic Context". In: Ken Hale: A Life in Language. Ed. by Michael Kenstowicz. MIT Press, 2001, pp. 123–152.

[FJ07] Michael Franke and Tikitu de Jager. "The relevance of awareness". In: Proceedings of the Sixteenth Amsterdam Colloquium. Ed. by Maria Aloni, Paul Dekker, and Floris Roelofsen. 2007, pp. 91–96.

[Gil07] Anthony S. Gillies. "Counterfactual Scorekeeping". In: Linguistics and Philosophy 30 (2007), pp. 329–360.

[Gri89] Herbert Paul Grice. Studies in the Way of Words. Harvard University Press, 1989.

[Hal01] Joseph Y. Halpern. "Alternative Semantics for Unawareness". In: Games and Economic Behavior 37 (2001), pp. 321–339.

[HMS06] Aviad Heifetz, Martin Meier, and Burkhard C. Schipper. "Interactive Unawareness". In: Journal of Economic Theory 130 (2006), pp. 78–94.


[HMS07] Aviad Heifetz, Martin Meier, and Burkhard C. Schipper. Unawareness, Beliefs and Games. Tech. rep. 6. Bonn Econ Discussion Papers, 2007.

[HMS08a] Aviad Heifetz, Martin Meier, and Burkhard C. Schipper. "A canonical model for interactive unawareness". In: Games and Economic Behavior 62 (2008), pp. 304–324.

[HMS08b] Aviad Heifetz, Martin Meier, and Burkhard C. Schipper. "Dynamic Unawareness and Rationalizable Behavior". Unpublished Manuscript. Apr. 2008.

[HR06] Joseph Y. Halpern and Leandro Chaves Rego. "Extensive Games with Possibly Unaware Players". Full paper unpublished; a preliminary version appeared in: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, 2006, pp. 744–751. URL: http://www.cs.cornell.edu/home/halpern/papers/aamas06.pdf.

[Jef65] Richard Jeffrey. The Logic of Decision. Chicago: University of Chicago Press, 1965.

[Lew79] David Lewis. "Scorekeeping in a language game". In: Journal of Philosophical Logic 8 (1979), pp. 339–359.

[Mos07] Sarah Moss. "On the Pragmatics of Counterfactuals". Unpublished Manuscript, MIT. 2007.

[MR94] Salvatore Modica and Aldo Rustichini. "Awareness and Partitional Information Structures". In: Theory and Decision 37 (1994), pp. 107–124.

[MR99] Salvatore Modica and Aldo Rustichini. "Unawareness and Partitional Information Structures". In: Games and Economic Behavior 27 (1999), pp. 265–298.

[MS82] P. Milgrom and N. Stokey. "Information, trade and common knowledge". In: Journal of Economic Theory 26 (1982), pp. 17–27.

[Ozb07] Erkut Y. Ozbay. "Unawareness and strategic announcements in games with uncertainty". In: Proceedings of TARK XI. Ed. by Dov Samet. 2007, pp. 231–238.

[RS61] H. Raiffa and R. Schlaifer. Applied Statistical Decision Theory. MIT Press, 1961.

[SL08] Keith Stenning and Michiel van Lambalgen. Human Reasoning and Cognitive Science. MIT Press, 2008.

[Sta78] Robert Stalnaker. "Assertion". In: Syntax and Semantics. Ed. by Peter Cole. Vol. 9. New York: Academic Press, 1978, pp. 315–332.

[Swa06a] Eric Swanson. "Interactions with Context". PhD thesis. Massachusetts Institute of Technology, 2006.

[Swa06b] Eric Swanson. "Something 'Might' Might Mean". Unpublished Manuscript, University of Michigan. 2006.

[Wil08] J. Robert G. Williams. "Conversation and Conditionals". In: Philosophical Studies 138.2 (Mar. 2008), pp. 211–223.