
Interpreted systems for situation analysis

Anne-Laure Jousselme and Patrick Maupin

R & D Defence Canada - Valcartier
2459 Pie XI North
Val-Bélair, QC, G3J 1X5, Canada

Email: {Patrick.Maupin, Anne-Laure.Jousselme}@drdc-rddc.gc.ca

Abstract— This paper details and deepens a previous work where the Interpreted Systems semantics was proposed as a general framework for Situation Analysis (SA). This framework is particularly efficient for representing and reasoning about knowledge and uncertainty when performing situation analysis tasks. Our approach to SA is to base our analysis on the production of state transition systems consisting in the set of all temporal trajectories possibly obtained upon the execution of a given set of agents' protocols. Thus seen, the SA task involves the definition of more or less subtle reasoning about graph structures. A formal situation analysis model is defined as an interpreted algorithmic belief change system. In such a model, the notions of situation, situation awareness and situation analysis are provided. The analysis of the situation is done through the verification of implicit notions of knowledge with temporal properties. Implicit knowledge is distinguished from explicit knowledge, and situation awareness is defined in terms of the computing power of resource-bounded agents. A general plausibility measure allows us to model belief while making the link with quantitative representations of uncertainty such as probabilities, belief functions and possibilities. The proposed model of the Situation Analysis process, while compatible with the traditional implicit representation of knowledge found in modal logic, allows us to link the decision processes of the agents and their awareness of the situation with the observations they make about the environment.

Keywords: Situation, Situation awareness, Situation assessment, Situation analysis, Interpreted systems.

I. INTRODUCTION

In this paper we pursue the work presented in [1], where the Interpreted Systems semantics was proposed as a general framework for Situation Analysis (SA). This framework is particularly efficient for representing and reasoning about knowledge and uncertainty when performing situation analysis tasks. Here we detail and deepen the exposition of the interpreted systems semantics and give mathematical definitions of the concepts we feel are needed to achieve a formal binding between the notions of SA and Situation Awareness. Our approach to SA is to base our analysis on the production of state transition systems consisting in the set of all temporal trajectories possibly obtained upon the execution of a given set of agents' protocols. Thus seen, the SA task involves the definition of more or less subtle reasoning about graph structures.

This follows a standard intuition about levels 2 and 3 of the standard JDL Information Fusion model where, according to Steinberg and Bowman [2], Situation Assessment (level 2) is the "estimation and prediction of entity states on the basis of inferred relations among entities", whereas Impact Assessment (level 3) "is usually implemented as a prediction, drawing particular kinds of inferences from Level 2 associations". The following sections briefly review the principal formal models of information fusion as they relate to the SA task (Section I-A) and the basic problems tackled in this paper concerning the notions of Situation, Situation Awareness and Situation Assessment (Section I-B).

A. Formal Models of Higher Levels of Information Fusion

Fusion is often defined as the process of combining information in order to estimate or predict entity states. In the literature, three kinds of information fusion models can be distinguished [2], namely process, functional and formal models, which have already been proposed as guides for the design and implementation of information fusion systems. Process models include John Boyd's Observe-Orient-Decide-Act loop, the Predict-Match-Extract-Search loop, and the UK Intelligence community's Collection, Collation, Evaluation, Dissemination cycle [3]. The JDL functional model [2], [4] is one of the most frequently referenced, with its different levels and corresponding applications. The usual functions considered for information fusion are conveniently segregated into so-called levels of processing, even though practice has shown a less clear-cut specialization.

The main advantages of the formal models of information fusion are that they (1) allow the systems to be verified before they are fielded, (2) provide specification means that allow a more rapid and robust software implementation, and (3) allow the complexity of representations and tasks to be characterized. Usually the formal models are verified using simulation (with or without scenarios), theorem provers, model checking techniques, or combinations of these. Finally, formal models allow theories to be confirmed or falsified by making results reproducible.

We have conveniently distinguished the formal approaches to the problem of situation analysis and higher levels of information fusion into two large formal modeling families, as illustrated in Figure 1.

It is worth mentioning that members of these families of formal models are sometimes, mathematically and practically speaking, closely related, and differences are often only a question of terminology. First we distinguish Algebraic Frameworks from the Formal Methods commonly used in computer science and software engineering.

[Figure 1 here.] Fig. 1. Formal models for information fusion. (The figure splits formal models into Algebraic Frameworks (Interpreted Systems, Generalized Information Theory, Category Theory) and Formal Methods (Highly Formal Ontologies, Specification Languages), placed within the software design cycle of specification and verification, with the associated verification strategies: model checking, completeness theorem proving, theorem proving and induction.)

These Formal Methods can be further distinguished into Highly Formal Ontologies, such as the ones proposed in [5] for Situation Awareness, and Specification Languages, such as the well-known UML and DARPA's DAML, also used to model the Situation Awareness process. Usually, these methods are supported by specialized software and displays that either guide the engineer through the usual software development cycle or, as in the case of highly formal ontologies, help him verify whether the specified ontologies are complete and sound. On the other hand, specifications of situation analysis and, generally speaking, of high-level information fusion based on what we have called Algebraic Frameworks include work on Category Theory by Kokar et al. [6], where information fusion processes are studied, the use of Generalized Information Theory [7] for the modeling of information processing and uncertainty characterization at all levels of the JDL model, and finally a new approach based on the Interpreted Systems semantics, used originally for the analysis of distributed systems [8] and proposed in [1] to model the Situation Analysis process. These three approaches to high-level information fusion typically use different verification (or query) strategies. While category theory uses the traditional theorem proving approach and generalized information theory is mainly based on the use of induction, the interpreted systems approach is traditionally based on the very efficient technique of model checking. In practice, matters are however not so clear-cut, and cross-fertilization between the various specification and verification techniques is common.

While the lower levels of information fusion, L.0 and L.1, already rest on solid formal foundations, at least as far as the modeling of information, uncertainty and aggregation processes is concerned, the higher levels of information fusion, L.2, L.3 and L.4, still lack a unified formal theory, although many works have recently emerged [6], [9]. The mathematical modeling approach is typically based on a single abstract theoretical framework used to model either information or processes. For instance, one will use probability theory and model random variables carrying information about perception or measurements. On the other hand, mathematical theories can instead be used to model explicitly the information fusion process. A good example of this approach is given by the work of Kokar and collaborators, where model theory [10] and category theory [6] are used to model information fusion processes and, for instance, to demonstrate in [11] that decision fusion is a subclass of data fusion.

Because of space limitations we do not review the very interesting theories offered by theoretical computer science, economics and game theory for reasoning about processes. Extending the notion of SA to the general idea of representing and reasoning about processes, one can consider extensive games as process models useful for the representation of mental states such as uncertainty and knowledge. The state space representation of knowledge and common knowledge proposed by Aumann [12] in economics and game theory is also an interesting model, although it lacks an explicit representation of the passage of time. Computer science has proposed process calculi such as the Hoare-Dijkstra calculus, process algebras, and modal and dynamic logics.

The Interpreted Systems semantics (Section II-A) combines many features of these theories and can be seen as a conjunction of the state space and the extensive game modeling approaches, while being much more explicit about the agents' sources of information. While the rational behavior of interacting groups of agents can be efficiently modeled using the interpreted systems semantics, the approach also offers the flexibility needed to model the most common departures from rationality, including resource-boundedness. Section II-B.1 deals explicitly with the algorithmic modeling of processing power limitations. This discussion can easily be extended to deal with most kinds of mental limitations. Section II-B.2 deals with a specific type of Interpreted System used to model plausibilities, a general concept allowing uncertainty to be modeled and reasoned about.

In practice, formal models such as the Interpreted Systems semantics have two principal aims: (1) to describe the agents' reasoning schemes and (2) to describe the way modellers reason about such distributed systems. This distinction between levels of analysis leads to the distinctions and definitions of Section III and, most particularly, to our definitions of Situation, Situation Awareness and Situation Assessment. Practical applications of these concepts are presented in Section IV.

B. Situations in state spaces

Situation analysis (SA) in the military domain can be performed at the tactical, operational or strategic level. At the tactical level one is faced with an adversary hiding its presence or its true identity by using stealth or dual-use technologies. High-velocity targets also add a level of complexity to the SA task at this level. Adversaries also try to confuse one another by acting with audacity, by causing surprises such as those triggered by using new tactics for which no corresponding course of action exists. The knowledge of terrain characteristics is central at the tactical level. At the operational level one is confronted with adversaries hiding their intentions or plans, as well as avoiding giving a precise idea of the extent of their resources. At the strategic level the previous issues also have to be taken into account when performing situation analysis, but here the adversaries attempt to shatter the enemy's will and cohesion by using the political, diplomatic, psychological and financial means at their disposal.

There have been many attempts to formalize the process of situation analysis for the design of decision support systems and the automation of the main tasks. From the above sketch one can abstract the main goals of situation analysis in terms of reasoning about relations holding among entities of interest, and also in terms of reasoning about the properties of the relational structures thus revealed. Modeling the interaction between agents' knowledge, awareness, resources, abilities and plans in a unified framework seems necessary to encompass the above-mentioned levels of SA. In [13] we have shown how to bind plans and abilities to the Interpreted Systems framework, the latter used to model epistemic changes occurring through time.

In probability theory, Shafer [14] defines a probability-tree structure involving special events designated as situations, in an attempt to reconcile the subjective and objective interpretations of probability. In this work, Shafer explicitly links a state space representation and the flow of time for the definition of a situation. In logic, rather than defining an inference process, McAllester [15] defines the notion of a situation, which is a more general concept in the semantic theoretical framework, and defines a meaning function such that each proposition is either TRUE or FALSE in each situation. In the terminology of mathematical logic the situations are also called models, and the meaning of an expression is called the denotation of that expression. A language of propositions together with a set of models (situations) and a way of assigning a truth value to a proposition in a model is called a logic. This work by McAllester encouraged us a few years ago to pursue research on logic-based approaches for the formalization of situation analysis.

In Game Theory it is even possible to draw a correspondence between the type of game played and the notion of a situation. In this case this is a meta-system concept, since it involves classifying and comparing systems. Zero-sum games, repeated games and perfect information games correspond to typical situations. A game can be characterized by a set of rules called the rules of the game, including normative accounts of players, actions, payoffs and information. Indeed, for Rasmussen [16], "the modeller's objective is to describe a situation in terms of the rules of a game so as to explain what will happen in the situation". The aim of a player in game theory is to plan actions (strategies) based on the information it has access to, in order to maximize its payoffs. The combination of strategies used by a set of agents playing the same game is called the equilibrium. This equilibrium gives the modeller a precise idea of what will result from the game. Given the set of possible equilibria, the goal of the modeller will be to calculate the possible outcome or find the best possible equilibrium.

Our general definition of a situation (Def. 3.1) is an attempt to bring together the best of these worlds.

II. BACKGROUND

This section gives the basic definitions of the Interpreted Systems semantics (Section II-A), including truth sets as well as the basic modal knowledge and temporal operators. In Section II-B we show how the notion of Interpreted System can be tailored to model various notions of resource-boundedness, including situation awareness, as well as mental states such as uncertainty or preferences.

A. Interpreted Systems

Let A = {1, 2, 3, . . . , n, e} be a set of agents, where e is a special agent denoting the environment. Each agent is assumed to be in some local state li at a given time, encapsulating all the information the agent has access to. This information can include past states, actions and also information about the protocols used by the agent. The local state of the environment is denoted by le and encodes the relevant information that is not encoded in the agents' local states. In particular, le can encode the objective state of the world, i.e. the state not attainable by the agents' perception and reasoning means. For SA applications, apart from the objective states of the world, le can contain maps of the environment, network information or any other information describing the outside world. The agents' local states can encode partial or imperfect views of this outside world.

A global state s is an element of S ⊆ L1 × . . . × Ln × Le, where Li is the set of the possible local states of agent i. The local state of Agent i corresponds to the ith component of the global state s = (l1, . . . , ln, le). A sequence of global states s1, s2, . . . is called a run r over S and is a function from time to global states. A system R is a set of runs, and (r,m) denotes a point in R, consisting of a run r and a time m. The state of the system at time m in the run r is r(m). If r(m) = (l1, . . . , ln, le) is the global state at point (r,m), then re(m) = le and ri(m) = li for i = 1, . . . , n are respectively the environment's and the agents' local states at point (r,m). A round m in run r takes place between time m−1 and time m.

Actions are the cause of changes in the system and are performed by the agents and the environment in rounds. Let ACTi be the set of actions that can be performed by agent i, and let ACTe be the set of actions performed by the environment. A joint action is an element of ACTe × ACT1 × . . . × ACTn, i.e. a tuple (ae, a1, . . . , an) of actions performed by the set of agents and the environment, where ae is the action performed by the environment and ai is the action performed by agent i.

A protocol Pi for agent i is a mapping from the set Li of local states of agent i to nonempty sets of actions in ACTi, i ∈ A. A protocol is a function on local states rather than on global states. A joint protocol P is a tuple (P1, P2, . . . , Pn) consisting of the protocols of each of the agents i, i = 1, . . . , n. This corresponds to the definition of an equilibrium in game theory mentioned in the introduction of this paper. Note that Pe, the protocol of the environment, is not included in the joint protocol. Rather, the protocol of the environment is supposed to be given (or at least estimated), and P and Pe can be viewed as the strategies of opposing players.
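As a minimal sketch of this vocabulary, and with hypothetical names, local states can be coded as data values, global states as tuples whose last component is the environment's, and protocols as functions from local states to nonempty sets of actions:

```python
# A minimal sketch (hypothetical names) of the interpreted-systems vocabulary:
# local states, global states s = (l_1, ..., l_n, l_e), and protocols
# P_i : L_i -> nonempty sets of actions, defined on local states only.
from typing import Callable, FrozenSet, Tuple

LocalState = str
GlobalState = Tuple[LocalState, ...]   # agents' components first, environment's last
Action = str
Protocol = Callable[[LocalState], FrozenSet[Action]]

def local_of(s: GlobalState, i: int) -> LocalState:
    """The local state of agent i is the i-th component of the global state."""
    return s[i]

# Example: a protocol that senses while idle and reports otherwise.
def p1(l: LocalState) -> FrozenSet[Action]:
    return frozenset({"sense"}) if l == "idle" else frozenset({"report"})
```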

A context γ is a tuple (Pe, S0, τ, Ψ), where Pe is a protocol for the environment, S0 is a nonempty subset of S describing the state of the system at the initiation of the protocol, τ is a transition function and Ψ is an admissibility condition on runs. The environment's protocol Pe can be used to model the adversary's strategies, or simply to model random errors or events in a given situation. The transition function τ assigns to each global state and each joint action the resulting global state obtained after performing the joint action; it thus describes which actions can be performed from a given global state. The admissibility condition Ψ on runs specifies which runs are "legal". In practice, Ψ can be used to shrink down a large system, or to model fairness conditions. Formally, Ψ is a set of runs, and r ∈ Ψ if and only if r satisfies the condition Ψ. Note that the description of the behavior of a system is contextual, i.e., a joint protocol P is always described within a given context γ.
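To make the generation of R concrete, here is a finite-horizon sketch in Python, with hypothetical signatures for the protocols and for τ; in this reading, the set of runs is obtained by unfolding every possible joint action from every reachable global state (Ψ would then filter the resulting runs):

```python
from itertools import product

def generate_runs(S0, protocols, p_env, tau, horizon):
    """All run prefixes of length horizon+1 generated by the joint protocol
    (protocols) and the environment protocol p_env from initial states S0.
    tau(s, joint_action) returns the successor global state; the admissibility
    condition Psi would be applied afterwards to discard non-legal runs."""
    runs = [[s0] for s0 in S0]
    for _ in range(horizon):
        extended = []
        for run in runs:
            s = run[-1]
            env_choices = p_env(s[-1])                             # actions of e
            agent_choices = [protocols[i](s[i]) for i in range(len(protocols))]
            for joint in product(env_choices, *agent_choices):     # (a_e, a_1, ..., a_n)
                extended.append(run + [tau(s, joint)])
        runs = extended
    return runs
```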

Let Φ be a set of primitive propositions describing basic facts about the system. Formulas are built using the classical operators of propositional logic. The set of formulas is closed under ¬ and ∧ (negation and conjunction); hence, given two formulas φ and ψ, ¬φ and φ ∧ ψ are also formulas. Let L(Φ) denote the language of Φ, i.e. the set of well-formed formulas.

An interpreted system I consists of a pair ⟨R, π⟩, where R is a system over a set S of global states and π is an interpretation for the propositions in Φ over S, which assigns truth values (either true or false) to the primitive propositions at the global states. Thus, for every p ∈ Φ and state s ∈ S, we have π(s)(p) ∈ {0, 1}. The satisfaction of formulas in L(Φ) is given by:

(I, r, m) ⊨ p iff π(r(m))(p) = 1
(I, r, m) ⊨ φ ∧ ψ iff (I, r, m) ⊨ φ and (I, r, m) ⊨ ψ
(I, r, m) ⊨ ¬φ iff (I, r, m) ⊭ φ

The truth set of φ is the set of points satisfying φ, i.e.:

||φ|| = {(r, m) ∈ I | (I, r, m) ⊨ φ}   (1)

Defining local states for each agent induces a series of equivalence relations ∼i, for i ∈ A, over the points of the system: Ri(r,m) = {(r′,m′) | (r,m) ∼i (r′,m′)} is the equivalence class of point (r,m), grouping the points at which i has the same information and which are therefore equivalent to (r,m). This defines the standard model of the modal logic S5, represented by the operator Ki, i ∈ A, whose semantics is:

(I, r, m) ⊨ Kiφ iff (I, r′, m′) ⊨ φ for all (r′, m′) such that (r, m) ∼i (r′, m′)

which reads "at point (r,m) in the interpreted system I, agent i knows φ if and only if φ holds in all the points it cannot distinguish from (r,m)", i.e. the points in which i is in the same local state ri(m).
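This clause translates directly into a check over points. In the sketch below, the helpers local and holds are hypothetical: local(q, i) returns agent i's local state at point q, and holds(φ, q) decides φ at q.

```python
def holds_K(i, phi, point, points, local, holds):
    """K_i phi at `point`: phi must hold at every point q that agent i cannot
    distinguish from `point`, i.e. every q where i has the same local state."""
    return all(holds(phi, q) for q in points
               if local(q, i) == local(point, i))
```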

Besides all compositions of individual knowledge operators, like KiKjφ ("Agent i knows that Agent j knows φ") or Ki¬KjKkψ ("Agent i knows that Agent j does not know that Agent k knows ψ"), etc., we are interested in more global statements involving a group of agents G. Group knowledge operators are then defined, with their semantics in interpreted systems [17]:

(I, r, m) ⊨ SGφ iff there exists i ∈ G such that (I, r, m) ⊨ Kiφ
(I, r, m) ⊨ EGφ iff (I, r, m) ⊨ Kiφ for all i ∈ G
(I, r, m) ⊨ CGφ iff (I, r, m) ⊨ EG^k φ for all k > 0
(I, r, m) ⊨ DGφ iff (I, r′, m′) ⊨ φ for all (r′, m′) such that (r, m) ∼i (r′, m′) for all i ∈ G

SGφ reads "someone in group G knows φ", EGφ reads "everyone in group G knows φ", CGφ reads "φ is common knowledge in group G" (EG^k denoting k nested applications of EG), and DGφ reads "group G has distributed knowledge of φ". Distributed knowledge is the combined knowledge of all the members of G [17], and corresponds to the knowledge that would be ascribed to an agent that would have fused the individual knowledge of the members of G.
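Except for common knowledge, which is a fixed point of EG and is omitted here, these operators reduce to direct quantifications over Ki; distributed knowledge pools the group's information by intersecting the agents' indistinguishability classes. A sketch reusing holds_K from above:

```python
def holds_S(G, phi, point, points, local, holds):
    """S_G phi: someone in G knows phi."""
    return any(holds_K(i, phi, point, points, local, holds) for i in G)

def holds_E(G, phi, point, points, local, holds):
    """E_G phi: everyone in G knows phi."""
    return all(holds_K(i, phi, point, points, local, holds) for i in G)

def holds_D(G, phi, point, points, local, holds):
    """D_G phi: phi holds at every point indistinguishable from `point` for
    ALL agents of G (intersection of the individual equivalence classes)."""
    return all(holds(phi, q) for q in points
               if all(local(q, i) == local(point, i) for i in G))
```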

Finally, to reason about the temporal evolution of thesystem, temporal operators are defined:

(I, r, m) ⊨ ○φ iff (I, r, m+1) ⊨ φ
(I, r, m) ⊨ φUψ iff there exists m′ ≥ m such that (I, r, m′) ⊨ ψ and, for all m′′ such that m ≤ m′′ ≤ m′, (I, r, m′′) ⊨ φ

○φ reads "φ will be true at the next time", and φUψ reads "φ is true until ψ is true". These basic temporal operators of LTL (Linear Temporal Logic) can be extended to include shortcuts such as Fφ ("eventually φ") or Gφ ("always φ"), but also to include CTL (Computation Tree Logic) operators such as Aφ ("for all sequences, φ").
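Along a single run, the two basic operators evaluate as forward scans in time. A sketch over finite run prefixes (real runs are infinite), with the same hypothetical holds helper:

```python
def holds_next(phi, run, m, holds):
    """"next phi": phi is true at time m + 1 of the run."""
    return m + 1 < len(run) and holds(phi, run[m + 1])

def holds_until(phi, psi, run, m, holds):
    """"phi U psi": psi eventually holds at some m' >= m, and phi holds at
    every time between m and m' (inclusive, as in the definition above)."""
    for m2 in range(m, len(run)):
        if holds(psi, run[m2]):
            return all(holds(phi, run[k]) for k in range(m, m2 + 1))
    return False

def holds_F(phi, run, m, holds):
    """"eventually phi", definable as (true U phi)."""
    return any(holds(phi, run[k]) for k in range(m, len(run)))
```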

B. Different kinds of interpreted systems

In this section we detail different versions of interpreted systems providing a basis for our situation analysis modeling problem. Based on these models of interpreted systems, and combining their principal features, we propose in Section III the notion of an interpreted belief change system as a topic for further investigation.

1) Interpreted Algorithmic Systems: The knowledge notion introduced previously through the modal operator Ki is an implicit notion of knowledge: the agents are not assumed to compute this knowledge; implicit knowledge is rather the vision of the analyst of the system. Indeed, this notion of knowledge is based on the standard modal logic system S5, which suffers from the logical omniscience problem: the agents are assumed to know all tautologies as well as all the logical consequences of their knowledge, making them perfect reasoners. A notion of explicit knowledge, called algorithmic knowledge, has been introduced in [18]. The latter is an internal notion of knowledge that the agent can compute, given an internal algorithm. This explicit notion of knowledge is a general epistemic model of agent resource-boundedness and is at the basis of our definition of situation awareness in Section III.

While Kiφ denotes the fact that i is granted the knowledge of φ (without itself necessarily knowing this fact), Xiφ is used in [18] to denote the fact that i can compute that it knows φ. Each agent owns a local algorithm Ai allowing it, at each point of the system, to decide if it knows φ. Such an algorithm Ai takes as inputs a point (r,m) together with a formula φ, and outputs "Yes" if i knows that φ is true, "No" if i does not know if φ is true, and "?" if i is unable to compute whether it knows φ. Ai is sound if it never returns wrong answers, and it is complete if it never returns "?". It follows that for a sound and complete algorithm, explicit knowledge equals implicit knowledge (Xiφ ⇔ Kiφ).

Definition 2.1 (Interpreted Algorithmic System [18]): An interpreted algorithmic system (IAS) is an interpreted system I in which the local state of each agent i at point (r,m) is a pair ⟨Ai, li⟩, where Ai is i's local algorithm and li is i's local data. Algorithmic knowledge, denoted by the modal operator Xi, is then defined by:

(I, r, m) ⊨ Xiφ iff Ai(φ, li) = "Yes", for Ai = algi(r,m) and li = datai(r,m)
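A sketch of this definition: the local state carries the algorithm and the local data, and Xiφ holds exactly when the algorithm answers "Yes". The literal-matching algorithm below is our own illustrative example; it is sound (never wrong) but incomplete (often answers "?").

```python
def holds_X(phi, local_state):
    """X_i phi: agent i's local algorithm answers "Yes" on (phi, l_i)."""
    alg, data = local_state          # local state <A_i, l_i> of Definition 2.1
    return alg(phi, data) == "Yes"

def literal_alg(phi, data):
    """Answers "Yes" only when phi appears literally among the local data."""
    return "Yes" if phi in data else "?"

print(holds_X("phi_r", (literal_alg, {"phi_r"})))    # True
print(holds_X("phi_AOI", (literal_alg, {"phi_r"})))  # False: the algorithm says "?"
```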

2) Interpreted Plausibility Systems: Kripke structures allow belief to be modeled instead of knowledge by simply changing the properties of the binary relation between points. In particular, instead of an equivalence relation, belief has often been represented by binary relations that are Euclidean and transitive (system K45) or serial, Euclidean and transitive (system KD45). Contrary to systems for knowledge, in which only true formulas can be known, systems for belief do not satisfy the truth axiom (T), as false formulas can be believed.

An alternative to this model of belief, which turns out to be more general, is proposed in [19]. The agent's beliefs are determined not through binary accessibility relations but rather by plausibility spaces Pi(r,m) = (S(r,m,i), PL(r,m,i)) for each agent i and each point (r,m) of I. PL is a plausibility measure PL : 2^S → D, defined in [20] to satisfy the following properties:

PL(∅) = ⊥D
PL(S) = ⊤D
PL(A) ≤ PL(B) if A ⊆ B

D is a set partially ordered by the relation ≤ (a poset). This notion of plausibility measure generalizes, among others, probability measures, the belief and plausibility measures of Shafer [21], and the possibility and necessity measures of Zadeh [22], in which cases D = [0, 1], ⊥D = 0 and ⊤D = 1. PL must not be confused with Shafer's plausibility measure (i.e. Pl in Section IV-C).
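For a finite S and D = [0, 1], so that ⊥D = 0 and ⊤D = 1, the three properties can be checked exhaustively, and the uniform probability measure passes the check, illustrating that probabilities are one instance of PL. A minimal self-contained sketch:

```python
from itertools import chain, combinations

def is_plausibility_measure(S, pl):
    """Check PL(empty) = 0, PL(S) = 1 and monotonicity under inclusion,
    over all subsets of a finite state set S, taking D = [0, 1]."""
    subsets = [frozenset(c) for c in
               chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))]
    if pl(frozenset()) != 0.0 or pl(frozenset(S)) != 1.0:
        return False
    return all(pl(A) <= pl(B) for A in subsets for B in subsets if A <= B)

S = {"s1", "s2", "s3"}
print(is_plausibility_measure(S, lambda A: len(A) / len(S)))  # True (uniform probability)
```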

Definition 2.2 (Interpreted Plausibility System [19]): An interpreted plausibility system (IPS) is a tuple (R, π, P1, . . . , Pn), where (R, π) is an interpreted system and Pi is a plausibility assignment mapping each point (r,m) to a plausibility space Pi(r,m) = (S(r,m,i), PL(r,m,i)) describing the relative plausibility of events from the point of view of agent i at (r,m). In such a system, the semantics for belief is:

(I, r, m) ⊨ Biφ iff (I, r, m) ⊨ Ki(PL(φ) ≥ PL(¬φ))   (2)

An agent i is said to believe φ if it knows that φ is more plausible than ¬φ in all the worlds it considers plausible [19].

The plausibility of a formula of L(Φ) at point (r,m), according to agent i, is then defined as the plausibility of its truth set:

PL(r,m,i)(φ) = PL(r,m,i)(||φ||)   (3)

Although Definition 2.2 is quite general, some restrictions naturally arise in most applications. In particular, we may want S(r,m,i) = Ri(r,m), that is, that the agent considers plausible only states that are possible according to its knowledge, i.e. its local state. Moreover, we may also often require that if (r,m) ∼i (r′,m′) then Pi(r,m) = Pi(r′,m′), that is, that the plausibility space of the agent depends only on its local state. Other restrictions can also be envisaged, allowing a better fit to a particular application but also making the modeling much simpler.
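Putting (2) and (3) together gives a direct check: i believes φ iff, at every point it cannot distinguish from the current one, the plausibility of φ's truth set dominates that of its complement. A sketch with hypothetical helpers (pl_space(q, i) returns the pair (S, PL) of the plausibility space; truth_set(φ) returns ||φ|| as a set of points):

```python
def holds_B(i, phi, point, points, local, truth_set, pl_space):
    """B_i phi at `point`, following (2): at every indistinguishable point q,
    PL of phi's truth set (restricted to the space S at q) must be >= PL of
    its complement within S."""
    for q in points:
        if local(q, i) != local(point, i):
            continue
        S, PL = pl_space(q, i)
        pos = frozenset(truth_set(phi)) & S
        if not PL(pos) >= PL(S - pos):
            return False
    return True
```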

3) Interpreted Belief Change Systems: In order to model belief change (i.e. revision and update), interpreted plausibility systems have been restricted to satisfy some additional conditions, leading to Interpreted Belief Change Systems.

Definition 2.3 (Interpreted Belief Change System [23]): An interpreted belief change system is an interpreted plausibility system (R, π, P1, . . . , Pn) satisfying the five following conditions:

(IBCS1) φ can be evaluated with respect to re(m) only:
le ⊨ φ if (I, r, m) ⊨ φ for some (r,m) such that re(m) = le.

(IBCS2) The agent's local state is of the form ri(m+1) = ⟨ri(m) · o(r,m+1)⟩, where ri(m) = ⟨o(r,1), . . . , o(r,m)⟩ and o(r,m) ∈ Le is the observation at time m in run r.

(IBCS3) The language L(Φ) includes a set Φobs of primitive propositions, disjoint from Φe, such that Φobs = {learn(φ) : φ ∈ Le}, with:
π(r,m)(learn(φ)) = 1 iff o(r,m) = φ, for all r and all m.

(IBCS4) The agent never observes false:
(I, r, m) ⊨ o(r,m), for all runs r and times m.

(IBCS5) There is a prior Pi = (R, PLi) such that PLi is a plausibility measure.

While in [23] the observations are reliable (φ is observed only if φ is true), the model has been extended in [24] to unreliable observations, to account for Markovian observation models. This kind of interpreted belief change system is then called an observational system.


III. FORMALISATION OF THE SITUATION ANALYSIS PROCESS

We claim that the interpreted systems semantics can be used as a blueprint for situation analysis, as it provides the basics for the formalisation of situation analysis elements. Moreover, the different kinds of interpreted systems detailed in Section II-B cover most of the features required in the modeling of a situation analysis problem. Indeed, the limitation of agents' capabilities or awareness, the representation of a wide variety of uncertainty types, information aggregation, updating and revision can all be efficiently modeled within the IS formalism.

Hence, our model of situation analysis is an Interpreted Algorithmic Belief Change System (IABCS) I generated by the joint protocol of n agents within a context γ, in which the local states of the agents are tuples of the form:

ri(m) = ⟨algi(r,m); Pi(r,m); obsi(r,m)⟩   (4)

where, for each agent i and point (r,m) of the system R, algi(r,m) is a truth evaluation algorithm, Pi(r,m) is a plausibility space and obsi(r,m) is a sequence of observations.

A. Situation

Based on the interpreted systems semantics, we define a situation in terms of a state transition system, in which the arcs are labeled by joint actions and the nodes by global states.

Formally, using the IS semantics:

Definition 3.1 (Situation): Let A be a set of n + 1 agents including the environment, and let I be the interpreted system representing the joint protocol P of the n agents in the interpreted context (γ, π). A situation is the sub-system I(r,m) of I, that is, the system representing P in the interpreted context (γ(r,m), π), where γ(r,m) = ((r,m), Pe, Ψ, τ).

For a given agent i, the (local) situation is the projection of I(r,m) over ri(m), the local state of i at (r,m), providing a partial view of it.

A situation can thus be seen as a subset of states and corresponding transitions inside a set of protocol runs. One can distinguish three remarkable cases of situations (Fig. 2):

1) The first case is simply the global state, denoted either by s or by (r,m), which is a point on an arbitrary run r at time m. This case of a situation as a rather static object is the typical sort of situation modeled in information fusion, as proposed by Steinberg and Bowman [2] and followed by Baclawski et al. [25], although the state of the environment is not often explicitly encoded.

2) The second remarkable case of a situation is simply, given an initial state (r, 0), the sequence of state transitions leading to (r,m), which is for instance the current global state, together with the fan of all possible state transitions thereafter. Of course, one can restrict the analysis of such a situation to an interval around (r,m). Multisensor tracking and Multiple Hypothesis Tracking (MHT), and more generally the modeling of game situations, rely on this kind of situational modeling.

3) The third remarkable case of situation is the one where the study of the agents' joint protocol is not restricted to a single initial state, but rather the full spectrum of the k possible initial states S0 is unfolded, leading to the full system R. This is typically the kind of situation studied in the analysis of parallel and distributed systems using formal notions of knowledge [26]–[28].

[Figure 2 here.] Fig. 2. Three remarkable cases of situations defined in terms of state transitions: (a) Case 1, a single global state sm reached by a joint action a; (b) Case 2, the transitions unfolding from a single initial state s0; (c) Case 3, the transitions unfolding from the full set S0 of initial states. In each panel the axes are the possible states for Agent 1 (r1(m)) and for Agent 2 (r2(m)).

In Figure 2, the global states do not necessarily fall on the grid, which illustrates the fact that some approximation may be required to represent uncertainty, for instance the difference between the measures and the universe of discourse.

B. Situation awareness

Awareness is not simply a special state of knowledge. It also refers to a limited view, to a limited capacity of the agents to reach a perfect state of knowledge, the one that would be reached by perfect, logically omniscient reasoners. Indeed, when defining situation awareness, concepts of attention, vigilance, intelligence and stress arise [29].

In order to account for the limited resources of agents, Fagin and Halpern introduced in [30] a logic of general awareness. Built upon a standard S5 model, this logic assigns to each point (r,m) of the system, and for each agent i, an arbitrary subset of formulas of L(Φ) about which the agent is aware, denoted by Ai(r,m). The semantics of the awareness operator Ai is then simply:

(I, r, m) ⊨ Aiφ iff φ ∈ Ai(r,m)   (5)


which reads "agent i is aware of φ at point (r,m) if and only if φ belongs to Ai(r,m)". The operator Ai acts as a filter on implicit knowledge to define explicit knowledge: Xiφ ⇔ Kiφ ∧ Aiφ. Figure 3 illustrates the interaction of awareness, explicit knowledge and implicit knowledge.

[Figure 3 here.] Fig. 3. Awareness, implicit and explicit knowledge [31]. (Diagram of the overlap between the set of formulas the agent is aware of and the set it implicitly knows; their intersection is its explicit knowledge.)

In this diagram, three kinds of formulas are distinguished: those about which the agent is aware, those about which it has implicit knowledge, and those about which it has explicit knowledge.

In the logic of general awareness, Ai is a purely syntactic operator, as the set of formulas about which the agent is aware at a given point is arbitrary. Our standpoint is quite different, since we are interested in how this set Ai(r,m) is built. Among the different possible interpretations of Ai, we follow Halpern et al. [18] and say that "an agent is aware of a formula φ at a given point (r,m) if it is able to compute the truth value of φ", understood as capturing any constraints such as time, memory, reasoning abilities, etc.

Formally:

Definition 3.2 (Awareness): Let A be a set of agents and let I be the IABCS representing the joint protocol P of the n agents in the interpreted context (γ, π). The local state of each agent i at point (r,m) includes both a local algorithm Ai = algi(r,m) and local data li = obsi(r,m). An agent i of A is aware of φ at a given point (r,m) in I, which we note Aiφ, iff it is able to compute the truth value of φ:

(I, r, m) ⊨ Aiφ iff Ai(φ, li) ≠ "?"   (6)

The fact that the algorithm can compute the truth value of φ does not mean that this is the correct truth value π(r,m)(φ). The six possible configurations are illustrated in Table I: the algorithm answers "Yes" and φ holds, "Yes" and ¬φ holds, etc. Although i's algorithm does not need to answer the correct value for i to be aware of a formula, a desirable property for the algorithm is soundness, which guarantees that it always gives correct answers. This property is generally easily proved beforehand and encoded in the local state of the agent, so that when its algorithm answers "Yes", φ is effectively true at the corresponding point, and thus explicitly known by the agent. In Table I, the two cases corresponding to a sound algorithm are Aiφ ∧ φ and Ai¬φ ∧ ¬φ.

                     algi(r,m)
                 Yes           No            ?
π(r,m)(φ) = 1    Aiφ ∧ φ       Ai¬φ ∧ φ      ¬Aiφ ∧ ¬Ai¬φ
π(r,m)(φ) = 0    Aiφ ∧ ¬φ      Ai¬φ ∧ ¬φ     ¬Aiφ ∧ ¬Ai¬φ

TABLE I. AWARENESS AND TRUTH VALUES IN THE SET OF POINTS OF R.

A remarkable case of interest concerns formulas of the type Kiψ. Indeed, if we replace φ in the discussion above by Kiψ, we obtain Table II. In this table, ¬Ai is an abbreviation for ¬AiKiψ ∧ ¬Ai¬Kiψ; AiKiψ ∧ Kiψ and Ai¬Kiψ ∧ ¬Kiψ correspond to the two cases of explicit knowledge (denoted by Xiψ in [18] and in the section above), namely "i explicitly knows that ψ" and "i explicitly does not know that ψ".

                     algi(r,m)
          Yes              No               ?
Kiψ       AiKiψ ∧ Kiψ      Ai¬Kiψ ∧ Kiψ     ¬Ai
¬Kiψ      AiKiψ ∧ ¬Kiψ     Ai¬Kiψ ∧ ¬Kiψ    ¬Ai

TABLE II. AWARENESS, EXPLICIT KNOWLEDGE AND IMPLICIT KNOWLEDGE.

We can now state our definition of situation awareness:

Definition 3.3 (Situation awareness): Let A be a set of agents and let I be the IABCS representing the joint protocol P of the n agents in the interpreted context (γ, π). For an agent i of A, the situation awareness at point (r,m) is the set of formulas of L(Φ) about which agent i is aware:

Ai(r,m) = {φ ∈ L(Φ) | (I, r, m) ⊨ Aiφ}   (7)

Situation awareness is thus defined in terms of states, i.e. in terms of sets of points in the system. For a given agent, situation awareness is provided by test evaluations on observations about the environment (the objective state of the world). For the analyst, situation awareness is the set of queries that the situation analysis system is able to answer. Unlike a particular agent, the analyst is outside the situation and can only make queries about the system it has itself designed. This approach is close to the one followed in [32].
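Given the IABCS local state of (4), the awareness set of Definition 3.3 can be sketched as a filter over a finite pool of candidate formulas (names are ours, not the paper's):

```python
def situation_awareness(local_state, candidate_formulas):
    """A_i(r, m): the formulas whose truth value the agent's local algorithm
    can compute, i.e. those on which it does not answer "?"."""
    alg, _pl_space, obs = local_state     # <alg_i, P_i, obs_i> as in (4)
    return {phi for phi in candidate_formulas if alg(phi, obs) != "?"}
```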

C. Situation perception and comprehension

In this section we step aside from the main discussion and show how the notions of situation perception and situation comprehension, dear to psychologists and ergonomists like Endsley [33], could be modeled in the Interpreted Systems framework. A situation is a system (or a subsystem) generated by some agents evolving in some context. An interesting question now is: what kind of protocol can be ascribed to the agents? Recall that a protocol is a function from local states to actions. What, then, are the kinds of actions that can be performed by the agents? Two different perspectives can be adopted: (1) the planning point of view, where the agents mainly act to reach a certain goal, and (2) the situation analysis point of view, where the agents mainly perform epistemic actions aiming at gathering information, sharing information with other agents, reasoning, consulting their memory, etc. We thus conceive situation perception as an epistemic task represented by an epistemic protocol, whose actions aim at changing the information the agent has access to, i.e. its local state.

Formally:

Definition 3.4 (Situation perception): Let A be a set of agents and let I be the IABCS representing the joint protocol P of the n agents in the interpreted context (γ, π). Situation perception is a mapping Obs : Le → Li between the state of the environment and the local state of the agent, such that Obs(le) = φ, with φ a formula of L(Φ), is Agent i's observation of the environment's local state. Then we have:

obs(r,m) = ⟨obs(r,m−1) · φ⟩   (8)

where · denotes an append operation.

Situation perception is an external task, as it is a mapping between the environment and the agent.

In return, situation comprehension is an internal task, as it can be seen as a self-mapping on the agent's local state. Situation comprehension is then any action that makes the agent change its set of beliefs. Reasoning, aggregation, deduction, recall, database compression, etc. are all situation comprehension tasks.

D. Situation analysis

Situation analysis is the process by which the decision maker (or analyst) reaches a state of situation awareness that will then allow him to make decisions.

Definition 3.5 (Situation analysis): Let A be a set of n agents and let I be the IABCS representing the joint protocol P of the n agents in the interpreted context (γ, π). Situation analysis is the process of verifying properties of I. If φKT is a formula of L(Φ) expressing such a property, then analyzing the situation comes down to answering the question

(I, r, m) ⊨ φKT ?   (9)

If φKT is satisfied in I at (r,m), then the analyst of the system is said to be aware of φKT. The task of deciding whether a formula is true in a given model I is known as model checking (for details see, for example, [34]).

Indeed, the IS semantics allows two views of the system: (1) the agent's view, which is partial and represented by its local state, and (2) the analyst's view, which is total and represented by the global state. The analyst, although not explicitly referred to, is the person who designed the system, i.e. who assigned protocols to the agents and defined the context (gave an estimate of the protocol of the environment, defined the workings of the world, specified the possible initial states, introduced some additional constraints).

On the other hand, since agents are not perfect reasoners and are thus limited in computation and reasoning abilities, the partial view of the agent is further restricted. The analyst, however, is assumed to be a perfect reasoner and is thus ascribed the implicit knowledge of the agents. Although it is not a standard view, we can further consider the analyst itself as limited in computation and reasoning capabilities, constrained by the algorithm used for its decision procedure, such as model checking for example (see Table III).

            Local view (agent)   Global view (analyst)   God's Eye View
Explicit    Xiφ                  XXiφ                    KXiφ
Implicit    Kiφ                  XKiφ                    KKiφ

TABLE III. DIFFERENT VIEWS VERSUS IMPLICIT AND EXPLICIT KNOWLEDGE.

If the model checking algorithm is sound (always gives correct answers), then the analyst has explicit knowledge of φKT. Note that the local algorithms introduced in Section III-B could be model checking algorithms and that, conversely, the discussion of that section holds for the awareness of the analyst.

IV. ILLUSTRATIONS ON A SURVEILLANCE SCENARIO

Let us consider the setting of Figure 4, in which the behavior and knowledge of two agents, 1 and 2, are analysed according to their mutual interaction as well as their interaction with a particular area of interest (AOI). Suppose that both agents are able to sense ranges ρ and bearings θ about the other agent's spatial position, which we denote by (ρ1, θ1) for the estimated position of Agent 1 (made by Agent 2) and (ρ2, θ2) for the estimated position of Agent 2 (made by Agent 1).

[Figure 4 here.] Fig. 4. Scheme of the surveillance scenario: Agents 1 and 2, the area of interest (AOI), a target T with velocity vT, and the compass directions N, NE, E, SE, S, WS, W, WN.

A. Situation

Since the kind of interpreted system considered is observational, the local state of each agent is composed of successive observations (measures) about the target's position, and the state of the environment contains the real target positions:

obs1(r,m) = ⟨(ρ2(0), θ2(0)); . . . ; (ρ2(m), θ2(m))⟩
obs2(r,m) = ⟨(ρ1(0), θ1(0)); . . . ; (ρ1(m), θ1(m))⟩
re(m) = ⟨(ρ1(0), θ1(0), ρ2(0), θ2(0)), . . .⟩

Thus, a global state will be of the form:

s = [(ρ1, θ1); (ρ2, θ2); (ρ1, θ1, ρ2, θ2)]   (10)

Moreover, both agents are able to move in 8 possible directions: North (N), North-West (NW), West (W), West-South (WS), South (S), South-East (SE), East (E) and East-North (EN). The possible actions for both agents are then:

ACT1 = ACT2 = {MoveN; . . . ; MoveEN; Λ} = {N; . . . ; EN; Λ}   (11)

Since the agents are always sensing, no Sense action is included in the sets ACTi, i = 1, 2; this is done to simplify the exposition. The possible joint actions for both agents are thus of the form:

a = (Movex; Movey)   (12)

with (x, y) ∈ ACT1 × ACT2. For example, a1 = (N; S) means that in round 1, Agent 1 moves in direction North and Agent 2 moves in direction South. For simplicity, we assume that Agent 1 is static and only performs the null action Λ. The protocol of Agent 1 consists simply in doing nothing (only sensing), except if it perceives that Agent 2 is in the AOI, in which case it sends the information to a designated agent of its hierarchy. The protocol of Agent 2 consists in moving toward Agent 1 until it perceives it is too close, in which case it must make a U-turn.

Uncertainty in general can be modeled by environment actions. For instance, the measures of both agents may be affected by sensing errors such as ε1r = ±10 for the range estimated by Agent 1, ε2r = ±20 for the range estimated by Agent 2 and εθ = ±1 for the bearings estimated by both agents. Thus the environment is able to perform one joint action of the form:

ae = (ε1r; ε2r; εθ; εθ)   (13)

The protocol of the environment simply selects randomly one action of ACTe in each round, thus affecting the measures of both agents. For instance, εe = (0; 0; 0; 0) means that no error occurred in the observations of either agent, while εe = (+10; 0; 0; −1) means that an error of +10 occurred in the range measured by Agent 1 and an error of −1 in the bearing measured by Agent 2.

The situation is represented by the execution of the joint protocol of Agents 1 and 2 in the given context, as illustrated in Figure 5.
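A hypothetical encoding of the two protocols described above (the action names, the AOI interval and the distance threshold are our own illustrative choices, not the paper's):

```python
def protocol_agent1(obs, aoi_interval=(100.0, 200.0)):
    """Agent 1 is static: its local state is the observation sequence
    obs = [(rho2, theta2), ...] of Agent 2's measured positions."""
    rho2, _theta2 = obs[-1]                  # latest measured position of Agent 2
    if aoi_interval[0] <= rho2 <= aoi_interval[1]:
        return {"Report"}                    # signal that Agent 2 is in the AOI
    return {"Lambda"}                        # null action: keep sensing only

def protocol_agent2(obs, too_close=50.0, heading="N"):
    """Agent 2 approaches Agent 1 and makes a U-turn when too close."""
    rho1, _theta1 = obs[-1]                  # latest measured range to Agent 1
    if rho1 < too_close:
        return {"UTurn"}
    return {"Move" + heading}                # keep moving toward Agent 1
```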

B. Situation awareness

We consider the basic set of propositions Φ = {φr, φθ}, with φr = "the range of Agent 2 crosses the AOI" and φθ = "the bearing of Agent 2 crosses the AOI". The formula φAOI = φr ∧ φθ then reads "Agent 2 is in the AOI". We thus have, for example, that:

||φr|| = {(r,m) | re(m) ⊨ φr}   (14)

is the set of points in which φr holds, i.e. the set of states of the system in which the target is in the AOI.

We assume that each agent has an algorithm algi(r,m) in its local state ri(m) to decide if a particular formula φ is true. For example, Agent 1 may evaluate that φr holds if the range ρ2 it measured about Agent 2 lies in the range interval defining the AOI. Since the evaluation of such a basic proposition could

[Figure 5 here.] Fig. 5. System generated by the joint protocol of Agents 1 and 2 and the environment, which represents the target's motion: runs unfold from the initial point (r, 0) through rounds m = 0, 1, 2, . . ., with transitions labeled by joint actions such as ( ;WN; ), ( ;NE; ), ( ;SW; ), ( ;W; ) and ( ;N; ).

be seen as immediate, according to Definition 3.2 Agent 1 is aware of φr at any point in I. The situation awareness for Agent 1 will then be A1(r,m) = {φr} for all (r,m) or, in terms of states, A1(r,m) = ||φr||, which is the set of points in which φr holds.

However, the task may not be as simple for formulas like φus = "Agent 2 is coming toward Agent 1". Indeed, such an evaluation involves more computation, i.e. a test between the last and second-to-last observations, to be able to say whether or not the measure at m − 1 is greater than the measure at m. In this case, we may have, for some (r,m), (I, r, m) ⊨ ¬A1φus ∧ ¬A1¬φus.

C. Belief, revision and update

Remember from Section II-B that if the algorithm algi returns "Yes", this means that agent i is aware of φr, not that φr is true. The algorithm may be wrong. If the algorithm is sound (always returns the correct answer), then the agent is fully reliable, and all pieces of information coming from it can be taken as knowledge. However, the soundness property may not be provable in some cases, and thus the notion of awareness we model is rather close to belief.

In order to model belief, we assume that each agent is able to assign, for example, a belief space (in Shafer's sense) to each point of the system, P(i,r,m) = (S(i,r,m), Bel(i,r,m)), with Bel(i,r,m)(A) being the degree of belief agent i has at time m in run r about the event (or formula) A. If the considered events are φr and φθ, and if we denote by m(i,r,m) the Basic Probability Assignment (BPA) corresponding to Bel(i,r,m), then we could have for example m(1,r,3)(φr) = 0.7 and m(1,r,3)(⊤) = 0.3 for Agent 1, and m(2,r,3)(φθ) = 0.6 and m(2,r,3)(⊤) = 0.4 for Agent 2. Using Dempster's rule, we then obtain m(12,r,3)(φAOI) = 0.42, the joint degree of belief of Agents 1 and 2 that Agent 2 is in the AOI. If the agents are cooperative, then the degree of belief could serve as a criterion for exchanging information.
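The value 0.42 can be checked mechanically. In the sketch below, focal elements are coded as sets of worlds over the two propositions, and Dempster's rule assigns the product of the masses to each nonempty intersection (there is no conflict here, so the normalization is vacuous):

```python
from itertools import product

WORLDS  = frozenset(product([True, False], repeat=2))   # valuations of (phi_r, phi_th)
PHI_R   = frozenset(w for w in WORLDS if w[0])
PHI_TH  = frozenset(w for w in WORLDS if w[1])
PHI_AOI = PHI_R & PHI_TH                                 # phi_r AND phi_th
TOP     = WORLDS

def dempster(m1, m2):
    """Dempster's rule of combination over BPAs given as dicts focal-set -> mass."""
    combined, conflict = {}, 0.0
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m1  = {PHI_R: 0.7, TOP: 0.3}    # Agent 1's BPA at (r, 3)
m2  = {PHI_TH: 0.6, TOP: 0.4}   # Agent 2's BPA at (r, 3)
m12 = dempster(m1, m2)
print(round(m12[PHI_AOI], 2))   # 0.42
```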


Another example of the use of this plausibility measure is the update of Bel(i,r,m)(φ) in the light of new pieces of information. In this case a conditioning rule could be used, leading for example to

Pl(1,r,m+1)(φAOI | φr) = Pl(1,r,m)(φAOI ∧ φr) / Pl(1,r,m)(φr),

using Dempster's rule of conditioning, where Pl is the plausibility measure in Shafer's sense.
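Continuing the sketch above, Shafer's plausibility of a set is the total mass of the focal elements intersecting it, and the conditioning rule is the quotient written in the text:

```python
def plausibility(m, A):
    """Shafer's plausibility: total mass of the focal sets that intersect A."""
    return sum(v for X, v in m.items() if X & A)

def pl_conditioned(m, A, B):
    """Dempster's rule of conditioning: Pl(A | B) = Pl(A and B) / Pl(B)."""
    return plausibility(m, A & B) / plausibility(m, B)

# Updating the plausibility of phi_AOI after conditioning on phi_r,
# using the combined BPA m12 from the previous sketch:
print(pl_conditioned(m12, PHI_AOI, PHI_R))
```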

D. Situation analysis

Three kinds of queries can be used to analyse the situation:

• Queries about truth: "Does φ hold at point (r,m) in I?", formally written (I, r, m) ⊨ φ?

• Queries about knowledge: "Does i know that φ holds at point (r,m) in I?", formally written (I, r, m) ⊨ Kiφ?; but also "Does i know that j knows that φ holds at point (r,m) in I?", formally written (I, r, m) ⊨ KiKjφ?; and queries about group knowledge, like "Does the group G of agents have distributed knowledge of φ at point (r,m) in I?", formally written (I, r, m) ⊨ DGφ?, or "Is φ common knowledge among the members of G at point (r,m) in I?", formally written (I, r, m) ⊨ CGφ?

• Queries about time: "Does φ eventually hold in run r?", formally written (I, r) ⊨ Fφ? (Linear Temporal Logic); but also "Does φ hold for all sequences from s?", formally written (I, s) ⊨ Aφ? (Computation Tree Logic).

In general, queries are combinations of these three types, for example "Do Agents 1 and 2 always both know that ¬φAOI holds in I?", formally written (I, s) ⊨ AE12¬φAOI?, where ¬φAOI means that Agent 2 is not in the AOI. In case of a negative answer, either the surveillance protocol or the equipment may have to be modified.

V. CONCLUSIONS

In this paper, we extended a discussion initiated in [1], where the interpreted systems semantics had been presented as a potential blueprint for situation analysis. Our hypothesis for further research is then:

A formal situation analysis model is an interpreted algorithmic belief change system.

Interpreted systems in general make it possible to model implicit notions of knowledge, and the analysis is done by the analyst verifying epistemic and temporal properties through some decision procedure. Although more research needs to be done in this direction, the efficient model checking procedure is particularly well developed for the IS semantics. Our definition of a situation is closely related to that of a state transition system.

The algorithmic characteristic of an IS makes it possible to distinguish between implicit and explicit knowledge, and thus to compute what an agent can know given limited resources (time, computation capability, expertise, etc.). This notion allows situation awareness to be formally defined.

Introducing plausibility measures in interpreted systems (interpreted plausibility systems) brings the notion of belief into the model. The generality of plausibility measures gives the advantage of covering most quantitative representations of uncertainty, such as probabilities, belief functions, possibilities, etc. Vagueness can also be represented, but this will be detailed in further work.

Restricting interpreted plausibility systems to satisfy some constraints leads to interpreted belief change systems and makes it possible to represent belief change (update and revision). This model, while compatible with the implicit representation of knowledge, makes an explicit link with the observations and the environment. The local process of perceiving the environment, and of computing and changing beliefs based on new pieces of information, is seen as the situation assessment process.

REFERENCES

[1] P. Maupin and A.-L. Jousselme, "A general algebraic framework for situation analysis," in Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA, USA, July 2005.

[2] A. N. Steinberg and C. L. Bowman, "Revisions to the JDL data fusion model," in Handbook of Multisensor Data Fusion, ser. The Electrical Engineering and Applied Signal Processing Series, D. L. Hall and J. Llinas, Eds. Boca Raton: CRC Press, 2001, ch. 2, pp. 2-1–2-19.

[3] C. L. Bowman and A. N. Steinberg, "A systems engineering approach for implementing data fusion systems," in Handbook of Multisensor Data Fusion, ser. The Electrical Engineering and Applied Signal Processing Series, D. L. Hall and J. Llinas, Eds. Boca Raton: CRC Press, 2001, pp. 16-1–16-39.

[4] J. Llinas, C. Bowman, G. Rogova, A. Steinberg, E. Waltz, and F. White, "Revisions and extension to the JDL data fusion model II," in Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June-July 2004, pp. 1218–1230.

[5] C. J. Matheus, M. M. Kokar, and K. Baclawski, "A core ontology for situation awareness," in Proceedings of the 6th Annual Conference on Information Fusion, Cairns, Australia, July 2003, pp. 545–552.

[6] M. M. Kokar, J. A. Tomasik, and J. Weyman, "Formalizing classes of information fusion systems," Information Fusion, vol. 5, no. 3, pp. 189–202, 2004.

[7] G. J. Klir and M. J. Wierman, Uncertainty-Based Information, 2nd ed., ser. Studies in Fuzziness and Soft Computing. Heidelberg, New York: Physica-Verlag, 1999, vol. 15.

[8] J. Y. Halpern, Reasoning about Uncertainty. Cambridge, MA: The MIT Press, 2003.

[9] S. N. Thorsen and M. E. Oxley, "Comparing fusors within a category of fusors," in Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, 2004, pp. 435–441.

[10] M. Kokar and J. Tomasik, "Towards a formal theory of sensor/data fusion," Northeastern University, Boston, MA, Tech. Rep. COE-ECE-MMK-1/94, May 1994.

[11] M. M. Kokar, J. A. Tomasik, and J. Weyman, "Data vs. decision fusion in the category theory framework," in Proceedings of the 4th International Conference on Information Fusion, Montreal, 2001.

[12] R. J. Aumann, "Agreeing to disagree," The Annals of Statistics, vol. 4, pp. 1236–1239, 1976.

[13] A.-L. Jousselme, P. Maupin, C. Garion, L. Cholvy, and C. Saurel, "Situation awareness and ability in coalitions," submitted to the 10th International Conference on Information Fusion, Quebec City, Canada, 2007.

[14] G. Shafer, The Art of Causal Conjecture. MIT Press - Artificial Intelligence Series, 1996.

[15] D. McAllester, "Semantics," Lecture notes, MIT, October 1993.

[16] E. Rasmussen, Games and Information: An Introduction to Game Theory, 4th ed. Blackwell Publishing, 2006.

[17] J. Y. Halpern and Y. Moses, "Knowledge and common knowledge in a distributed environment," Journal of the Association for Computing Machinery, vol. 37, no. 3, pp. 549–587, 1990.

[18] J. Y. Halpern, Y. Moses, and M. Y. Vardi, "Algorithmic knowledge," in Proceedings of the 5th Conference on Theoretical Aspects of Reasoning about Knowledge (TARK'94). Morgan Kaufmann, 1994, pp. 255–266.

[19] N. Friedman and J. Y. Halpern, "Modeling belief in dynamic systems. Part I: Foundations," Artificial Intelligence, vol. 95, no. 2, pp. 257–316, 1997.

[20] ——, "Plausibility measures: A user's guide," in Proceedings of the Eleventh Conference on Uncertainty in AI, 1995, pp. 175–184.

[21] G. Shafer, A Mathematical Theory of Evidence. Princeton University Press, 1976.

[22] L. A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1, pp. 3–28, 1978.

[23] N. Friedman and J. Y. Halpern, "Modeling belief in dynamic systems. Part II: Revision and update," Journal of Artificial Intelligence Research, vol. 10, pp. 117–167, 1999.

[24] C. Boutilier, N. Friedman, and J. Y. Halpern, "Belief revision with unreliable observations," in Proceedings of the 15th National Conference on Artificial Intelligence, 1998, pp. 127–134.

[25] K. Baclawski, M. K. Kokar, C. J. Matheus, J. Letkowski, and M. Malczewski, "Formalization of situation awareness," in Eleventh OOPSLA Workshop on Behavioral Semantics, November 2002, pp. 1–15.

[26] R. Parikh and R. Ramanujam, "Distributed processes and the logic of knowledge," in Proceedings of the Conference on Logic of Programs, ser. Lecture Notes in Computer Science, vol. 193, 1985, pp. 256–268.

[27] K. M. Chandy and J. Misra, "How processes learn," Distributed Computing, vol. 1, no. 1, pp. 40–52, 1986.

[28] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi, Reasoning About Knowledge. Cambridge, MA: The MIT Press, 2003.

[29] R. W. Pew, "The state of situation awareness measurement: heading toward the next century," in Situation Awareness Analysis and Measurement, M. Endsley and D. Garland, Eds. Mahwah, NJ: Lawrence Erlbaum Associates, 2000, pp. 33–50.

[30] R. Fagin and J. Y. Halpern, "Belief, awareness, and limited reasoning," Artificial Intelligence, vol. 34, no. 1, pp. 39–76, 1988.

[31] K. Konolige, "What awareness isn't: A sentential view of implicit and explicit belief," Journal of Symbolic Logic, vol. 53, no. 2, pp. 667–668, June 1988.

[32] R. Ramanujam, "View-based explicit knowledge," Annals of Pure and Applied Logic, vol. 96, no. 1, pp. 343–368, March 1999.

[33] M. R. Endsley, "Theoretical underpinnings of situation awareness: A critical review," in Situation Awareness Analysis and Measurement, M. R. Endsley and D. J. Garland, Eds. Mahwah, NJ: Lawrence Erlbaum Associates, 2000, ch. 1, pp. 3–32.

[34] R. van der Meyden, "Common knowledge and update in finite environments," Information and Computation, vol. 140, no. 2, pp. 115–157, February 1998.