A PRIVACY ENHANCED SITUATION-AWARE MIDDLEWARE FRAMEWORK
FOR UBIQUITOUS COMPUTING ENVIRONMENTS
by
GAUTHAM V. PALLAPA
Presented to the Faculty of the Graduate School of
The University of Texas at Arlington in Partial Fulfillment
of the Requirements for the Degree of
DOCTOR OF PHILOSOPHY
THE UNIVERSITY OF TEXAS AT ARLINGTON
December 2009
[Table fragment continued from the previous page, listing context elements by category:
Device — active device, devices where activity can be migrated to;
History — previously visited and new landmarks, facilities; previous events; friends, prior interactions; devices where activity migrated to, preferred devices;
Current — physiological state; zone in smart home; nearest sensor/appliance; time, day, date; current inhabitants; active sensor(s), compatible devices.]
3.2 Classification of Context
A major issue in context-aware computing is the design of data formats used
to exchange contextual information [41]. We need to adopt a context representation
that is not application-specific, abstracted from the context sources, and flexible to
accommodate system enhancements, which tends to be challenging in a ubiquitous
environment, owing to the diversity of devices deployed. In order to effectively rep-
resent the context elements obtained from our user-centered approach, we have to
consider the following properties of context:
1. Dynamic nature of context:
The context of a user (or the physical environment) is dynamic in nature as
the attributes can vary with time. The values of the context elements are also
time-variant.
2. Accumulation of Knowledge:
User history and interactions with the physical environment and other users
have to be accumulated over long periods of time to encompass past experiences
and situations. An effective retrieval mechanism is also necessary to enable the
context-aware system to learn from these experiences.
3. Fusion of related context:
Since a situation can be analyzed as a sequence of context attributes, we
need to be capable of fusing related context attributes together and passing the
aggregated context to the context-aware system to perform situation prediction.
3.2.1 Types of User Context
Based on the interaction of the user, we can classify context into physical context
and logical context.
1. Physical Context:
Physical context is obtained from various sensors, devices, actuators, and other
smart objects that are distributed in the environment. When a user interacts
with the environment, context is generated which indicates the interaction with
a particular smart object. Some examples of physical context are location,
Figure 3.1. Types of User Context.
motion, access, etc. This type of context usually enables the system to identify
the presence of the user in the environment, along with inferring the intent of
the user based on the type of object accessed, the location, and other details.
In other words, physical context captures the user activity in the environment.
2. Logical Context:
Logical context describes the relationship that the user has with the environ-
ment and other entities. This kind of context is used to capture user behavior
and interaction. An example of logical context is the way in which the user
demarcates environments, such as home, work, communities, etc. Another ex-
ample is the relationship of an entity with the user, such as peer, friend, relative,
etc. The aim of logical context, therefore, is to observe the social nuances, and
relationship of the user with other entities in the environment, and effectively
capture the user behavior and interaction history.
3.3 Capturing context
Context information about the physical world can be gathered in real time
from sensors embedded in the environment. However, a concerted effort is required
to obtain context from the logical world, obtained by gathering information directly
from the user, or deducing from interactions the user has made with other entities
over time. Whatever the nature of this information, context may come from disparate
sources and has a relatively transient lifetime.
Building a general context information model to capture all aspects of the
user’s information is a difficult task. However, the key is to make the information
representation consistent over different applications, thereby, making the information
generalizable. The context representation must, therefore, be modular, distinct, and
should have a set of well-defined interfaces by which heterogeneous devices can access
and manipulate the context representation.
In a ubiquitous computing environment, the underlying assumption is that the
user and some of the devices are mobile, and activities performed in such an environ-
ment often include mobility. As a result, location information usually is considered
more valuable than others. However, we consider context gathering to be analogous
to the function in humans. Perception and cognition are the foundation of intelligent
behavior of sentient beings, and incorporating a method to effectively capture an
overall understanding of the environment would facilitate better perception. However,
the information sensed from the environment has to be first translated into patterns,
and these patterns are then associated with meaning. This, therefore, implies that
perception requires some form of memory or history, and patterns with history are
translated to knowledge or experience.
3.3.1 Event - Condition - Action rules
Event - Condition - Action (ECA) rules are an intuitive and powerful method
of implementing reactive systems, and are applied in many areas such as distributed
systems, real time systems, agent based systems, and context-aware systems. The
basic construct of an ECA rule is of the form:
On Event If Condition Do Action (3.1)
which translates to: when Event occurs, if Condition is true, then execute
Action. Systems programmed with ECA rules receive input from the environment
in the form of events, and react by performing actions that change the state of the
system or of the environment. The event part specifies when a rule is triggered, the
condition part is considered as a query on the state of the system or the environment,
and the action part states the actions to be performed if the condition is verified.
Executing the rule’s actions may in turn trigger further ECA rules, and execution
proceeds until no more rules are triggered.
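As a concrete illustration of this execution model, the following minimal Java sketch dispatches events against registered rules; the Rule record, event names, and state map are our own illustrative assumptions, not a notation used elsewhere in this dissertation:

    import java.util.*;
    import java.util.function.*;

    public class EcaEngine {
        // One ECA rule: an event name, a condition over system state, an action on it.
        record Rule(String event,
                    Predicate<Map<String, Object>> condition,
                    Consumer<Map<String, Object>> action) {}

        private final List<Rule> rules = new ArrayList<>();
        final Map<String, Object> state = new HashMap<>();

        // Deliver an event: every rule registered for it whose condition holds fires.
        // In this sketch actions only mutate state; a fuller engine would let
        // actions raise new events, and execution would proceed until no more
        // rules are triggered.
        void dispatch(String event) {
            for (Rule r : rules)
                if (r.event().equals(event) && r.condition().test(state))
                    r.action().accept(state);
        }

        public static void main(String[] args) {
            EcaEngine engine = new EcaEngine();
            engine.state.put("hour", 19);
            // The bedroom-light rule given in Equation 3.2 below: on entering
            // the bedroom, if it is past 6:00 PM, switch on the bedroom light.
            engine.rules.add(new Rule("enter:bedroom",
                    s -> ((Integer) s.get("hour")) >= 18,
                    s -> s.put("bedroomLight", "on")));
            engine.dispatch("enter:bedroom");
            System.out.println(engine.state.get("bedroomLight")); // prints: on
        }
    }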
An ECA rule has to satisfy several properties for implementation in a wide range
of applications. Firstly, complex events occurring in a reactive rule are considered to
be a composition of several basic events. Similarly, complex actions that are triggered
are decomposed into several actions that have to be performed concurrently or in a
given order. The ECA rules should, in general, be coupled with a knowledge base,
which contains all the rules that specify information about the environment, along
with the ECA rules that specify reaction to events. Thus, ECA rules are developed
to deal with systems that evolve. However, the evolution is limited to the knowledge
base in most of the applications. In a truly evolving system, not only the knowledge
base, but also the reactive rules themselves change over time to handle unambiguous
and ambiguous context information.
Unambiguous context refers to context that can be definitely depicted by events
and conditions of an ECA rule. This kind of context usually includes quantifiable data
obtained from the environment. For instance,
On < location == bedroom > (3.2)
If < Time ≥ 6:00 PM >
Do < Switch on bedroom light >
represents an ECA rule where the bedroom light is switched on after 6:00 PM if
the user is in the bedroom. This rule is triggered on the event of the user entering the
bedroom, and the system then checks the time to see if the condition is valid. If true,
then the system executes the action, i.e., Switch on the bedroom light. In this rule,
both the location and the time are quantifiable, as sensors have the ability to detect
the person’s mobility as they pass from one room to another, and also harbor limited
capability to record the interaction of the person with the surrounding environment.
Thus discrepancies about the rule do not occur.
On the other hand, consider the following rule:
On < location == bedroom > (3.3)
If < user state == sleeping >
Do < Switch off bedroom light >
In this rule, the event can be quantified with location as the context. However,
determining the current user state proves to be a harder task. If the bed contains
pressure sensors, then, the system can detect the presence of the user. But the user
states could be standing, sitting, lying down and reading, or sleeping. This form of
multiple states over a single event introduces ambiguity in capturing context.
The situations or activities in a real world are often dynamic and unfold over
time. The sensory observations also evolve over time to reflect changes in the envi-
ronment. As a consequence, the dynamic aspect of ubiquitous computing requires a
monitoring system to be a time-varying dynamic model that not only captures the
current events, but handles the evolution of different scenarios as well. The inability
of current sensory systems to correlate and reason about a vast amount of information
over time is an impediment to providing a coherent overview of the unfolding events
since it is often the temporal changes that provide critical information about inferring
the current situation [90]. To correctly assess and interpret the situation, an adaptive
approach is therefore needed that can systematically handle corrupted sensory data
of different modalities and, at the same time, can reason over time as well to reduce
context ambiguity.
3.3.2 Limitations of ECA rules
While ECA rules provide an intuitive technique to model reactive systems,
their power is limited by their implementation. Complex events or actions have to
be broken down into simpler blocks (primitives), and this could potentially lead to
multiple ECA rules being executed over a single trigger. For illustration, consider
the following rules that can be used to actively monitor a patient in an assisted
environment:
On < location == bedroom > (3.4)
& < body temperature > Threshold >
& < Respiratory rate > Threshold >
If < user state == sleeping >
Do < Report condition to physician >
In this rule, we have a complex event, which fuses the context of location and
health monitoring context. The condition, on the other hand, is of the user state, and
queries if the user is sleeping, which is an ambiguous context. Even if we were able
to resolve the ambiguity with the help of other sensors, such as a video camera which
captures the position of the user, or a pressure pad, which captures the surface area
over which pressure is applied over the bed, there still exists uncertainty about the
situation. The patient could have performed some physical activity prior to sleeping
on the bed, which could push the body temperature and respiratory rate above the
threshold. Flagging this situation as a sickness of the patient would generate a false
positive in the detection. This example, therefore, emphasizes the need to analyze
the state of the user or an entity in the environment as more than a set of complex
events or conditions. Prior activities usually build up to an event occurrence, and
monitoring a ubiquitous environment should be considered a continuous process,
rather than a discrete set of events. Designers developing
systems deployed in ubiquitous environments should consider the history of activities
in the environment, and model situations based on the knowledge obtained.
3.4 Summary
In this chapter, we have discussed various interpretations of context, and their
limitations. In order to improve upon the understanding of context, we have presented
our definition of context which aims at extending the functionality of context with
respect to perception of user activity. We have discussed the various perspectives of
context and presented the various ways in which context can be classified. We have
investigated methods to capture context, and develop rules. We have also discussed
the limitations of such rules, and this forms the motivation for our work in situation
awareness.
CHAPTER 4
PERCEPTION OF SITUATIONS
4.1 Introduction
With the rising popularity of ubiquitous computing, the focus of developing
systems has shifted from generic to user-centric solutions. This paradigm shift in-
volves seamless integration of heterogeneous devices and sensors in the environment
to constantly monitor and perform tasks traditionally performed by the user. There
is a considerable push, therefore, to develop systems which can perceive user behav-
ior, and adapt to their idiosyncrasies. In this chapter, we discuss some limitations of
the interpretations of context, and aim to extend them, to facilitate improved con-
text awareness, and aim to perceive the situation of an entity in the environment
using context as building blocks of information. We discuss a user-centric approach
to perception of user activity in the environment, and use the knowledge obtained,
to understand user activities. We present a system for perceiving situations in the
environment, and discuss an approach to empower the user to develop complex, yet
intuitive, rules. We then present our scheme for dynamic generation of situation
grammar. We evaluate the system with two scenarios and present the performance
of the system in a dynamic ubiquitous environment.
4.2 Motivating scenario
Human behavior is described by a finite number of states called situations.
These situations are characterized by entities playing particular roles and being in
relation within the environment. Perceptual information from different sensors in
the environment is associated with the situations, roles, and relations. These different
situations are connected within a network. A path in this network describes behavior
in the scene. Human behavior and needs evolve over time. Hence, a context model
representing behavior and needs of the users must also evolve.
Consider the following real life scenario: John is a patient in an assisted en-
vironment, and his physician would like to monitor his progress remotely. John’s
physician uploads the regimen onto the system, and John is reminded to take his
medicine at the right time. Based on his recovery, John’s physician might want to
change his regimen or medication, and John is informed of the same. The system
now adjusts to the new regimen and alerts John of any new medication needed. The
system also registers the number of usages of the medication, and informs John to fill
his prescription well in advance. This scenario incorporates the concepts of remote
and local monitoring of the patient, access to sensitive information, and predictive
actions performed proactively by the system.
Consider another scenario. Mary has obtained a recipe and wants to try it out.
She accesses the ingredients from the refrigerator and pantry, and prepares the recipe.
Some of the items called for in the recipe are depleted in the preparation and Mary
wants to add them to the grocery list. After cooking, she finds that she wants to
store the recipe for future reference. Normally, Mary would file the recipe for future
reference, and add the depleted items to the grocery list. At a future date, she would
have to manually look through the recipe and check that the necessary ingredients are
available for the preparation. It would benefit the user if the system could take care
of all these tasks with a minimal amount of work. The system would file the recipe,
and check if the required ingredients are available. Also, to reduce the number of
tasks to the user, the system could automatically generate a grocery list and transfer
it to Mary’s cell phone.
Both these scenarios mentioned above would require the system to understand
the situation the user is currently in, and perform most of the tasks while, at the
same time, reducing the interaction with the user. There has been a significant
amount of work in the area of recognition of daily life activities. Recognizing activities
of daily life is challenging, especially in a home or assisted environment. This is due
to the fact that the user can perform activities in several ways. This would also imply
that the underlying sensors must report the features required of them robustly across
various sensing contexts [20]. A popular technique for detecting features of such
activities is known as “dense sensing” [65], in which a wide range of sensor data is
collected instead of relying on vision-based systems. Another technique has been to
use wearable sensors such as accelerometers and audio sensors which aim to provide
data about body motion and the surroundings where the data has been collected
from [62]. It has been shown in [82] that a variety of activities can be determined
using this technique.
Human activity recognition in the context of assisted environments using RFID
tags has been investigated in [65, 78]. Though this approach involves extensive tagging
of objects and users, it demonstrates that hierarchical organization of probabilistic
grammars provides sufficient inference power for recognizing human activity patterns
from low level sensor measurements. A sensing toolkit for pervasive activity de-
tection and recognition has been discussed in [86]. Systems deployed in ubiquitous
environments are characterized by multiple smart cooperating entities and will have
to perform high-level inferencing from low-level sensor data reporting [8, 6]. The
presence of such heterogeneous sensors, coupled with myriad devices, drives the need
for appropriate perception of situations in the environment.
4.3 Perceiving Situation
Figure 4.1. The environment of the system showing (a) Floor plan and distribution of nodes, (b) Minimum Spanning Tree and calculation of a zone.
Consider a ubiquitous environment, shown in Figure 4.1(a). Let S = {s1, s2, . . . , sm}
be a set of m sensors distributed in this environment. Each sensor monitors a zone around it.
The zone of a sensor is calculated in the following manner: Draw a straight line con-
necting a sensor and its neighbor. The perpendicular bisector of this line forms the
edge demarcating the zones of these adjacent sensors. If a wall is encountered within
the zone, then that wall forms the edge of the zone for the sensor. We next define a
context element:
Definition 1 Context Element
A context element ci contains the information from sensor si. Therefore, C =
{c1, c2, . . . , cm} contains the data from m sensors distributed in the environment.
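Because the perpendicular-bisector construction above yields a Voronoi partition of the floor plan, a user's position (walls aside) lies in the zone of the nearest sensor, and that sensor's context element is the one generated. A minimal sketch, with illustrative coordinates:

    // Nearest-sensor zone lookup; walls are ignored in this sketch.
    public class ZoneLookup {
        record Point(double x, double y) {
            double dist2(Point p) {              // squared Euclidean distance
                double dx = x - p.x, dy = y - p.y;
                return dx * dx + dy * dy;
            }
        }

        // Index of the sensor whose zone contains the given position.
        static int zoneOf(Point user, Point[] sensors) {
            int best = 0;
            for (int i = 1; i < sensors.length; i++)
                if (user.dist2(sensors[i]) < user.dist2(sensors[best]))
                    best = i;
            return best;
        }

        public static void main(String[] args) {
            Point[] sensors = { new Point(1, 1), new Point(4, 1), new Point(2, 5) };
            // The user at (3.2, 1.4) is closest to s2, so context element c2 is generated.
            System.out.println("c" + (zoneOf(new Point(3.2, 1.4), sensors) + 1));
        }
    }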
Let D = {d1, d2, . . . , dk} be k devices that are present in the environment. We
define a device as a part of the environment which the user accesses or interacts
with. In other words, we consider a device to be an object in the environment. We
collectively call the sensors and devices as nodes, and assume that there are n nodes
in the environment, where n = m + k.
During the initial discovery phase, the location of all the nodes is obtained and
a minimum spanning tree is calculated, to enable tracking of the user activity, shown
in Figure 4.1(b).
If a user enters a zone at any point and leaves the zone from any point, we
represent that using the edge present in the minimum spanning tree. If an edge is not
found, then we log the activity, and upon repeated usage of that path, we include it
into the spanning tree and remove the existing path between the two nodes. When a
user enters a node’s zone, we assume that the node generates a context element and
transmits it to the system. We define a situation in the following way:
Definition 2 Situation
A situation γ(t) is a sequence of context elements c1, c2, . . ., terminated by a device
at time t. In other words,
γ(t) = c1c2 . . . cidj, where i ∈ {1, . . . , m}, j ∈ {1, . . . , k}, represents a situation at
time t, γ(t) ∈ Γ.
According to our definition of situation, context elements correspond to non-
terminal symbols, and devices correspond to terminal symbols. As the user moves
in the environment, the context elements corresponding to the zones in which the user
traverses are obtained, and we process the elements in an online manner.
4.3.1 Capturing action
In order to capture activity, we associate action words or “verbs” with each node.
Let vi be the verb associated with the node i. Additionally, since we are capturing
user activity, we would need to analyze the motion of the user. Each type of node
is assigned verbs according to their capability. For instance, a sensor which captures
user motion would be assigned the verbs “walk”, “stand”, and “sit”. If we assume that
a normal person walks at 1 m/s, and the average house size is 200 m2, the user would
enter and exit a zone about once per second on average. This would imply that our
nodes need to report at only a low frequency (about 1 Hz). When the user is
within the zone of the node i, we assign vi to it, where the verb would correspond to
“walk”. If the user is still in the same zone after 2 reporting cycles, we upgrade the
verb to v′i = “stand”. Some verbs associated with devices are “switch on”, “switch
off”, “access”, “replace”, etc. In order to differentiate between the verbs of context
elements and the verbs of devices, let V = {v1, v2, . . . , vp}, p ≤ m, correspond to the
verbs associated with the context elements and A = {a1, a2, . . . , aq}, q ≤ k, correspond
to the set of verbs associated with devices. A situation in our approach is interpreted
as an activity in the environment, and therefore, we can represent a situation as
user (subject)→ verb (action)→ environment (devices, context elements).
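A hedged sketch of this dwell-time policy follows; the text fixes only the walk-to-stand upgrade after two 1 Hz reporting cycles, so the further cut-off for “sit” is our assumption:

    public class VerbAssigner {
        // Verb for a motion-sensor zone, given how many 1 Hz reporting
        // cycles the user has remained inside it.
        static String verbFor(int cycles) {
            if (cycles < 2) return "walk";    // passing through the zone
            if (cycles < 30) return "stand";  // 30-cycle threshold for "sit" is assumed
            return "sit";
        }

        public static void main(String[] args) {
            System.out.println(verbFor(1));    // walk
            System.out.println(verbFor(5));    // stand
            System.out.println(verbFor(120));  // sit
        }
    }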
4.3.2 Initial Configuration
Initially, the system has to be trained on the various activities of the user, and
we perform this in the following manner. Consider an arbitrary activity pattern of the
user. Let us assume that the system informs the user to take the medication. The
user, who is in the living room, gets up and moves to the bathroom, via the bedroom,
and accesses the medicine cabinet (d5). We can represent this activity by the sequence
c2v1c5v1c9v1c10v1c11v1d5v2, with the initial position of the user being a location in the
zone of c2. The system obtains the first (context, verb) pair and compares it with
the next (context, verb) pair in the sequence. Since the verbs in both the pairs are
similar, it uses the following rule:
Rule 1 A sequence xvyv, x, y ∈ N can be represented as (x, y)v
This rule is called the Rule of Simple Contraction. This is commutative in nature,
i.e., sequence vxvy, x, y ∈ N can be represented as v(x, y)
The system then contracts the first two (context, verb) pairs. It then compares
this with the next (context, verb) pair that arrives and uses Rule 2.
Rule 2 A sequence (c1, c2)vc3v, {c1, c2, c3 ∈ C} can be contracted to (c1, c3)v
This rule is called the Rule of Compound Contraction. As in Rule 1, this rule is also
commutative, i.e., v(c1, c2)vc3, {c1, c2, c3 ∈ C} can be contracted to v(c1, c3). This rule
is helpful in eliminating redundant context information when the action performed is
the same over multiple context elements.
Rule 3 A sequence v(d1, d2)vd3, {d1, d2, d3 ∈ D} can be contracted to v(d1, d2, d3)
This rule is called the Rule of Device Listing, and is different from Rule 2, since we
would like to capture a list of all the devices that the user has interacted with.
The system continues contracting the sequence until we obtain (c2, c11)v. When
it encounters d5, it realizes that a terminal symbol has been encountered. It then
constructs the situation tree. Figure 4.2(a) describes Rule 2, and Figure 4.2(b) shows
the sequence after encountering a device.
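The contraction step can be sketched as follows; the Pair record and the stream encoding are illustrative assumptions, and only the same-verb case of Rules 1 and 2 is shown:

    import java.util.List;

    public class Contraction {
        // Contracted form (first,last)v of a run of (context, verb) pairs
        // sharing the same verb, per the Rules of Simple and Compound Contraction.
        record Pair(String first, String last, String verb) {}

        static Pair contract(List<String[]> stream) {  // stream of {context, verb}
            Pair acc = null;
            for (String[] t : stream) {
                if (acc == null)
                    acc = new Pair(t[0], t[0], t[1]);          // first pair seen
                else if (acc.verb().equals(t[1]))
                    acc = new Pair(acc.first(), t[0], t[1]);   // same verb: fuse (Rules 1, 2)
                else
                    break;                                     // verb changed: run ends
            }
            return acc;
        }

        public static void main(String[] args) {
            List<String[]> walk = List.of(
                new String[]{"c2", "v1"}, new String[]{"c5", "v1"}, new String[]{"c9", "v1"},
                new String[]{"c10", "v1"}, new String[]{"c11", "v1"});
            System.out.println(contract(walk));  // Pair[first=c2, last=c11, verb=v1]
        }
    }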
Definition 3 An activity is considered complete when any situation γi, terminating
at device di with verb v, is immediately followed by a situation γi+1 terminating at the
same device, but with verb v′.
To implement Definition 3, we introduce the following rule:
Rule 4 A sequence terminating with v(d1, d2, . . . , di), {di ∈ D}, is constructed until
we encounter the complement v′(d1, d2, . . . , di), {di ∈ D}.
This rule is called the Complement Rule, and ensures that any device accessed/switched-
on is replaced/switched-off.
Specifically, when dealing with usage of a particular device, we have the follow-
ing rule:
Rule 5 An activity is considered a usage of a device di if situation γi, terminating
at device di with verb v = “access/switch on”, is immediately followed by a situation
γi+1 terminating at the same device, but with verb v′ = “replace/switch off”.
In some situations, there could be a large amount of activity between accessing
a device and replacing it, for instance, talking on a phone while cooking. In such
circumstances, the sequence should be decomposed into activity performed up to
device access, intermediate activity, and activity for device release.
Rule 6 If a sequence terminates with a device or a set of devices, and the subsequent
verb obtained is not a complement of the prior verb, construct a new sequence for the
current activity, until the complement is encountered.
Using these rules, we can now represent the sequence(s) obtained as a structure.
4.3.3 Building the Situation tree
The Situation Tree structure is a binary tree constructed bottom-up, from the
sequence of context elements, verbs and devices obtained from the environment. The
Situation Tree (S-Tree) has the following properties:
Property 1 The root of any subtree of a S-Tree is always a verb.
Property 2 The left child of any verb is a non-terminal symbol (i.e., context element
or verb).
Property 3 The right child of the root is either a terminal symbol (device) or a
subtree of terminals.
Property 4 The right child of any intermediate verb, whose parent is not its com-
plement, is a context element.
Property 5 The right child of any intermediate verb, whose parent is its comple-
ment, is a terminal or a subtree of terminals.
Property 6 The left subtree of any intermediate verb represents the prior activity of
the user.
Another interesting property of an S-Tree is that the post-order traversal of any
left sub-tree generates the prior user activity.
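These properties can be made concrete with a small sketch; the node layout and symbol strings are assumptions, since the text does not fix an API:

    import java.util.ArrayList;
    import java.util.List;

    public class STree {
        static class Node {
            final String symbol;   // a verb at internal nodes, a context element or device at leaves
            final Node left, right;
            Node(String symbol) { this(symbol, null, null); }
            Node(String symbol, Node left, Node right) {
                this.symbol = symbol; this.left = left; this.right = right;
            }
        }

        // Post-order traversal: left subtree, right subtree, then the verb root.
        static void postOrder(Node n, List<String> out) {
            if (n == null) return;
            postOrder(n.left, out);
            postOrder(n.right, out);
            out.add(n.symbol);
        }

        public static void main(String[] args) {
            // (c2,c11)v1 under the device access v2 d5 of Section 4.3.2.
            Node walk = new Node("v1", new Node("(c2,c11)"), null);
            Node root = new Node("v2", walk, new Node("d5"));
            List<String> prior = new ArrayList<>();
            postOrder(root.left, prior);     // replay of the prior user activity
            System.out.println(prior);       // [(c2,c11), v1]
        }
    }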
Figure 4.2. Situation Tree (S-Tree) after (a) Compound Contraction, (b) Encountering a terminal symbol.
4.3.4 Designing complex rules
One of the complexities of context-aware computing that developers face is to
perceive the current state of user activity. In order to resolve this, many systems
incorporate a form of “Event-Condition-Action” (ECA) rules in order to
perform actions based on event triggers. An example of an ECA rule is given below:
rule: "Cooking_Rule":
Event: (location == "Kitchen")
Condition
(device == "Oven") &&
(status == "On")
Action:
assign activity == "Cooking"
A problem with ECA rules is that they tend to become complex and require
chaining of logical operations to encompass multiple events. Additionally, since events
trigger an action, the prior activity (history) of the user might not be taken into
consideration in the condition. For instance, in the activity discussed in Section 4.3.2,
the developers might choose to discard the context information of user movement
from c2 to c10, and focus on information obtained from c11 onwards, resulting in loss
of information about the user's movement, which might be beneficial in understanding
user behavior for situation prediction. ECA rules are also not very user-friendly,
and require the user to manually decompose a complex action into various steps,
and integrate them using logical operations, which could potentially result in loss of
information.
We believe that our system improves the user interaction and allows the user to
specify custom rules naturally. Let us assume that the user would like to create a rule
which turns on the television when she walks from the bedroom to the living room.
Using ECA mechanism would involve initial location as “Bedroom”, final location
as “Living room”, and a series of operations to include the activity of walking. Our
system handles this in a graceful manner. The user would enter the rule without
decomposition as “If user walks from Bedroom to Living room, turn on the television”.
The system perceives that the subject is the user and the rest of the rule, “walks
from Bedroom to Living Room, turn on the television” is the activity. It then parses
the rule sequentially. The first word is a verb “walk” v1 followed by the keyword
“from”. From Rule 2, it obtains the next two elements c12, and c3, and constructs the
sequence (c12, c3)v1. It then looks up the minimum spanning tree (Figure 4.1(b)) and
expands the sequence to c12c10c9c7c5c2c3v1. The part “turn on (v2) the television” is
then translated to v2d9 and appended to the initial sequence. After parsing the rule,
therefore, we obtain the situation γ(t) = c12c10c9c7c5c2c3v1v2d9.
Suppose the user now moves from the bedroom to the living room along a
different path c12c10c9c5c3. The sequence obtained would be c12v1c10v1c9v1c5v1c3v1.
Using Rules 1 and 2, the sequence still reduces to (c12, c3)v1, and the system turns
on the television. It also registers the new path taken by the user, and upon frequent
usage of this path by the user, the system perceives that this is a preferred path, and
updates its spanning tree. The system could also observe the behavior of the user,
and develop dynamic rules based on user history. The advantages of this approach
are two-fold: (1) the system allows the user to create rules in a natural, user-friendly
manner, and (2) the system can be dynamically customized to the user's behavior and idiosyncrasies.
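The path-expansion step used above can be sketched as a breadth-first search for the unique path between two nodes of the spanning tree; the adjacency list below imitates, but does not reproduce, the tree of Figure 4.1(b):

    import java.util.*;

    public class RuleExpansion {
        // Unique path between two nodes of a tree, found by breadth-first search.
        static List<String> path(Map<String, List<String>> tree, String from, String to) {
            Map<String, String> parent = new HashMap<>();
            Deque<String> queue = new ArrayDeque<>(List.of(from));
            parent.put(from, from);
            while (!queue.isEmpty()) {
                String u = queue.poll();
                if (u.equals(to)) break;
                for (String v : tree.getOrDefault(u, List.of()))
                    if (parent.putIfAbsent(v, u) == null) queue.add(v);
            }
            LinkedList<String> p = new LinkedList<>();
            for (String v = to; !v.equals(from); v = parent.get(v)) p.addFirst(v);
            p.addFirst(from);
            return p;
        }

        public static void main(String[] args) {
            Map<String, List<String>> mst = Map.of(
                "c12", List.of("c10"), "c10", List.of("c12", "c9"),
                "c9", List.of("c10", "c7"), "c7", List.of("c9", "c5"),
                "c5", List.of("c7", "c2"), "c2", List.of("c5", "c3"),
                "c3", List.of("c2"));
            // Expands (c12, c3) to the full situation prefix c12 c10 c9 c7 c5 c2 c3.
            System.out.println(path(mst, "c12", "c3"));
        }
    }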
4.4 Generating Situation Grammar
From the properties of an S-Tree, we observe that we can generate different
levels of grammar by observing the elements of the tree at that level. For instance,
since all the leaves of the S-Tree are either terminals or non-terminals, we obtain
the phonemes of the situation grammar. An in-order traversal at any left child will
generate the sequence of symbols required to generate higher-level grammar. To
demonstrate the versatility of our scheme, let us consider heat therapy as an activity,
in which the user uses a heat pad to obtain muscular relief.
Our objective is to recognize, with a predefined confidence, if the patient has
undertaken heat therapy, by monitoring the person’s activity in the assisted envi-
ronment. However, it is not enough to recognize individual instances of “Therapy
activity”. For efficient performance of the automated system:
1. “Therapy activity” should be properly identified and differentiated from other
similar activities, such as “cooking” or “sleeping”, that might involve accessing
common items or being in similar locations.
2. “Therapy activity” recognition should also be person-specific. This implies that
we should correctly analyze the monitored activity and differentiate it with
respect to different users.
3. In a case where the “Therapy activity” cannot be detected, the system has
to predict the most probable activity similar to it, and take necessary actions
based on pre-determined rules.
4.4.1 Formulating Initial Grammar
Our description of the “therapy activity” is based on the floor plan of the
assisted environment shown in Figure 4.1(a). We considered the approach discussed
in [54] and simulated a similar approach using S-Trees. To specify a sensory grammar
that recognizes heat therapy, we decomposed the therapy activity into a sequence of
primitive actions as:
1. Get required items from the medicine cabinet or the closet.
2. Heat the heating pad by spending time at the microwave oven.
3. Apply the heat pad on the bed or in the living room.
4. Replace the heating pad in the medicine cabinet or closet.
By decomposing the activity in this way, we can describe the process as an
ordered sequence of actions 1, 2, 3, and 4. Though this simplistic approach can ade-
quately capture the actions, there could be ambiguous activities when users deviate
from the normal actions. For instance, if the user forgets to replace the heat pad, or
misplaces it in a different location, the activity could be considered incomplete. Our
approach of S-Tree construction takes care of this problem and we have addressed it
in the analysis of our scheme. Another cause for ambiguity is the tendency of people
to multitask, and S-Trees resolve this problem gracefully. For example, while the heat
pad is in the microwave oven, the user might move to the living room to watch tele-
vision, or rest in the bedroom. It, therefore, becomes evident that the system would
have to handle these realistic events, and we feel that the approach of S-Trees would
resolve the ambiguity introduced by such activities. The grammar derived from these
trees would be robust to recognize as many of these instances as possible, and at the
same time, differentiate from similar activities, thereby, reducing the occurrence of
false positives, and false negatives.
4.4.2 Specifying Detailed Grammar
Figure 4.3 shows the structure of a 2-Level grammar hierarchy for recognizing
therapy activity based on our decomposition of the activity, using the method
discussed in [54]. At the lowest level, sensors correlate the user's identity and location
with areas and provide a string of symbols, where each symbol coincides with an
area in the assisted environment (e.g., d6, d4, etc.). This string of symbols is fed as
input to the first level grammar which translates it and encapsulates it to a new
string of higher level semantics related to the detection of the therapy activity (e.g.,
AccessPad, HeatPad, etc). The second level grammar uses the high-level semantics
identified at the immediate previous level to describe and identify a typical therapy
activity. Similarly, the output of the second level grammar can be fed to any other
higher level grammar for the detection of even higher level semantics.
Figure 4.3. 2-Level grammar hierarchy for the detection of therapy activity.
The detailed implementation of the proposed grammar hierarchy is shown in
Table 4.1. Level 1 grammar identifies the four therapy activity components (PadAc-
tion). We assume that the underlying sensors provide a sequence of the activity
regions, which are the phonemes of this language. The terminal symbols are fed
as input to the grammar and represent the different activity regions. We have also
aimed to make the grammar more powerful by adding the user as a terminal symbol.
This would help us to predict and log situations specific to a user. The non-terminal
symbols include the four heat therapy components, and a set of standard symbols
including the Start, P, and M symbols. The Start symbol is a standard symbol used
in grammar descriptions to represent the starting point of the grammar. We use the
P symbol to factor in the user, and the M symbol for recursion. The non-terminal
symbols represent the semantics into which the input of the grammar is mapped. In
our case, the output of the Level 1 grammar is any sequence of the following semantics:
Table 4.1. Therapy Grammar Hierarchy

Level 1 Grammar
Input: a sequence of any of the terminal symbols (devices)
Output: a sequence of any of the following non-terminal symbols: { AccessPad, HeatPad, ApplyPad, ReplacePad }
1. VN = { Start, M, User, Action, PadAction, ApplyPad, HeatPad, ... }
9. ApplyPad → B ApplyPad (0.25) | ApplyPad L (0.25) | B (0.25) | L (0.25)
10. ReplacePad → ReplacePad B MC (0.125) | ReplacePad B C (0.125) | ReplacePad L MC (0.125) | ReplacePad L C (0.125) | B MC (0.125) | B C (0.125) | L MC (0.125) | L C (0.125)

Level 2 Grammar
Input: a sequence of any of the terminal symbols: { AccessPad, HeatPad, ApplyPad, ReplacePad }
Output: a sequence of any of the non-terminal symbols: { HeatTherapy }
1. VN = { Start, M, User, Therapy, Process, Heat }
2. VT = { AccessPad, HeatPad, ApplyPad, ReplacePad }
3. Start → P (1.0)
a case with 5 peers. We considered multiple sessions between the users and chose
random length of elements requested per session. Table 5.3 shows the average C′req
and average number of queries made by user u2 to user u1 over 0.6 ≤ θ ≤ 1.
5.5 Results
We simulated a campus environment with 100 active users and implemented the
scenarios described in Section 5.2. We created 100 context elements based on our sur-
vey and generated rules for forming context states. The JBoss Drools rules engine [42]
was chosen for developing context-aware rulesets since it uses a business-friendly open
source license that makes it free to download, use, embed, and distribute. We varied
the number of privacy states from 3 to 6 to incorporate special privacy states for
medical professionals, law enforcement officers, and faculty administrators, allowing
us to assign pertinent context elements directly to those states.
Figure 5.5. User interaction with peers.
Different users were
created using various J2METM Mobile Information Device Profiles (MIDP) [58]. We
implemented an instant messaging (IM) client on all the active user devices and built
buddy lists based on the sharing rules created.
Figure 5.6. Graphical User Interfaces (GUIs) of the system showing (a) User preferences, (b) User Query.
Figure 5.6(a) shows the GUI of the system, where the user can set privacy
levels for different context elements. Since we observed in our survey that users have
a better perception of data, we have represented the context elements as data in order
to bring transparency into the system. A privacy slider is also included to enable the
user to set the overall privacy setting of the system, and based on the ruleset, the
context elements are assigned to the different privacy states. The range of the privacy
slider has been set as 1 – 10 for the user to intuitively understand the functioning
of the slider. The user can view his/her prior sessions, and also view the context
elements stored in different categories based on his/her social interactions with other
users.
Figure 5.7(a) shows the behavior of Incδ(w) and Decδ(w) with various ranges
of δ. We then varied the number of privacy states from 2 – 6 to find its effect on the
number of incremental or decremental operations. We considered 5 variations in θ over
varying privacy weights and Figure 5.7(b) represents an average of 100 test runs. We
found that the number of operations increases drastically when we add extra privacy
states beyond 4. We then decided to vary the increment and decrement functions to
find their impact on the number of operations. We considered the alternate functions:
Incδ(w) = (w + wδ)/2    (5.13)
Decδ(w) = (w + δ√w)/2    (5.14)
We observed that these functions incremented and decremented more slowly than
Equations 5.5 and 5.6 in the presence of more than 3 privacy states. Figure 5.7(d)
shows the number of operations with the new function. It is therefore advantageous
to use Equations 5.5 and 5.6 for up to 4 privacy states.
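Taken at face value, the alternate functions compute as in the sketch below; the reading of the extracted formulas, and the ranges of w and δ used, are assumptions made for illustration:

    public class PrivacyWeights {
        // Equations 5.13 and 5.14, read literally from the text.
        static double inc(double w, double delta) { return (w + w * delta) / 2.0; }
        static double dec(double w, double delta) { return (w + delta * Math.sqrt(w)) / 2.0; }

        public static void main(String[] args) {
            double w = 0.5;
            System.out.printf("inc: %.3f%n", inc(w, 1.2));  // 0.550
            System.out.printf("dec: %.3f%n", dec(w, 0.8));  // 0.533
        }
    }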
Figure 5.7. (a) Incδ(w)/Decδ(w) with various ranges of δ, (b) Number of Increment/Decrement operations with varying privacy states, (c) Equations 5.13 and 5.14 for ranges of δ, (d) Number of Increment/Decrement operations using Equations 5.13 and 5.14.
We then simulated a dynamic environment with up to 100 users and ran an
average of 50 sessions between them. The results of this simulation for 5 random
users are shown in Table 5.4. We set the number of privacy states to 3 (Transparent,
Private, and Protected), and varied the privacy setting of the system from 0.5 to 0.9,
Table 5.4. Hybrid Approach: 5 users, up to 50 sessions, variations in θ
User | Privacy Levels (θ) | Sessions (s) | Average Creq | Average No. of Queries
we observed that if the user had sessions which involved a smaller Creq, it took a
longer time to populate the dictionary with (user, element) pairs, and therefore, the
number of queries for the initial sessions was quite high. However, as more elements
were shared with a user, the number of queries reduced significantly, as can be seen
in Table 5.5. We also observed that for a large size of Creq, the system outperformed
both the System-centric and the Hybrid approaches. As a case study, we increased
the number of sessions to 100 and the number of context elements to 100. We varied
the privacy setting from 0.5 to 0.9 as before. We observed that the System-centric
approach spent a considerable amount of time in balancing the weights, while the Hy-
brid approach eliminated this problem. However, the number of queries to the user
was large, since the decision tree could not mark a majority of context elements, and
therefore, had to rely on the user feedback. On the other hand, the approach with
user behavior improved over time, due to a rich population of the dictionary, and the
average number of queries was still considerably small.
Figure 5.8. (a) System-centric approach with varying privacy levels, (b) Hybrid Approach with varying privacy settings, (c) Privacy quantization using user behavior over varying privacy settings, (d) Comparison of the three approaches with respect to the number of operations/queries.
The averaged results of our simulation are presented in Figures 5.8(a) - 5.8(d).
We considered 2 - 10 privacy levels for the system-centric approach and ran a sim-
ulation with up to 25 sessions, as shown in Figure 5.8(a). We then increased the
number of sessions to 50, and observed that it took a considerable amount of time
for the system to attain a steady state. We then tested the hybrid approach and
the approach with user behavior, for up to 50 sessions, and presented the results in
Figures 5.8(b) and 5.8(c) respectively. Figure 5.8(d) describes our comparison of the
three approaches with respect to the number of context elements and average number
of operations (or queries).
5.6 Summary
In this chapter, we have presented user-centric approaches for introducing gran-
ularity of user privacy in context-aware systems deployed in ubiquitous computing
environments. We have described our proposed approaches and analyzed them in
various scenarios. The approaches are highly scalable and can be extended to include
input from sensors monitoring the physical environment, or new devices entering the
ubiquitous computing environment. We have developed a GUI to make perception
of privacy intuitive for the user, yet allowing the scheme to be adept in resolving
context.
CHAPTER 6
SITUATION-AWARE MIDDLEWARE
As applications and systems are rapidly becoming more networked, there is a
constant need for an approach to manage the complexity and heterogeneity inherent
in such distributed systems. Middleware performs this task of connecting parts of the
distributed application, and is traditionally a layer between the network and appli-
cation layers. A common definition of middleware is software that connects different
parts of an application or a series of applications. In other words, it can be considered
as software that functions as a conversion or translation layer, a consolidator, and an
integrator. Many middleware solutions have been developed to enable applications
running on different platforms, or developed by different vendors, to communicate
with each other. Every type of middleware, however, has the same general purpose
of extending the scope of an application or applications over a network. Ubiquitous
middleware, on the other hand, are constrained in development by unique challenges
owing to the nature of the environment in which they are deployed.
6.1 Challenges of designing Ubiquitous middleware
Five pitfalls that a designer faces while implementing conventional middleware
in a ubiquitous computing environment are identified in [47]. The common
vulnerabilities encountered while developing such a ubiquitous computing system can
be categorized into flow-based, or process-based. The former is the inability to asso-
ciate relevant information and lack of transparency, and the latter deals with issues
such as configuration being given importance over action, and the granularity of the
system incorporating social variations. These two categories are interwoven, but de-
marcating them can help designers in analyzing them. Some of the current problems
involved in developing a privacy-sensitive ubiquitous computing framework are:
6.1.1 Inability to associate relevant information
A ubiquitous computing environment is a challenge to designers of middleware.
It is difficult to encompass the myriad information flows that exist in such a chaotic
environment. Many of the middleware are designed around an Event-Condition-Action
(ECA) approach [12, 52, 48]. Importance has to be given to the relevance of data
pertaining to a session. It would be more advantageous to take a user's behavior as
an entity and derive work flows from it, rather than considering events as a basal
unit [71]. The middleware should be able to predict the information required for a
particular service. For example, a request for a user’s contact details should include
email address, address, and phone number. It is redundant to have multiple requests
for each piece of information.
6.1.2 Lack of transparency in authentic information
In human-computer interaction, computer transparency is an aspect of user
friendliness which relieves the user of the need to worry about technical details. When
there is a large gap between user perception and actual authentic information, the
system is failing in representation of information. Information transparency changes
behavior [25], and there have been some efforts in the field of privacy enhancing
technologies that help create transparency of information security practices.
One problem area to be tackled is that of sharing and distributing informa-
tion between users, i.e., not only between all participants in a single application such
as a conference, but also across different applications, e.g., information retrieval.
This makes the need for information brokers imperative. CORBA Component Model
(CCM), an extension of “language independent Enterprise Java Beans (EJB)” [17],
is a broker-oriented architecture. By moving the implementation of these services
from the software components to a component container, the complexity of the com-
ponents is dramatically reduced [24]. One drawback of CCM is the lack of provision
for tackling the issue of disconnected processes, which is rampant in a ubiquitous
computing environment [83].
6.1.3 Configuration superseding action
Designers constantly include extensive configuration steps for incorporating privacy
into the model. This may be necessary for making the system robust, but it deters
the user from using the system effectively. Web services [14, 33] aim at promoting a
modular, interoperable service layer on top of the existing Internet software [11], but
lack consistent management and are tightly bound to the Simple Object Access Proto-
col (SOAP), which constrains compliance to various ubiquitous computing protocols.
Jini [74] is a service-oriented architecture that defines a programming model which
both exploits and extends Java technology to build adaptive network systems that are
scalable, evolvable and flexible as typically required in dynamic computing environ-
ments. Jini, however, assumes that mobile devices are only service consumers, which
is not always the case. We aim at reducing the task of user configuration by introducing
classification of information based on privacy levels.
6.1.4 Granularity of the system incorporating social variations
The ubiquitous framework should be able to predict the privacy level of the
session based on peer bonding and organizational hierarchy. At the same time, it
should allow the user to set privacy levels to other individuals based on their social
interaction. Since it is difficult to define privacy, we considered it beneficial to incor-
porate a privacy slider to effectively depict the granularity of user interpretation. In
a social environment, maladroit situations, such as denial of a service or a request for
information, have to be handled gracefully.
These conundrums that are constantly faced while developing middleware frame-
works for ubiquitous computing environments form the motivation for this work. Se-
curity and privacy have an implicit relationship. An elementary level of security is
imperative while helping people manage their personal privacy. Since, in many scenar-
ios of a ubiquitous computing environment, the users are not necessarily adversaries,
and usually know each other's identities, the uncertainty would be less, and hence we
adopt a privacy risk model rather than a threat model. Social and organizational
context should also be taken into consideration while developing a framework for the
environment [40].
6.2 Motivating Scenario
Consider a scenario in a ubiquitous health care environment, where Dr. Alice,
who is in charge of a recent patient, intends to discuss the medical condition with
one of her colleagues, Dr. Bob. Since the patient recently entered the hospital, there
could arise a situation where the central database is not updated, and the results of
the various tests conducted and procedures administered, are still in the respective
departmental servers. When Dr. Alice issues a request to send the patient’s chart to
Dr. Bob, the system, upon finding that the central database has not been updated,
searches for the various bits of information distributed in the hospital and aggregates
all the data and sends it to Dr. Bob.
John stores his emergency information, auto insurance, and medical records on
his PDA. If the PDA detects an accident, it displays a dialog box, asking John if
he needs medical assistance. Upon receiving no response, the PDA places a call to
the Emergency Medical Services (EMS), giving the location of the user. When the
Emergency Medical Technicians (EMTs) arrive, they request all the devices of the
user to transmit data about the user. When the PDA checks that the request was
issued by an Emergency Medical Technician, it transmits the personal and medical
details, which the EMT aggregates with the vital signs and health monitoring data
and transmits it to the Emergency Room prior to arrival, facilitating the doctors in
the ER to prepare for the victim. A case sheet is generated with the patient details
and the doctor on call, Dr. Alice, is informed of the status of the new case. Suppose
Dr. Alice wishes to discuss John's case with Dr. Bob: she messages him and asks if
he is free to discuss the case. On receiving Dr. Bob’s response, Dr. Alice’s laptop
decreases the privacy level of the session since the data is being sent to a peer (doctor).
It then collects all the related patient information, checks the target device and makes
necessary changes to match Dr. Bob’s device and his privacy settings. Concurrently,
a video conference session is set up between the doctors. When the session ends, Dr.
Alice’s privacy level is set back to the default setting.
The scenario described, though common in real life, involves modifications to
the system based upon various context elements such as location, device properties,
etc., and also on the social interactions of the users. In this chapter, we focus on
patterns of personal information disclosure and privacy implications associated with
user interactions in a ubiquitous computing environment. In the next section, we
present our middleware framework, and discuss its architecture and functioning. We
also demonstrate the working of our middleware in assisted environments, and present
experimental results, and our findings.
6.3 Precision
Currently, computing devices have penetrated the hospital environment, but
inter-networking is not yet seamless. Many procedures are still manually entered
into the system, and there is no end-to-end transparency in the process. Due to the
diversity in ubiquitous computing devices, incompatibility and reliability issues are
predominant. Data takes a long time to migrate and availability issues constantly
plague the staff and the doctors. We have developed a Privacy enhanced Context-
aware Information Fusion framework for ubiquitous computing environments called
Precision to handle personal privacy of a user in highly dynamic environments. Figure
6.1 shows the architecture of Precision.
Figure 6.1. Proposed middleware of Precision.
6.3.1 Device Abstraction Layer
Since ubiquitous computing environments contain myriad heterogeneous de-
vices, middleware developed for a ubiquitous computing system requires an abstrac-
tion layer to hide the hardware and implementation details from the upper layers. The
device abstraction layer is responsible for obtaining data from the various devices, and
translating this into context attributes. Each type of device contains an abstraction
module and an adaptor to connect to the various devices. The abstraction module
contains application programming interfaces which obtain the information from the
device, aggregate data obtained from similar sensors, if possible, and pass the ob-
tained data as an XML formatted context attribute to the Context gatherer in the
Information retrieval layer.
Currently, inter-networking among the heterogeneous devices is not inherently
seamless. Due to the diversity in ubiquitous computing devices, incompatibility and
reliability issues are predominant. Data takes a long time to migrate and availability
issues constantly plague the ubiquitous computing environment. To resolve this issue,
we have implemented an intelligent mobile agent-based resource discovery scheme. We
assume that several devices are distributed in the ubiquitous computing environment
and nodes capable of routing context information regarding usage of devices in their
zone1 are strategically identified as Resource Index nodes (RIns). RIns maintain
routing and resource indexing information. Any device that has had an interaction
with the user dispatches a message to its nearest RIn with resource update information;
a mobile agent is then sent with the resource information to the abstraction
module. Upon successful delivery, the mobile agent retracts to the originating RIn
and is destroyed.
Resource discovery is the ability to locate resources that adhere to a set of re-
quirements pertaining to a query that invoked the discovery mechanism. [81] provides
a taxonomy for resource discovery systems by defining their design aspects. To sup-
port a large number of resources, defining and grouping services in scopes facilitates
1 Zone creation is discussed in Section 4.3.
resource search. Location awareness is a key feature in ubiquitous computing [27] and
location information is helpful in many resource discovery cases. In Matchmaking,
a classified advertisement matchmaking framework, client requests are matched to
resources and one of the matched resources is selected for the user [30].
The RIn is equipped with superior processing power, more than average nor-
malized link capacity, and reliability, as compared to other nodes. These nodes are
responsible for indexing all the local and some of the remote services and resources
and contain logic to create intelligent mobile agents that serve to explore the ubiq-
uitous computing environment and index the resources and services available therein
at the dispatching RIn. The RIns are chosen by an election procedure and are assumed
to be present all the time in a locality. The election procedure is started as soon as
a RIn is not reachable.
RIns are placed such that each node is connected to at least one RIn within two
hops. Reliability of a node depends on the past performance of the node, such as the
number of packets dropped at that node, link failures, and node failures. The average
normalized link capacity is the average of the capacities of all links adjacent to a node.
The RIn placement algorithm [64] is distributed. Initially, each node sends its
information packet to its neighbor nodes. The information packet contains the node
id and the weight of the node. A node calculates its weight using the equation
Wi = (Σ_{j=1..N} 1/dj) + Pi + Ri + Lavg,
where N is the total number of neighbor nodes of the ith node, dj is the degree of the
jth neighbor node, Pi is the processing power of the ith node, Ri is the reliability of the
ith node, and Lavg is the average normalized link capacity at the ith node, given by
Lavg = (C1 + C2 + C3)/3, where C1, C2, C3 are the adjacent link capacities.
On receiving weight packets from all the neighboring nodes, a node compares
its weight with its neighbor nodes. A RIn vote is sent back to the node with the
highest weight. A node receiving MaxVotes number of votes will announce itself as
the RIn. Nodes in the neighborhood of the RIn note down their local RIn. MaxVotes
can be chosen depending on the density of the nodes and requirement of RIns. The
RIn selection procedure might also be started when a node detects that its local RIn
is not responding.
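A sketch of the per-node weight computation follows; the numeric scales of processing power, reliability, and link capacity are assumptions, since [64] defines the precise terms:

    public class RinWeight {
        // W_i = (sum over neighbors of 1/d_j) + P_i + R_i + L_avg
        static double weight(int[] neighborDegrees, double processingPower,
                             double reliability, double[] linkCapacities) {
            double sum = 0;
            for (int d : neighborDegrees) sum += 1.0 / d;   // low-degree neighbors raise the weight
            double lavg = 0;
            for (double c : linkCapacities) lavg += c;
            lavg /= linkCapacities.length;                  // average normalized link capacity
            return sum + processingPower + reliability + lavg;
        }

        public static void main(String[] args) {
            // A node with neighbors of degree 2, 3, 4 and three adjacent links.
            double w = weight(new int[]{2, 3, 4}, 0.9, 0.8, new double[]{0.5, 0.7, 0.6});
            System.out.printf("W_i = %.3f%n", w);  // nodes vote for the highest-weight neighbor
        }
    }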
6.3.2 Information Retrieval Layer
The Information Retrieval Layer retrieves information from the Device abstrac-
tion layer, and generates situations based on the gathered information. This layer
is tightly coupled with the Decision layer, and consists of a Context Gatherer and a
Situation Analyzer and Generator (SAGe).
6.3.2.1 Context Gatherer
The Context Gatherer manages the underlying context information. The archi-
tecture of the Context Information Management Unit is illustrated in Figure 6.2.
The XML-formatted context information obtained from the Device Abstraction
layer is fed as input to the Context Acquirer, which parses the XML and passes the
encapsulated data, along with the type of device or node, to the Context Identifier.
The Context Identifier identifies the verbs related to the type of device, attaches this
information, and forwards it to the Context Information Aggregator. The Aggregator continues to buffer
context information until a device is encountered, and then generates the activity
sequence of the situation, and passes it to the Situation Analyzer and Generator.
Input: XML-formatted context information
Output: ⟨Activity Sequence⟩ consisting of context attributes, verbs, and a terminal device
Figure 6.2. Context Gatherer.
6.3.2.2 Situation Analyzer and Generator (SAGe)
SAGe consists of a Situation Tree constructor and a Rules Engine. The sequence
of context obtained from the Context Gatherer is parsed with the help of the Rules
engine, and situation trees are constructed. Depending on the nature of the context
sequence, multiple situation trees could be generated and these are stored in a cache.
Level-1 and Level-2 grammar are generated by the Rules Engine using rules and the
vocabulary generated, and the situation is constructed and passed to the Decision
Layer. The architecture of SAGe is shown in Figure 6.3.
Figure 6.3. Situation Analyzer and Generator (SAGe).
Input: Activity sequence describing the situation
Output: Level-2 grammar and parse tree describing the situation
6.3.3 Decision Layer
The Decision Layer consists of a Decision Engine and a Policy engine, a knowl-
edge base for storing the dynamic rules generated, and a policy database which stores
all the policies related to privacy.
6.3.3.1 Decision Engine
Decision making is crucial in any middleware for ubiquitous computing. The
Event-Condition-Action (ECA) approach often becomes inadequate in these appli-
cations, where combinations of multiple contexts and user actions need to be analyzed
over a period of time. Given the situation trees obtained from the SAGe, actions
can be performed based on the situation's presence in the knowledge base. If the
situation is not present, then the situation is broken down into simpler units, until
the units can be mapped to situations in the knowledge base, and the corresponding
actions are sent to the application layer. Based on the actions approved by the user,
the knowledge base is updated with the new situation, enabling the system to improve
its knowledge, and accuracy in decision making.
6.3.3.2 Policy Engine
The policy engine consists of our developed schemes for user privacy (discussed in
Chapter 5). Guidelines for privacy of sensitive information (such as HIPAA directives)
and user settings are stored, and the policy engine generates privacy levels for the
context attributes, based on these policies. The proper set of context attributes that
can be shared are sent to the decision engine to calculate the required actions for the
situation.
6.3.4 Application Layer
The user interface consists of an application manager and a message center.
The application manager coordinates the various messages on the device. In the
event of a request for information, a query is generated and sent to the Decision
Layer. The message center is connected to the application manager and is similar to
an instant messaging client. This also acts as a conduit to the information that is
received for a request issued by the user or by a message that is sent to the user. Our
proposed scheme is implemented beneath the user interface layer and on top of the
existing middleware. We chose JavaTM technology in order to incorporate seamless
migration of agents over cross-platform technologies.
6.4 Results and analysis
6.4.1 Case study 1: Information sharing between users
In this case study, we considered two types of interactions. We initially con-
sidered an Instant Messaging (IM) application in which users could chat in real time
and share files. We assigned privacy levels to the files based on prior interactions
with the users. We then simulated exchange of information, and validated our privacy
management with varying levels of privacy. Figure 6.4(a) shows the GUI of our appli-
cation on a mobile phone, through which the user can enter contact information, and
set privacy levels. The desktop version of the GUI is shown in Figure 6.4(b). This is
a more comprehensive user interface, and the user is capable of obtaining information
about the privacy settings, social and organizational information, privacy level, and
prior sessions, within one click.
Figure 6.4. GUI for (a) Mobile phones (b) Desktops.
The desktop version of the chat application is shown in Figure 6.5. The chat
application enables the users to share a file, create permissions for multiple users, and
allow collaboration between users, based on the permissions set by the owner of the
file. This enables easy migration of information within a team or between project
members, and all members share the same resource concurrently.
Figure 6.5. Desktop frontend of the Chat application.