Out of context: Computer systems that adapt to, and learn from, context

by H. Lieberman and T. Selker

There is a growing realization that computer systems will need to be increasingly sensitive to their context. Traditionally, hardware and software were conceptualized as input/output systems: systems that took input, explicitly given to them by a human, and acted upon that input alone to produce an explicit output. Now, this view is seen as being too restrictive. Smart computers, intelligent agent software, and digital devices of the future will have to operate on data that are not explicitly given to them, data that they observe or gather for themselves. These operations may be dependent on time, place, weather, user preferences, or the history of interaction. In other words, context. But what, exactly, is context? We look at perspectives from software agents, sensors, and embedded devices, and also contrast traditional mathematical and formal approaches. We see how each treats the problem of context and discuss the implications for design of context-sensitive hardware and software.

We are in the middle of many revolutions in computers and communication technologies: ever faster and cheaper computers, software with more and more functionality, and embedded computing in everyday devices. Yet much about the computer revolution is still unsatisfactory. Faster computers do not necessarily mean more productivity. More capable software is not necessarily easier to use. More gadgets sometimes cause more complications. What can we do to make sure that the increased capability of our artifacts actually improves people’s lives?

Several subfields of computer science propose paths to a solution. The field of artificial intelligence tells us that making computers more intelligent will help. The field of human-computer interaction tells us that more careful user-centered design and testing of direct-manipulation interfaces will help. And indeed they will. But in order for these solutions to be realized, we believe that they will have to grapple with a problem that has previously been given short shrift in these and other fields: the problem of context.

We propose that a considerable portion of what we call intelligence in artificial intelligence or good design in human-computer interaction actually amounts to being sensitive to the context in which the artifacts are used. Doing “the right thing” entails that it be right given the user’s current context. Many of the frustrations of today’s software—cryptic error messages, tedious procedures, and brittle behavior—are often due to the program taking actions that may be right given the software’s assumptions, but wrong for the user’s actual context. The only way out is to have the software know more about, and be more sensitive to, context.

Many aspects of the physical and conceptual environment can be included in the notion of context. Time and place are some obvious elements of context. Personal information about the user is part of context: Who is the user? What does he or she like or dislike? What does he or she know or not know?

© Copyright 2000 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.



History is part of context. What has the user done in the past? How should that affect what happens in the future? Information about the computer system and connected networks can also be part of context. We might hope that future computer systems will be self-knowledgeable—aware of their own context.

Notice how little of today’s software takes any significant account of context. Most of today’s software acts exactly the same, regardless of when and where and who you are, whether you are new to it or have used it in the past, whether you are a beginner or an expert, whether you are using it alone or with friends. But what you may want the computer to do could be different under all those circumstances. No wonder our systems are brittle.

What is context? Beyond the “black box”

Why is it so hard for computer systems to take account of context? One reason is that, traditionally, the field of computer science has taken a position that is antithetical to the context problem: the search for context-independence.

Many of the abstractions that computer science and mathematics rely on—functions, predicates, subroutines, I/O systems, and networks—treat the systems of interest as black boxes. Something goes in one side, something comes out the other side, and the output is completely determined by the input. This is shown in Figure 1. We would like to expand that view to take account of context as an implicit input and output to the application. That is, the application can decide what to do, based not only on the explicitly presented input, but also on the context, and its result can affect not only the explicit output, but also the context. Context can be considered to be everything that affects the computation except the explicit input and output, as shown in Figure 2.

And, in fact, even this diagram is too simple. To be more accurate, we should actually close the loop, bringing the output back to the input. This acknowledges the fact that the process is actually an iterative one, and a state that is both input to and generated by the application persists over time and constitutes a feedback loop.
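
To make the black-box versus context-aware contrast concrete, here is a minimal sketch of the two signatures (our own illustration, not from the original paper; all names are hypothetical): a black-box application maps explicit input to explicit output, while a context-aware application also reads and updates a context object, and the updated context feeds back into the next interaction.

    from dataclasses import dataclass, field

    @dataclass
    class Context:
        """Everything that affects the computation except the explicit input and output."""
        user_state: dict = field(default_factory=dict)         # preferences, expertise
        physical_env: dict = field(default_factory=dict)       # time, place, sensor readings
        computational_env: dict = field(default_factory=dict)  # system resources, versions
        history: list = field(default_factory=list)            # past interactions

    def black_box(explicit_input: str) -> str:
        """Traditional view: the output is completely determined by the input."""
        return explicit_input.upper()

    def context_aware(explicit_input: str, ctx: Context) -> tuple[str, Context]:
        """Context-aware view: the result depends on, and also changes, the context."""
        greeting = "Welcome back" if ctx.history else "Hello"
        output = f"{greeting}: {explicit_input}"
        ctx.history.append(explicit_input)  # the output side feeds back into future context
        return output, ctx

    # Closing the loop: the context generated by one step is an input to the next.
    ctx = Context()
    for request in ["open mail", "open mail"]:
        reply, ctx = context_aware(request, ctx)
        print(reply)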

One consequence of this definition of context is that what you consider context depends on where you draw the boundary around the system you are considering. This affects what you will consider explicit and what you will consider implicit in the system. When talking about human-computer interfaces, the boundary seems relatively clear, because the boundary between human and computer action is sharp. Explicit input given to the system requires explicit user interface actions—typing or menu or icon selection in response to a prompt, or at the time the user expects the system’s actions to occur. Anything else counts as context—history, the system’s use of file and network resources, time and place if they matter, etc.

Figure 1: The traditional computer science “black box”: explicit input goes into the application, and explicit output comes out.

Figure 2: Context is everything but the explicit input and output. Context is: the state of the user, the state of the physical environment, the state of the computational environment, and the history of user-computer-environment interaction.

If we are talking about an internal software module, or the software interface between two modules, it becomes less clear what counts as context, because that depends on what we consider “external” to that particular module. Indeed, one way that computer scientists sometimes deal with troublesome aspects of context is through “reification”—redrawing the boundaries so that what was formerly external to a system becomes internal. We must always be clear about where the boundaries of a system are. Anything outside is context, and it can never be made to go away completely.

The context-abstraction trade-off. The temptation to stick to the traditional black-box view comes from the desire for abstraction. Mathematical functions derive their power precisely from the fact that they ignore context, so they are assumed to work correctly in all possible contexts. Context-free grammars, for example, are simpler than context-sensitive grammars and so are preferable if they can be used to describe a language. Side effects in programming languages are changes to or dependencies on context, and they are shunned because they thwart repeatability of computation.

Thus, there is a trade-off between the desire for abstraction and the desire for context sensitivity. We believe that the pendulum has now swung too far in the direction of abstraction, and work in the near future should concentrate more on reintroducing context sensitivity where it is appropriate. Since the world is complex, we often adopt a divide-and-conquer strategy at first, assuming the divided pieces are independent of each other. But a time comes when it is necessary to move on to understanding how each piece fits in its context.

The reason to move away from the black-box model is that we would like to challenge several of the assumptions that underlie this model. First is the assumption of explicit input. In user interfaces, explicit input from the user is expensive; it slows down the interaction, interrupts the user’s train of thought, and raises the possibility of mistakes. The user may be uncertain about what input to provide, and may not be able to provide it all at once. Most of us are familiar with the hassle of entering the same information many times into forms on the Web. If the system can get the information it needs from context (stored somewhere else, remembered from a past interaction), why does it ask for it again? Devices that sense the environment and use speech recognition or visual recognition may act on input that they sense—input that may or may not be explicitly indicated by the user. Therefore, in many user interface situations, the goal is to minimize input explicitly provided by the user.

Similarly, explicit output from a computational process is not always desirable, particularly when it places immediate demands on the user’s attention.

Hiroshi Ishii [1] and others have worked on “ambient interfaces” where the output is a subtle changing of barely noticeable environmental factors such as lights and sounds, the goal being to establish a background awareness rather than force the user’s attention to the system’s output.

Finally, there is the implicit assumption that the input/output loop is sequential. In practice in many user interface situations, input and output may be going on simultaneously, or several separate I/O interactions may be overlapped. While traditional command-line interfaces adhered to a strict sequential conversational metaphor between the user and the machine, graphical interfaces and virtual reality interfaces could have many user and system elements active at once. Multiple agent programs, groupware, and multiprocessor machines can all lead to parallel activity that goes well beyond the sequential assumptions of the explicit I/O model.

Putting context in context. So, given the above description of the context problem, how do we make our systems more context-aware? Two parallel trends in the hardware and software worlds make this transformation increasingly urgent. On the hardware side, smaller computation and communication hardware and less expensive sensors and perceptual technologies make embedded computing in everyday devices more and more practical. This gives the devices the ability to sense the world around them and to act on that information. But how? Devices can easily be overwhelmed with sensory data, so they must determine what is worth acting on or reporting to the user. That is the challenge that we intend to meet with context-aware computing.




On the software side, we view the movement toward software agents [2,3] as an attempt to reduce the complexity of direct-manipulation screen-keyboard-and-mouse interfaces by shifting some of the burden of dealing with context from the human user to a software agent. As these agent interfaces move off the desktop, and small hardware devices take on proactive decision-making roles, we see the convergence of these two trends.

Discussion of aspects of context-aware systems as an industrial design stance can be found in a related paper [4], which also details some additional projects in augmenting everyday household objects with context-aware computing.

In the next sections of this paper, we describe several of our projects in these areas for which we believe the context problem to be a motivating force. These projects show case studies dealing with the context problem on a practical application level and provide illustrations of the techniques and problems that arise.

We then broaden our view to very quickly survey some views that other fields have taken on the context problem, particularly traditional approaches in artificial intelligence and mathematical logic. Sociology, linguistics, and other fields have also dealt with the context problem, and although we cannot exhaustively treat these fields here, an overview of the various perspectives is helpful.

Context for user interface agents

The context problem has special relevance for the new generation of software agents that will soon be both augmenting and replacing today’s interaction paradigm of direct-manipulation interfaces. We tend to conceptualize a computer system as being like a box of tools, with each tool specialized to do a particular job when it is called on by the user. Each menu operation, icon, or typed command can be thought of as being a tool. Computer systems are now organized around so-called “applications,” or collections of these tools that operate on a structured object, such as a spreadsheet, drawing, or collection of e-mail messages.

Each application can be thought of as establishing a context for user action. The application determines what actions are available to the user and what objects can be operated on. Leaving one application and entering another means changing contexts—the user gets a different set of actions and a different set of objects. Each tool works in only a single context and only when that particular application is active. Any communication of data between one application and another requires a stereotypical set of actions on the part of the user (copy, switch application, paste).

One problem with this style of organization is that many things the user wishes to accomplish are not completely implementable within a single application. For example, the user may think of “arrange a trip” as a single task, but it might involve use of an e-mail editor, a Web browser, a spreadsheet, and other applications. Switching between them, transferring relevant data, worrying about things getting “out of sync,” differences in command sets and capabilities between different applications, remembering what has already been done, etc., soon becomes overwhelming and makes the interface more and more complex. If we insist on maintaining the separation of applications, there is no way out of this dilemma.

How do we deal with this in the real world? We might delegate the task of arranging a trip to a human assistant, such as a secretary or a travel agent. It then becomes the job of the agent to decide what context is appropriate and what tools are necessary to operate in each context, and to determine what elements of the context are relevant at any moment. The travel agent knows that we prefer an aisle seat and how to select it using the airline’s reservation system, whether we have been cleared for a wait-listed seat, how to lower the price by changing airline or departure time, etc. It is this kind of job that we will have to delegate more and more to software agents if we want to maintain simplicity of interaction in the face of the desire to apply computers to ever more complex tasks.

Agents and user intent. The primary job of the agent is to understand the intent of the user. There are only two choices: either the agent can ask the user, or the agent can infer the user’s intent from context. Asking the user is fine in many situations, but making explicit queries for everything soon exhausts the user. We rely on our human agents to learn from past experience and to be able to pick up information they need from context. We expect human agents to be able to piece together partial information that comes from different sources at different times in order to solve a problem. Today’s software does not learn from past interactions, always asks explicit questions, and can deal with information explicitly presented to it only when it is ready to receive it. This will have to change if we are to make computers ever more useful.

Getting context sensitivity right is no easy task. It is particularly risky because if you get it wrong, it becomes very noticeable and annoying to the user. As in any first-generation technology, there will be occasional mistakes. As an example, the feature in Microsoft Word** that automatically capitalizes the first word of a sentence can occasionally get it wrong when you type a word following an abbreviation. The first time is not so bad. But this becomes even more frustrating as the user tries to undo the “correction” and is repeatedly “recorrected.” One can take this as an argument not to do the correction at all. But perhaps the cure is more context sensitivity, rather than less. The system could notice that its suggestion was rejected by the user, and possibly also note the abbreviation so that its performance improves in the future.
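
As an illustration of the kind of behavior called for above, here is a minimal sketch of our own (with hypothetical names, not any product’s actual implementation) of an auto-capitalizer that records a rejected correction and treats the preceding abbreviation as an exception from then on.

    class LearningAutoCapitalizer:
        """Capitalizes the first word after a sentence-ending period,
        but learns abbreviations from the user's rejections."""

        def __init__(self):
            self.known_abbreviations = {"e.g.", "i.e.", "etc.", "Dr."}

        def suggest(self, previous_token: str, word: str) -> str:
            # Do not capitalize after something we believe is an abbreviation.
            if previous_token in self.known_abbreviations:
                return word
            if previous_token.endswith("."):
                return word[:1].upper() + word[1:]
            return word

        def user_rejected(self, previous_token: str) -> None:
            # Context sensitivity: a rejected "correction" after a token ending in "."
            # is evidence that the token is an abbreviation, not a sentence end.
            if previous_token.endswith("."):
                self.known_abbreviations.add(previous_token)

    ac = LearningAutoCapitalizer()
    print(ac.suggest("approx.", "two"))   # "Two" -- the wrong guess the first time
    ac.user_rejected("approx.")           # the user undoes the correction
    print(ac.suggest("approx.", "two"))   # "two" -- performance improves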

Instructibility and generalization from context. All we have to start with, for humans as well as machines, is concrete experience in specific situations. For that knowledge to be of any use, it has to be generalized, and so generalization is key for the agent to infer the intent of the user. Generalization means remembering what the user did, and removing some of the details of the particular context so that the same or analogous experience will be applicable in different situations.

This involves an essential trade-off. A conservative approach sticks closely to the concrete experience, and so achieves increased accuracy at the expense of restricting applicability to only those situations that are very similar to the original. A liberal approach tries to do as much abstraction as possible, so that the result will be widely applicable, but at the increased risk of not being faithful to the user’s original intentions.

We illustrate this relationship between generalization and context by talking about several projects that try to make software agents instructible, using the technique of “programming by example” [5], sometimes also called “programming by demonstration.” This technique couples a learning agent with a conventional direct-manipulation graphical interface, such as a text or graphic editor. The agent records the actions performed by the user in the interface and produces a generalized program that can later be applied to analogous examples.

Authors in this field have noted the “data description problem,” which is the problem of how to use context in deciding how much to generalize the recorded program. The system often has to make the choice of using extensional descriptions (describing the object according to its own properties) or intentional descriptions (describing the object according to its role or relationships with other objects).

Sometimes, knowledge of the application domain provides enough context to disambiguate actions. In the programming-by-example graphical editor Mondrian [6], the system describes objects selected by the user according to a set of graphical properties and relations. These relations are determined by extensional graphical properties (top, bottom, left, right, color) and by intentional properties of the object’s role in the demonstration (an object designated as an example might be described as “the first argument to the function being defined”).
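
To make the two kinds of data description concrete, here is a small sketch of our own (not Mondrian’s actual representation; all names are hypothetical) contrasting an extensional description, built from the selected object’s own properties, with an intentional one, built from its role in the demonstration.

    from dataclasses import dataclass

    @dataclass
    class GraphicalObject:
        left: int
        top: int
        width: int
        height: int
        color: str

    def extensional_description(obj: GraphicalObject) -> dict:
        """Describe the object by its own graphical properties."""
        return {"left": obj.left, "top": obj.top,
                "right": obj.left + obj.width, "bottom": obj.top + obj.height,
                "color": obj.color}

    def intentional_description(obj: GraphicalObject, demo_arguments: list) -> dict:
        """Describe the object by its role in the demonstration."""
        if obj in demo_arguments:
            return {"role": f"argument {demo_arguments.index(obj) + 1} of the function being defined"}
        return {"role": "unnamed object in the scene"}

    rect = GraphicalObject(left=27, top=52, width=122, height=165, color="red")
    print(extensional_description(rect))
    print(intentional_description(rect, demo_arguments=[rect]))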

We provide the user with two different ways to interactively establish a context: graphically, via attaching graphical annotations to selected parts of the picture [6], or by speech input commands that advise the software agent “what to pay attention to” [7] (see Figure 3).

Note that neither the user’s verbal instruction alone nor the graphical action alone makes sense out of context. It is only when the system interprets the action in the context of the demonstration and the context of the user’s advice that the software agent has enough information to generalize the action.

User advice to a system thus forms an important aspect of context. It can be given either during a demonstration, as in Mondrian, or afterward. Future systems will necessarily have to give the user the ability to critique the system’s performance after the fact. User critiques will serve to debug the system’s performance, and serve as a primary mechanism for controlling the learning behavior of the system. The ability to accept critiques will also increase user satisfaction with systems, since there will be less pressure to get it right the first time. Notice that making use of critiques also involves a generalization step. When the user says “do not do that again,” it is the responsibility of the system to figure out what “that” refers to, by deciding which aspects of the context are relevant. The ability to modify behavior based on critiques is an essential difference between genuine human dialog and the rigid dialog boxes offered by today’s computer interfaces.

A different kind of use of user context is illustrated by the software agent Letizia [8]. Letizia implements a kind of observational learning, in that it records and analyzes user actions to heuristically compute a profile of user interests. That user profile is then used as context in a proactive search for information of interest to the user.

Letizia tracks a user’s selections in a Web browser and does a “reconnaissance” search to find interesting pages in the neighborhood of the currently viewed page (see Figure 4). In this agent, it is analysis of the user’s history that provides the context for anticipating what the user is likely to want next. Letizia shows that there is a valuable role for a software agent in helping the user to identify intersections of past context (browsing history) with current context (the currently viewed Web page and other pages a few links away from it).
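
A minimal sketch of the kind of heuristic profiling just described (our own simplification; Letizia’s actual heuristics are more elaborate): keyword weights accumulated from the browsing history serve as the past context against which candidate pages one link away are scored.

    import re
    from collections import Counter

    def keywords(text: str) -> Counter:
        """Crude term-frequency profile of a page's text."""
        words = re.findall(r"[a-z]+", text.lower())
        return Counter(w for w in words if len(w) > 3)

    def update_profile(profile: Counter, visited_page_text: str) -> Counter:
        """Observational learning: fold each visited page into the interest profile."""
        profile.update(keywords(visited_page_text))
        return profile

    def reconnaissance(profile: Counter, linked_pages: dict) -> list:
        """Rank pages a link away by how well they match the interest profile."""
        def score(text):
            return sum(profile[w] * c for w, c in keywords(text).items())
        return sorted(linked_pages, key=lambda url: score(linked_pages[url]), reverse=True)

    profile = Counter()
    profile = update_profile(profile, "broker ratings and investment advice from Merrill")
    ranked = reconnaissance(profile, {
        "smart-money/brokers": "broker ratings compared for online investment",
        "smart-money/recipes": "holiday recipes and cooking tips",
    })
    print(ranked[0])   # the broker-ratings page is recommended first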

Letizia illustrates a role for software agents in helping the user deal with “context overload.” In situations such as browsing the Web, so much information is potentially relevant that the user is overwhelmed by the task of finding out what elements of the context are actually relevant. It is here that the software agent can step in, heuristically try to guess which of the available resources might be relevant, and put the resources most likely to be relevant at the user’s fingertips.

A simple way to deal with the profusion of possible interpretations of context is for the system to compute a set of plausible interpretations and let the user choose among them. It is therefore a way for the user to give advice about context to the system.

Figure 3: Mondrian generalizes according to graphical context, user advice context, and demonstration context. (Mouse clicks and the spoken advice “Maintain height” are combined into a generalized drawing action: draw a rectangle from the left top corner of the first argument, to a point 1/3 of the width of the first argument, and whose height is 165 pixels.)

In Grammex (“Grammars by Example”) [9], the user can teach the system to recognize text patterns by presenting examples, letting the system try to parse them, and then interacting with the system to explain the meaning of each part. Text patterns such as e-mail addresses, chemical formulas, or stock ticker symbols are often found in free text. Apple’s Data Detectors [10] provides an agent that uses a parser to pick such patterns out of their embedded textual context and apply a set of predetermined actions appropriate to the type of object found. For each text fragment, Grammex heuristically produces a menu containing all the plausible interpretations of that string in its context. An example string “media.mit.edu” could mean either exactly that string, a string of words separated by periods, or a recursive definition of a host name. Grammex illustrates how a software agent can assist a user by computing a set of plausible contexts, then asking the user to confirm which one is correct (see Figure 5).
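
A toy sketch of the “plausible interpretations” idea (ours, not Grammex’s actual grammar induction, which is far richer): given a fragment like “media.mit.edu”, enumerate candidate descriptions and let the user confirm the intended one.

    import re

    def plausible_interpretations(fragment: str) -> list:
        """Enumerate candidate generalizations of a text fragment, most specific first."""
        candidates = [f'exactly the string "{fragment}"']
        if re.fullmatch(r"\w+(\.\w+)+", fragment):
            candidates.append("a sequence of words separated by periods")
            candidates.append("a host name: a word, a period, then a host name (recursive)")
        return candidates

    # The agent presents the menu; the user confirms which reading was intended,
    # and that choice becomes the generalization used for future matching.
    for i, reading in enumerate(plausible_interpretations("media.mit.edu"), 1):
        print(f"{i}. {reading}")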

Emacs Menus [11] is a programming environment that sits in a text editor and analyzes the surrounding text to infer a context in which a pop-up menu can suggest plausible completions. If the user is in a context where it is plausible to type a variable, the system can read the program and supply the names of all the variables that would make sense at that point. This expands the notion of context to mean not just “what is in the neighborhood right now” but, more generally, “what is typically in a neighborhood like this.” That significantly improves the system’s usefulness and perceived intelligence.

Models of context: User, task, and system models. All computer systems have some model of context, even if it is only implicit. The computer “knows” what instructions it can process at each stage, knows what input it expects in what order, knows what error messages to give when there is a problem. Historically these have been static descriptions, represented by files or internal data structures, or encoded procedurally and used only for a single purpose. The computer’s models take the form of a description of the system itself, a system model; of the user’s state, history, and preferences, a user model; and of the goals and actions intended to be performed by the user, a task model. To create context-aware applications, user, task, and system models should best be dynamic and have the ability to explain themselves.

Computer users always hold ideas of what the system is, and what they can do with it. Part of the usual system model is “if I start the right programs and type the right things, the computer will do things for me.” Some users understand advanced concepts like client/server systems, or disk swap space; others do not. The system’s model may change (for example, a new version of a browser might integrate e-mail). The user may or may not be aware of this integration.

Historically, computer system models are implicitly held in function calls that expect other parts of the system to be there. Better for contextual computing are systems that represent a system model explicitly, and try to detect and correct differences between the user’s system model and what the system can actually do. This is analogous to “naïve physics” in physics education, where we help people understand how what they think they know, wrong as it may be, affects their ability to reason about the system. A dynamic system model could be queried about what the system could do and perhaps even change its response as it was crashing or being upgraded.

Figure 4: Letizia helps the user intersect past context (a history of browsing concerning investment brokers) with current context (“Smart Money”) to find “Broker Ratings.” The agent learns a profile of user interests, e.g., broker (17.35), investment (8.5), Merrill (7.22).

Norman [12] stressed that users frequently have models of what the system can do that are incomplete, sometimes intentionally so. They adopt strategies that are deliberately suboptimal in order to defensively protect themselves against the possibility or consequences of errors. Discrepancies between a system’s assumption that the user has perfect understanding of its commands and objects, and the user’s actual partial understanding, can lead to problems. This further argues for dynamic and explanatory models so that the user and system can come to a shared understanding of the system’s capabilities.

A user’s task model is always changing. Perhaps the user believes he or she needs just a simple calculation to complete some work. If the result is different than expected, the user’s task model should change. The computer also has expectations (e.g., “a user would never turn me off without shutting down . . .”). A typical graphical help system uses a static task model. “Wizards” typically assume that the user will always do things in a certain way. The wizards in Microsoft Windows** take a user through a linear procedure. Any change from the shown procedure is not explained. So wizards do not help the user learn to generalize from a specific situation. Better approaches can actually use the context to teach concepts that improve user understanding and future performance.

The COgnitive Adaptive Computer Help (COACH) system (discussed later) has a taxonomy of user tasks. This taxonomy allows the system to have a dynamic task model. Sunil Vemuri [13] is building a system called “What Was I Thinking?” that records segments of speech, and, without necessarily completely understanding the speech, maps the current segment onto a similar segment that occurred in the recent past. This system expects that users have different tasks. Computer programs that anticipate changing uses are more context-aware.

The computer has a model of what it thinks the user can do: the user model. In most cases this model is that the user knows all of its commands and should enter them correctly. Explicit user models have been proposed and used for some time. The Grundy system [14] used a stereotype, a list of user attributes such as age, sex, and nationality, to help choose books in a library. Such a stereotype is an example of a user model that allows a computer to take user context into account.

“Do What I Mean,” or DWIM [15], incorporated a system model that would change as a person wrote a program, tracking the program variables and functions. DWIM would notice when a person typed a function name that had not been defined. It would act as though it believed the person’s intended task was to type a known function name and would look for a similar defined name. DWIM used a dynamic user model to search through user-defined words for possible spelling analogs that might be intended. Unfortunately, DWIM also made some bad decisions. This has been corrected in modern successors, which interactively display suggested corrections and permit easy recovery in case of wrong guesses. Early in the 1990s Charles River Analytics’ Open Sesame** tracked user actions in Apple’s Macintosh** operating system and offered predictive completion of operation sequences, such as opening certain windows after opening a particular application.
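
A minimal sketch of a DWIM-style correction (our own; the original Interlisp mechanism was considerably more sophisticated): when an undefined name is typed, search the names the user has already defined for the closest spelling analog, and suggest it rather than silently substituting it.

    import difflib

    def dwim_suggest(typed_name: str, defined_names: list, cutoff: float = 0.7):
        """Suggest the most similar user-defined name for an undefined one, or None."""
        if typed_name in defined_names:
            return typed_name                        # nothing to correct
        matches = difflib.get_close_matches(typed_name, defined_names, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    defined = ["compute-total", "print-report", "reset-counter"]
    suggestion = dwim_suggest("compute-totel", defined)
    if suggestion:
        # Modern successors display the suggestion and let the user recover easily,
        # rather than applying the correction automatically.
        print(f"Undefined name. Did you mean {suggestion!r}?")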

However, just having a user model does not ensure an improved system. In the 1970s and 80s, Sophie [16] and other systems attempted to drive an electronic teaching system from an expert user model. It was found that novice users could not be modeled simply as experts with some missing knowledge. The things that a user needs to know have more to do with the user than with the expert he or she might become [17].

Figure 5: Picking an e-mail address out of context and applying an action to it (top); teaching the system how to recognize an e-mail address using Grammex (bottom).

The COgnitive Adaptive Computer Help (COACH) system [18] uses adaptive task, user, and system models to improve the kind of help that can be given. This system was shown to help users actually improve their ability to learn to program. As well as incorporating a dynamic system model, COACH tracked user experience and expertise and used it to decide what help to give. The system recorded which constructs were used, how often, and whether the user’s command was accepted by the computer, adding usage examples and error examples from other users’ experiences. COACH also added user-defined constructs to the system model, so that the user can explicitly teach the computer.

Context-dependent help is help that is relevant to the commands and knowledge currently active. Relative to our notions of context, such help systems are using a system model to make help appropriate to the situation at the time of the query. Not surprisingly, such context-dependent or integrated help has been shown to improve users’ ability to utilize it [19].

COACH is a proactive, adaptive help system that explains procedures in terms of the user’s own context. In contrast to help systems or wizards that use one example for one problem, COACH explains its procedures using the present context. COACH was implemented first for teaching any programming language that could be typed through an Emacs-like editor or C shell command line. A user study demonstrated that COACH’s adaptive contextual help could improve LISP students’ ability to learn and write LISP. COACH models each task at the novice, intermediate, professional, and expert levels.
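
A small sketch of this kind of adaptive user model (our own illustration with hypothetical thresholds, not COACH’s actual implementation): the help level for each construct is chosen from how often the user has used it and how often the computer has accepted the command.

    from collections import defaultdict

    LEVELS = ["novice", "intermediate", "professional", "expert"]

    class ExpertiseModel:
        """Tracks, per language construct, how often it was used and accepted."""

        def __init__(self):
            self.uses = defaultdict(int)
            self.accepted = defaultdict(int)

        def record(self, construct: str, was_accepted: bool) -> None:
            self.uses[construct] += 1
            if was_accepted:
                self.accepted[construct] += 1

        def level(self, construct: str) -> str:
            # Hypothetical rule: expertise grows with successful uses of the construct.
            return LEVELS[min(self.accepted[construct] // 3, len(LEVELS) - 1)]

        def help_for(self, construct: str) -> str:
            # Less experienced users get worked examples in their own context;
            # experienced users get a brief reminder.
            if self.level(construct) in ("novice", "intermediate"):
                return f"Detailed, example-based help for {construct}"
            return f"Brief reminder of the syntax of {construct}"

    model = ExpertiseModel()
    for ok in [True, True, False, True, True]:
        model.record("defun", ok)
    print(model.level("defun"), "->", model.help_for("defun"))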

Later versions of COACH were deployed as OS/2* (Operating System/2*) Smart Guides. This used graphical, animated, and audio commentary to teach users about many of the important GUI (graphical user interface) interaction techniques. Figure 6 shows COACH demonstrating a drag interaction. We call this image a slug trail, which marks the important visual context surrounding when to press, move, and lift up on the mouse. Another technique developed for COACH is the mask, a graphical annotation creating a see-through grid that highlights important selectable items in the context. COACH adaptively creates the mask on the user interface.

Context for embedded computing

The toilet flushes when the user walks away. The clock tells us it is time to get up. These are examples of context awareness with no user typing into a computer. When computers can sense the physical world, we might dispense with much of what is done with keyboards and mice. Using knowledge of what we do, what we have done, where we are, and how we feel about our actions and environment is becoming a major part of the user interface research agenda.

People say many things: It is what they do that counts. One of the obvious advantages of context-aware computing is that it does not rely on symbols. Symbolic communication must be interpreted through language; communication through context focuses on what we actually do and where we are.

Individuals often communicate multiple messages simultaneously that might be hard to separate. Messages often communicate things about us (age, health, sleepiness, social interest in each other, state of mind, priority of this communication, level of organization, level in the organization, social background, etc.) as well as their ostensible content.

Figure 6: COACH explains operations in terms of the user model (“Level 1” at bottom), system model (masking disabled text at top), and task model (icon acted upon and state of mouse at bottom).

Human speech tends to be full of errors. We communicate what we think should be interpretable, but often underspecify in the utterance. Speech is by nature unrelated to its physical subject (persons, places, things); without feedback it is hard to know how our speech is perceived. Our logic, or the correspondence of what we are saying with the thing we are talking about, is often flawed.

The modality of verbal communication is often less dense than direct observation of physical acts. Describing what part of something should be observed or manipulated in a particular way can be quite cumbersome compared to actually doing it and having an observer watch. Some things are easier done than said.

Things in your head become things in your life. With the advent of ever smaller, faster, and cheaper computing and communications, computing devices will become embedded in everyday devices: clothing, walls, furniture, kitchen utensils, cars, and many different kinds of handheld gadgets that we will carry around with us. Efforts such as the MIT Media Laboratory’s Things That Think and Wearable Computing projects, Xerox PARC’s Ubiquitous Computing [20], Motorola’s Digital DNA, and others, are aimed toward this future. It is our hope that these devices will enhance our lives and not be an annoyance, but that will depend on whether or not the devices can take the action that is appropriate to the context in which they will be used.

Everyone finds cell phones a convenience until they ring inappropriately at a restaurant. Phone companies bristle at the fact that people habitually keep cell phones turned off to guard against just this sort of intrusion. Of course, phones that vibrate, rather than ring, are one simple solution that does not require context understanding and illustrates how sometimes a simple context-free design can work. But even the vibration puts you in a dilemma about whether to answer or not. Potentially, a smarter cell phone of the future could have a GPS (Global Positioning System), know that it is in a restaurant, and take a message. Or the phone could respond differently to each caller, know which callers should immediately be put through, and which could be deferred.
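
A sketch of the kind of policy such a phone might apply (entirely hypothetical names and rules, for illustration only): the decision to ring, vibrate, or take a message depends on the sensed place and on what is known about the caller.

    def handle_incoming_call(caller: str, location: str, priority_callers: set) -> str:
        """Decide how to announce a call from the phone's context, not from the user."""
        if caller in priority_callers:
            return "ring"                      # always put these callers through
        if location in {"restaurant", "meeting", "theater"}:
            return "take a message"            # the place makes an interruption inappropriate
        return "vibrate"                       # default: a low-key announcement

    priority = {"babysitter", "boss"}
    print(handle_incoming_call("telemarketer", "restaurant", priority))  # take a message
    print(handle_incoming_call("babysitter", "restaurant", priority))    # ring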

Context will be useful in cutting down the interface clutter that might otherwise result from having too many small devices, each with its own interface. Already, the touch-tone interface to a common office phone is getting so overloaded with features that most users have difficulty. Specializing “information appliance” devices to a particular task, as recommended by Norman [21], simplifies each device, but leaves the poor user with a proliferation of devices, each with its own set of buttons, display, power supply, user manual, and warranty card. As we have noted, context can be a powerful factor in reducing user input, in embedded computing devices as well as desktop interfaces.

Interfaces for physical devices put different constraints on user interaction than do screen interfaces. Display space is small, if any exists, and space for buttons or other interaction elements is also restricted or may not exist. Users need to keep attention focused on the real-world task, not on interacting with the device.

Transcription: Translating context into action. Perhaps the greatest potential in embedded sensory and distributed computing is the possibility of eliminating many of the transcription tasks that are otherwise foisted on users. Transcription occurs when the user must manually provide, as input, some data that could be collected or inferred from the environment. Transcription relies on the user to perform the translation. This simple act frequently introduces errors. Ergonomic realities mean that explicit transcription is bound to be more stressful than transferring information implicitly. Mitch Stein has also noted the centrality of the transcription problem [22], and promoted Krishna Nathan’s work at IBM on the CrossPad** product in response to the handwritten-notes-to-typing transcription that people often perform. Transcription can also take the form of translating a user’s intuitive goal into a formal language that the computer understands; the most extreme example is the translation of procedural goals into a programming language. The difficulty of learning new computer interfaces, and of programming itself, is largely traceable to the cognitive barriers imposed by this kind of transcription.

“Smart” devices that sense and remember aspects of the surrounding context are becoming interfaces to computational elements, improving human-computer relationships. These devices will be able to listen to and recognize speech input, perform simple visual recognition, and perhaps even sense the emotional state of the user, via sensors being developed in Rosalind Picard’s Affective Computing project [23].

Within a very short time, all desktop software, including Internet accessors, will record parts of their human interface; sensors will record workers’ actions in offices, labs, and other environments. The challenge for the software will be to determine what parts of the context are relevant.

At the MIT Media Lab’s Context-Aware Computing Lab, we have developed several interfaces that illustrate the power of automatically sensing context to eliminate unnecessary transcription tasks. These fall into two categories. The first is context-aware devices that augment the static environment, such as intelligent furniture with embedded computing and interaction capabilities. The second is wearable, portable, or attachable devices that augment the users themselves.

The simplest project in the area of intelligent furniture is the Talking Couch, developed at IBM Almaden’s USER group. A couch positioned in a lobby often serves the purpose of inviting the user to take a break to wait for something to happen. Usually magazines are on the tabletop; a TV might be on in the corner. The digital couch does more—it orients the visitor, suggesting what he or she could do during this break. It speaks as its occupant sits down, informing the visitor about what is going on, what time it is, and what he or she might do. It announces when the scheduled conference has its next break, when the cafeteria might be open, and who the next speaker is. If the occupant is wearing a specially designed personal digital assistant (PDA), the couch goes on to point out specific user information that could be relevant: “It is always good to take a break; you have three things you said you wanted to work on when you had time.” Another message might be: “In 15 minutes you have to give a talk in the auditorium.” The first generic reminder message, and the more timely talk reminder message, illustrate how being reminded might be useful or irritating. The talking couch creates a user model of preferences and ambitions. Without the net-connected PDA, the couch works with the dynamic system model of its surroundings. It creates a task model: a person sitting on the couch would like to be oriented to what is going on in the area. With the PDA it adds to this a schedule-based model of the user.

Another project instruments a bed with computing capabilities. A projection screen is mounted above the bed (Figure 7). A bed is expected to be the place for a calm, relaxing break. A projection of a sunrise on the ceiling might be nice, especially if it could act as an alarm clock, set to the time you should get up. How about going to sleep with the stars in the sky, and a constellation game to put you to sleep? If you play it too long, the game should ask if you want to get up at a later time. Projecting pages on the ceiling would allow you to read without propping yourself up on your elbows or a pillow. A multimedia bed could provide such contextually appropriate content as well as gesture recognition and postural correction and awareness.

Context can augment the mobile, as well as the static, environment. A system consisting of an electronic oven mitt and trivet (the “talking trivet”) uses context to transform a thermometer into a cooking safety watchdog. The talking trivet uses task models of temperatures on an oven mitt to decide how to communicate to a cook.

The talking trivet is a digital enhancement of common objects (see Figure 8). Sensing and memory in an oven mitt make it a better tool than a simple thermometer reading. The system uses a computer to take time and temperature into account in determining whether food is in need of rewarming (under 90 degrees Fahrenheit), hot and ready to eat, ready to take out (a temperature hotter than boiling water will dry food, and browning starts soon after), or on fire (above 454 degrees Fahrenheit). The model is key to the value of the temperature reading. The goal concerns the importance of what the mitt is doing—it could prevent a kitchen fire. The uses of the oven mitt/trivet combination are simple; the goals of the user obviously depend on the reported temperature.

The talking trivet could well be in a better position to know when to take a pizza out of an oven than a person—it would measure the temperature when the 72-degree pizza is put in a 550-degree oven, and using its pizza model, tell the user when the pizza should be done. If the oven mitt touches the pan and finds it to be only 100 degrees after 10 minutes, it should go back to its contextual model and reflect that this item must be much more massive than a pizza. It might express alarm to the user: 550 degrees is too hot for a roast! This example underscores the value of a task model in a contextual object. The talking trivet need not be “told” anything explicit. It can act as a fire alarm, cooking coach, and egg timer, based only on what it experiences and its models of cooking and the kitchen.
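
A minimal sketch of the trivet’s temperature interpretation as described above (our own rendering; the thresholds come from the text, the rest is hypothetical):

    def interpret_reading(temp_f: float, minutes_in_oven: float) -> str:
        """Map a sensed temperature (and elapsed time) onto the trivet's task model."""
        if temp_f > 454:
            return "fire alarm: the food may be burning"
        if temp_f > 212:
            return "ready to take out: hotter than boiling water dries food"
        if minutes_in_oven > 10 and temp_f <= 100:
            # Barely warm after 10 minutes in a hot oven: the item must be much
            # more massive than a pizza -- question the cook's plan.
            return "this does not fit the pizza model; is 550 degrees right for a roast?"
        if temp_f < 90:
            return "needs rewarming"
        return "hot and ready to eat"

    print(interpret_reading(temp_f=95, minutes_in_oven=12))
    print(interpret_reading(temp_f=470, minutes_in_oven=20))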

The view of context from other fields

Many other fields have treated the problem of context. In what follows, we present a little of our perspective on how other fields have viewed the context problem.

Mathematical and formal approaches to AI. Several areas of mathematics, and formal approaches to artificial intelligence (AI), have tried to address context in reasoning. When formal axiomatizations of commonsense knowledge were first used as tools for reasoning in AI systems, it quickly became clear that they could not be used blindly. Simple inferences such as “If Tweety is a bird, then conclude that Tweety can fly” seemed plausible until the possibility that Tweety might be a penguin or an ostrich, a stuffed bird, an injured bird, a dead bird, etc., was considered. It would be impossible to enumerate all the contingencies that would make the statement definitive.

McCarthy [24] introduced the idea of circumscription as a way to contextualize axiomatic statements. Like many of the formal approaches that try to deal with the problem, this technique gives to each logical predicate an extra argument to represent the context. The notation tries to make this extra argument implicit to avoid complicating proofs that use the technique. Then we could say, using commonsense reasoning, “If X is a bird, then assume X can fly, unless something in the context explicitly prevents it.” This is quite a hedge!
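
In rough default-reasoning notation (our paraphrase of the idea, not McCarthy’s actual formulation), the bird rule with its context argument made explicit might be written as

    \forall x, c \;\; \bigl(\mathit{Bird}(x, c) \land \lnot \mathit{Abnormal}(x, c)\bigr) \rightarrow \mathit{Flies}(x, c)

Circumscription then minimizes the extension of Abnormal, so that flight is concluded by default unless the context c supplies a reason (penguin, injury, and so on) to treat x as abnormal.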

In artificial intelligence, researchers have identified the so-called “frame problem.” In planning and robotics systems that deal with sequences of actions, each action is typically represented as a function transforming the state of the world before the action to the state after the action. The frame problem is to determine which statements that were true before the action remain true after the action, or how the action affects and is affected by its context. Solving the frame problem requires making inferences about relevance and causal chains.

Figure 7: The electronic bed.

Figure 8: The talking trivet.

In traditional mathematical logic, statements proven true remain true forever, a property called monotonicity. However, the addition of context changes that, since if we learn more about the context (for example, we learn that Tweety recently died), we might change our minds. Nonmonotonic logic studies this phenomenon. A standard method for dealing with nonmonotonicity in AI systems is the so-called “truth maintenance system,” which records dependencies among inferences and can retract assertions if all the assumptions on which they rest become invalid.

Even traditional modal logics can be seen as a reaction to the context problem. Modal logics introduce quantifiers for “necessary” and “possible” truths, and are typically explained in terms of possible-world semantics. Something is necessary if and only if it is true in every possible world, and possible in case it is true in at least one world. Each possible world represents a context; thus modal logics enable reasoning about the dependence of statements on context.
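
In the standard possible-worlds notation (a textbook formulation rather than anything specific to this paper), with W the set of accessible worlds:

    \Box\varphi \;\Leftrightarrow\; \varphi \text{ is true in every world } w \in W
    \qquad
    \Diamond\varphi \;\Leftrightarrow\; \varphi \text{ is true in some world } w \in W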

A continuing issue in AI is also the role of background knowledge or “commonsense” knowledge as context. A controversial position, probably best exemplified by Doug Lenat’s CYC project [25], maintains that intelligence in systems stems primarily from knowing a large number of simple facts, such as “water flows downhill” or “if someone shouts at you, she is probably angry with you.” The intuition is that even simple queries depend on understanding a large amount of context: commonsense knowledge that remains unstated, but is shared among most people with a common language and culture. The CYC project has attempted for more than ten years to codify such knowledge, and has achieved the world’s largest knowledge base, containing more than a million facts. However, the usability of such a large knowledge base for interactive applications such as Web browsing, retrieval of news stories, or user interface assistants has yet to be proven. You can get knowledge in, but it is not so easy to get it out.

The CYC approach could be labeled the “size matters” position. It could also be considered an outgrowth of the expert systems movement of the 1980s, where systems of rules for expert problem-solving behavior were created by interviewing domain experts, and the rule base matched to new situations to try to determine what to do. Expert systems were brittle; since there was no explicit representation of the context in which the expertise was situated, small changes in context would cause previously entered rules to become inapplicable. Many researchers at Stanford, including John McCarthy and Mike Genesereth, worked on axiomatic representations of commonsense knowledge and theorem-proving techniques to make these representations usable.

AI is now turning toward approaches in which large amounts of context are analyzed, both through knowledge-based methods and statistically, to detect patterns or regularities that would enable better understanding of context. Data mining techniques can be viewed in this light. Data mining is a knowledge discovery technique that analyzes large amounts of data and proposes hypothetical abstractions that are then tested against the data. The Web has also encouraged the rise of information extraction [26] techniques, where Web pages are analyzed with parsers that stop short of complete natural language understanding. The parsers approximate inference using techniques such as TFIDF (term frequency times inverse document frequency) keyword analysis, latent semantic indexing, lexical affinity (inferring semantic relations from proximity of keywords), part-of-speech tagging, and partial parsers. The availability of semantic knowledge bases such as WordNet [27] also encourages partial understanding of context expressed in natural language text.
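
For reference, the TFIDF weight mentioned above is conventionally computed as (a standard textbook definition, not one given in this paper)

    \mathrm{tfidf}(t, d) \;=\; \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}

where tf(t, d) counts the occurrences of term t in document d, N is the number of documents in the collection, and df(t) is the number of documents containing t; a term scores highly when it is frequent in a document but rare across the collection.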

Finally, it is worth noting that there is a dissenting current in AI that decries the use of any sort of representation, and therefore denies the need for context-aware computing. This position is best represented in its most extreme form by Rodney Brooks,28 who maintained that intelligence could be achieved in a purely reactive mode, without any need for maintaining a declarative representation. Abstraction is built up only by a subsumption architecture, in which sets of reactive behaviors are successively subsumed by other reactive behaviors with greater scope.
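A caricature of the subsumption idea, in our own hypothetical code rather than Brooks' robots: layered reactive behaviors map sensor readings directly to actions, and a higher layer, when its trigger condition holds, suppresses the layers beneath it without any shared declarative model of the world.

```python
# Hypothetical sketch of a subsumption-style controller: purely reactive
# layers, no declarative world model. Higher layers subsume (override)
# lower ones when their trigger conditions hold.

def wander(sensors):
    return "move-forward"                       # layer 0: default behavior

def avoid_obstacles(sensors):
    if sensors.get("obstacle_distance", 99) < 0.5:
        return "turn-left"                      # layer 1: subsumes wandering
    return None

def seek_charger(sensors):
    if sensors.get("battery", 1.0) < 0.2:
        return "head-to-charger"                # layer 2: subsumes both below
    return None

LAYERS = [seek_charger, avoid_obstacles, wander]  # highest priority first

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle_distance": 0.3, "battery": 0.9}))  # turn-left
print(act({"obstacle_distance": 2.0, "battery": 0.1}))  # head-to-charger
```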

It should be clear from the previous discussion that our position on context places emphasis on shared understanding of context between humans and machines, growing as both the computer and the user observe and understand mutual interaction. We believe that even though some knowledge may be codified in advance, it is impossible to assume, as CYC and the Stanford axiomatic theoreticians do, that most or all of it can be codified in advance. We also reject the Brooks position that representation and context are unnecessary for intelligence. We would like to structure an interaction so that users teach the system just a little bit of context with each interaction, and the system feeds back a little bit of its understanding at each step. We believe that in this way, representations of both commonsense and personal context can be built up slowly over time.

Context in the human-computer interface field. Context plays a big role in information visualization and in visual design in general. Tufte29 and other authors have noted that the choice of visual appearance of an interface element should depend on its context, since human perception tends to pick up similarities of color, shape, or alignment of objects. Visual similarity in a design implies semantic similarity, whether it is intentional or not. A visual language of interface design needs to consider these relationships, and tools can be designed that automatically map semantic relationships into visual design choices.30,31
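As a toy illustration of that last point (ours, not taken from the cited tools30,31), one could imagine a rule that assigns interface elements a shared color whenever they belong to the same semantic group, so that visual similarity tracks semantic similarity rather than arising by accident. The element names and palette below are hypothetical.

```python
# Hypothetical sketch: derive visual design choices from semantic grouping,
# so elements that mean similar things also look similar.

PALETTE = ["#1f77b4", "#2ca02c", "#d62728", "#9467bd"]

def assign_colors(elements):
    """elements: dict mapping element name -> semantic group."""
    groups = sorted(set(elements.values()))
    group_color = {g: PALETTE[i % len(PALETTE)] for i, g in enumerate(groups)}
    return {name: group_color[group] for name, group in elements.items()}

ui = {"Save": "file", "Open": "file", "Cut": "edit", "Paste": "edit"}
print(assign_colors(ui))   # file actions share one color, edit actions another
```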

Introductory texts on user interface design stress the importance of interaction with end users. Designers are admonished to ask users what they want, testing preliminary mock-ups or low-fidelity prototypes early in the design process. User-centered and participatory design practices have been especially widespread in Scandinavia. This gives the interface designers the best understanding of the users' context, in order to minimize mismatches between the designers' and the users' expectations.

Context in sociology and behavioral studies. Approaches in sociology have stressed the importance of context in observing how people behave and in understanding their cognitive abilities. Lucy Suchman and colleagues32 have championed what they call the situated action approach,33 which stresses the effect of shared social context on human behavior. However, the situated critique focuses on getting system designers to adapt designs to context, and not on having the system itself dynamically adapt to context.

Another relevant field is the activity theory approach,34 growing out of a Russian psychology movement. Other related fields, such as industrial systems engineering, ecological psychology, ethology, and cognitive psychology, also study context. They investigate how the contexts of behavior are critical for determining both what constitutes successful behavior and what strategies an agent must employ to generate that behavior.

Probably the most striking work in understanding how context affects human interaction with computers is Clifford Nass and Byron Reeves' Media Equation.35 Ingeniously designed experiments dramatically demonstrate how individuals transfer human social context into interaction with machines, voluntarily or not.

Simplifying interfaces without "dumbing them down"

Using computers takes too much concentration. It takes significant time to learn and deal with computer interaction, rather than the task one is attempting. Individuals must switch contexts, from thinking about what they are interested in to thinking explicitly about what commands will have the effects they intend. The danger is that the presence of computers may distract from direct experience. It is similar to an eager relative so engrossed in taking pictures at a beach party that we wonder whether they are truly experiencing the beach.

Context-aware computing gives us a way out of this dilemma. Tools can get in the way of tasks, and context-aware computing gives us the potential for taking the tool out of the task. When computers or devices sense automatically, remember history, and adapt to changing situations, the amount of unnecessary explicit interaction can be reduced, and our systems will be more responsive as a result.

We want simpler interfaces. But if the only way we can get simpler interfaces is to reduce functionality, we "dumb down" our interfaces. Reduced functionality works well in simple situations, but can be inappropriate or even dangerous when the situation becomes more complex. Context-aware agents and context-sensitive devices can give us the sophisticated behavior we need from our artifacts without burdening the users with complex interfaces.

We have seen how software agents that record and generalize user interactions, and sensor-based devices that provide context-appropriate behavior, hold the potential for getting us off the treadmill of added features. It is now time to integrate what we have learned about context, both from the mathematical and sociological fields in which it has been traditionally studied, and from the new perspective that comes from AI and human-computer interfaces, to work toward the effortless success we always dream of.

*Trademark or registered trademark of International Business Machines Corporation.

**Trademark or registered trademark of Microsoft Corporation, Allaire Corporation, Apple Computer, Inc., or A. T. Cross Company.

Cited references

1. C. Wisneski, H. Ishii, A. Dahley, M. Gorbet, S. Brave, B. Ullmer, and P. Yarin, "Ambient Displays: Turning Architectural Space into an Interface Between People and Digital Information," Proceedings of the First International Workshop on Cooperative Buildings (CoBuild '98), Darmstadt, Germany (February 25–26, 1998), pp. 22–32.

2. Software Agents, J. Bradshaw, Editor, AAAI Press/MIT Press, Menlo Park, CA (1997).

3. P. Maes, "Agents That Reduce Work and Information Overload," Communications of the ACM 37, No. 7, 30–40 (July 1994).

4. T. Selker and W. Burleson, "Context-Aware Design and Interaction in Computer Systems," IBM Systems Journal 39, Nos. 3&4, 880–891 (2000, this issue).

5. Watch What I Do: Programming by Demonstration, A. Cypher, Editor, MIT Press, Cambridge, MA (1993).

6. H. Lieberman, "Mondrian: A Teachable Graphical Editor," Watch What I Do: Programming by Demonstration, A. Cypher, Editor, MIT Press, Cambridge, MA (1993).

7. E. Stoehr and H. Lieberman, "Hearing Aid: Adding Verbal Hints to a Learning Interface," Proceedings of the Third ACM Conference on Multimedia, San Francisco, CA (November 5–9, 1995).

8. H. Lieberman, "Autonomous Interface Agents," ACM Conference on Human Factors in Computer Systems, Atlanta, GA (March 22–27, 1997), pp. 67–74.

9. H. Lieberman, B. Nardi, and D. Wright, "Training Agents to Recognize Text by Example," ACM Conference on Autonomous Agents, Seattle, WA (May 1–5, 1999). Also to appear in the Journal of Autonomous Agents and Multi-Agent Systems (2000).

10. B. Nardi, J. Miller, and D. Wright, "Collaborative, Programmable Intelligent Agents," Communications of the ACM 41, No. 3, 96–104 (March 1998).

11. C. Fry, "Programming on an Already Full Brain," Communications of the ACM 40, No. 4, 55–64 (April 1997).

12. D. A. Norman, "Some Observations on Mental Models," Mental Models, D. Gentner and A. L. Stevens, Editors, Lawrence Erlbaum Associates, Hillsdale, NJ (1983), pp. 15–34.

13. S. Vemuri, personal communication (1999). Also see http://context99.www.media.mit.edu/courses/context99/.

14. E. Rich, "Users Are Individuals: Individualizing User Models," International Journal of Man-Machine Studies 18, 199–214 (1983).

15. C. Lewis and D. A. Norman, "Designing for Error," Readings in Human-Computer Interaction: A Multi-Disciplinary Approach, R. M. Baecker and W. A. S. Buxton, Editors, Morgan Kaufmann Publishers, Inc., Los Altos, CA (1987), pp. 621–626.

16. J. Brown, R. Burton, and A. G. Bell, "Sophie: A Step Toward a Reactive Environment," International Journal of Man-Machine Studies 7 (1975).

17. D. H. Sleeman and J. S. Brown, "Introduction: Intelligent Tutoring Systems: An Overview," Intelligent Tutoring Systems, D. H. Sleeman and J. S. Brown, Editors, Academic Press, Burlington, MA (1982), pp. 1–11.

18. T. Selker, "COACH: A Teaching Agent That Learns," Communications of the ACM 37, No. 7, 92–99 (July 1994).

19. N. S. Borenstein, "Help Texts vs Help Mechanisms: A New Mandate for Documentation Writers," Proceedings of the Fourth International Conference on Systems Documentation, Ithaca, NY (June 18–21, 1985), pp. 78–83.

20. M. Weiser, "The Computer for the 21st Century," Scientific American 265, No. 3, 94–104 (September 1991).

21. D. Norman, The Invisible Computer, MIT Press, Cambridge, MA (1998).

22. J. Landay and R. C. Davis, "Making Sharing Pervasive: Ubiquitous Computing for Shared Note Taking," IBM Systems Journal 38, No. 4, 531–550 (1999).

23. R. Picard, Affective Computing, MIT Press, Cambridge, MA (1997).

24. J. McCarthy, "Circumscription—A Form of Non-Monotonic Reasoning," Artificial Intelligence Journal 13, 27–39 (1980).

25. D. B. Lenat and R. V. Guha, Building Large Knowledge-Based Systems, Addison-Wesley Publishing Co., Reading, MA (1990), pp. 38–48.

26. W. G. Lehnert, "Cognition, Computers and Car Bombs: How Yale Prepared Me for the 90's," Beliefs, Reasoning, and Decision Making: Psycho-Logic in Honor of Bob Abelson, R. C. Schank and E. Langer, Editors, Lawrence Erlbaum Associates, Hillsdale, NJ (1994), pp. 143–173.

27. C. Fellbaum, WordNet: An Electronic Lexical Database, MIT Press, Cambridge, MA (1998).

28. R. A. Brooks, "Intelligence Without Representation," Artificial Intelligence Journal 47, 139–159 (1991).

29. E. Tufte, Visual Explanation, Graphics Press, Hopkinton, MA (1996).

30. M. Cooper, "Computers and Design," Design Quarterly 142, 22–31 (1989).

31. H. Lieberman, "Intelligent Graphics," Communications of the ACM 39, No. 8, 38–48 (August 1996).

32. L. Suchman, Plans and Situated Actions, Cambridge University Press, Cambridge, UK (1987).

33. J. Barwise and J. Perry, Situations and Attitudes, MIT Press, Cambridge, MA (1983).

34. Context and Consciousness: Activity Theory and Human-Computer Interaction, B. Nardi, Editor, MIT Press, Cambridge, MA (1995).

35. B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge University Press, Cambridge, MA (1996).

General references

The Art of Human-Computer Interface Design, B. Laurel, Editor, Addison-Wesley Publishing Co., NY (1989).
P. Langley, Elements of Machine Learning, Morgan Kaufmann, San Francisco, CA (1996).
H. Lieberman and D. Maulsby, "Software That Just Keeps Getting Better," IBM Systems Journal 35, Nos. 3&4, 539–556 (1996).
H. Yan and T. Selker, "A Context-Aware Office Assistant," ACM International Conference on Intelligent User Interfaces (IUI-2000), New Orleans, LA (January 9–12, 2000).

Accepted for publication May 9, 2000.

Henry Lieberman MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Dr. Lieberman has been a research scientist at the MIT Media Laboratory since 1987. His interests are in the intersection of computer graphics, human interface, and artificial intelligence. His current projects involve media interfaces that learn from examples presented by the user. He is a member of the Software Agents Group, which works on interface agents, intelligent assistants for interactive media applications. He has also worked with the Visible Language Workshop, which is concerned with visual design issues. From 1972 to 1987 he was a researcher at the MIT Artificial Intelligence Laboratory, where he worked on parallel object-oriented programming, knowledge representation, programming environments, machine learning, and computer systems for education. He holds a doctoral-equivalent degree (Habilitation) from the University of Paris and was a visiting professor there.

Ted Selker MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Dr. Selker is an MIT professor focusing on context-aware computing. Before joining the MIT faculty he worked at IBM for over a decade, created the User System Ergonomic Laboratory, and was named an IBM Fellow. During that time he also served on the faculty of Stanford University. He is recognized for the design of the "TrackPoint III" in-keyboard pointing device, for creating the "COACH" adaptive agent that improves user performance (Warp Guides in OS/2), and for the design of the ThinkPad 755CV notebook computer that doubles as a liquid crystal display projector. Dr. Selker obtained his B.S. degree from Brown University, his M.S. degree from the University of Massachusetts, and his Ph.D. degrees from the City University of New York in computer science and information sciences and applied mathematics. Prior to joining IBM Research in 1985, he worked at the Xerox Palo Alto Research Center, Atari Research Labs, and Stanford University, and was also a Stanford consulting professor. He is Chief Scientist for Vert Corporation, is on the Board of Directors for GetGoMail.com, and is on the Board of Advisors of FindTheDot.com and Xift.
