
Design Challenges and Principles for Wizard of Oz Testing of Location-Enhanced Applications

Yang Li, University of Washington
Jason I. Hong, Carnegie Mellon University
James A. Landay, University of Washington and Intel Research Seattle

Using Wizard of Oz techniques, designers can efficiently test location-enhanced application prototypes during the early stages of design, without building a full-fledged application or deploying a sensing infrastructure.

Location-enhanced applications adapt their behavior to or provide information about the location of people, places, and things. Boost Mobile’s Loopt service (www.boostmobile.com/boostloopt), for example, lets mobile phone users identify their friends’ current locations, while Navitime’s EZNaviWalk (http://brew.qualcomm.com/brew_bnry/pdf/press_kit/navitime.pdf) helps users find optimal routes to their destinations based on the mode of transportation they select. Such location-enhanced applications are the most widely adopted type of ubicomp application; analysts predict that their use will grow tremendously in the near future.1

The problem, however, is that designing location-enhanced applications is often difficult.2 Researchers have built toolkits for developing these applications,3–5 but using the toolkits requires significant technical expertise. This makes it hard for interaction designers who lack a programming or hardware background to prototype, evaluate, and iterate on designs. In addition, location infrastructures aren’t always available—GPS, for example, doesn’t work indoors—and location-sensing technologies are still nonstandard and complex. In short, it takes significant effort to deploy a location-enhanced application so that you can realistically test it with users. And, by that time, it’s often too late and too expensive to make major changes.

Here, we discuss Wizard of Oz techniques6 for testing location-enhanced applications. WOz techniques are often employed in user interface (UI) prototyping and have been successfully used in several tools for early stage design.7 The WOz approach lets users try out a system before it’s fully developed. It accomplishes this using a “wizard” to simulate system parts that require sophisticated technologies, such as speech recognition7 or location sensing.8,9 This makes it easy to quickly explore many ideas because designers are less constrained by technical details.10 Previous research has performed field experiments with location-based systems.8–11 Here, we focus on tool support for WOz testing of location-enhanced applications, which pose two new challenges:

• They must incorporate location contexts, which have a much larger design space than traditional GUI input.

• The target setting is often a dynamically changing field environment, rather than a desk.

To address these issues, we built Topiary, which includes a suite of WOz techniques for testing location-enhanced applications. We previously described how designers can use Topiary to prototype and test location-enhanced applications.9 Here, we use it as a proof of concept of WOz techniques and highlight three important research issues related to designing WOz testing techniques for ubicomp applications:


• designing visual languages for WOz testing,

• allocating tasks between wizards and designers, and

• automating wizard tasks.

Topiary overview

Topiary provides a set of high-level prototyping abstractions—maps, scenarios, and storyboards—that designers can use in the tool’s active map, storyboard, and test workspaces.

Designers use the active-map workspace to create a model of the location of people, places, and things. They can then place graphical objects representing these entities on a map to signify their locations and spatial relationships to one another—for example, “Alice is in the café” and “Bob is far from Tom.”

In the storyboard workspace, designers can capture location contexts as scenarios. To create interface mockups, designers sketch pages that represent screens and create links that represent transitions between pages. They can then use captured scenarios as link conditions or triggers. For example, the designer might specify that an application prototype automatically switches from one page to another when “anyone moves near Bob.” Likewise, when users click on a button, the prototype could show different information depending on the user’s location.
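As a concrete illustration of how a scenario trigger might gate a page transition, here is a minimal sketch in Python. The article doesn’t show Topiary’s implementation; the Entity and Storyboard classes, the near() predicate, and the 10-meter threshold are all invented for illustration:

    import math
    from dataclasses import dataclass

    @dataclass
    class Entity:
        """A person, place, or thing on the active map (hypothetical model)."""
        name: str
        x: float  # map coordinates, e.g., meters
        y: float

    def near(a: Entity, b: Entity, threshold_m: float = 10.0) -> bool:
        """Spatial relation usable as a link condition, e.g., 'anyone moves near Bob'."""
        return math.hypot(a.x - b.x, a.y - b.y) <= threshold_m

    class Storyboard:
        """Pages linked by transitions that fire when a scenario condition holds."""
        def __init__(self, start_page: str):
            self.current_page = start_page
            self.transitions = []  # list of (from_page, condition, to_page)

        def add_transition(self, from_page, condition, to_page):
            self.transitions.append((from_page, condition, to_page))

        def on_location_update(self, entities):
            """Called whenever the wizard moves an entity; may switch pages."""
            for from_page, condition, to_page in self.transitions:
                if self.current_page == from_page and condition(entities):
                    self.current_page = to_page
                    break

    # Example: switch to the 'greeting' page when anyone moves near Bob.
    bob = Entity("Bob", 0.0, 0.0)
    alice = Entity("Alice", 50.0, 0.0)
    sb = Storyboard("idle")
    sb.add_transition("idle",
                      lambda es: any(near(e, bob) for e in es if e is not bob),
                      "greeting")
    alice.x = 5.0  # the wizard drags Alice near Bob
    sb.on_location_update([alice, bob])
    assert sb.current_page == "greeting"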

Designers can use Topiary to test a design with real users by running the interface mockup on a mobile device, such as a PDA (see figure 1a). During a test, users interact with the interface mockup, while a wizard follows them and updates the location of people and things on a separate device (see figure 1b). The test workspace has two components:

• the end-user UI that users see and interact with (see figure 1a), and

• the wizard UI, where designers simulate location contexts (see figure 2).

A wizard simulates location contexts by moving people and things around to dynamically update their locations. If moving a person or a thing activates a storyboard transition, the end-user UI automatically transitions to a target page. To simulate the entity’s orientation change, the wizard rotates the arrow attached to the entity. A wizard navigates in the wizard map by either panning the map directly or circling a target region in the radar view.

Designing visual languages for WOz testing

Understanding how to design more effective WOz testing interfaces is one of our major research goals. In our view, to help wizards better run user tests and better simulate ubicomp applications, we need more appropriate visual languages. Traditionally, researchers address visual language issues only for creating a design. For ubicomp applications, however, designing visual languages for testing is just as important.


Figure 1. A typical location-based Wizard of Oz test setting. (a) An end user interacts with the end-user UI on a device, such as a PDA. (b) A wizard follows the user and updates his or her location in the wizard UI on a tablet PC.

Figure 2. Topiary’s wizard UI. The wizard map represents entities’ current location and orientation; to simulate updates, a wizard drags them on the map. The end-user screen lets a wizard monitor the user’s action on the end-user UI. The storyboard analysis window lets the wizard debug the interaction flow by highlighting the current page and the most recent transition. The radar view provides an overview of the wizard map. In this example, a wizard drags Bob’s arrow to reflect his orientation; the map in the end-user UI (and on the end-user screen) automatically rotates so that its orientation is always consistent with the direction Bob is facing.


In the iterative design process, wizards and designers play two different roles, which either the same person or different people can perform. Ideally, WOz testing requires a wizard to update the prototype’s state on the basis of the world state. A visual language for testing should help a wizard observe dynamic changes in a test environment and easily specify these changes to update the prototype’s state. For example, in location-enhanced applications, dynamic changes refer to the movement of users and physical objects. In Topiary, the prototype’s state includes

• the locations of entities stored in the prototype’s repository (see the wizard map in figure 2), and

• the current storyboard page (see the storyboard analysis window in figure 2).

Most of the time, a wizard updates only the location of entities, and Topiary automatically updates and maintains other parts of the prototype’s state (for example, the current page) on the basis of the storyboard’s interaction logic. To meet the goal of having wizards update prototypes on the basis of the real-world state, we propose two basic design principles. A visual language should

• make it easy for a wizard to efficiently visualize the prototype’s current state and perceive the difference between that state and the real-world state, and

• provide an interaction vocabulary that lets the wizard easily update the prototype state on the basis of his or her direct observations.

Topiary’s wizard map, for example, provides a simple overview of entity locations maintained by a prototype, which lets the wizard easily see where updates are needed. Wizards need only capture two things about entities: their position and orientation. Both are based on the wizard’s direct observation, and he or she can change them through direct manipulation.

When we evaluated Topiary with seven researchers and interface designers, they rated the tool’s wizard interface highest among all features.9 Participants particularly appreciated how straightforward the wizard interface is. We also conducted six field experiments with four people to test various designs of a tour-guide application. While acting as wizards, we found Topiary’s interface was effective at capturing users’ movement while walking. Our participants confirmed Topiary’s WOz testing effectiveness: they didn’t realize their locations were being updated by a wizard, rather than by real sensors.

Allocating tasks

In designing Topiary’s WOz interfaces, we found that there was often a trade-off in allocating tasks between designers and wizards. On one hand, designers can quickly create rough designs with few details, placing the burden on the wizard to simulate more sophisticated behaviors. On the other hand, designers can shoulder more of this burden by initially specifying more detail and functionality up front so that it can be automatically executed at test time, making the wizard’s job much easier.

For example, to show users the shortest path in Topiary, a designer can—at design time—draw a road network on a map (see the bold brown lines on the wizard map in figure 2) and specify a starting point and destination (such as Bob’s current location and the parking lot, respectively). Then, at test time, Topiary automatically constructs a shortest path on the basis of Bob’s current location (see the bold pink line on figure 2’s end-user screen). Alternatively, a designer can leave this feature unspecified and have the wizard draw lines representing the shortest paths on the end user’s screen during the tests. Clearly, the former requires more work from the designer, while the latter requires more from the wizard. The latter also lets a designer quickly try out rough ideas. As a design evolves, designers need to solidify their design ideas into a design that is validated by user tests (see figure 3).
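The article doesn’t say which algorithm Topiary uses to construct these paths; a standard choice is Dijkstra’s algorithm over the drawn road network treated as a weighted graph. The following minimal sketch, with a made-up road network and node names, illustrates the idea:

    import heapq

    def shortest_path(graph, start, goal):
        """Dijkstra's algorithm. graph: {node: [(neighbor, distance_m), ...]}.
        Returns the list of nodes from start to goal, or None if unreachable."""
        dist = {start: 0.0}
        prev = {}
        pq = [(0.0, start)]
        while pq:
            d, node = heapq.heappop(pq)
            if node == goal:
                break
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for neighbor, weight in graph.get(node, []):
                nd = d + weight
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    prev[neighbor] = node
                    heapq.heappush(pq, (nd, neighbor))
        if goal not in dist:
            return None
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

    # Hypothetical road network, distances in meters: Bob to the parking lot.
    roads = {
        "bob": [("cafe", 40.0), ("library", 90.0)],
        "cafe": [("parking_lot", 70.0)],
        "library": [("parking_lot", 30.0)],
    }
    print(shortest_path(roads, "bob", "parking_lot"))  # ['bob', 'cafe', 'parking_lot']

At test time, a tool would first snap the entity’s current map position to the nearest node of the drawn road network before running the search.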

Given this, a WOz tool should let designers choose appropriate task allocations for wizards and designers as a design evolves, to ensure a smooth iterative design process. For example, a tool can offer designers multiple options for designing the same feature. Such options might vary in terms of how much work they require from designers and wizards, and how large a design/testing scale they can handle. For example, while having a wizard manually draw paths requires little work from a designer, it’s very inefficient for a wizard to do in a complex testing environment. So, tools should let designers choose an appropriate approach based on whether they prefer quick iteration, larger-scale design, or something in between. This creates another design challenge for tool builders: how to present this redundant support to tool users so that they’ll know which approach to choose for a particular design at a particular design stage.

Figure 3. Workload distribution as designs progress. As a design matures, a wizard does less work, while a designer does more. (The plot shows task allocation in percent against design stage, from early to late, with the wizard’s share falling as the designer’s share rises.)

Automating wizard tasks

It can be challenging for wizards to keep track of dynamically changing environments while testing ubicomp applications. We’ve therefore been designing and experimenting with various techniques to automate wizard tasks. Here, we discuss two such techniques. Although both are specific to location-enhanced applications, the techniques address two common problems of WOz testing of ubicomp applications: simulating sensing inaccuracy to better approximate realistic testing, and relieving wizards from routine tasks.

Simulating sensing inaccuracy

Sensing inaccuracy is an important issue in designing ubicomp applications. For location-enhanced applications, location acquisition depends heavily on sensed data (such as wireless signals, GPS, or accelerometer readings) and inferences (Place Lab uses temporal probabilistic reasoning, for example5). Sensed data is often noisy and sometimes even abnormal owing to environmental variations or sensor hardware failure. Inferencing technologies are also imperfect. Consequently, location context is inherently ambiguous—a reported location isn’t necessarily the user’s real location, for example.

Simulating sensing inaccuracy can help designers uncover related usability issues in the early stages of design. In our previous Topiary version, a wizard could manually simulate sensing errors by dragging entities to a random map position. However, to lower wizards’ cognitive load and systematically examine how sensing inaccuracy influences usability, we wanted to offer explicit support for modeling sensing inaccuracy.

Suede, a tool for designing speech-based UIs, lets designers specify speech-recognition accuracy and automatically generate random errors during a test.7 Inspired by Suede, our new version of Topiary lets designers model location-sensing inaccuracy by specifying a target location-sensing infrastructure’s stability and accuracy (see figure 4a). On the basis of such specifications, Topiary automatically generates location-sensing errors at test time using a simple probabilistic model. Generally speaking, it’s hard to attribute a location-sensing error purely to sensor hardware or inference algorithms. So, we don’t make distinctions in Topiary as to the ambiguity’s source.

As figure 4a shows, a designer can specify sensing inaccuracy using the Specify Sensing Errors dialog box. By checking the “Apply sensing errors” option, a designer can add noise to the wizard’s simulation during a test. For example, when a wizard drags Bob to a map position, Topiary would automatically move Bob to another random position—such as five meters away—based on the generated noise.

Before testing, designers must specify how often a sensing infrastructure might work under normal conditions. This implies the sensing infrastructure’s stability. In an abnormal condition, a sensing infrastructure gives completely unreasonable location reports. A designer can specify a percentage by either typing it in the text field or dragging the slider. During a test, the system uses this percentage to sample whether the simulated sensing infrastructure is in a normal condition. When it’s not, the system randomly generates a map location based on a uniform probabilistic distribution, without considering the entity’s location as set by the wizard. Otherwise, Topiary generates a random location around the wizard’s location by applying Gaussian noise. Topiary automatically centers a translucent circle around a location generated by the sensing-error model on the end user’s screen; this circle represents the entity’s region of possible location (see figure 4b). The smaller the circle, the more accurate the location sensing.
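To make this two-level error model concrete, here is a minimal sketch of how each wizard update could be perturbed, following the behavior described above. The function name, map bounds, and default values are assumptions; the accuracy value (figure 4 uses 24 meters) serves as the Gaussian standard deviation:

    import random

    def simulate_sensed_location(wizard_x, wizard_y,
                                 stability=0.9,     # probability of a normal condition
                                 accuracy_m=24.0,   # std. dev. of Gaussian noise
                                 map_w=500.0, map_h=500.0):
        """Perturb the wizard-set location per the two-level model:
        in an abnormal condition, report a uniformly random map position;
        otherwise, add Gaussian noise around the wizard's observation."""
        if random.random() > stability:
            # Abnormal condition: completely unreasonable report, uniform over the map.
            return random.uniform(0.0, map_w), random.uniform(0.0, map_h)
        # Normal condition: Gaussian noise centered on the wizard's position.
        return (random.gauss(wizard_x, accuracy_m),
                random.gauss(wizard_y, accuracy_m))

    # The translucent circle's radius could track the distance between the
    # wizard-set location and the reported one, as figure 4b describes.
    wx, wy = 120.0, 80.0
    sx, sy = simulate_sensed_location(wx, wy)
    radius = ((sx - wx) ** 2 + (sy - wy) ** 2) ** 0.5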

Currently, we use a simple model for simulating sensing errors. Given the diversity of available sensing technologies as well as the physical world’s complexity, our approach can’t model all errors. However, we felt it was more important to make the model easy for nontechnologists to use and understand. Our approach also lets wizards simulate other location errors that might occur in a target situation by manually dragging an entity to a random position.

Figure 4. Specifying sensing errors. (a) The Specify Sensing Errors dialog box. Topiary uses the sensing accuracy (24 meters) as the standard deviation for generating Gaussian noise. (b) The end-user screen. The translucent circle represents the region of Bob’s possible locations. The circle’s size dynamically changes according to the distance between the wizard’s selected location and the location generated by the sensing-error model.

Wizards’ cruise control

In our field study, we often observed that participants moved straight ahead at a constant speed, such as when they knew their destination and were walking down a street. To relieve wizards from routine updating tasks in such situations, we designed wizards’ cruise control.

Topiary automatically infers an entity’s velocity on the basis of its trajectory as produced by the wizard. Given this information, Topiary can automatically help a wizard update an entity’s location. We designed this feature on the basis of the metaphor of automobile cruise control, which lets drivers maintain a constant speed without stepping on the accelerator or brake. In Topiary, a red arrow indicates an entity’s velocity during a test (see figure 5). The arrow’s length indicates the entity’s speed. A wizard might determine that an end user is moving straight ahead at a constant speed on the basis of observation of an entity’s motion patterns and the testing environment’s geographical attributes. In such cases, the wizard can turn on the entity’s cruise control option, and Topiary will automatically move it at the current speed. If the wizard thinks the automatic update is inconsistent with the end user’s current location, he or she can manually drag the entity to the correct map position. This mediation automatically turns off the automatic update, similar to how automobile drivers can deactivate cruise control by stepping on the accelerator or brake.

Figure 5. Topiary’s cruise control. The tool can automatically update an entity’s location on the basis of its current velocity. Wizards can deactivate the function by manually dragging the target entity. (The “gray star men,” shown here to illustrate Bob’s path, aren’t visible in the actual interface.)
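Here is a minimal sketch of the cruise-control idea: estimate velocity from the wizard’s most recent drags, then dead-reckon until the wizard intervenes. The class and method names are invented for illustration, not Topiary’s actual API:

    class CruiseControl:
        """Dead-reckons an entity's position from wizard-observed velocity."""
        def __init__(self):
            self.x = self.y = 0.0
            self.vx = self.vy = 0.0   # meters per second
            self.engaged = False
            self.last_t = None

        def wizard_drag(self, x, y, t):
            """Manual update: re-estimate velocity and disengage cruise control."""
            if self.last_t is not None and t > self.last_t:
                dt = t - self.last_t
                self.vx = (x - self.x) / dt
                self.vy = (y - self.y) / dt
            self.x, self.y, self.last_t = x, y, t
            self.engaged = False  # mediation turns the automatic update off

        def tick(self, t):
            """Automatic update: advance along the inferred velocity while engaged."""
            if self.engaged and self.last_t is not None:
                dt = t - self.last_t
                self.x += self.vx * dt
                self.y += self.vy * dt
                self.last_t = t
            return self.x, self.y

    cc = CruiseControl()
    cc.wizard_drag(0.0, 0.0, t=0.0)
    cc.wizard_drag(1.4, 0.0, t=1.0)   # user walking roughly 1.4 m/s east
    cc.engaged = True                  # wizard turns on cruise control
    print(cc.tick(t=3.0))              # approximately (4.2, 0.0) two seconds later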

We’re developing a new suite of tools for WOz testing. This work involves several ideas that researchers can extend to WOz testing of general ubicomp applications.

First, we’d like Topiary to support impromptu design during test sessions. Designers can use a WOz approach to rapidly explore various designs. In the early stages of design, ideas are often vague—making it hard to cover all system aspects—while designers often have design inspirations while running a test. Given this, a WOz testing interface should let designers try out different output options and even create new feedback in response to unexpected discoveries made during a test. Building such a feature entails two challenges. First, how much flexibility should a wizard have? A WOz test shouldn’t overload the wizard, and achieving flexibility should require little effort. Second, how can we capture and solidify impromptu designs so that they’re not ephemeral?

Second, combining WOz testing with sensor-based testing will let users test their designs in more realistic situations. It will also enable longitudinal, large-scale design testing, which is useful for getting feedback and experience from users’ daily lives rather than from controlled experiments. As a design matures, the need for wizards to mediate a test decreases, and sensor-based testing can play a larger role. Between the two extremes—complete WOz testing and complete sensor-based testing—the two approaches can work together. A tool should thus let designers transition smoothly from WOz testing to sensor-based testing as a design evolves.

Third, implementing multiwizard testing in Topiary would let users scale up testing when a target setting involves multiple entities and complex activities. For example, each wizard could use a separate device to observe and track a different entity’s movement. However, the cost of multiwizard testing in a realistic situation is often high in the early stages of design. Consequently, it would be useful—though challenging—to estimate a WOz study’s complexity and automatically generate a plan for distributing tasks or roles across multiple wizards.

Finally, when designers evaluate a design, it’s useful to analyze test data. So, a tool should be able to log both user behaviors and environmental changes. Topiary can capture user actions—such as mouse movements and clicks—as well as the physical paths traveled (whether simulated by wizards or generated by sensors). Because testing often generates large amounts of data, it’s important to give designers an efficient UI to access this data. Currently, the only way designers can analyze tests in Topiary is to replay test logs. In the future, we’ll support test data visualization by producing statistics of user–prototype interactions, such as how long a user took to find a correct location. We’ll also enable more sophisticated visualizations for analyzing multidimensional test data. These additions will let designers efficiently analyze test data and collect feedback from a complex test setting.
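As a sketch of the kind of statistic we have in mind, the following computes how long a user took to reach a target location from a replayable event log. The log format and field names are invented for illustration; the article doesn’t specify Topiary’s log schema:

    from datetime import datetime

    # Hypothetical test log: (timestamp, event_type, payload) tuples.
    log = [
        ("2007-04-01 10:00:00", "task_start", "find the parking lot"),
        ("2007-04-01 10:00:05", "click", "map_button"),
        ("2007-04-01 10:02:30", "location_reached", "parking_lot"),
    ]

    def time_to_target(log, target):
        """Seconds from task start until the user reached the target location."""
        fmt = "%Y-%m-%d %H:%M:%S"
        start = end = None
        for ts, event, payload in log:
            if event == "task_start":
                start = datetime.strptime(ts, fmt)
            elif event == "location_reached" and payload == target:
                end = datetime.strptime(ts, fmt)
        if start is None or end is None:
            return None
        return (end - start).total_seconds()

    print(time_to_target(log, "parking_lot"))  # 150.0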

REFERENCES

1. E.W. Pfeiffer, “WhereWare,” MIT Technology Rev., Sept. 2003, pp. 46–52.

2. S. Long et al., “Rapid Prototyping of Mobile Context-Aware Applications: The Cyberguide Case Study,” Proc. 2nd Ann. Int’l Conf. Mobile Computing and Networking (MobiCom 96), ACM Press, 1996, pp. 97–107.

3. A.K. Dey, D. Salber, and G.D. Abowd, “A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications,” Human-Computer Interaction, vol. 16, nos. 2–3, 2001, pp. 97–166.

4. J.I. Hong and J.A. Landay, “An Architecture for Privacy-Sensitive Ubiquitous Computing,” Proc. 2nd Int’l Conf. Mobile Systems, Applications, and Services (MobiSys 04), ACM Press, 2004, pp. 177–189.

5. A. LaMarca et al., “Place Lab: Device Positioning Using Radio Beacons in the Wild,” Proc. 3rd Int’l Conf. Pervasive Computing (Pervasive 05), LNCS 3468, Springer, 2005, pp. 116–133.

6. N. Dahlbäck, A. Jönsson, and L. Ahrenberg, “Wizard of Oz Studies—Why and How,” Proc. 1st Int’l Conf. Intelligent User Interfaces, ACM Press, 1993, pp. 193–200.

7. S.R. Klemmer et al., “Suede: A Wizard of Oz Prototyping Tool for Speech User Interfaces,” Proc. ACM Symp. User Interface Software and Technology (UIST 2000), Computer-Human Interaction Letters, vol. 2, no. 2, 2000, pp. 1–10.

8. A. Krüger, I. Aslan, and H. Zimmer, “The Effects of Mobile Pedestrian Navigation Systems on the Concurrent Acquisition of Route and Survey Knowledge,” Proc. 6th Int’l Symp. Mobile Human-Computer Interaction (MobileHCI 04), LNCS 3160, Springer, 2004, pp. 446–450.

9. Y. Li, J.I. Hong, and J.A. Landay, “Topiary: A Tool for Prototyping Location-Enhanced Applications,” Proc. ACM Symp. User Interface Software and Technology (UIST 2004), Computer-Human Interaction Letters, vol. 6, no. 2, 2004, pp. 217–226.

10. S. Dow et al., “Wizard of Oz Support throughout an Iterative Design Process,” IEEE Pervasive Computing, vol. 4, no. 4, 2005, pp. 18–26.

11. J. Goodman, S. Brewster, and P. Gray, “Using Field Experiments to Evaluate Mobile Guides,” Proc. Human-Computer Interaction in Mobile Guides, Springer, 2004, pp. 38–48.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/publications/dlib.


THE AUTHORS

Yang Li is a research associate in the University of Washington’s Computer Science and Engineering Department. His research interests include activity-based ubiquitous computing, rapid-prototyping tools, and pen-based user interfaces. He was a postdoctoral researcher in electrical engineering and computer science at the University of California, Berkeley. He received his PhD in computer science from the Chinese Academy of Sciences and is a member of the ACM and the ACM SIGCHI. Contact him at the Univ. of Washington, Dept. of Computer Science & Eng., 506 Paul G. Allen Center, Box 352350, Seattle, WA 98195-2350; [email protected]; www.cs.washington.edu/homes/yangli.

Jason I. Hong is an assistant professor in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science. His research interests include ubiquitous computing, focusing on location-based services and usable privacy and security. He’s an associate editor of IEEE Pervasive Computing, and he coauthored The Design of Sites (Addison Wesley, 2006), a book on the pattern-based approach to designing customer-centered Web sites. He received his PhD in computer science from the University of California, Berkeley and is a member of the ACM and ACM SIGCHI. Contact him at the Human-Computer Interaction Inst., School of Computer Science, Carnegie Mellon Univ., 5000 Forbes Ave., Pittsburgh, PA 15213-3891; [email protected]; www.cs.cmu.edu/~jasonh.

James A. Landay is a professor in the Computer Science & Engineering Department at the University of Washington, specializing in human-computer interaction. He was previously director of Intel Research Seattle, a lab exploring ubiquitous computing. His current interests include automated usability, demonstrational interfaces, ubicomp, design tools, and Web design. Landay received his PhD in computer science from Carnegie Mellon; his dissertation was the first to demonstrate sketching in UI design tools. Contact him at the Univ. of Washington, Dept. of Computer Science & Eng., 642 Paul G. Allen Center, Box 352350, Seattle, WA 98195-2350; [email protected]; www.cs.washington.edu/homes/landay.