
Doctors’ Online Information Needs, Cognitive Search Strategies, and Judgments of Information Quality and Cognitive Authority: How Predictive Judgments Introduce Bias Into Cognitive Search Models

Benjamin Hughes and Jonathan Wareham
Department of Information Systems, ESADE, 60-62 Av. Pedralbes, Barcelona, Spain 08036. E-mail: [email protected]; [email protected]

Indra Joshi
Southampton University Hospitals NHS Trust (SUHT), Tremona Road, Southampton, Hampshire, United Kingdom, SO16 6YD. E-mail: [email protected]

Literature examining information judgments and Internet search behaviors notes a number of major research gaps, including how users actually make these judgments outside of experiments or researcher-defined tasks, and how search behavior is impacted by a user’s judgment of online information. Using the medical setting, where doctors face real consequences in applying the information found, we examine how information judgments employed by doctors to mitigate risk impact their cognitive search. Diaries encompassing 444 real clinical information search incidents, combined with semistructured interviews across 35 doctors, were analyzed via thematic analysis. Results show that doctors, though aware of the need for information quality and cognitive authority, rarely make evaluative judgments. This is explained by navigational bias in information searches and via predictive judgments that favor known sites where doctors perceive levels of information quality and cognitive authority. Doctors’ mental models of the Internet sites and Web experience relevant to the task type enable these predictive judgments. These results suggest a model connecting online cognitive search and information judgment literatures. Moreover, this implies a need to understand cognitive search through longitudinal- or learning-based views for repeated search tasks, and adaptations to medical practitioner training and tools for online search.

Received June 13, 2009; revised August 12, 2009; accepted September 3, 2009

Additional Supporting Information may be found in the online version of this article.

© 2009 ASIS&T. Published online 24 November 2009 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/asi.21245

Introduction

Information search is a process by which a person seeks knowledge about a problem or situation, constituting a major activity by the Internet’s millions of users (Browne, Pitts, & Wetherbe, 2007). The Web is now a primary source of information for many people, driving a critical need to understand how users search or employ search engines (Jansen & Spink, 2006). Extensive literature examines not only behavioral models detailing the different moves or tactics during Internet search but also decision making or strategies described as cognitive search models (Navarro-Prieto, Scaife, & Rogers, 1999; Thatcher, 2006, 2008). The latter examines the cognitive aspects of the moves users employ to optimize their search performance, exploring elements such as expert-novice differences or judgments on when to terminate the search (e.g., Thatcher, 2006, 2008; Cothey, 2002; Jaillet, 2003; Browne et al., 2007). This notion of judgment introduces a second stream of literature, Internet information judgments, where authors note that the use of predictive information judgments impacts decision making in search, based on an anticipation of a page’s value before viewing it (Rieh, 2002; Griffiths & Brophy, 2005).

Cognitive search models rarely explore the impact of predictive judgments. Most studies are based on tasks defined by researchers in experimental settings that are difficult to generalize to professional contexts or real use (Thatcher, 2006, 2008). Scholars have, therefore, called for research into information judgments during real instances of information search and retrieval (Metzger, 2007), examining how these impact search behavior (Browne et al., 2007; Rieh, 2002). In addition, studies of this nature must also consider methodological issues identified with the Internet search literature, most notably the limits of the most commonly used methods: surveys and log files (Hargittai, 2002; Rieh, 2002; Metzger, 2007).

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, 61(3):433–452, 2010

We address this critique through a study of real Internet use by practicing medical doctors. The study employed diaries and interviews examining daily online information search and retrieval, encompassing 444 search incidents by 35 doctors, with particular focus on information search behavior. Doctors were seeking information to make clinical decisions for treating patients, before or during a consultation, implying substantial positive or negative consequences from its use. In this context, medical researchers note that the credibility of the online source is a major factor influencing doctors’ information search and retrieval (Bennett, Casebeer, Kristofco, & Strasser, 2004). This suggests that medical practice is a rich setting in which to examine the impact of information judgments on cognitive search. Therefore, we pose the following questions concerning doctors’ online clinical information retrieval for professional purposes:

RQ1: What characterizes the cognitive search models of practicing medical doctors?
RQ2: What information judgments do doctors apply during online search?
RQ3: How do information judgments impact doctors’ cognitive search models?

We begin by reviewing the literature on Internet search and online information judgments, and then we summarize the relevant research gaps that this study addresses, which include examining real information retrieval (Rieh, 2002; Thatcher, 2008), how users actually make information judgments (Metzger, 2007), and how search behavior is impacted by them (Rieh, 2002; Browne et al., 2007). The method and the results of each research question are then described in turn. Finally, we discuss the major contributions of this article, which extends previous research by the following: (a) detailing the dominant types of information need, cognitive search strategies, and information judgments used by practicing doctors; (b) suggesting the low applicability of the credibility construct in this context; (c) demonstrating the navigational bias in cognitive search models, a bias that acts on information queries and is driven by doctors’ predictive judgments; (d) describing how the predictive judgments are enabled by users’ mental models of the Internet and search experience relevant to the task; (e) proposing a model to connect the information judgment and cognitive search literatures; (f) suggesting the difficulty of studying cognitive search as an isolated task in experimental settings, and the need for a longitudinal view of search behavior over time; and (g) providing specific avenues for further research for both information science and medical practitioners in addressing potential needs for Internet search training.

Research Framework

Online Search Behavior

The extensive research into online search behavior demonstrates that Internet search is strongly characterized by a user’s goals and objectives (Jansen, Booth, & Spink, 2008; Rose & Levinson, 2004). Scholars broadly categorize these goals as navigational (to arrive at a URL), informational, and resource based or transactional (to obtain products, services, or other resources). This last category has been a major focus of research, examining Web consumers and online purchases from a marketing perspective (e.g., Rowley, 2000; Ward & Ostrom, 2003; Wu & Rangaswamy, 2003). However, although online shopping is preceded by information search, it seeks to obtain resources and is influenced by a user’s own previous experience with a physical product or brand (Rowley, 2000). Hence, it is distinct from Rose and Levinson’s (2004) directed information goals that would apply to the professional medical context.

For this reason, our study focuses on the general Internet search literature, where action models (Thatcher, 2006, 2008) represent a major stream detailing users’ specific “moves” in search (Marchionini & Schneiderman, 1988). Scholars identify two fundamental starting choices: accessing a general search engine or using a familiar Web site (Choo, Detlor, & Turnbull, 2000; Holscher & Strube, 2000). They also detail specific moves that describe the user’s first-guess query, use of Boolean terms, or selection processes from the results returned. The selection process involves assessing the value of results returned and making trade-offs between further iterative text searches and browsing the directories of large sites (Choo et al., 2000; Dennis, Bruza, & McArthur, 2002). Authors examining “moves” also called for studies exploring search at higher levels of abstraction, or as strategies and patterns of behaviors (e.g., Byrne, John, Wehrle, & Crow, 1999). Scholars have denoted these as cognitive search models (Navarro-Prieto et al., 1999; Thatcher, 2006, 2008), decision making in search (Browne et al., 2007), and information foraging (e.g., Pirolli, 2007).

Although the effectiveness of online searching relative to other sources has also been examined (e.g., Hodkinson & Kiel, 2003; Sohn, Joun, & Chang, 2002), cognitive search models focus only on online behavior. Researchers observe that in addition to task type, many user characteristics impact decision making or strategy, including expert-novice differences, users’ mental models of the Internet, individual cognitive and learning styles, demographic characteristics, subject matter or domain knowledge, and physical and affective state.

Table 1 shows these major research lines and associated papers, serving as an introduction to key areas of the field rather than an exhaustive review.

Looking more deeply at cognitive search, Thatcher (2006, 2008) identifies 12 different strategy archetypes, which are strongly differentiated by the starting choice of either search engine use or direct familiar-site access (see Choo et al., 2000; Holscher & Strube, 2000). Search engine-based strategies include using a generic search engine (“broad first”),



TABLE 1. Research areas in online information retrieval or Internet search.

Action models or “moves”. Analysis of discrete “moves” that form search behavior (e.g., analytical searching using search terms; browsing by clicking on hypertext; scan-and-select through search engine results; generating queries; examining search results; selecting results; reformulating queries). Papers: Byrne et al., 1999; Choo et al., 2000; Griffiths & Brophy, 2005; Jansen & Spink, 2006; Johnson et al., 2004; Pan et al., 2007; Tauscher & Greenberg, 1997.

Cognitive models (focusing on strategy). Examining how these “moves” combine into cognitive patterns, e.g., Fidel et al.’s (1999) or Thatcher’s (2008) cognitive search strategy archetypes, or Pirolli’s (2007) navigational model based on the Information Foraging Theory. Papers: Catledge & Pitkow, 1995; Cothey, 2002; Fidel et al., 1999; Fu & Pirolli, 2007; Navarro-Prieto et al., 1999; Kim, 2001; Pirolli, 2007; Schacter et al., 1998; Thatcher, 2006, 2008; Wang et al., 2000.

Task structure and complexity. Differences in task complexity resulting in different search patterns (e.g., a migration to Boolean search in highly complex tasks when experiencing navigational disorientation). Papers: Browne et al., 2007; Ford et al., 2005a, 2005b; Navarro-Prieto et al., 1999; Kim & Allen, 2002; Schacter et al., 1998; Thatcher, 2006, 2008.

Expert-novice differences. How Web search experience impacts search behavior (e.g., experts demonstrate more selective and analytical search processes). Papers: Browne et al., 2007; Cothey, 2002; Hargittai, 2002; Hodkinson & Kiel, 2003; Holscher & Strube, 2000; Lazonder, 2000; Thatcher, 2006, 2008; Wang et al., 2000.

Mental models (of the Internet). How mental models of the Internet induce behaviors, via simplistic and utilitarian models or complex structural mental models of the Internet. Papers: Cahoon, 1998; Hargittai, 2002; Papastergiou, 2005; Slone, 2002; Wang et al., 2000; Zhang, 2008.

Domain knowledge. How subject matter or domain knowledge influences search strategy (e.g., domain knowledge induces less time spent with a document from that domain). Papers: Holscher & Strube, 2000; Jaillet, 2004.

Individual characteristics. How cognitive style, learning style, epistemological beliefs, or demographic characteristics produce tendencies to use specific search patterns (Boolean, best match, combined, etc.). Papers: Ford et al., 2002, 2005a, 2005b; Hodkinson & Kiel, 2003; Jansen, Booth, & Smith, 2008; Kim, 2001; Kim & Allen, 2002; Kyung-Sun & Bryce, 2002; Sohn et al., 2002; Whitmire, 2004.

Physical or affective state. How affective or physical state relates to the speed of a search. Papers: Wang et al., 2000.

search engines with specific attributes (“search engine narrowing down”), and “to-the-point” strategies, where users have knowledge of specific search terms to drive a particular result. In navigating directly to a known site (“known address”), users initiate their search at familiar Web sites. Other strategies named by Thatcher attempt to optimize search through the use of mixed approaches and multiple browser windows. In addition to efforts, like Thatcher’s, to categorize search patterns, other authors have attempted to model specific aspects of search or navigation. For instance, Pirolli’s (2007) SNIF-ACT model describes navigational behavior based on the information foraging theory (IFT). Using the perceived relevance or utility of a Web link, called information scent, this model provides an integrated account of link selections and the timing of when people leave the current Web page (Fu & Pirolli, 2007). Although focused on navigation, this highlights the rarely researched link between search patterns and information judgments.
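The foraging account can be illustrated with a toy sketch. This is not the actual SNIF-ACT implementation: the scent scores, link labels, and leaving threshold below are invented for illustration, whereas the real model estimates scent from the semantic overlap between a user's goal and a link's text. The sketch only captures the decision rule the paragraph describes: follow the best-scented link, or leave the page when no link promises enough utility.

```python
# Toy illustration of an information-scent navigation rule, loosely in the
# spirit of IFT/SNIF-ACT. All values here are invented for illustration;
# the real model derives scent from goal-link text similarity.

def choose_action(link_scents, leave_threshold):
    """Follow the highest-scented link on the current page, or leave the
    page when even the best link's scent falls below the threshold
    (i.e., below the expected utility of going back to the results list)."""
    if not link_scents:
        return ("leave", None)
    best_link = max(link_scents, key=link_scents.get)
    if link_scents[best_link] < leave_threshold:
        return ("leave", None)
    return ("follow", best_link)

# A page whose best link "smells" strongly of a clinical information goal:
page = {"dosage guidelines": 0.8, "site map": 0.1, "advertising": 0.05}
print(choose_action(page, leave_threshold=0.3))            # ("follow", "dosage guidelines")
print(choose_action({"login": 0.1}, leave_threshold=0.3))  # ("leave", None)
```

The interesting property, which Fu and Pirolli model far more richly, is that a single scalar per link accounts for both which link is selected and when the page is abandoned.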

However, few studies examine this in a professional context or via real information use (Thatcher, 2006, 2008), and so the transferability of this previous research cannot be assumed. Moreover, we cannot simply apply these constructs, as the literature often does not arrive at a consensus on their contents. For example, mental models can be described via Zhang’s (2008) interpretation of technical, functional, or process views, distinct from the utilitarian view supported by Papastergiou (2005). Consequently, while seeking to identify the factors identified in previous research, this study looks for them inductively, utilizing their conceptual frame rather than the previous detailed implementations of the construct. Furthermore, for the purpose of constraining the study scope (and presuming the ability to generalize across different contexts), we exclude the aforementioned factors of cognitive style and affective state. Figure 1 provides a simplified representation of this literature, where the arrows indicate an influence on cognitive search strategy.

FIG. 1. Factors influencing cognitive search strategy.



Information Judgments on the Internet

In the broader literature, quality is often used to denote the concept of credibility (Haddow, 2003; Klobas, 1995). However, judgments during online information retrieval differ from those in other contexts such as traditional media (Danielson, 2005; Sohn et al., 2002). Hence, this article focuses on literature specific to information judgments on the Internet, where scholars identify different judgment criteria that encompass information quality, credibility, and cognitive authority (Rieh & Danielson, 2007). Information quality is a user criterion concerning excellence or truthfulness in labeling, and it includes the attributes of usefulness, goodness, currency, and accuracy (Rieh, 2002). Credibility refers to the believability of some information or its source (Fogg, 1999; Fogg & Tseng, 1999; Metzger, 2007; Wathen & Burkell, 2002), and encompasses accuracy, authority, objectivity, currency, and coverage judgments (Brandt, 1996; Meola, 2004; Metzger, 2007; Metzger, Flanagin, & Zwarun, 2003). Finally, cognitive authority explores users’ relevance judgments, based on Wilson’s (1983) definition of “influence on one’s thoughts that one would recognize as proper” (p. 15). Rieh (2002) examined a series of studies in this last stream (such as Park, 1993; Wang & Soergel, 1998, 1999), proposing its facets as trustworthiness, credibility, reliability, scholarliness, how official it is, and its authority. Most studies can be related to these three higher order constructs based on their self-declared focus. However, these concepts clearly overlap, and many scholars use different definitions and alternative lower order constructs (see Table 2).

In addition, the literature also details many judgment methods, including checklist, contextual, external, and stopping-rule approaches. The most common is the checklist approach, where users scrutinize aspects of the document obtained (e.g., source, author, or timestamp) to determine the value of a page (Meola, 2004; Metzger, 2007). However, users rarely fully employ this method, leading authors to propose a contextual approach covering comparison, corroboration, and the promotion of reviewed resources (Meola, 2004). Comparison involves the relative judgment of two similar Web sites, and corroboration the verification of some information contained therein against an alternative source. Promotion of reviewed resources overlaps with literature on rating systems (e.g., Eysenbach, 2000; Eysenbach & Diepgen, 1998; Wathen & Burkell, 2002), where the judgment is partly external to the user performing the information retrieval.

In contrast, Browne et al. (2007) suggest users employ stopping rules to terminate search, judging that they have the information needed to move to the next stage in the problem-solving process (Browne et al., 2007; Browne & Pitts, 2004). Browne details mental lists, similar to the checklist, containing criteria that must be satisfied. Other rules are possible, such as having representational stability on the information found, stopping when nothing new is learned, gathering information to a certain threshold, or using specific criteria related to the task.
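These stopping rules can be read as predicates evaluated after each new piece of information is gathered. The sketch below illustrates two of them under invented data structures (the rule names follow Browne et al.'s descriptions, but the sets and counts are hypothetical examples, not the authors' operationalization):

```python
# Illustrative sketch of two stopping rules in the spirit of Browne et al.:
# a mental-list rule (stop once every item on a checklist is covered) and a
# "nothing new learned" rule (stop when recent sources added no new facts).
# Data structures are invented for illustration.

def mental_list_satisfied(checklist, facts_found):
    """Mental-list rule: stop once every required criterion is covered."""
    return checklist <= facts_found   # set inclusion

def nothing_new_learned(new_facts_per_source, window=2):
    """Stop when the last `window` sources each contributed zero new facts."""
    if len(new_facts_per_source) < window:
        return False
    return all(n == 0 for n in new_facts_per_source[-window:])

checklist = {"dose", "contraindications", "interactions"}
print(mental_list_satisfied(checklist, {"dose", "contraindications"}))  # False: one item missing
print(nothing_new_learned([3, 2, 0, 0]))  # True: last two sources added nothing
```

The contrast with the checklist approach is that these predicates govern when search *terminates*, not whether a particular page is trusted.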

Moreover, authors note that these judgments occur at different times and on different artifacts. For instance, evaluative judgments occur when information is browsed, and predictive judgments are made before a page is accessed. The latter is based on a user’s anticipation of a page’s value impacting their search strategy (Griffiths & Brophy, 2005; Rieh, 2002).

All in all, there are many overlapping constructs for exploring users’ information judgments (see Table 2). It is not our objective to propose a single abstract definition, reconcile them, or explain each lower order construct. Rather, we seek an appropriate framework for the applied medical context. Research into use of online health information has mainly focused on patients, and studies suggest a primary focus on information accuracy (Haddow, 2003; Rieh & Danielson, 2007) and cognitive authority (Rieh, 2002). For health information experts, research indicates a focus on judgments of source and author (Fogg et al., 2003). However, scholars also note that a wide range of judgments is used for health information (Eysenbach, Powell, Kuss, & Sa, 2002); hence, this approach provides no unified definition of online information judgment constructs. Partial attempts to delineate these constructs made by Rieh (2002) and Metzger (2007) provide a basis for taking information quality to include usefulness, goodness, currency, and accuracy (Rieh, 2002); credibility to encompass accuracy, authority, objectivity, currency, and coverage (Metzger, 2007); and cognitive authority to include trustworthiness, credibility, reliability, scholarliness, how official it is, and its authority (Rieh, 2002). Although there is major overlap between credibility and the other constructs, each is used in an extensive number of studies and cannot be simply discounted, suggesting an inductive approach to this study to determine which is most appropriate. Furthermore, these are considered alongside the distinction between predictive judgments made before a page is seen and evaluative judgments made while a page is browsed (Rieh, 2002). These definitions are marked in bold in Table 2 below, alongside certain examples of the alternative variations used.

Medicine as a Rich Context in Applied Internet Search

Previous research reveals important gaps that are yet to be addressed: (a) examining how information judgments impact search behavior (Browne et al., 2007; Rieh, 2002); (b) detailing how users actually make these judgments (Metzger, 2007); and (c) supplementing the predominantly used study designs of log file analysis, survey analysis, researcher-defined experiments, and academic settings (Hargittai, 2002; Metzger, 2007; Rieh, 2002; Thatcher, 2008). This last observation is supported by our own analysis; of 43 empirical studies described in the Supporting Information, each involves at least one of researcher-defined experiments, academic contexts, or the use of survey and log file methods. These approaches have different advantages, such as the potentially large sample sizes possible through log file analysis, or better isolation of variables and detection of cause-and-effect relationships through experimental methods. However, a concentration in experimental approaches



TABLE 2. Different criteria with which users judge information on the Internet (those used shown in bold).

Quality
  Olaisen, 1990: Actual value, completeness, credibility, accessibility, flexibility, form.
  Knight & Burn, 2005; Klobas, 1995; Wang & Strong, 1996: Multidimensional construct based on fit for purpose, e.g., intrinsic, representational, accessibility, contextual.
  Zeist & Hendriks, 1996: Functionality, reliability, maintainability, portability, efficiency, usability.
  Kahn, Strong, & Wang, 2002: Product quality (sound and useful), service quality (dependable and useable).
  Haddow, 2002; Rieh, 2002: Information quality covering usefulness, goodness, currency, and accuracy.
  Meola, 2004: Quality (authority, accuracy, objectivity, currency, coverage) assessed by comparison, corroboration, and promoted resources.
  Tombros, Ruthven, & Jose, 2005: Quality (scope/depth, authority/source, currency, formatting), structure, presentation or physical presentation.

Credibility
  Brandt, 1996: Credibility (with reliability, perspective, purpose, and author credentials).
  Johnson & Kaye, 1998: Credibility (with focus on trust).
  Fogg et al., 2001, 2003; Fogg & Tseng, 1999; Wathen & Burkell, 2002: Credibility (with particular focus on trustworthiness and expertise).
  Liu & Huang, 2005: Credibility (presumed, reputed, and surface: author/reputation/affiliation and information accuracy/quality).
  Hong, 2006: Credibility (in terms of expertise, goodwill, trustworthiness, depth, and fairness).
  Metzger, 2007; Metzger, Flanagin, & Zwarun, 2003: Accuracy, authority, objectivity, currency, and coverage (based on the review of the field).

Cognitive authority
  Fritch & Cromwell, 2001: Personal (author), institutional (publisher), textual type (document type), intrinsic plausibility (content of text).
  McKenzie, 2003 (based on Wilson, 1983): Using Wilson’s definition as “influence on one’s thoughts that one would recognize as proper” (p. 15).
  Rieh, 2002 (based on Wang & Soergel, 1998, 1999; Wilson, 1983); Rieh & Belkin, 1998, 2000: Trustworthiness, credibility, reliability, scholarliness, how official it is, and authority; Rieh & Belkin’s separation of the information object and the information contained within it; predictive judgments (before seeing a page) versus evaluative judgments (while browsing a page).

echoes the concerns for generalizability to other social sciences, where the innocuous consequences for the participant can produce potential behavioral differences compared with real-life contexts (see Camerer, 2003).

Hence, doctors’ Internet use provides a rich research setting to supplement these predominantly used methods, as there are stakes or risks for the user in information search. Doctors use the Internet frequently, with a major use, and the focus of this study, being the search and retrieval of clinical information (Masters, 2008). Use of online resources has been shown to generally improve doctors’ clinical decisions, but it occasionally leads to errors in which individuals respond to information supplied by the computer even when it contradicts their existing knowledge (Westbrook, Coiera, & Gosling, 2005). This risk is inherent in the introduction of any clinical decision support system, where doctors potentially become less vigilant towards errors (Kohli & Piontek, 2007). Hence, despite this potential improvement to clinical care, there is much concern about the possible use of inaccurate online health information, and doctors’ perceptions of source credibility have been identified as a major factor driving its use (Bennett et al., 2004).

In this context, Google is described as a useful diagnostic tool (Johnson, Chen, Eng, Makary, & Fishman, 2008; Sim, Khong, & Jiwa, 2008; Tang & Ng, 2006), but its use in medicine has been met with controversy. Authors criticize its effectiveness or downplay Google’s role entirely by suggesting that doctors go directly to preferred or trusted medical sites (De Leo, LeRouge, Ceriani, & Niederman, 2006; Falagas, Pitsouni, Malietzis, & Pappas, 2007; Koenig, 2007; Taubert, 2006). Furthermore, online health information is being impacted by the emergence of Web 2.0, a term that represents both a new philosophy of open participation on the Internet and a second generation of Web-based tools and communities that provide new information sources (Boulos & Wheeler, 2007; Giustini, 2006; McLean, Richards, & Wardman, 2007; Sandars & Haythornthwaite, 2007; Sandars & Schroter, 2007). Web 2.0 has also cultivated further concerns about the quality and credibility of the information generated (Hughes, Joshi, & Wareham, 2008), and implicit in the negative reaction to Google and Web 2.0 use is the fear of introducing “inaccurate” information into decision making in health.



Consequently, our study provides an ideal setting to examine how information judgments influence search behavior (see Browne et al., 2007; Griffiths & Brophy, 2005; Rieh, 2002). Few studies examine the detail of doctors’ online information judgments or search behaviors (Podichetty, Booher, Whitfield, & Biscup, 2006). In addition, there are overlapping constructs in studies examining judgments of online information, and little work connecting the cognitive search model and information judgment literatures. For this reason, we take an exploratory and mainly inductive approach to this study, as described in the following section.

Methods

The sample of 35 volunteer doctors was selected via stratified sampling from a group of 300 that had originally graduated from a major London medical school. This ensured a diverse range of specialties, as information-seeking behaviors are observed to differ among types of medical practice (Bennett, Casebeer, & Kristofco, 2005). The stratification was approximate, given the sample size, using incremental recruiting to fill quotas and ensure multiple participants from each of the 10 most numerous specialties (for detail see National Health Service, Department of Health, England, 2004). In addition, a specific seniority of 2–3 years out of medical school was selected to ensure regular information retrieval on the Internet, as this age group is more comfortable with the Internet (Rettie, 2002) and uses it more in medical practice (Masters, 2008). The participants were 57% female and 43% male, with an average age of 27 years. They were contacted via e-mail, without any specific incentive to participate, and provided the information between April and July 2008.
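The incremental quota-filling step of this recruitment procedure can be sketched as follows. The specialty names and quota sizes below are invented for illustration; the study's actual quotas followed the ten most numerous NHS specialties, and the real process also involved invitations and consent rather than a simple accept/reject pass:

```python
# Illustrative sketch of incremental recruiting to fill per-specialty
# quotas, as in approximate stratified sampling. Specialties and quota
# sizes here are hypothetical.

def recruit(volunteers, quotas):
    """Accept volunteers in arrival order until each specialty quota fills;
    volunteers from full (or unlisted) specialties are passed over."""
    accepted = []
    counts = {specialty: 0 for specialty in quotas}
    for name, specialty in volunteers:
        if counts.get(specialty, 0) < quotas.get(specialty, 0):
            accepted.append(name)
            counts[specialty] += 1
    return accepted

quotas = {"general practice": 2, "surgery": 1}
volunteers = [("A", "general practice"), ("B", "surgery"),
              ("C", "surgery"), ("D", "general practice"),
              ("E", "general practice")]
print(recruit(volunteers, quotas))  # ['A', 'B', 'D']: C and E arrive after their quotas fill
```

This captures why the resulting stratification is approximate: quotas guarantee minimum representation per stratum, but the within-stratum composition still depends on who volunteers first.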

A multimethod approach was employed, following scholars' recommendations to supplement the survey or log-file methods commonly used to investigate behavior in Internet use (Hargittai, 2002). Moreover, the literature has highlighted the value of diaries in recording routine or everyday processes (Verbrugge, 1980); this was augmented by the interview-diary method (Easterby-Smith, Thorpe, & Lowe, 2002), which allowed the capture and discussion of real instances of information needs.

An initial test of the diary instrument, not included in the final results, was completed with five doctors. This was to address a known issue with diaries: participants often require detailed training sessions to fully understand the protocol (Bolger, Davis, & Rafaeli, 2003). This testing allowed a short training session to be developed (e.g., example diary, introduction by phone). After the evaluation of the diary instrument, participants were invited to complete diaries online during a break or at shift end, avoiding interference with the online behavior under observation, but within a short enough time frame that detailed aspects of use could be documented. Each diary encompassed a minimum of 5 days at work and was semistructured around the following topics: (a) the sites that they had used during the day, (b) examples of how and for what purposes they had used the sites, and (c) negative

or positive incidents in using the Internet that day (if any). The diary was recorded on sequential days; hence, if no information retrieval was made, the diary entry remained blank. The researchers were able to monitor diary completion online, which allowed doctors to be encouraged via phone or e-mail to restart or complete their diaries. Two diaries had to be discarded because they did not follow this process (e.g., the entire diary was filled in on one day).

The remaining completed diaries represented 177 days of recorded online information and, in general, participants reported that this occurred in the doctor's work location in a hospital ward or clinic, as an individual task, and during or before patient encounters. Within 2 weeks of completing the diary, participants were interviewed for 20–70 minutes (recorded, transcribed, and shared with the participants). The interviews were semistructured and elicited further qualification of the incidents described in the diary, thereby offering a complementary perspective on the same data. Preanalysis of the diaries was not performed, though the interviewer was familiar with their contents. In the interviews, doctors were asked to tell stories about particular incidents; hence, the interviews were loosely structured around the critical incident technique, a robust technique for identifying participants' motivations (Easterby-Smith, Thorpe, & Lowe, 2002).

The extensive qualitative data were examined via thematic analysis (Boyatzis, 1998). Early code development allowed the sample size to be determined, as saturation was seen after only 20 interviews; however, a final sample of 35 was used, as recruitment had already exceeded this amount. Two types of coding were initiated: a priori codes identified via the literature review (specifically, codes 8–11 in the results were completely a priori) and inductive, open coding to identify themes emerging from the data. These two groups were then reconciled by two researchers through resolution of overlaps and establishment of hierarchy in code groups or nodes. This mixed approach was required because, although the applicability of constructs from the literature to this context could not be assumed, the extensive research into general Internet search could also not be ignored via an entirely inductive approach. Given that a large number of themes and codes were identified, focus was placed only on those with a strong presence, specifically, those observed in over 50% of the sample in an individual's interview and diary. This approach was taken because authors have argued that such measures help ensure the robustness of the patterns observed (e.g., Bala & Venkatesh, 2007). Based on this, a final coding template (King, 2004) was applied to the full data set, followed by a measurement of intercoder reliability using Cohen's kappa statistic (see Fleiss & Cohen, 1973). This obtained a value of κ = 0.886 (standard error = 0.023) across all interview and diary codes, based on comparison between two researchers.
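As an illustration of the reliability measure, Cohen's kappa corrects the raw agreement between two coders for the agreement expected by chance given each coder's label frequencies. A minimal sketch of the calculation (the excerpt labels below are invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two researchers' hypothetical code assignments for ten diary excerpts
a = ["ID4", "ID4", "ID2", "ID1", "ID4", "ID2", "ID1", "ID4", "ID2", "ID2"]
b = ["ID4", "ID4", "ID2", "ID1", "ID4", "ID2", "ID2", "ID4", "ID2", "ID2"]
print(round(cohens_kappa(a, b), 3))  # 0.839: one disagreement in ten items
```

A value near the study's κ = 0.886 indicates agreement well beyond what coincident label frequencies alone would produce.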

Results

The results of the diaries revealed 444 search incidents. Hence, doctors were searching for online clinical information an average of 2–3 times a day. No differences were observed



between types of specialty or groups of related specialties with similar characteristics (e.g., hospital vs. clinic), and all doctors used the Internet and exhibited some of the patterns identified. Doctors used an extensive number of sites (over 50), including some recommended by the NHS, such as Pubmed1 (30% of doctors and 8% of all search incidents). However, they also used many other general-purpose sites, such as Google (79% of doctors and 32% of all incidents) and Wikipedia (71% of doctors and 26% of incidents), as well as an array of patient forums and medical-specific wikis. On average, doctors made 12.7 searches (standard deviation = 8.7) using 4.9 separate sites (standard deviation = 2.3) during the week. This latter figure includes only the search engines used and the final content site where the participant completed (or gave up) the information search. We specifically quote this figure because the recording of intermediate sites (those participants had visited during the search but continued searching beyond) was inconsistent.
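Per-doctor aggregates such as these (mean searches and mean distinct sites per week) follow from a simple grouping of diary incidents by doctor; a sketch with invented records (the doctor IDs and site names below are hypothetical):

```python
from collections import defaultdict

# Hypothetical diary records: one (doctor_id, site) tuple per search incident
incidents = [
    (1, "Google"), (1, "Wikipedia"), (1, "Google"),
    (2, "Pubmed"), (2, "eMedicine"), (2, "Google"), (2, "Google"),
    (3, "Wikipedia"), (3, "NICE"),
]

searches = defaultdict(int)   # number of incidents per doctor
sites = defaultdict(set)      # distinct sites used per doctor
for doctor, site in incidents:
    searches[doctor] += 1
    sites[doctor].add(site)

n_doctors = len(searches)
mean_searches = sum(searches.values()) / n_doctors
mean_sites = sum(len(s) for s in sites.values()) / n_doctors
print(mean_searches, round(mean_sites, 1))  # 3.0 2.3
```

Counting only distinct final content sites and search engines, as the study does, is what the per-doctor `set` accomplishes here.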

In the following sections we describe the results of the coding of both diaries and interviews, relating them to each research question in turn. The coding scheme is fully detailed in the Appendix and includes direct quotes from doctors. In describing the results we will provide the IDs of the codes (e.g., ID x) to allow reference to this table where needed.

RQ1: Characteristics of the Cognitive Search Models of Practicing Medical Doctors

Doctors had two dominant types of information need or search task: to solve an immediate defined problem (e.g., "the best beta blocker to use for someone with heart failure") or to get background information on a subject. The former is to advance an immediate task in the clinical context and forms a closed question with a specific answer (ID 1). The latter is an open question driven by the need to be knowledgeable about a subject in front of medical staff or patients, to understand a topic in greater depth, or to later define a specific closed question relating to patient management (ID 2). If it is a background or open question, then the impact on doctors' immediate decision making is reduced:

To get some background information on something that I'm not really familiar with. . . . It tends not to have a big influence on my management plan. (ID 2)

To find out information about something that I did not really know about, but not necessarily to make clinical decisions on how to treat a patient. (ID 2)

Most of the time you don't want to know a great amount of information. You just want a basic overview about a rare condition. (ID 2)

Doctors' search models have similar characteristics to those of experts noted in previous studies, spanning three

1www.ncbi.nlm.nih.gov/pubmed: A service of the U.S. National Library of Medicine that includes over 19 million citations from MEDLINE and other life science journals for biomedical articles.

main types: (a) direct access to a familiar site (ID 3), (b) using Google as a navigational device or biased search (ID 4), and (c) using Google for normal search (ID 5). The first and third search patterns were previously identified by Thatcher (2008); however, the second is a distinct pattern not clearly noted in previous studies, and might be termed "known address bias," following Thatcher's specific nomenclature.

This notion of address bias orientates search engine use towards a site that the user believes may have appropriate information on the required subject; if that site is found in the search engine results, the user navigates directly to the relevant page within it. This strategy was used by 48% of doctors, via two approaches: 28% of all doctors used specific site names in their informational queries, and 41% made preferred selections from results. This is clearly based on previous experience and site use related to the specific task. It also extends Rose's (2004) notion of the navigation goal, which refers to a shortcut to a site in general. Additionally, it differs from Thatcher's "search engine narrowing down," where bias comes from the attributes of a specific search engine; here, bias originates from the anticipated value of the specific final content site that will be used. For example, during query formulation:

I put what I'm looking for, and then I put eMedicine2 and Wikipedia, and I put that through Google [clicked search]. (ID 4)

If there is a syndrome that I haven't heard of, then I would type it into Google with the exact phrase. . . . I would select the Web sites that I have heard of. (ID 4)

In addition, closed information needs precipitated a direct-to-site or known address strategy. In the 37 detailed cases examined, 84% of closed-question needs were satisfied by direct-to-site strategies (ID 6a/b).

RQ2: Information Judgments Doctors Apply During Online Search

In looking at the criteria doctors apply, the credibility construct is not as useful as information quality or cognitive authority in detailing doctors' information judgments, for two reasons (see ID 8). First, within the construct of cognitive authority, the notion of credibility appears the least important. Second, objectivity and coverage are the only parts of the credibility construct not encapsulated in information quality or cognitive authority; however, these parts of the construct were also not considered important (see Table 2 for definitions). Even so, doctors are using very diverse criteria to judge the value of information, similar to those used by patients, focusing on information quality (usefulness, goodness) and cognitive authority (trustworthiness, authority). All of the important criteria observed are encompassed by Rieh's (2002) notions of information quality and cognitive authority.

2www.emedicine.com: Online clinical reference owned by WebMD, constructed with over 10,000 contributing doctors.



FIG. 2. Level of predictive judgment impact on cognitive search models.

Regarding the methods used to make these judgments, different elements of the checklist, contextual, and externally rated approaches are used:

• The checklist approach is dominated by judgments regarding past experience with the source/organization or ranking in search engine output, though many other techniques were seen, such as citations of scientific data or other sources (ID 9).

• The contextual approach is used, especially via resources promoted by hospitals or medical schools, or corroboration of content found. Few doctors compared resources directly (ID 10).

• Finally, the externally rated approach was heavily used, less via official ratings of resources or tools and mainly through recommendations by colleagues (ID 11).

In addition, little use of stopping rules was observed, except for the dominant criterion of using information from sites for which the user had a mental model. This said, despite awareness of and the occasional use of these methods, both diary and interview data revealed that doctors rarely made evaluative judgments on the information found, for two reasons. First, an open or background information need is less directly related to immediate clinical decisions and has lower requirements for information quality or cognitive authority. Second, and more important, doctors rely on predictive judgments to introduce navigational bias into their informational queries,

thereby arriving at sites with known information quality or cognitive authority.

RQ3: How Information Judgments Impact Cognitive Search Strategy

Doctors used cognitive strategies with navigational bias at various stages of search. We will demonstrate this interplay of information judgments and cognitive search using a basic narrative of search as described by Holscher and Strube's (2000) action model of select/launch search engine, generate/submit query, select document from results, and browse the document obtained. This discussion follows Figure 2 below, where coding results are "hung" on the action model as a descriptive device. The numbers in brackets denote the code ID relevant to an action step in Holscher and Strube's representation. Grey boxes have no specific code in this study, but they are included for descriptive completeness. Finally, solid lines indicate the dominant patterns observed in the study, and dashed lines indicate patterns that were either less frequent or not observed at all.

Following the diagram top to bottom, the task initially dictates a specific closed or open information need (ID 1, 2). As noted before, closed information needs often impact a medical decision and require a specific level of quality or authority. As a result, in selecting a Web site or search engine,



the tendency is for closed information needs to precipitate known address strategies of going directly to known sites (ID 6).

Nonetheless, the majority of searches did use a generic search engine, often with bias towards known sites in the generate/submit query stage through the inclusion of site names in the search query (ID 4). As noted previously for the known address bias strategy, Google, in particular, was used as a navigation device to access the appropriate part of a specific site quickly. The doctor attempts a match to sites of which they have a mental model. In addition, in selecting the document to be browsed from the returned list, there was inherent bias towards sites of which they had a mental model. Even before formulating the query, the doctors knew that the sites with known quality or cognitive authority would be returned, selecting them in predetermined orders. Although finding new sites is possible, the use of search engines was therefore strongly orientated towards existing trusted sites. For example:

So, you can just Google basic facts . . . more often than not it does come up with sites such as eMedicine or the national library. (ID 4)

If you type in a medical symptom in Google, most of the hits will be medical Web sites and it is quicker than going to them directly. (ID 4)

The doctors' mental models of various Internet sites allow this target to be selected, and the model contains perspectives on a site's information quality (including utility) and cognitive authority. Hence, this supports the utilitarian view of mental models described by Papastergiou (2005). For example:

I would start with the official government sites first, sites that you know are accredited second, and then work my way down. I have a kind of hierarchy of sites in my head. (ID 12a)

From experience, you tend to do it every day, you find some sites usually provide better, clearer information than others, and you learn as you go along. What is reliable or not, you remember. . . . (ID 12a)

In browsing and assessing a certain document, it is likely that the doctor has already resolved issues around information quality and authority, as search is biased towards a site of which they have a mental model. This applies even where the task requires information of increased quality or cognitive authority, as doctors use predictive judgments based on experience of the source to determine if these needs are resolved.

The process of building and using a mental model of sites was employed by most of the sample and was constructed from past experience and the contextual approach prescribed by Meola (2004). In particular, this relied on resources promoted by medical schools or hospitals and recommendations from colleagues. For example:

I was told by colleagues which ones are reliable, and the trust[ed] Web site has useful links. Or, by searching you learn which sites are useful and which aren't. (ID 13)

It's through Googling, whatever comes up in the top 5. You use them and can learn to trust them. NICE3 guidelines and Pubmed I picked up at med school. (ID 13)

This experience allows users to create mental models with which they make predictive judgments and optimize their search effectiveness, and this explains the bias in the cognitive strategies described in research question 1.

On the rare occasions that doctors made evaluative judgments, they did so for sites or sections of sites of which they had no mental model. Most often, they used checklist actions or their domain knowledge to corroborate the quality of the information found. For example:

If they are sites I rely on anyway, then a lot of it I won't [validate] unless it's a point of specific interest. So, probably about 5–10% of the time I'll look at references and things. (checklist – ID 11)

Generally when you are looking for something, say, for example, you want details of a particular symptom or disease, I vaguely know what to expect. If it seems sensible we use it, which may not be very good practice, but it is something we do all the time. (use of domain knowledge – ID 14)

As stated previously, these evaluative judgments were, in fact, very rare. Moreover, only a few participants actually reported retrieving information from a Web site new to them, despite the fact that over 50 different sites were used in the sample, with the majority of search incidents using a generic search engine (Google). Overall, the search process is highly biased towards sources of known information quality and cognitive authority, although doctors are using cognitive search models similar to those previously identified without these sources of bias, and they use a large number of sites. As such, the strategies to the left of Figure 2 become more dominated by these predictive judgments, which are, in turn, enabled through mental models of different sites that contain doctors' perspectives on the information quality (e.g., utility, goodness) and cognitive authority (e.g., trustworthiness) of these sources (see ID 8). Table 3 summarizes the results for each research question and the latent themes or codes identified that support this analysis. Themes identified inductively (or redefined in terms of their previous description in the literature) are shown in bold italics and are described further in the results discussion of each research question following the table.

Discussion

In this discussion, we highlight the contributions of our analysis, which include: cognitive strategies with navigational bias; the low applicability of the credibility construct; potential explanations for why users rarely make evaluative judgments; the difficulty of studying cognitive search in isolation from information judgments or as a researcher-defined task; and emerging theory to connect the large but separate

3www.nice.org.uk: National Institute for Clinical Excellence for the UK's NHS.



TABLE 3. Key results.

RQ1. Information need (task type) (IDs 1, 2):
• Information needs are characterized by two dominant types: background or open questions (58% of doctors) and closed questions with specific answers (55%).

RQ1. Cognitive search strategy (IDs 3, 4, 5):
• Three main types of search were observed: (a) using Google as a navigational device or biased search (48%); (b) direct access to a familiar site (27%); (c) using Google for normal search (27%).
◦ Doctors' search patterns had similar characteristics to those of experts in previous studies, mainly relying on "to-the-point," "known address search domain," or "known address" strategies,
◦ with (a) being a combination of "to-the-point" and "known address" strategies (better described as "known address bias"). This was composed of two approaches, with 28% of all doctors using specific site names in informational queries and 41% making preferred selections from results.
• "Known address" strategies were mainly used for closed questions (84% of the 37 closed cases analyzed) (IDs 6, 7).

RQ2. Criteria for information judgments (ID 8):
• Credibility does not appear to be an important factor in doctors' information judgments, supporting Rieh's (2002) view that information quality (usefulness 41%, goodness 31%) and cognitive authority (trustworthiness 31%, authority 24%, and reliability 21%) are key.

RQ2. Methods of information judgment (IDs 9, 10, 11):
• Although doctors articulated many facets of the approaches to judging information (checklist, contextual, and external or rater based), these facets were rarely applied to evaluate content found.

RQ3. Predictive judgments, mental models, and search bias (ID 12):
• Predictive information judgments were made via a mental model of different sites, containing the doctor's perceptions of their information quality and cognitive authority.
• Mental model construction used a combination of past experience, resources promoted by medical schools or hospitals, and recommendations from colleagues.
• This approach of using mental models was dominant, its construction and use being clearly articulated by 83% of the sample.

RQ3. Domain knowledge and judgments (IDs 13, 14):
• Domain and contextual knowledge was used to assess the need for information quality or cognitive authority (55% of sample).
• In addition, in the rare cases where information was judged evaluatively, domain knowledge was used (31% of doctors).

areas of online cognitive search and information judgment literature. We expand on these points in the following sections, discuss their implications for research and practice, and then detail the study's limitations.

Cognitive Internet Search Adapted for Predictive and Evaluative Judgments

First, the results differ from previous studies of cognitive search strategy by identifying inherent bias at various stages of search. This bias is navigational, orientating users' searches towards known sites via two mechanisms: (a) performing a normal informational query with the anticipation that these known sites will appear at the top of the search results, and selecting them with preference; and (b) actually entering specific site names alongside the informational query. Considering the former, scholars speculate that informational queries can often have a navigational component (Tann & Sanderson, 2009). The latter is an additional mechanism to achieve this, but it also relies on users' mental models of an array of content sites appropriate to the task. The bias towards these sites

is enabled by predictive judgments (detailed in Figure 2), which, in turn, primarily rely on information quality and cognitive authority. This extends Thatcher's (2006, 2008) view with the identification of a new strategy archetype described as known address bias, denoting the use of search engines for informational queries with bias towards sites of predicted authority and quality. However, since the majority of previous cognitive search studies were completed in the academic environment via experiments with students, lacking significant consequences of the actual use of the information, it is not surprising that previously observed search patterns might differ from those of real needs in the professional context.

Second, we noted the low applicability of the credibility construct among the judgment criteria doctors apply. Most of the concepts associated with credibility (accuracy, authority, objectivity, currency, and coverage) can be explained by the two other well-known, higher order constructs. Although objectivity and coverage within credibility are ideas not incorporated by information quality and cognitive authority, both were considered to be of low importance by the sample. Credibility is a common construct used across a range of studies,



FIG. 3. Cognitive Internet search adapted for predictive and evaluative judgments.

and though their contents differ only slightly, the plethora of constructs used in research may need to be addressed. Only a few studies have compared these directly (e.g., Rieh, 2002), and these findings provide important direction for their consolidation.

In terms of judgment methods, a number of authors have noted that users rarely apply evaluative information judgments (Eysenbach & Kohler, 2002; Meola, 2004; Rieh, 2002). Although this study concurs with this in a context where the information obtained influences important decisions, this does not imply the complete absence of any judgment. Doctors are using predictive judgments to resolve needs for information quality or cognitive authority, reducing the cost of searches, given the time-consuming nature of checklist-type evaluative judgments (Meola, 2004). Additionally, the use of stopping rules was not directly observed, save for the single criterion of finding information in sites of which users had mental models. Nonetheless, it should be noted that Browne et al.'s (2007) work is based on experimental tasks unfamiliar to the user, as opposed to repeated tasks where users have significant experience and domain knowledge. Hence, this difference is not surprising, and inferences about stopping rules' role in other types of tasks cannot be drawn from this study.

Third, we observed the use of mental models of Internet sites, which enable predictive judgments and, in turn, allow navigational bias to be introduced into informational queries. As information seeking on the Internet is a repeated exercise, and the doctors in the sample are making many such information searches every week, they can construct models of different sources. These models were generally articulated at the level of a site or domain and are related to judgments on the criteria noted to be of importance, which included information quality (usefulness, goodness, etc.) and cognitive authority (trustworthiness, authority, reliability, etc.). Various authors have previously identified such mental models (Cahoon, 1998; Hargittai, 2002; Papastergiou, 2005; Wang, Hawk, & Tenopir, 2000; Zhang, 2008), but there is a lack of agreement on their exact contents. These results support the utilitarian view of mental models, extended with notions of information quality and cognitive authority that are relative between different sites, a view not strongly identified in previous literature.

It should also be noted that these results are, to a certain extent, a consequence of examining a real-life and extensively

repeated search task. Doctors noted that they had learned and adapted strategies, taking into account changing Web search experience and developing mental models of Internet sites. These models were not entirely fixed, as many doctors noted that they had changed over time and with the focus of their professional work. In particular, this suggests that a longitudinal view of Internet search should be examined, where successful previous cognitive strategies (registered in mental models and Internet search experience) dictate planned strategies for the future. Certain researchers have begun to look at this longitudinal view of search (e.g., Cothey, 2002; Zhang, Jansen, & Spink, 2008), but these are limited to action views of search derived from log files rather than exploring behavioral intention. Although we do not propose a detailed model here, further research could consider how cognitive search styles are learned, beyond simple distinctions of expert and novice, by exploring the development of Web experience and the construction of mental models. To achieve this, recent attempts to apply learning levels to Internet search could be explored (see Jansen, Booth, & Smith, 2008). This would need to be examined in relation to different types or categories of task, where the task is relatively constant and extensively repeated, to approximate search conditions such as those found in certain professional contexts.

Finally, the results suggest a revised high-level model of cognitive search, shown in Figure 3 below. Task type is a dominating factor in determining cognitive strategy (e.g., Thatcher, 2008; Browne et al., 2007); however, a user's mental model of the Internet also dictates the use of preferred sites, and the user's Web search experience dictates the execution of certain moves, such as specific text queries. Both of these are driven by predictive judgments, as users attempt to anticipate moves that will yield improved search results. Because each search task is not exactly identical to the last, the use of predictive judgments may not be sufficient to avoid the need for evaluative judgments. This evaluation may encompass the use of checklists, contextual approaches, or corroboration of the content found with existing domain knowledge. Some of these elements have been suggested in previous literature, such as by Marchionini (1995) or other authors listed in Table 1. However, this view differs by connecting concepts from the information judgment literature to those in the cognitive search literature.

This difference can also be understood from a practical point of view; the cognitive search literature has often been based



on search with single hypertext or database systems, where users may assume a certain standard level of information quality or cognitive authority in this single source. Additionally, in researcher-defined tasks on the Internet, such judgments may be inconsequential to the user. The advantage of this view is that it explains cognitive strategy over the wide range of potential sources now available on the Internet, in which the user has different levels of confidence and certain needs for information quality or cognitive authority. Therefore, these differences concur with certain authors' claims that previous research poorly describes what users actually do (e.g., Metzger, 2007), although their constructs provide a useful frame for analysis.

Implications for Research

Results show that Web experience and mental models, key concepts from the cognitive search literature, can be viewed as impacting search strategy through key constructs in the information judgment literature. This offers a basis to connect the two fields. Further research should examine other possible relationships between these constructs, detail the contents of the two types of judgments in use, and understand how the contents of mental models and Web experience change over time as individuals gain experience in a certain task category.

In addition, authors working with Technology Acceptance Models (TAM) and user satisfaction also approach users' information judgments in computer systems (e.g., DeLone & McLean, 1992; McKinney, Kanghyun, & Zahedi, 2005; Wixom & Todd, 2005). For instance, user satisfaction can clearly be delineated by information quality and systems quality (DeLone & McLean, 1992; McKinney et al., 2005), both of which impact attitude and behavioral intention via TAM's notions of ease of use and usefulness (Davis, 1989; Wixom & Todd, 2005). However, constructs such as cognitive authority are not considered, which can be partly explained by differing units of analysis, meaning these two literature sets are not easily reconciled. Moreover, research in this area has more recently examined Web Acceptance Models (WAM), where constructs such as Web experience and experience with a Web site have been shown to have moderating effects on perceived ease of use and usefulness (Castañeda, Muñoz-Leiva, & Luque, 2007). WAM and cognitive search models consider more similar constructs and fundamentally similar real-life phenomena. Thus, WAM's explanatory power might benefit from examining discrete judgments of distributed information objects across the Internet over time, encompassing such concepts as mental models, mental model construction, and predictive judgments.

Given this, there are a number of priorities and questions for future research. Clearly the exploratory nature of this study invites an empirical and confirmatory test of the results. However, it also suggests a number of other important research avenues, including further focus on real information search (rather than task-based experiments), as well as: 1) examining a longitudinal view of mental models and predictive judgments over time; 2) establishing more detailed contents of the predictive and evaluative judgments at the different stages of search; 3) determining how a range of professional and business contexts, and their specific consequences or risks from using information, drive differences in mental models or predictive judgments; and 4) working towards an enrichment with WAM via the view of a network of different sites in a user's mental model.

Implications for Practice

Two major insights are gleaned from this study: the role of generic search engines in medicine and an increasingly sophisticated use of the evolving Internet. First, medical researchers have conflicting views on the role of Google in information search as being a key facilitator (Johnson et al., 2008; Sim et al., 2008), versus having an unimportant role (De Leo et al., 2006). The opposing nature of these views is partly explained by Google's predominant use for accessing different sites for which doctors have an existing mental model of utility. Consequently, generic search engines play an important role in both determining the availability of content and providing fast access to specific locations in these sites, but only a limited role in guiding doctors to previously unknown sites. Second, this study shows a sophisticated level of Internet customization by doctors, and despite cognitive authority concerns, many are using sites not normally promoted by the medical profession. The prominence of user-generated content or Web 2.0 sites, like eMedicine and Wikipedia, implies that these tools are becoming ingrained into medical practice (Hughes, Joshi, Lemonde, & Wareham, 2009). Despite warnings not to use Wikipedia for medical decision making (Lacovara, 2008), the usefulness of such sites, the different information needs they serve, and occasional compensatory evaluative judgments mean they play a useful role for doctors. However, levels of awareness of techniques for information judgment vary between doctors, and those with less experience, seeing colleagues use tools like Wikipedia, may attempt their use without the same awareness of risk.

This perspective has two main implications for practitioners. First, medical policy makers must consider the risks of this emerging behavior. Given the utility of such general-purpose tools, rather than restricting access, further Internet awareness training enabling all doctors to efficiently manage the associated risks could be considered. However, medical students are often taught basic search skills and are introduced to tools such as PubMed in medical school, and the effectiveness of these types of interventions needs to be better understood (Brettle, 2007). For such training to be effective, research needs to consider what constitutes sufficient predictive or evaluative information judgment for patient safety, considering the nature of different information needs derived from the predominant task types and the time constraints of practicing medical professionals.

Second, this customization of search by medical professionals should also be noted by providers of these infrastructure services, from companies providing search engines to


medical librarians. The entrenchment of users' customized search processes shows the gap between the software available to them and their information needs. The need to personalize Web search has already been identified and explored by research (for example, Ma, Pant, & Sheng, 2007; Liu, Yu, & Meng, 2004), and it encompasses techniques such as user profiling and search histories or search categories that modify page rank. However, our results indicate that information needs are driven by task type, which drives certain information quality or cognitive authority needs that doctors satisfy efficiently by building models of sites via experience and corroboration with colleagues, hospitals, or medical schools. Hence, although current approaches to personalized search may improve its efficiency, further gains could be made by modeling this behavior. To this end, the significant reliance on corroboration to identify levels of information quality and cognitive authority suggests further support for certain authors' claims, such as Pirolli (2007), that improvements in search will need to involve cooperative or participatory Web 2.0 models.

Limitations

This study has clear limitations when generalizing to other contexts, notably due to the use of diaries, the sample size and nature, and the naturalistic design of the study. Firstly, although diary methods offer many benefits, especially when compared with traditional survey methods, diary studies must achieve a level of participant commitment and dedication rarely required in other designs. A common issue is the training required for participants to follow protocol, and we outlined a number of steps in the method used to mitigate this. There is also potential for reactance, referring to a change in experience or behavior as a result of participation; though, at present, there is little evidence that reactance poses a threat to the validity of diaries (Bolger et al., 2003). Overall, the use of diaries, though very beneficial, meant the study design resulted in a sample size only suitable for exploratory research.

Secondly, the specific sample relates to junior doctors, and there are known differences in online information-seeking behavior based on doctor seniority. However, not only are junior doctors a significant population in the context of the UK's NHS (approximately 38,000 junior doctors), but given how quickly online search mechanisms change, their study provides value in examining emergent behaviors in the overall doctor population. Scholars note that such Internet use, led by the junior segment, will become increasingly prevalent in the population as a whole (e.g., Sandars & Schroter, 2007) and is increasingly replacing the use of traditional information sources (Bennett et al., 2004).

Finally, in a study based on post-event reflection and of naturalistic design, there are possible discrepancies between users' actual actions and what they report. The use of diaries mitigates this to a certain extent, as the users' perspectives were captured close to the event in question. In addition, naturalistic studies are often contextual and the ability to generalize cannot be assumed. For instance, clinical information is a specific task type, and the users here were completing a regular or repeated task and had significant Internet experience and domain knowledge relevant to the information search and retrieval.

However, there are many contexts to which the results are potentially transferable. In the health sector as a whole, healthcare professionals such as pharmacists or dentists use the Internet in this manner (McKnight & Peet, 2000). Another major and similar use of the Internet is by patients seeking health information, where many patients regularly search on conditions and acquire significant domain knowledge (Bundorf, Wagner, Singer, & Baker, 2006). Moreover, we would speculate that there are many settings where these characteristics would apply, such as in important but repeated decision making by general users or in extensive use of the Internet for professional purposes by other types of knowledge workers.

Concluding Remarks

This study addresses major gaps in research in three ways: by a multimethod study design that supplements the dominant research methods examining this subject; by using the medical context, which highlights repeated online information search with stakes or risks for the user (rather than a researcher-defined or single inconsequential task); and by examining the previously identified but under-researched link between cognitive search models and information judgments.

Results indicate that: (a) doctors' principal types of information needs can be characterized as closed (specific answer) or open (background reading); (b) the principal cognitive strategies used are similar to expert strategies identified in previous studies; (c) closed information needs precipitate direct access to specific content Web sites (denoted as known address strategy) rather than generic search engine use; (d) the dominant types of information judgments used by doctors relate to information quality and cognitive authority, suggesting the low applicability of the credibility construct; (e) evaluative judgments in examining a document are scarce, explained by a reliance on predictive judgments to resolve information quality and cognitive authority needs; (f) predictive judgments are enabled by users' mental models of Internet sites; and (g) navigational bias is created by predictive judgments during informational queries, suggesting new cognitive search strategy archetypes (described as known address bias) and a mixed approach to navigational/informational search types.

A model is proposed that demonstrates how the constructs in information judgment literature can describe the influence on search strategy of constructs normally associated with cognitive search literature. This responds to scholars' calls to examine this link and enables the connection of two large but previously separate fields. The model is potentially transferable to settings where the task is repeated and the use of the information has consequences or potential risks for the user.

Results also suggest that research needs to supplement the dominant research method of examining discrete tasks with


a view of strategies that are built over time on real information needs. Hence, in addition to a confirmatory approach to this study, other opportunities for future research are as follows: (a) examining a longitudinal view of how users learn to optimize repeated search tasks, detailing how mental models and predictive judgments change over time; (b) establishing more detailed contents of the predictive and evaluative judgments at the different stages of search; (c) determining how a range of professional and business contexts, and their specific consequences of information use, drive differences in mental models or predictive judgments; and (d) working towards an enrichment of WAM, considering the Internet as a network of different sites of which users have mental models that drive usage intentions.

References

Bala, H., & Venkatesh, V. (2007). Assimilation of interorganizational business process standards. Information Systems Research, 18(3), 340–362.

Bundorf, K., Wagner, T., Singer, S., & Baker, L.C. (2006). Who searches the Internet for health information? Health Services Research, 41(3), 819–836.

Bennett, N., Casebeer, L., & Kristofco, R. (2005). Family physicians' information seeking behaviors: A survey comparison with other specialties. BMC Medical Informatics and Decision Making, 5(9).

Bennett, N., Casebeer, L., Kristofco, R., & Strasser, S. (2004). Physicians' Internet information-seeking behaviors. Journal of Continuing Education in the Health Professions, 24(1), 31–38.

Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54, 579–616.

Boyatzis, R. (1998). Transforming qualitative information: Thematic analysis and code development. San Francisco, CA: Sage.

Browne, G., & Pitts, M. (2004). Stopping rule use during information search in design problems. Organizational Behavior and Human Decision Processes, 95(2), 208–224.

Browne, G., Pitts, M., & Wetherbe, J. (2007). Cognitive stopping rules for terminating information search in online tasks. MIS Quarterly, 31(1), 89–104.

Boulos, M., & Wheeler, S. (2007). The emerging Web 2.0 social software: An enabling suite of sociable technologies in health and health care education. Health Information & Libraries Journal, 24(1), 2–23.

Brandt, D. (1996). Evaluating information on the Internet. Computers in Libraries, 16(5), 44–46.

Brettle, A. (2007). Evaluating information skills training in health libraries: A systematic review. Health Information & Libraries Journal, 24(1), 18–37.

Byrne, M., John, B., Wehrle, N., & Crow, D. (1999). The tangled Web we wove: A taskonomy of WWW use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: The CHI is the limit (pp. 544–551). New York: ACM Press.

Cahoon, B. (1998). Teaching and learning Internet skills. New Directions for Adult and Continuing Education, 78, 5–13.

Camerer, C. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.

Castañeda, J.A., Muñoz-Leiva, F., & Luque, T. (2007). Web Acceptance Model (WAM): Moderating effects of user experience. Information and Management, 44(4), 384–396.

Catledge, L., & Pitkow, J. (1995). Characterising browsing strategies in the World Wide Web. Computer Networks and ISDN Systems, 27, 1065–1073.

Choo, C., Detlor, B., & Turnbull, D. (2000, February). Information seeking on the Web: An integrated model of browsing and searching. First Monday, 5(2). Retrieved November 9, 2009, from http://www.firstmonday.org/issues5_2/choo/index.html

Cothey, V. (2002). A longitudinal study of World Wide Web users' information searching behaviour. Journal of the American Society for Information Science and Technology, 53, 67–78.

Danielson, D. (2005). Web credibility. In C. Ghaoui (Ed.), Encyclopedia of human–computer interaction (pp. 713–721). Hershey, PA: Idea Group.

Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339.

De Leo, G., LeRouge, C., Ceriani, C., & Niederman, F. (2006). Web sites most frequently used by physician for gathering medical information. AMIA Annual Symposium Proceedings, 6, 902.

DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95.

Dennis, S., Bruza, P., & McArthur, R. (2002). Web searching: A process-oriented experimental study of three interactive search paradigms. Journal of the American Society for Information Science and Technology, 53(2), 120–133.

Easterby-Smith, M.P.V., Thorpe, R., & Lowe, A. (2002). Management research: An introduction. London: Sage.

Eysenbach, G. (2000). Consumer health informatics. British Medical Journal, 320, 1713–1716.

Eysenbach, G., & Diepgen, T. (1998). Towards quality management of medical information on the Internet: Evaluation, labeling, and filtering of information. British Medical Journal, 317, 1496–1502.

Eysenbach, G., & Kohler, C. (2002). How do consumers search for and appraise health information on the World Wide Web? Qualitative study using focus groups, usability tests, and in-depth interviews. British Medical Journal, 324, 573–577.

Eysenbach, G., Powell, J., Kuss, O., & Sa, E. (2002). Empirical studies assessing the quality of health information for consumers on the World Wide Web: A systematic review. Journal of the American Medical Association, 287, 2691–2700.

Falagas, M., Pitsouni, E., Malietzis, G., & Pappas, G. (2008). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and weaknesses. The FASEB Journal, 22, 338–342.

Fidel, R., Davies, R.K., Douglass, M.H., Holder, J.K., Hopkins, C.J., Kushner, E.J., et al. (1999). A visit to the information mall: Web searching behavior of high school students. Journal of the American Society for Information Science, 50, 24–37.

Fleiss, J., & Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613–619.

Fogg, B. (1999). Persuasive technologies—Now is your chance to decide what they will persuade us to do—and how they'll do it. Communications of the ACM, 42, 26–29.

Fogg, B., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., et al. (2001). What makes Web sites credible? A report on a large quantitative study. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 61–68). New York: ACM Press.

Fogg, B., Soohoo, C., Danielson, D., Marable, L., Stanford, J., & Tauber, E. (2003). How do users evaluate the credibility of Web sites? A study with over 2,500 participants. In Proceedings of the 2003 Conference on Designing for User Experiences (pp. 1–15). San Francisco, CA.

Fogg, B., & Tseng, H. (1999). The elements of computer credibility. Paper presented at the Conference on Human Factors and Computing Systems, Pittsburgh, PA.

Ford, N., Miller, D., & Moss, N. (2005a). Web search strategies and human individual differences: Cognitive and demographic factors, Internet attitudes, and approaches. Journal of the American Society for Information Science and Technology, 56, 741–756.

Ford, N., Miller, D., & Moss, N. (2005b). Web search strategies and human individual differences: A combined analysis. Journal of the American Society for Information Science and Technology, 56, 757–885.

Ford, N., Wilson, T., Foster, A., Ellis, D., & Spink, A. (2002). Information seeking and mediated searching. Part 4. Cognitive styles in information seeking. Journal of the American Society for Information Science and Technology, 53(9), 728–735.

Fritch, J., & Cromwell, R. (2001). Evaluating Internet resources: Identity, affiliation, and cognitive authority in a networked world. Journal of the American Society for Information Science and Technology, 52(6), 499–507.

Fu, W., & Pirolli, P. (2007). SNIF-ACT: A cognitive model of user navigation on the World Wide Web. Human-Computer Interaction, 22, 355–412.

Giustini, D. (2006). How Web 2.0 is changing medicine. British Medical Journal, 333(7582), 1283–1284.

Griffiths, J., & Brophy, P. (2005). Student searching behaviour and the Web: Use of academic resources and Google. Library Trends, 53(4), 539–554.

Haddow, G. (2003). Focusing on health information: How to assess information quality on the Internet. Australian Library Journal, 52, 169.

Hargittai, E. (2002). Beyond logs and surveys: In-depth measures of people's Web use skills. Journal of the American Society for Information Science and Technology, 53(14), 1239–1244.

Hodkinson, C., & Kiel, G. (2003). Understanding Web information search behavior: An exploratory model. Journal of End User Computing, 15(4), 27–48.

Holscher, C., & Strube, G. (2000). Web search behavior of Internet experts and newbies. International Journal of Computer and Telecommunications Networking, 33(1–6), 337–346.

Hong, T. (2006). The influence of structural and message features on Web site credibility. Journal of the American Society for Information Science and Technology, 57, 114–127.

Hughes, B., Joshi, I., Lemonde, H., & Wareham, J. (2009). Junior physician's use of Web 2.0 for information seeking and medical education: A qualitative study. International Journal of Medical Informatics, 78(10), 645–655.

Hughes, B., Joshi, I., & Wareham, J. (2008). Medicine 2.0: Tensions and controversies in the field. Journal of Medical Internet Research, 10(3), e23.

Jaillet, H. (2003). Web metrics: Measuring patterns in online shopping. Journal of Consumer Behaviour, 2(4), 369–381.

Jansen, B., Booth, D., & Smith, B. (2009). Using the taxonomy of cognitive learning to model online searching. Information Processing & Management, 45(6), 643–663.

Jansen, B., Booth, D., & Spink, A. (2008). Determining the informational, navigational, and transactional intent of Web queries. Information Processing & Management, 44(3), 1251–1266.

Jansen, B., & Spink, A. (2006). How are we searching the World Wide Web? A comparison of nine large search engine transaction logs. Information Processing & Management, 42(1), 248–263.

Johnson, P., Chen, J., Eng, J., Makary, M., & Fishman, E. (2008). A comparison of World Wide Web resources for identifying medical information. Academic Radiology, 15(9), 1165–1172.

Johnson, T., & Kaye, B. (1998). Cruising is believing?: Comparing Internet and traditional sources on media credibility measures. Journalism & Mass Communication Quarterly, 75(2), 325–340.

Johnson, E., Moe, W., Fader, P., Bellman, S., & Lohse, G. (2004). On the depth and dynamics of online search behavior. Management Science, 50(3), 299–308.

Kahn, B., Strong, D., & Wang, R. (2002). Information quality benchmarks: Product and service performance. Communications of the ACM, 45(4), 184–192.

Koenig, K. (2007). Diagnostic dilemma? Just Google it? JWatch Emergency Medicine, 5, 5.

Kohli, R., & Piontek, F. (2007). DSS in healthcare: Advances and opportunities. In F. Burstein & C. Holsapple (Eds.), Handbook on decision support systems. Berlin, Germany: Springer-Verlag.

Kim, K.-S. (2001). Implications of user characteristics for information seeking on the Web. International Journal of Human Computer Interaction, 13(3), 323–340.

Kim, K.-S., & Allen, B. (2002). Cognitive and task influences on Web searching behaviour. Journal of the American Society for Information Science and Technology, 53, 109–119.

King, N. (2004). Using templates in the thematic analysis of text. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research. London: Sage.

Klobas, J.E. (1995). Beyond information quality: Fitness for purpose and electronic information resource use. Journal of Information Science, 21, 95–114.

Knight, S.A., & Burn, J.M. (2005). Developing a framework for assessing information quality on the World Wide Web. Informing Science Journal, 8, 27–34.


Lacovara, J. (2008). When searching for the evidence, stop using Wikipedia! Medsurg Nursing, 17(3), 153.

Lazonder, A. (2000). Exploring novice users' training needs in searching information on the World Wide Web. Journal of Computer Assisted Learning, 16, 326–335.

Liu, F., Yu, C., & Meng, W. (2004). Personalized Web search for improving retrieval effectiveness. IEEE Transactions on Knowledge and Data Engineering, 16(1), 28–40.

Liu, Z., & Huang, X. (2005). Evaluating the credibility of scholarly information on the Web: A cross cultural study. International Information & Library Review, 37, 99–106.

Ma, Z., Pant, G., & Sheng, O. (2007). Interest-based personalized search. ACM Transactions on Information Systems, 25(1), 5.

Marchionini, G. (1995). Information seeking in electronic environments. Cambridge, UK: Cambridge University Press.

Marchionini, G., & Shneiderman, B. (1988). Finding facts vs. browsing knowledge in hypertext systems. IEEE Computer, 21, 70–80.

Masters, K. (2008). For what purpose and reasons do doctors use the Internet: A systematic review. International Journal of Medical Informatics, 77, 4–16.

McKenzie, P. (2003). Justifying cognitive authority decisions: Discursive strategies of information seekers. Library Quarterly, 73, 261–288.

McKinney, V., Kanghyun, Y., & Zahedi, F. (2002). The measurement of Web-customer satisfaction: An expectation and disconfirmation approach. Information Systems Research, 13(3), 296–315.

McKnight, M., & Peet, M. (2000). Health care providers' information seeking: Recent research. Medical Reference Services Quarterly, 19(2), 27–50.

McLean, R., Richards, B., & Wardman, J. (2007). The effect of Web 2.0 on the future of medical practice and education: Darwikinian evolution or folksonomic revolution? Medical Journal of Australia, 187(3), 174–177.

Meola, M. (2004). Chucking the checklist: A contextual approach to teaching undergraduates Web-site evaluation. Libraries and the Academy, 4, 331–344.

Metzger, M. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091.

Metzger, M., Flanagin, A., & Zwarun, L. (2003). Student Internet use, perceptions of information credibility, and verification behavior. Computers & Education, 41, 271–290.

National Health Service, Department of Health, England. (2004). Department of Health 2004 medical and dental workforce census. Retrieved June 2008 from http://www.dh.gov.uk/en/PublicationsAndStatistics/Statistics/StatisticalWorkAreas/StatisticalWorkforce/DH_4087066

Navarro-Prieto, R., Scaife, M., & Rogers, Y. (1999). Cognitive strategies in Web searching. In Proceedings of the 5th Conference on Human Factors and the Web (pp. 43–56). Gaithersburg, MD.

Olaisen, J. (1990). Information quality factors and the cognitive authority of electronic information. In I. Wormell (Ed.), Information quality: Definitions and dimensions. Los Angeles, CA: Taylor Graham.

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users' decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 3.

Papastergiou, M. (2005). Students' mental models of the Internet and their didactical exploitation in informatics education. Education and Information Technologies, 10(4), 341–360.

Park, T. (1993). The nature of relevance in information retrieval: An empirical study. Library Quarterly, 63(3), 318–351.

Pirolli, P. (2007). Information foraging theory: Adaptive interaction with information. New York, NY: Oxford University Press.


Podichetty, V., Booher, J., Whitfield, M., & Biscup, R. (2006). Assessment of Internet use and effects among healthcare professionals: A cross sectional survey. Postgraduate Medical Journal, 82(966), 274–279.

Rettie, R. (2002). Net generation culture. Journal of Electronic Commerce, 4(4), 254–264.

Rieh, S. (2002). Judgment of information quality and cognitive authority in the Web. Journal of the American Society for Information Science and Technology, 53(2), 145–161.

Rieh, S., & Belkin, N. (1998). Understanding judgment of information quality and cognitive authority in the WWW. In Proceedings of the 61st ASIS Annual Meeting. Silver Spring, MD.

Rieh, S., & Belkin, N. (2000). Interaction on the Web: Scholars' judgment of information quality and cognitive authority. In Proceedings of the 63rd ASIS Annual Meeting (pp. 25–33). Silver Spring, MD.

Rieh, S., & Danielson, D. (2007). Credibility: A multidisciplinary framework. Annual Review of Information Science and Technology, 41, 307–364.

Rose, D., & Levinson, D. (2004). Understanding user goals in Web search. In Proceedings of the Thirteenth International World Wide Web Conference (pp. 13–19). New York, NY.

Rowley, J. (2000). Product search in e-shopping: A review and research propositions. Journal of Consumer Marketing, 17(1), 20–35.

Sandars, J., & Haythornthwaite, C. (2007). New horizons for e-learning in medical education: Ecological and Web 2.0 perspectives. Medical Teacher, 29(4), 307–310.

Sandars, J., & Schroter, S. (2007). Web 2.0 technologies for undergraduate and postgraduate medical education: An online survey. Postgraduate Medical Journal, 83, 759–762.

Schacter, J., Chung, G., & Dorr, A. (1998). Children's Internet searching and complex problems: Performance and process analyses. Journal of the American Society for Information Science and Technology, 49, 840–849.

Sim, M., Khong, E., & Jiwa, M. (2008). Does general practice Google? Australian Family Physician, 37(6), 471–474.

Slone, D. (2002). The influence of mental models and goals on search patterns during Web interaction. Journal of the American Society for Information Science and Technology, 53(13), 1152–1169.

Sohn, Y., Joun, H., & Chang, D. (2002). A model of consumer information search and online network externalities. Journal of Interactive Marketing, 16(4), 2–14.

Tang, H., & Ng, J. (2006). Googling for a diagnosis - use of Google as a diagnostic aid: Internet based study. British Medical Journal, 333(7579), 1143–1145.

Tann, C., & Sanderson, M. (2009). Are Web-based informational queries changing? Journal of the American Society for Information Science and Technology, 60(6), 1290–1293.

Taubert, M. (2006). Use of Google as a diagnostic aid: Bias your search. British Medical Journal, 333(7579), 1270.

Thatcher, A. (2006). Information-seeking behaviors and cognitive search strategies in different search tasks on the WWW. International Journal of Industrial Ergonomics, 36, 1055–1068.

Thatcher, A. (2008). Web search strategies: The influence of Web experience and task type. Information Processing & Management, 44(3), 1308–1329.

Tauscher, L., & Greenberg, S. (1997). How people revisit Web pages: Empirical findings and implications for the design of history systems. International Journal of Human-Computer Studies, 47, 97–137.

Tombros, A., Ruthven, I., & Jose, J. (2005). How users assess Web pages for information seeking. Journal of the American Society for Information Science and Technology, 56, 327–344.

Verbrugge, M. (1980). Health diaries. Medical Care, 18, 73–95.

Wang, P., Hawk, W., & Tenopir, C. (2000). Users' interaction with World Wide Web resources: An exploratory study using a holistic approach. Information Processing & Management, 36, 229–251.

Wang, P., & Soergel, D. (1998). A cognitive model of document use during a research project. Study I. Document selection. Journal of the American Society for Information Science, 49(2), 115–133.

Wang, P., & Soergel, D. (1999). A cognitive model of document use during a research project. Study II. Decision at the reading and citing stages. Journal of the American Society for Information Science, 50(2), 98–114.

Wang, P., & Strong, D. (1996). Beyond accuracy: What data quality means to data consumers. The Journal of Management Information Systems, 12(4), 5–33.

Ward, J., & Ostrom, A. (2003). The Internet as information minefield: An analysis of the source and content of brand information yielded by net searches. Journal of Business Research, 56, 907–914.

Wathen, N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the Web. Journal of the American Society for Information Science and Technology, 53(2), 134–144.

Westbrook, J., Coiera, E., & Gosling, S. (2005). Do online information retrieval systems help experienced clinicians answer clinical questions? Journal of the American Medical Informatics Association, 12(3), 315–321.

Whitmire, E. (2004). The relationship between undergraduates' epistemological beliefs, reflective judgment, and their information-seeking behavior. Information Processing & Management, 40, 97–111.

Wixom, B., & Todd, P. (2005). A theoretical integration of user satisfaction and technology acceptance. Information Systems Research, 16(1), 85–102.

Wu, J., & Rangaswamy, A. (2003). A fuzzy set model of consideration set formation calibrated on data from an online supermarket. Marketing Science, 22(3), 411–434.

Zeist, R., & Hendriks, P. (1996). Specifying software quality with the extended ISO model. Software Quality Journal, 5(4), 273–284.

Zhang, Y. (2008). Undergraduate students' mental models of the Web as an information retrieval system. Journal of the American Society for Information Science and Technology, 59(13), 2087–2098.

Zhang, Y., Jansen, B., & Spink, A. (2008). Time series analysis of a Web search engine transaction log. Information Processing & Management, 45(2), 230–245.

448 JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY—March 2010DOI: 10.1002/asi

Appendix
Detailed Results and Coding Scheme (Diary and Interview Data)

Each code below lists the code description, the proportion of doctors/cases in which it was observed, and observed examples.

Area: Cognitive search

Code 1. Information seeking for “closed” question (55%)
Description: Specific information or question, i.e., to check quickly a diagnosis, management options, drug information, or a specific fact, or for deep or specific information on a particular medical problem.
Observed examples:
– “To look at what different management options might be.”
– “Prescribing guidelines are handy.”
– “Double-check potential differential diagnosis.”
– “The best b blocker to use for someone with heart failure.”
– “Wikipedia search first, like someone came up to me and said my mother has Gaulin’s syndrome, do you think I have it?”

Code 2. Background information seeking or “open” question (58%)
Description: Get background or overview information on a topic to ascertain the right questions to ask, or to appear knowledgeable on a topic.
Observed examples:
– “You can get an overview of a topic that you’re not really familiar with very easily.”
– “You can sound a lot more knowledgeable than you are which is quite nice!”
– “Start with a site with basic information to get myself more knowledgeable about a subject.”

Code 3. Direct access to familiar site (27%)
Description: Going directly to a known site, rather than using a general search engine.
Observed examples:
– “Have their own built in search engine. You go to the Website and you just add whatever you are looking for.”
– “If I’m not using Google, then I might have gone directly to something like eMedicine and just use keywords.”

Code 4. Google/search engine as a navigational device (48%)
Description: Using Google as a starting point to navigate to sites, as the final sites to be used are known.
Observed examples:
– “If you type in a medical symptom in Google, then most of the hits will be medical Websites and it’s quicker than going to them directly.”
– “If there is a syndrome that I haven’t heard of, then I would type into Google with the exact phrase. I would select the Websites that I have heard of.”
– “I put what I’m looking for, and then I put eMedicine and Wikipedia, and I put that through Google [clicked search].”

Code 5. Google/search engine generic use (27%)
Description: Using a general search engine without a predetermined site in mind.
Observed examples:
– “Type in the most pertinent phrase and just Google it and take it from there.”
– “But I’m not usually looking for something that rare, so generally I put the name into Google and hope that it will come up with the right thing.”

Code 6a. Direct access for closed question (84% of cases*)
Description: Using direct site access for a task that has a closed (code 1) information need.
Observed examples: N/A (combinative matching of codes 1 and 3).

Code 6b. Direct access for open question (16% of cases*)
Description: Using direct site access for a task that has an open (code 2) information need.
Observed examples: N/A (combinative matching of codes 2 and 3).

Code 7a. Google/search engine for closed question (47% of cases**)
Description: Using a generic search engine for a task that has a closed (code 1) information need.
Observed examples: N/A (combinative matching of codes 1 and 4/5).

Code 7b. Google/search engine for open question (53% of cases**)
Description: Using a generic search engine for a task that has an open (code 2) information need.
Observed examples: N/A (combinative matching of codes 2 and 4/5).

Area: Information judgments

Code 8. Information judgment criteria observed:

Good (31%). Terms: good job, fine, great, best, wonderful, state of the art, breadth, depth, comprehensive, beyond.
– “I’d first probably Google it. Now I’d go to the up-to-date(a) Website; it’s more comprehensive and gives you a list of papers etc.”
– “It gives systematic information that you use in clinical practice which I think is quite good.”

Accuracy (21%). Terms: accurate, correct, right, precise, valid.
– “With Wikipedia, it is not monitored as it is written by the public, but despite this it tends to be relatively accurate.”

Currency (21%). Terms: current, up to date, out of date, old, timely.
– “You need to question the level of trust of the information, what I mean is that is the information up-to-date?”
– “It is different because on the ward you may have the first or second edition, online you got the latest edition.”
– “Information is usually up-to-date.”

Usefulness (41%). Terms: useful, useless, hard to use, informative, helpful, can’t understand, not of much use, flexibility, user friendly, rubbish, too much information.
– “GP notebook(b) – I’ve used it several times in the past and I’ve found it useful.”
– “I’ll just Google a condition and I’ll end up with a random site, but that’s not very useful.”

Importance (3%). Terms: important, critical, relevant.
– “and most of the time they’re up to date and relevant”

Authority (24%). Terms: authoritative, the standard, renowned, reputation.
– “Sometimes I do look on a couple of different sites that are reasonably reputable...”
– “If it is from someone famous in the field, you are more likely to pay attention. If there is no author there or you do not know who put it there, then you are less likely to give it any credit.”
– “which is not accredited by any means.”

Objective (3%). Terms: objective, independence, bias.
– “If you’re using sites like Wikipedia, you don’t always know who has tampered with it, and you have to make sure that you’re not getting things that are kind of biased.”

Coverage (3%). Terms: comprehensiveness, depth.
– “From past experience gives you quite comprehensive information.”

Trustworthy (31%). Terms: trust, count on, bias, face value, pinch of salt.
– “I would trust it. It is written by doctors and generally reliable.”

Credible (7%). Terms: credible, accredited, verified.
– “That a lot of the sources are unverified, and we should be looking at evidence-based and peer-reviewed material.”

Reliable (21%). Terms: reliable.
– “The information is not reliable, such as Wikipedia.”
– “There are various guides that you know are reliable, from word of mouth; sites like NICE and BNF(c) are accredited and evidence based. Things like Pubmed too.”

Scholarly (14%). Terms: academic, scientific, studies, cited, journals.
– “They are very scientifically written; the stuff in there is very robust.”
– “Do not always know if this is the truth or true scientific information.”

Official (7%). Terms: official.
– “I would only take it from a valid or official Website such as a university Website or similar.”

Area: Judgment approach

Code 9. Checklist approaches:

Past experience with source/organization (reputation) (21%):
– “I must have stumbled upon eMedicine when I was at medical school and realized it was a good site and continued using it.”

Ranking in search engine output (21%):
– “I found out about eMedicine from Google. It was coming up in searches and I was finding that that site seemed just to have useful information each time I had selected it via Google.”

Citations to scientific data or references (14%):
– “Looking at the references and pulling up the journals that the information has come from.”

Source citations (10%):
– “Well, I tend to check the sources, where it’s coming from.”

Sponsorship by or external links to reputable organizations (10%):
– “I would only take it from a valid or official Website such as a university Website or similar, or even a drug company’s Website.”

Plausibility of arguments (10%):
– “I would also determine if it sounds plausible.”

Certifications, seals, trusted accreditations (7%):
– “It’s got somebody, or rather it’s got governance over it, so you trust it.”

Author identification (3%):
– “If it is from someone famous in the field, you are more likely to pay attention. If there is no author there or you do not know who put it there, then you are less likely to give it any credit.”

Professional, attractive, and consistent page design, including graphics, logos, color schemes (3%):
– “This tends to depend on what they look like.”

Comprehensiveness of information provided (3%):
– “eMedicine from past experience gives you quite comprehensive information.”

Code 10. External judgment (34%)
Description: External judgment, recommendation, word of mouth, told.
Observed examples:
– “See what other people are using. My medical friends tell me what’s the best thing to use. I rely on what people have recommended to me.”
– “[I picked the sites up by] word of mouth, no ads or e-mails. Word of mouth, really.”
– “You get introduced to sites by senior people that you respect and that use them; they tell you to use them.”

Code 11. Contextual:

Promoted resource (28%):
– “The royal college. I think I was e-mailed by the person who was running the training and they sent the link to me and told me to go on to it.”

Corroboration (24%):
– “If something that I didn’t expect it to say, then I would probably look up another, and try and cross check what it is saying.”
– “Normally look at 2–3 Websites really. I’ll verify it with the people I’m working with.”

Area: Judgment impact on cognitive search

Code 12a. Using mental model for biased navigation (63%)
Description: Using a mental model of sites to drive navigation (via predictive judgments of information found).
Observed examples:
– “I would start with the official government sites first, sites that you know are accredited second, and then work my way down. I have a kind of hierarchy of sites in my head.”
– “I’ve got really fast at using it as I know where to go, depending on what I need and how important it is... and also how much time I have. I have developed a kind of model that works.”
– “You can choose a source because I have experience from using it before. You know what other people say about the reliability of these sites.”

Code 12b. Using mental model for information judgment (55%)
Description: Avoiding an extensive evaluative information judgment by relying on a preexisting model of information accuracy/cognitive authority.
Observed examples:
– “This whole Wikipedia user-created method of creating Websites produces some pretty reliable information for less important facts.”
– “Like NICE guidelines is something that has been rigorously worked out. You wouldn’t check it; it is something that you would trust.”
– “Something like eMedicine I would trust it... it is written by doctors and generally reliable enough to trust it.”
– “Patient.co.uk.(d) It’s got governance over it so you trust it.”
– “If it is quite a respected site, like GP notebook, I wouldn’t cross check if it was something quick.”

Code 13. Building a mental model (83%)
Description: Defining credibility for a specific site and adding it to the list/model.
Observed examples:
– “I must have stumbled upon eMedicine when I was at medical school and realized it was a good site and continued using it.”
– “I was told by colleagues which ones are reliable, and the trusted Website has useful links. Or by searching you learn which sites are useful.”
– “It’s through Googling, whatever comes up in the top 5. You use them and can learn to trust them. NICE guidelines and Pubmed I picked up at med school.”

Code 14. Evaluative judgment with domain knowledge (31%)
Description: Domain knowledge used for evaluative judgments.
Observed examples:
– “Generally when you are looking for something, say, for example, you want details of a particular symptom or disease, I vaguely know what to expect. If it seems sensible, we use it, which may not be very good practice, but it is something we do all the time.”

*37 cases/incidents. **63 cases/incidents (out of 100 cases analyzed from diaries).

(a) www.uptodate.org.uk: Continuing professional development Website of the Royal Pharmaceutical Society of Great Britain.
(b) www.gpnotebook.co.uk: British medical database for general practitioners provided online by Oxbridge Solutions Limited.
(c) www.bnf.org: Joint publication of the British Medical Association and the Royal Pharmaceutical Society of Great Britain on the use of medicines (prescribing, dispensing).
(d) www.patient.co.uk: Health and disease information joint venture between PiP and EMIS (Egton Medical Information Systems).
