strategies pose their own risks since consumers may have needs specific to a clinical condition or
a process and a single composite indicator may miss important quality differentials. Further,
patients with certain types of clinical conditions (e.g., the chronically ill), who have higher odds of
sustained interaction with the healthcare system and ongoing need for effective disease self-
management, may find granular provider measures more informative (Schlesinger et al., 2012,
Shaller et al., 2014). Collectively, these debates underscore the importance of more effective
“targeting” of CQI to the underlying populations’ clinical profile and cognitive capacities.
Making CQI more “applicable” to the concerns of the target audience may well be a prudent
strategy to enhance their utilization by consumers.
Credibility of CQI
Use of an information source, especially for a decision as consequential as choosing a health
care provider, implies a high degree of trust (Cline & Haynes, 2001; Craigie et al., 2002).
Previous literature has attempted to identify sources of CQI that consumers deem trustworthy.
Acknowledging the importance of provider buy-in for the success of quality reporting initiatives
and provider role in guiding consumer choices, some studies have also looked at quality
measurement and dissemination strategies that physicians may consider more credible than
others. The emerging evidence is consistent with the intuition that consumers assign higher
“credibility” to certain sources of information than others (Dutta-Bergman, 2003; Harris &
Buntin, 2008), and that organizations may leverage that knowledge by targeting consumers via
channels that are perceived to be more “neutral” (Christianson et al., 2010). For example, some
consumers may consider information provided directly by health plans or employers as less
credible than if the same data is provided by non-profit multi-stakeholder entities that have
health plans as partners or members. Some studies observe a high consumer trust in sources
validated by the federal government agencies (Dutta-Bergman, 2003). Providers, on the other
hand, may be more inclined to trust information derived from patient medical records that can
capture disease severity in a more nuanced manner than claims databases. Christianson et al.
(2010) provide a useful framework that distills the empirical findings into a three-fold measure of
credibility: the report is deemed more credible for consumers and providers if it is endorsed by a
national agency with expertise in quality measurement, if it is produced by a local non-profit
organization or a government agency through a collaborative process involving providers, and if
it uses medical records data as opposed to insurance claims data to generate the quality metrics.
Proactive Dissemination of CQI by Sponsors/Producers
Most product markets have well-established marketing strategies designed to “push” their
products toward consumers. Such strategies typically leverage knowledge of the timings and
circumstances in which a consumer’s need for a product is high (i.e., the consumer is “in the market”),
considerably heightening the salience of convenient product availability (Celsi & Olson, 1988;
Pratkanis & Greenwald, 1996). Although healthcare markets are somewhat different from
conventional product markets (for one, healthcare provision is rarely viewed as a strictly
commercial commodity), similar principles likely condition consumers’ attention and dedication
of cognitive resources to information in quality report cards (Shaller et al., 2014). Recognizing
this, the initial thrust of the quality transparency movement on developing consumer-friendly
measurement and presentation approaches is gradually evolving into a more pointed emphasis on
context-informed marketing. In this, producers of report cards have increasingly sought to inform
their strategies by the burgeoning research into cognitive biases that afflict consumers’
information processing as well as affective responses to situations that pose health threats and
often necessitate provider choice (Shaller et al., 2014). This body of work has uncovered a
breadth of challenges to achieving optimal consumer response to quality transparency and
common dissemination approaches tend to reflect these realities. We provide an overview of the
principal issues below.
Low Numeracy, Literacy, English Proficiency, and High “Cognitive Burden” of Information
Complexity
An oft-expressed concern relates to the average consumer’s ability to understand complex
performance measures (Hibbard, Greene & Daniel, 2010), especially at a time when the sheer
number of metrics in the public realm has proliferated (Schlesinger et al., 2012). Some
consumers are at added risk of being overwhelmed by the complexity owing to cognitive
difficulties in handling information dense in numerical comparisons or medical jargon. Others
such as recent immigrants may have problems understanding the English language content of
reports. These issues have the potential to widen existing disparities in access to quality
information among socio-economically disadvantaged populations and minority ethnic groups
(Casalino et al., 2007; Greene et al., 2015). CQI producers have attempted to address some of
these concerns by experimenting with consumer-friendly presentation approaches (star ratings,
smileys, etc.), composite quality indicators that summarize across multiple dimensions,
publishing reports in Spanish or other non-English languages, and simpler normative evaluations
of quality (indicating best or worst result on an underlying set of indicators). Despite these
initiatives, problems may remain for certain vulnerable subgroups of populations who are
lacking in resources needed to make optimal use of the available information. For such
individuals, some researchers have advocated for intermediaries (“navigators”) who can guide
consumers by serving as trusted counselors in helping them make informed choices (Shaller et
al., 2014).
Lack of “Consumer-Targeting” of Reports
With the recognition that most consumers at most times may not be looking for CQI and the
reports are best targeted to those who are, a major debate now underway involves how to
leverage such consumer “decision-points” to foster greater consumer engagement (Shaller et al.,
2014). For example, some otherwise healthy consumers may be in the market for a well-defined,
time-delimited need such as maternity care, preventive screenings for cancer or heart disease,
dental procedures, or elective surgery like hip replacement. These “shoppable” treatments often
involve prior planning and require consumers who are generally outside the healthcare system to
choose a provider for the first time; such individuals may need aggressive outreach
strategies (e.g., targeted marketing to pregnant women by publishing in health-oriented
magazines aimed at a female audience). In some cases, people may start looking for provider
quality information when they move to a new area or a new job. Surveys indicate nearly 1 in 10
consumers are looking for a new primary care provider at any time (Tu & Lauer, 2008). Facing
such “external disruptions”, people may have a short time window to act and information
provided in a timely way at the right place can promote higher utilization (e.g., requiring
employees to choose a primary care provider during the open enrollment period at a new
job). Finally, many contexts for choosing a new provider may arise when people
have negative experiences with their regular providers (“problematic experiences”). Although
most Americans tend to trust their providers (Hall et al., 2001; Goold & Klipp, 2002), nearly a
third indicate having problems with the quality or access to healthcare and many report switching
physicians in the past year (Mitchell & Schlesinger, 2005; Schlesinger et al., 2002). This may
present an opportunity when individuals are more receptive to using specific types of
information (e.g., on egregious medical errors rather than on higher quality achievers on process
measures).
High Information Search Cost
A handful of studies that focus on how consumers search for provider quality information
reveal considerable difficulties in finding reliable online sources of information (Eysenbach &
Köhler, 2002; Sick & Abraham, 2011). For instance, one study found that “web sites most likely
to be found by consumers are owned by private companies and provide information based on
anecdotal patient experiences” and, further, that “searches that focus on clinics or physicians are
more likely to produce information based on patient narratives” (Sick & Abraham, 2011). These
problems are hardly new and formal CQI sponsors have looked to media and advertising to
ensure their products become more accessible to the general public. Many issue press releases
when new or major report updates become available or use radio or television spots to advertise
their continued availability. For instance, CMS partnered with the American Hospital Association to
announce the official launch of the Hospital Compare website at the Association of Health Care
Journalists National Conference, an event likely to enhance the website’s profile among reporters
and news correspondents (American Hospital Association, 2002). Perhaps similar concerns
motivate many producers to offer online material free of charge and without requiring a
password protected log-in. Nevertheless, common search queries are unlikely to lead to
authoritative sites like Hospital Compare; the multitude of sources that typically come up in
response vary considerably in content and reliability, challenging even committed consumers’
ability to sift out relevant information (Eysenbach & Köhler, 2002). Moreover, it is unclear what
search terms most consumers use when looking for CQI, amplifying the need to further probe
common online search patterns. Other policy/practice suggestions to guide consumers towards
reliable online content include embedding common search terms within the content of websites
housing formal report cards and partnering with high-visibility online sites to have them embed
report card hyperlinks in conspicuous locations (Sick & Abraham, 2011).
Lack of Standardization in Production and Presentation of CQI
A somewhat distinct worry is the multiplicity of quality measures for the same clinical
problems, mirroring the considerable variation in measurement and presentation approaches of
CQI sponsors (Halasyamani & Davis, 2007; Austin et al., 2015). Such variations may
easily confuse users who, confronted with specific clinical needs, are saddled with the
unenviable task of making sense of conflicting quality signals embedded in disparate sources
(Rothberg et al., 2008; Rau, 2013). The proliferation of quality measures has fueled a growing
tendency to appeal to the authority of organizations with proven expertise in quality measurement,
such as the National Quality Forum (NQF) or the National Committee for Quality Assurance (NCQA).
Consequently, many CQI sponsors now seek to validate their metrics by using standardized
measurement and presentation approaches developed by these central agencies. To what extent
report card content has “homogenized” in terms of measurement and presentation approaches,
and what impact, if any, this may have had on consumer use, are important subjects for future
studies.
Provider-Initiated CQI Dissemination
Although the quality transparency movement has largely been viewed by the provider
community with considerable skepticism (Marshall et al., 2000; Robinowitz & Dudley, 2006), it
has also set up incentives for them to compete on objective and transparent quality metrics aside
from more generic factors such as reputation or clinical experience (Marshall et al., 2000).
There are increasing signs that many have taken this opportunity to advertise their relative
standings in quality (whether self-generated or drawn from other sources) to the general public
through a variety of marketing strategies. Most hospitals have their own Facebook pages and
websites, with a vast majority now offering user-driven star ratings (1-5 stars) drawn from
unsolicited consumer feedback (Glover et al., 2015). User-generated ratings have been shown to
be broadly indicative of both the actual quality measured by more objective metrics and the
firms’ growth in market share in non-healthcare sectors of the economy (Luca, 2011; Galloro, 2011);
this, increasingly, seems to be true of the healthcare sector as well (Lagu, 2010; Greaves et al.,
2014).
As for the more granular and broadly validated rating systems such as Hospital Compare
and Leapfrog, very little empirical data exists on how hospitals use them in their marketing
efforts (Muhlestein et al., 2013). Anecdotal media reports indicate a selective and self-serving
(e.g., touting “cherry-picked” favorable ratings while ignoring negative ones) use of public
reports by many regional providers (Ornstein, 2013; Rau, 2013). These concerns are deepened by
reports of major commercial and nonprofit raters (e.g., Healthgrades, U.S. News, and Leapfrog)
charging sizable license fees to providers for using their ratings in advertisement efforts (Rau,
2013). To date, providers seeking to market federal ratings selectively have not faced any hostile
regulatory scrutiny. Indeed, the American Hospital Association has even encouraged its members to
disseminate ratings from the Hospital Compare and Nursing Home Compare websites to
consumers, albeit with the caveat that they refrain from comparing themselves to their peers
(American Hospital Association, 2002). Other practices that have raised similar worries include
hospitals’ advertising their Emergency Department wait times via conspicuously sited billboards
and strategically directed television spots (Weiner, 2014). The spread of provider-initiated
marketing efforts has drawn minimal scrutiny from health services researchers, offering a
valuable opportunity to probe into whether and how these initiatives may have affected the
overall public policy goal of matching consumers to higher quality providers.
Role of Media in Dissemination of Report Cards
Dissemination by the CQI producer/sponsor may influence consumers indirectly through
media coverage of issues (Gerbner et al., 1982; Shanahan & Morgan, 1999; Gerbner et al., 2002)
related to comparative provider quality (e.g., articles covering press events sponsored by the
publication source, efforts to increase media reporters’ awareness of CQI and its importance). As
we discussed earlier, mass media campaigns have played a prominent role in the field of public
health, as attested by a large and growing literature focused on the efficacy of modern
communication channels (Rogers & Storey, 1987; Snyder, 2007). Further, many media outlets
may independently cover issues or events directly or indirectly related to CQI (e.g., safety record
of local hospitals, comparative performance of regional providers on key conditions relevant to
public health, sentinel events like major surgical mishaps). These issues may be covered by print,
television, or radio sources, but also increasingly by social media (Facebook, Twitter) and issue-
specific blogs affiliated with large media organs. For most media-driven campaigns and “push”
strategies, crucial decisions regarding the drafting of messages, developing logic models of
behavior change, specifying target population, and selecting optimal channels for message
delivery depend, in part, on answering a pivotal question: how do media messages shape public
opinion and attitudes?
Agenda-Setting Function of Media
Following McCombs and Shaw (1972), “agenda setting” has been defined as “the transfer of
issue salience from the news media to the public agenda” (McCombs et al., 2014). Although the
term “salience” has been used in differing ways in communication theory, political science, and
cognitive psychology, in the agenda-setting literature it specifically refers to the ability of news
media to raise the importance of specific issues in public opinion (Kiousis, 2004). Scholarship on
agenda-setting has identified two distinct dimensions of media salience: prominence and valence
(Kiousis, 2004). “Prominence” indicates the importance assigned to the story by its contextual
features (such as placement in the title or text, location on the front page, space devoted to the story, etc.),
often reflecting an active process of selection by the author. In the language of agenda-setting
theory, the concept of prominence captures the “first level” of agenda setting,
which is focused on “objects of attention” (e.g., personalities, issues, etc.) (Winter & Eyal, 1981;
Behr & Iyengar, 1985; Watt et al., 1993). The other important dimension of salience is the
concept of “valence”, a measure of affective or emotional aspects of a news story that determines
its normative “framing” (positive, negative, or neutral) of the objects of the story (“attributes” of
objects or the “second level” of agenda setting) (McCombs et al., 1997; Lopez-Escobar et al.,
1998). In the context of media coverage of CQI, we expect that the
prominence (issue agenda-setting) will act primarily to raise media consumers’ awareness of
CQI, whereas the valence dimension (attribute agenda-setting) will have its primary impact on
the consumer attitudes towards CQI (e.g., perceived importance of CQI in healthcare decision-
making).
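The prominence and valence dimensions can, in principle, be operationalized as simple per-article codes. The sketch below is purely illustrative; the features, weights, and function names are our own assumptions for demonstration, not a validated coding instrument from the agenda-setting literature:

```python
# Illustrative sketch: scoring a news item on the two salience
# dimensions discussed above. Features and weights are hypothetical
# assumptions chosen for demonstration only.

def prominence_score(front_page: bool, in_title: bool, word_count: int) -> float:
    """First-level salience: importance implied by contextual features."""
    score = 0.0
    if front_page:
        score += 2.0                        # front-page placement weighs heavily
    if in_title:
        score += 1.0                        # issue named in the headline
    score += min(word_count / 500.0, 2.0)   # space devoted to the story, capped
    return score

def valence_code(frame: str) -> int:
    """Second-level salience: normative framing of the story's objects."""
    return {"positive": 1, "neutral": 0, "negative": -1}[frame]

articles = [
    {"front_page": True,  "in_title": True,  "word_count": 1000, "frame": "negative"},
    {"front_page": False, "in_title": False, "word_count": 250,  "frame": "neutral"},
]

for a in articles:
    p = prominence_score(a["front_page"], a["in_title"], a["word_count"])
    v = valence_code(a["frame"])
    print(f"prominence={p:.1f}, valence={v:+d}")  # e.g. prominence=5.0, valence=-1
```

A real content analysis would, of course, derive such codes from trained raters and test inter-rater reliability; the point here is only that prominence aggregates contextual features while valence is a separate, signed attribute of the framing.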
Past Literature on Media Coverage of Report Cards
Somewhat surprisingly, media coverage of report cards has attracted very little attention from
empirical researchers. Indeed, we were able to identify just two studies that have examined the
issue to date. In the first study, published nearly twenty years ago, Mennemeyer et al. (1997)
explored the coverage of hospital CQI released by the Health Care Financing Administration
(HCFA) in the late eighties. The authors used a well-known archival database of print newspapers to
study the content and framing of items focused on quality rankings of local hospitals, concluding
that media coverage of the releases was so sparse as to yield null effects on hospital
market shares in their regression models. One interesting finding was that media discussion of
salient events at a hospital dramatically reduced its market share. In a more recent study, Higashi
et al. (2012) probed media coverage of the public release of unadjusted cancer survival rates of
local hospitals in five major Japanese newspapers that published a total of 13 news items
following the release. Although the authors did not comment on the intensity of news coverage, their
results appear to substantiate Mennemeyer et al.’s conclusions regarding the thinness of coverage of
report cards in the popular press.
Future Avenues of Research
Capturing media coverage raises a host of difficult measurement issues related to the
“conceptual scope” of media, geographical “reach” of individual media channels, as well as
paucity of data sources for major media channels. For instance, the interpersonal and consumer-
driven nature of dialogue on major social media platforms raises questions about whether it can
be counted as a true media organ (Hirsch & Silverstone, 2003; Kwak et al., 2010). Its hybrid
status as a major source of information about current events in lives of millions and a personal
communication medium poses challenges to any empirical attempts to quantify its information
content (Mangold & Faulds, 2009). The advent of the Internet has further complicated the
measurement of consumer response to “local” media coverage (Althaus & Tewksbury, 2000;
Jeffres et al., 2012). Most print newspapers now have an online version making them instantly
accessible all over the world, raising difficult questions about the extent to which their content
can truly be considered “local”. Relatedly, despite significant expansion of work attempting to
quantify television coverage of health issues, the absence of transcripts of local television broadcasts
remains a major limitation (Long et al., 2005). Beyond these issues, the existing literature lacks a
systematic effort to develop a comprehensive set of content themes and valence frames
applicable to media articles on report cards. The pair of studies that investigated media coverage
of public reports (noted above) did not dwell extensively on which content themes received more
attention from the reporter, which specific valence frames were applied to the content, and how
such framing may have affected readers’ knowledge and behavior. A systematic analysis of the
aforementioned issues using the diverse spectrum of contemporary news media, with the overall
goal of assessing its impact on consumer propensity to use CQI or consumer matching with more
efficient providers, may significantly advance our knowledge of public reporting.
Consumers’ “pull” of CQI
Rather than as passive responders to the organizational “push” of quality reports, consumers can be
viewed as active agents, constantly responding to ongoing needs that trigger efforts to pull
information from sources around them. Although most people still consult their family members
or friends or known health professionals for recommendations on choosing providers, nearly a
third say they would look for information online, in a newspaper or a magazine, or ask their
health plan for quality information (Kaiser Family Foundation, 2000). Owing to the widespread
access to the Internet, online search for health information, in particular, has rapidly become a
focus of intense interest amongst health services researchers. The portrait that emerges from this
body of work illuminates the powerful role of the Internet: consumers, among other things, use
the Internet to self-diagnose their conditions (Fox & Duggan, 2013), prepare for clinical
encounters (Anderson et al., 2003; Otte-Trojel, 2014), self-treat minor ailments (Fox & Duggan,
2013), access their health records (Otte-Trojel, 2014), schedule appointments (Eysenbach &
Jadad, 2001), exchange emails with providers (Bhandari et al., 2014), chat with patients having
similar clinical conditions (Ziebland & Wyke, 2012), and search for provider quality information
(Kaiser Family Foundation, 2000).
Only a handful of studies have looked at the actual stepwise process by which consumers
retrieve and evaluate health information about CQI from the World Wide Web. An important
early study examined consumer search patterns using focus groups and related qualitative
techniques (Eysenbach & Köhler, 2002); a later study attempted to simulate a real-world
online search for CQI using terms expected to be used by consumers (Sick & Abraham, 2011).
Together, they provide valuable insights into the “mechanics” of the consumer pull. Most study
subjects were likely to use online search engines rather than medical or professional healthcare
sites, use search terms composed of single words rather than combinations, and use the first
results from the output to rephrase the search terms rather than examine later results. Credibility
assessment was perfunctory and rested on professional looking layout, scientific terms and
citations, ease of use, and familiarity with official sources; very few research participants
attempted to verify the actual source of information. Most online searches led to private
websites with anecdotal, unsolicited patient experience reviews, while government or community
websites that had quality comparisons along multiple clinical dimensions were harder to find. A
general implication of these studies is that for an average consumer, “findability” of CQI is low
while the credibility of information is hard to assess. From a policy standpoint, such findings tend to support
attempts by report producers to make their products more easily accessible and efforts to educate
consumers about the promise and the limitations of the Internet. These preliminary insights,
however, leave significant gaps in our understanding of how the general public searches for
provider quality information. For one, small sample sizes limit generalizability of findings; it
may be helpful to explore online experiences of a more representative sample of consumers,
preferably those who acknowledge looking for or using report cards. Also, a host of socio-
demographic (e.g., low income status, literacy, numeracy) and health-related factors (e.g.,
chronic debilitating illness, disability) may condition consumers’ approaches and success in
finding the desired information, and therefore warrant a closer scrutiny than current literature
permits.
Updating Effect of CQI Dissemination and Consumer Search
Mounting evidence supports the notion that the effect of information on consumer choices is
conditioned by consumers’ prior beliefs about the “state of the world” (Ackerberg, 2003;
Crawford and Shum, 2005). Increasingly, while examining the impact of provider quality
information on consumer beliefs and choices, researchers have begun to explore the impact of
“new” information in quality reports (Erdem and Keane, 1996; Mukamel et al., 2005; Mukamel
et al., 2007; Chernew et al., 2008; Jung et al., 2011). We borrow the analogy of such Bayesian
learning and apply it to a different context: describing the effect of CQI dissemination on
consumers’ awareness of, attitudes towards, and use of CQI. We expect that prior to
their exposure to the more systematic and comprehensive quality information contained in the
“formal” quality report cards, consumers’ awareness of any form of CQI is low and stems from
exposure to certain informal sources of provider quality information; these sources typically
include recommendation of friends and family members drawn from personal experience,
hearsay, and/or Internet sites that rate patient experience (e.g., WebMD). These experiences
generate a set of attitudes characterized by consumers assigning low importance to CQI in choice
of healthcare providers, deeming most physicians as providing roughly equal quality of
healthcare, and being reluctant to switch doctors based on healthcare quality. This set of
“priors”, in turn, yields low odds of consumers using quality information to inform their choices
of physicians. The organizational “push” of CQI towards consumers, along with consumers’
propensity to “pull” CQI, act on these consumer “priors” to “update” consumers’ awareness of,
attitudes towards, and use of CQI, yielding higher awareness, more favorable attitudes towards,
and consequently, higher likelihood of use of report cards.
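The updating process just described can be rendered schematically in Bayesian terms. The notation below is our own stylized illustration, not an equation drawn from the cited studies: let $\theta$ denote a consumer's belief about quality differentials among providers, and let $s$ denote a CQI signal received through organizational push or consumer pull.

```latex
\[
  \underbrace{p(\theta \mid s)}_{\text{updated (posterior) belief}}
  \;\propto\;
  \underbrace{p(s \mid \theta)}_{\text{strength/credibility of the CQI signal}}
  \;\times\;
  \underbrace{p(\theta)}_{\text{prior from informal sources}}
\]
```

On this reading, a stronger or more credible signal shifts the posterior further away from the prior formed by informal sources, raising awareness of and receptivity to CQI.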
Furthermore, we hypothesize that the updating effect of organizational push and consumer
pull of CQI will vary based on the consumers’ state of awareness (or lack thereof) at baseline.
Specifically, dissemination will increase the likelihood of a consumer becoming aware of CQI if
they are unaware of CQI prior to dissemination (“gaining awareness”) and, conversely, decrease
the likelihood of becoming unaware of CQI (“losing awareness”) if they are already aware of
CQI. Similarly, dissemination of CQI will increase the likelihood of “starting” to perceive
quality reports as important in choosing providers, “starting” to acknowledge quality differentials
among doctors in the region, “becoming” willing to switch doctors on grounds of provider
quality differences, and “starting” use of quality reports among the consumers who are unaware
of CQI. Conversely, among those already exposed to CQI, dissemination will decrease the
likelihood of “losing” belief in its importance, “losing” perception of quality differentials among
doctors in the region, “losing” willingness to switch providers based on quality differentials, and
“stopping” use of CQI.
Chapter 3
Research Questions
1. How much formal comparative quality information is available to consumers? How
much of it is applicable to their clinical conditions? To what extent is the information
credible?
2. Do availability, applicability, and credibility of CQI vary across AF4Q regions?
3. Do greater availability, applicability, and credibility of CQI lead to its higher awareness
and use or to more favorable attitudes towards it?
4. What strategies do AF4Q alliances use to disseminate quality report cards?
5. Do AF4Q sites vary in type and intensity of report card marketing efforts at a point in
time and over time?
6. Does more intense report card dissemination generate gains in consumer awareness and
use of CQI or produce more favorable consumer attitudes towards CQI?
7. How and to what extent do local print media cover quality report cards?
8. How does media coverage of public reporting affect consumer awareness and use of
CQI?
9. How does “framing” of report card media coverage affect media’s impact on consumer
awareness, use, and attitudes towards CQI?
Chapter 4
Methods
Study Design
Our study consisted of two parts: a descriptive component that focused on regional
availability, applicability, and credibility of CQI, alliance strategies to disseminate quality report
cards (describing the spectrum of CQI dissemination approaches), and print media coverage of
CQI during the two study periods; and a quantitative component that consisted of a longitudinal
(two-period panel) analysis using fixed-effects regression methods to explore the relationship
between these key predictors and a set of consumer outcomes, including consumer awareness of
CQI, their attitudes towards CQI, and their actual use of CQI in decision-making related to their
health care.
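As a stylized sketch in our own notation, rather than the study's exact estimating equation, a two-period panel fixed-effects specification takes the form:

```latex
\[
  y_{it} \;=\; \beta' x_{it} \;+\; \alpha_i \;+\; \gamma_t \;+\; \varepsilon_{it},
  \qquad t \in \{1, 2\},
\]
\[
  \Delta y_i \;=\; \beta' \Delta x_i \;+\; \Delta\gamma \;+\; \Delta\varepsilon_i ,
\]
```

where $y_{it}$ is a consumer outcome (awareness of, attitude towards, or use of CQI), $x_{it}$ collects the key predictors (e.g., availability, applicability, credibility, dissemination intensity, media coverage), and $\alpha_i$ is a time-invariant unit effect. Differencing across the two periods removes $\alpha_i$, so identification rests on within-unit change between the two waves.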
Data Source(s)
The AF4Q Community Quality Reporting Tracking Database (AF4QTD) regularly tracks and
records the number of quality reports released to the public in AF4Q communities by a variety of
public and private organizations including health plans, Medicaid, state government, the federal
government, and private non-profit organizations. AF4Q research staff constantly updates this
information by reviewing websites and collecting additional data by periodically interviewing
key informants in the AF4Q alliances about their public reporting activities. We collected
information on a number of distinct domains that included name and type of sponsoring
organization (government, coalition, hospital association, etc.), name of report, geographic
coverage area (local, state, or national), reporting unit (physician, hospital, health plan), source(s)
of scores (if reprinted from another public report, like Leapfrog or Hospital Compare) or
measures (CMS, AHRQ, AQA, HEDIS, H-CAHPS), source of data (administrative, medical
record, or survey), number of providers included in the report, year of publication, website URL
link, recipient type (health plan members, general public, physicians, etc.), form of distribution
(web-only or hard copy), and year of data collection.
Site tracking reports include information on a variety of topics related to public reporting
activities of the alliances, including number of iterations of web-based quality reports issued,
public reporting dissemination partners, budget, public release plans, alliance website traffic
information, etc. Our site tracking databases culled information from project-related documents,
including community funding proposals, websites of alliance or community partners,
strategic planning papers, agenda and minutes of staff meetings, alliance feedback reports to the
AF4Q National Program Office and the Robert Wood Johnson Foundation, and media stories on
alliance public reporting activities.
Alliance public reporting summaries and public reporting timelines provide regularly
updated information on all the alliance activities related to generating, posting, and disseminating
quality report cards, along with a detailed record of important dates pertaining to such activities.
Key informant interview transcripts record insights gained from an intensive process of
in-depth, open-ended interviewing with alliance personnel chosen for their deep knowledge of
AF4Q program activities (both in staff and leadership positions). These conversations were held
during periodic site visits by the AF4Q evaluation team and covered a wide range of topics,
including participants’ views of the alliance’s progress and barriers in each of the AF4Q program
areas. The data from the audio recordings was transcribed and saved in text files, which were
thoroughly read and assigned a set of content codes in accordance with pre-established coding
37
guidelines and definitions. The coded data was then entered into Atlas.ti,, a qualitative analyses
software that allows sorting and querying of data by specific content codes
Access World News (NewsBank) and LexisNexis Academic are two large newspaper
databases that hold a systematic collection of current and archival newspaper articles in regional,
national and international news sources, with instant retrieval systems by location and time
frame. These databases allow searches by keywords and region (for Access World News this is
usually a state or a city/town within the United States, whereas for LexisNexis the smallest region
over which the search may be carried out is a state) of all newspapers published in the region, and
contain options for organizing the search results by relevance or time of publication. Additional
functionalities allow narrowing of the search to specific types of media (e.g., newspapers, blogs,
magazines, newswires, television broadcast transcripts, videos), specific publications (e.g.,
New York Times), subject, and language. The full text of the newspaper articles is displayed
with byline, dateline, length of article, and section of newspaper (front page, sports section, etc.)
that carried the print article. Both databases allow saving of search results and generation of print
versions of Word or PDF documents for the saved results.
Aligning Forces for Quality Consumer Survey (AF4QCS) is a random-digit-dial (RDD)
survey initially conducted between June 2007 and August 2008 among chronically ill adults (18 or
older) in the 14 AF4Q regions. To be eligible for the survey, the respondents had to have at least
one of the following five chronic conditions: diabetes, hypertension, asthma, heart disease, and
depression. The same respondents were resurveyed in the second wave between July 2011 and
November 2012, along with additional RDD respondents to account for attrition and
demographic change. The first round of the survey yielded a response rate of 27.6% by the
American Association for Public Opinion Research (AAPOR) method and 45.8% by the Council of
American Survey Research Organizations (CASRO) method. The panel response rate in the
second wave was 63.3%, yielding an overall response rate of 39.7% by the AAPOR method and
42.1% by the CASRO method. While our response rates are comparable to most other large national telephone
surveys and reflect a continuing trend of declining response rates over the last two decades, we
attempted to validate the demographic profile of our respondents by benchmarking it against
face-to-face interview surveys that are considered a “gold standard” with respect to survey
methodology. To do this, we compared the AF4QCS respondents to respondents of the 2008 and 2011 National
Health Interview Survey (which has a nearly 90% response rate) and found no
significant differences in the demographic composition and prevalence of chronic illness
between the two surveys.
Commercial health plan enrollment data from HealthLeaders InterStudy Health Plan
Enrollment Database (InterStudy) was used to compute the percentage of local population that
had access to health-plan sponsored provider quality information; this dataset has been widely
used in prior literature to calculate commercial health plan penetration rates (Adams and Herring,
2008). Following previous studies, the Dartmouth data were used to construct county-level
estimates of physician supply per 1000 residents (Lewis et al., 2013).
Study Sample
Our analytic sample consisted of 4235 chronically ill adults (18 or older) in the 14 AF4Q
regions who acknowledged having consulted a physician at least once in the past 24 months for
treatment of one or more of the following conditions: diabetes, hypertension, asthma, heart
disease, and depression.
Timeline of Measurement
The timeline for measurement of dissemination (independent variable) and consumer
outcomes (dependent variable) is illustrated in Figure 4-1. Dissemination efforts of alliances as
well as media coverage of CQI were measured for the period of one year immediately preceding
the administration of each round of the consumer survey, i.e., June 1, 2006 to June 1, 2007 (first period)
and June 1, 2010 to June 1, 2011 (second period). This was done to ensure all consumers who
were administered the survey, which was staggered over a period of roughly 12 months at both
rounds, were exposed to the factors measured by our key independent variables. The consumer
outcomes are drawn from AF4QCS, which was administered at two time periods following these
time frames, as described earlier in the data sources section.
Figure 4-1 Timeline Of Measurement of Independent (Blue Double Arrows) and Dependent Variables
(Red Double Arrows)
Dependent Variables
Our principal dependent variable, which captures consumer awareness of comparative
quality information, is generated from the following two survey items: “Did you see any
information comparing the quality among different doctors in the past 12 months?” and “Did
you see any information comparing the quality among different hospitals in the past 12
months?” The resulting binary variable is assigned a value of 1 if the respondent replied yes to
any or both questions and 0 if the reply was no to both items.
We captured three distinct consumer attitudes towards CQI with different sets of survey
items. First, we asked respondents about the importance they assigned to CQI (perceived
importance of CQI in choosing doctors) if they had to choose a doctor to treat their condition,
and recorded their responses on a Likert scale. The three items used to measure this
variable were prefaced by a common query “The next time you choose a doctor to treat your
condition(s), how important might you consider” followed by, in turn, (1) a report that shows
which doctors follow recommended approaches to treat your chronic condition(s), (2) for people
with chronic conditions similar to yours, a report that shows the outcomes for
patients treated by different doctors, and (3) a report that compares how satisfied other patients
are with their doctor or medical group. If the respondent characterized any of the three types of
reports as “very important” or “important” in choosing doctors, we coded the variable as 1; else
we coded it as 0.
Second, we examined the extent to which consumers perceive providers in their community
to be different with regard to health care quality (acknowledgement of provider quality
differentials) by asking them to agree or disagree with the following categorical statement:
“Doctors in my community are all pretty much the same in terms of the quality of the care they
provide”. If consumers were in agreement or strong agreement with the statement we coded it as
0, and coded it as 1 if they strongly disagreed or disagreed.
Finally, we assessed consumers’ willingness to switch physicians based on differences in
quality by asking them to react to the following statement of intent: “I would consider going to a
different doctor than the one I normally see if the new physician's quality was higher and my
costs were about the same.” We coded this variable as 1 if the respondent agreed or strongly agreed
with the statement and 0 otherwise.
Our survey allowed us to codify two distinct types of use of report cards by consumers: to
make decisions about providers (“Did you personally use the information you saw comparing
quality among doctors in making any decisions about doctors?” and “Did you personally use the
information you saw comparing quality among hospitals in making any decisions about
hospitals?”; yes to either=1, no to both=0) and to discuss the report with their doctor (“Did you talk
with your doctor about the report?”; yes=1, no=0).
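The binary coding rules above reduce to a simple “yes to either item” disjunction, applied identically to the awareness and use outcomes. A minimal sketch (the function name is illustrative, not from the study's codebase):

```python
def code_any_yes(saw_doctor_info: bool, saw_hospital_info: bool) -> int:
    """Binary outcome coding used for both awareness and use:
    1 if the respondent answered yes to either item, 0 if no to both."""
    return int(saw_doctor_info or saw_hospital_info)
```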
Independent Variables
CQI Availability
Availability of CQI was measured for each consumer by the number of publicly available
physician and hospital quality reports in the alliance region in which the consumer resided from
June 1, 2006 to June 1, 2007 (first period) and from June 1, 2010 to June 1, 2011 (second
period). Notably, reports varied considerably in completeness of coverage of physicians or
hospitals that supply their services to local residents in the specific reporting region, and whether
information was provided in aggregated form (i.e., for a group of physicians) or for individual
physicians. Following the approach of an earlier study (Scanlon et al., 2015), we excluded
reports that had information on a very narrow or small group of physicians but included national
(i.e., reports published by national public and private organizations that are available in all
regions), local, and regional reports that were available to the general public (including those
produced by health plans) without a secure log-in or password.
CQI Applicability
Applicability of CQI was measured for each consumer by counting regional publicly
available physician and hospital quality reports that had at least one measure related to that
consumers’ chronic condition(s). For instance, if a hypothetical region had just two report cards
available, first having (at least one) measure(s) of diabetes and hypertension and the other having
(at least one) measure for elective hip surgery, a consumer living in that region was assigned an
applicability score of 1 if he had diabetes, 1 if he had hypertension, 2 if he had both, and 0 if he
had neither. Note that report availability would be 2 for each consumer irrespective of their
clinical condition. Hence, our measure of availability varied only at the alliance-region level, while
applicability varied based on both the kind of reports available and the type of clinical condition
from which the consumer suffered.
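The worked example implies that applicability counts report-condition matches: each regional report contributes one point for every consumer condition it carries a measure for. A minimal sketch of this reading (the data layout is hypothetical):

```python
def applicability_score(consumer_conditions, reports):
    """Count report-condition matches: one point for each of the
    consumer's chronic conditions covered by each regional report.
    Each report is represented as the set of conditions it measures."""
    return sum(
        1
        for condition in consumer_conditions
        for measures in reports
        if condition in measures
    )

# The two hypothetical report cards from the example above.
reports = [{"diabetes", "hypertension"}, {"elective hip surgery"}]
```

This reproduces the example: a score of 1 for diabetes alone, 1 for hypertension alone, 2 for both, and 0 for neither.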
CQI Credibility
Credibility of CQI was measured for each consumer by counting publicly available physician
and hospital quality reports in the consumer’s alliance region that had all of the following three
attributes: (1) were produced/sponsored by non-profit agencies and/or governmental entities (as
opposed to health plans), (2) were constructed from medical records data or patient experience
surveys (as opposed to claims data or data from provider surveys), and (3) reported quality
measures endorsed by reputable national organizations (e.g., the National Quality Forum). This
measurement strategy is inspired by Christianson et al.'s (2010) categorization of the credibility of
report cards, based on empirical evidence on consumer and provider trust in sources of provider
quality information. Provider trust may be relevant to consumers' use of CQI since many consult
their regular physicians for guidance on choice of other providers (e.g., specialists) and providers
often provide referrals based on quality information in public reports.
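Operationally, the credibility score is a count of regional reports satisfying all three attributes. A sketch with hypothetical boolean fields on each report record:

```python
def credibility_score(reports):
    """Count regional reports meeting all three credibility attributes:
    non-profit/government sponsorship, medical-record or patient-survey
    data, and nationally endorsed measures (field names are hypothetical)."""
    return sum(
        1
        for r in reports
        if r["nonprofit_or_govt_sponsor"]
        and r["clinical_or_patient_survey_data"]
        and r["nationally_endorsed_measures"]
    )
```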
Alliance Proactive Dissemination
We explored the role of producer-driven dissemination strategies by distilling the broad
spectrum of alliance approaches into an ordinal “Alliance Proactive Dissemination Score” for
the two study periods (June 2006–June 2007 and June 2010–June 2011) using a two-step
process. In the first step, three data sources (public reporting summaries, public reporting
timelines, and KII transcripts) were thoroughly reviewed to identify major strategies used by
alliances to disseminate their public reports. Interview transcripts were scanned using querying
software to identify excerpts that were tagged with the code “dissemination”; the resulting output
was printed out and manually read to isolate specific content that provided information about
CQI dissemination strategies. Public reporting timelines were consulted for important landmark
events (e.g., online posting of report) and public reporting summaries were probed to identify
and crosscheck information gained from the other sources. An example of the review process is
provided in Table A-1 which illustrates data analyses performed on public reporting summaries
and KII transcripts for two alliances. In the second column are excerpts culled from text in PR
summaries/KII transcripts that describe alliance dissemination activities. The third column
indicates the potential categories (drawn from review of summary excerpts and other supporting
data) along which dissemination strategies were classified (“categories” of alliance
dissemination).
We identified eight distinct categories at the culmination of this process and, in the second
step, assigned a binary score to each individual alliance based on whether a given strategy was
adopted during the time period of interest. An overall score was calculated for each alliance by
summing scores for each individual category. The eight categories were, respectively: whether
the alliance posted its quality report on a website; whether the report was published in a
non-English language; whether the online report was updated at least once within the period of
measurement; whether the alliance published the report in consumer-focused magazines; whether
the alliance issued press releases to media outlets about the quality report; whether the alliance
collaborated with community-based organizations/stakeholders to disseminate the report; whether
the alliance hired a special public relations/communications expert to aid dissemination; and, lastly,
whether the alliance conducted original consumer research to inform its marketing/dissemination strategies.
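The two-step scoring described above can be sketched as follows (category labels are shorthand, not the study's exact variable names):

```python
# The eight dissemination categories identified in step one.
CATEGORIES = [
    "report_posted_on_website",
    "non_english_report",
    "online_report_updated",
    "consumer_magazine_publication",
    "press_releases_issued",
    "community_partner_collaboration",
    "pr_expert_hired",
    "original_consumer_research",
]

def proactive_dissemination_score(strategies_adopted):
    """Step two: ordinal 0-8 score, one point per distinct strategy
    adopted by the alliance during the measurement period."""
    return sum(1 for c in CATEGORIES if c in strategies_adopted)
```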
Media Coverage of CQI
This is one of the first studies to examine media coverage of report cards. At the outset, we
faced an important constraint: data on local non-print media sources (television, radio) was
fragmentary or unavailable. Therefore, we chose to focus on print media. Three distinct types of
regional print media coverage scores were generated: media coverage of alliance issued public
report (alliance-sponsored CQI), media coverage of public reports issued by Centers for
Medicare & Medicaid Services (CMS-sponsored CQI), and media coverage of public reports
issued by an agency other than an alliance or CMS e.g., Leapfrog group (“general” or non-
alliance, non-CMS CQI). These choices implemented an important study aim: to explore
possible differences in intensity and nature of press coverage of distinct type of report cards. The
categorization echoes our expectation that certain types of CQI may not only receive coverage of
differing intensity but that it may have distinct impacts on consumers. For instance, CMS-
sponsored websites (such as Hospital Compare and Nursing Home Compare) are possibly the
most well-known formal sources of CQI and provide information on virtually all
hospitals/nursing homes in the nation. It is plausible that press coverage of Hospital Compare
may have a stronger impact on consumer use than similar coverage of a less known CQI source.
Similarly, we expected alliance-sponsored reports to be extensively covered in local media
owing to special efforts made by alliances to comply with the AF4Q mandate to actively
disseminate.
In addition to measuring coverage of report cards, we also evaluated coverage of patient
safety issues in the print press. This strategy was motivated by the notion that local media
coverage of patient safety issues is likely to be strongly correlated with consumers’ likelihood of
searching for quality information as well as the regional “supply” of CQI. In other words, if
consumers are more aware of safety problems with local healthcare providers (owing to media
discussion) they may be more inclined to use CQI; CQI producers, in turn, may be more
motivated to provide quality differentials in areas with especially high incidence of events
compromising patient safety.
Selection of Local Print Media
We used Access World News as our primary database and LexisNexis for some regions such
as Western New York where search results were not available owing to technical issues (e.g.,
output failed to display on repeated attempts). Each AF4Q county was linked to its towns/cities
using updated Census Bureau information. The resulting list of cities/places was used to conduct
searches for media articles published within the “catchment area” of specific alliances. Table A-
2 displays a listing of local newspapers for all 14 alliance regions. The list covers all the
newspapers published in the alliance region for which archival data was available and included
106 local newspapers in 2007 and 114 in 2011. Owing to varying size of AF4Q regions and
differing historical newspaper density, there is substantial variation in the number of newspapers
published in a given area. For instance, the Humboldt County alliance, despite covering only one
county, had three newspapers, whereas a large city like Memphis had just two. News sources that
were web-only, blogs, magazines, and transcripts of radio/TV stations were excluded from the
search.
Selection of Search Keywords
In order to avoid missing relevant articles, we tested alternative keywords for general
searches that were not linked to a specific website name. As an example, we tested two keywords
(“Quality AND Physicians” and “Ranking AND Physicians”) relevant to general CQI for Maine
for the period June 1, 2010–June 1, 2011. The search “Quality AND Physicians” yielded seven
relevant items, whereas “Ranking AND Physicians” yielded just three and was, therefore, rejected in
favor of the former. Table A-3 provides the list of all four types of media coverage with their
corresponding keywords. Note that for some types of media coverage multiple keywords were
used to avoid the possibility of missing articles relevant to the query. For instance, general CQI
may consist of quality report cards about hospitals or individual physicians. Therefore, it was
deemed appropriate to include keywords that could extract media articles focusing on both types
of CQI.
Selection, Content-Coding, and Weighting of Articles
Each print media article retrieved from the search was screened for relevance, coded for
content, and then assigned weights that captured its likelihood of impact on consumer outcomes.
This three-step process is illustrated in Figures 4-2 and 4-3.
Figure 4-2 Flowchart Depicting Selection For Relevance, Content Coding, And Weighting Of Media Articles On
CQI
[Flowchart: keyword search for CQI (alliance website name; “Quality AND Physicians”; “Quality AND hospitals”; Hospital Compare, Nursing Home Compare, HCAHPS, Home Health Compare) → shortlist initial set of results → select for full-text review by scanning title and abstract, excluding articles unrelated to CQI → apply content coding scheme (discussion of importance of health quality transparency; discussion of variation in quality across health providers; web linkage to a CQI source; direct comparisons between providers) → apply prominence weights (location of keyword in title, location of article, word length) and valence weights to each coded article]
Figure 4-3 Flowchart Depicting Selection For Relevance, Content Coding, And Weighting Of Media Articles on
Patient Safety
[Flowchart: keyword search for patient safety (“Patients AND Safety”; “Medical Errors”; “Medical Malpractice”) → shortlist initial set of results → select for full-text review by scanning title and abstract, excluding articles unrelated to patient safety → apply content coding scheme (discussion of patient safety practices of healthcare providers; sentinel events) → apply prominence weights (location of keyword in title, location of article, word length) and valence weights to each coded article]
Selection for Relevance and Full Text Review
The first step consisted of selecting articles relevant to the broad theme of CQI using the
corresponding keywords. The process of selection was guided by an algorithm that laid out a
stepwise approach with a sequential set of decision rules (Figure A-1). In determining relevance
of an article, a conservative approach was taken: if there was any doubt, the article
was included in the shortlist for a full-text review. A similar process was performed for articles
on patient safety.
Applying Content Codes
The content of each CQI-focused article was read thoroughly and assigned one or more of
four distinct content categories: discussion of health quality transparency/disclosure, discussion
of variation in quality across health providers, a web link to a CQI source, and direct
comparisons between providers in terms of quality, cost, or efficiency of
healthcare delivery. Code selection was guided by a process of expert review and validation after
extensive discussion among members of the dissertation committee, and reflected emphasis on major
themes of policy discussion with respect to provider quality transparency. Coding assignment
was guided by clear definitions of each coding category with accompanying description of key
terms for each definition (Table A-4). Following a parallel process, articles related to patient
safety were assigned one or more of two content categories: discussion of issues related to
patient safety in healthcare delivery and discussion of “sentinel” events (egregious medical errors
by healthcare providers) (Table A-5).
Weighting the Coded Articles
While the coding scheme provides a means for capturing the major themes related to
CQI/patient safety that are expected to affect consumers’ behavior, the impact of media coverage may
also depend on the way a news item “frames” the issues discussed. Drawing from a theoretical
framework developed by communication theorists to describe this agenda-setting function of
the media (described above in the conceptual framework section), we used a two-pronged
scheme for weighting articles that reflects their likelihood of capturing consumers’ attention
and molding attitudes. To do this we separately assigned valence weights and prominence weights
to each content-coded article.
“Prominence” of a media item was captured by assigning weights to three elements that
determined the conspicuousness of the story's location within the newspaper: whether the text of
the article title included the keyword (1 if yes, 0.5 if no), location of article in the paper (1 if on
front page, 0.5 otherwise), and space devoted to story (1 if word-length exceeded 500, else 0.5).
“Valence” refers to the normative frame applied by the author to the topic. Valence weights were
assigned to individual content codes. The weighting scheme was motivated by the overall notion
that a news story framed to offer an exclusively positive view of the importance or desirability of
provider quality transparency, and/or of the importance of consumers being aware of provider
quality differentials, would plausibly have a greater impact on consumers' odds of being aware of
and using CQI. Conversely, an exclusively negative view of the functioning of healthcare system
or providers (e.g., compromised patient safety or egregious medical errors) may spur more
consumers to use CQI or be vigilant about provider quality differentials. Note that some coding
categories are unsuitable for assignment of valence weights because they describe a factual
scenario or an inherently negative event. For instance, we did not assign any valence weight to
discussion of web-links for CQI sources, or to text that made quality comparisons between local
providers. Similarly, we did not assign valence weight to the code for sentinel events since these
events are by definition negative. To assign valence weights to each eligible content code, we
developed stepwise algorithms displayed in Figure A-2 (discussion of quality transparency),
Figure A-3 (discussion of provider quality variation), and Figure A-4 (discussion of patient
safety practices of providers). Each algorithm embodies a sequential set of decision rules at the
end of which each eligible code can be assigned either a positive (weight=1), a negative
(weight=0), or a neutral valence (weight=0.5). Table A-6 provides a few illustrative examples of
the text of media articles to which the content coding and valence weighting scheme has been
applied.
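The two weighting schemes can be sketched as follows (the binary 1/0.5 prominence elements and the 1/0.5/0 valence values follow the description above; names are illustrative):

```python
def prominence_weights(keyword_in_title, on_front_page, word_count):
    """Three prominence elements, each weighted 1 or 0.5."""
    return {
        "title": 1.0 if keyword_in_title else 0.5,
        "location": 1.0 if on_front_page else 0.5,
        "length": 1.0 if word_count > 500 else 0.5,
    }

# Valence weights for eligible content codes; codes describing factual
# content (web links, direct comparisons) or inherently negative events
# (sentinel events) are not valence-weighted.
VALENCE = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}
```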
Generation of Media Coverage Scores
Media coverage scores were estimated separately for alliance-sponsored, CMS-sponsored,
and general CQI articles, and for patient safety, in two steps. In the first step, a normalized weighted and
unweighted score was calculated for each article. Table A-7 gives three examples of such a
hypothetical calculation. In the first example, the top two rows show an article that has been
coded for content related to CQI (article 1) by assigning all four code categories (indicated by
letters A, B, C, and D; see key at the bottom of table) to its text, where each content code
receives a score of 1. Without valence or prominence weights, the unweighted normalized score
(top row) will be calculated by summing up the four scores and dividing by the total number of
possible content codes (i.e., 4), resulting in a score of 1. On the other hand, when we assign
valence weights to the two eligible content codes (indicated by A and B; see key at the bottom of
table) and three prominence weights (indicated by letters G, H, I; see key at the bottom of table)
to the article, we get a normalized weighted score of 1 (i.e., a total score of 7 divided by the highest
possible score of 7). Note that the scores for valence-weight-ineligible codes (C and D; see key
at the bottom of table) enter in the final calculation without being down-weighted. In the second
step, a cumulative media coverage score is computed for each alliance by summing up
normalized scores across individual articles within specific media coverage categories. Within
each alliance, these scores are calculated separately for each of the three types of report cards
and patient safety.
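The two-step calculation illustrated in Table A-7 can be sketched as below, assuming four possible content codes for CQI articles (two for patient safety) and three prominence elements:

```python
def unweighted_article_score(codes_present, n_possible_codes):
    """Unweighted normalized score: number of content codes assigned
    divided by the total number of possible codes."""
    return len(codes_present) / n_possible_codes

def weighted_article_score(code_scores, prominence, n_possible_codes):
    """Weighted normalized score: valence-weighted scores for eligible
    codes (ineligible codes enter as 1) plus the three prominence
    weights, divided by the highest possible total."""
    total = sum(code_scores.values()) + sum(prominence.values())
    return total / (n_possible_codes + 3)

def alliance_coverage_score(article_scores):
    """Step two: cumulative media coverage score for an alliance is the
    sum of normalized article scores within a coverage category."""
    return sum(article_scores)

# Worked example from Table A-7 (article 1): all four codes assigned,
# full valence and prominence weights.
score = weighted_article_score(
    {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0},
    {"title": 1.0, "location": 1.0, "length": 1.0},
    n_possible_codes=4,
)
```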
Given our coding and weighting scheme, a score of 1 for a CQI-focused article can be
interpreted as follows: an article published on the front page of a regional newspaper, whose title contains
the corresponding keyword(s) or a close analogue indicating provider quality comparison;
which contains discussion related to all four content areas (transparency of quality or cost of
services of health care providers, variation in healthcare quality across regions or demographic
groups, comparisons of providers in terms of quality, cost, or efficiency, and web links to quality
reports); and which underscores the importance of informing consumers about provider quality and
regional variation in healthcare quality without expressing any skepticism towards the utility of
CQI or doubts that it may confuse consumers. Similarly, a score of 1 for a press article focused
on patient safety can be interpreted as an article published in a regional newspaper on the front
page whose title contains corresponding keyword(s) or a close analogue that indicates patient
safety and which contains discussion related to two content areas (discussion of patient safety
practices of healthcare providers and sentinel events) and reflects negatively on patient safety
practices of providers. For brevity, in the following sections we will refer to such an article as an
“idealized” news article.
Inter-Rater Reliability (IRR) Testing
Three raters, including the principal author and two undergraduate students, were assigned to
complete the process of article selection, content coding, and weighting. The assignment of
individual alliances among the three raters is shown in Table A-8. The rating process and the
inter-rater agreement analyses were completed in two steps. In the first step, the two student
raters were extensively trained in selection of articles, code application, and weight assignment.
The training included extensive discussions of the selection, code, and weight assignment
algorithms, discussion of definitions to clarify details and elucidate key terms, and test selection,
coding and weighting of output from a selected set of keywords. Once the raters achieved a
significant threshold of inter-rater agreement with the author and among themselves (roughly
80%), a second process of random audit was put into place. In this process, the principal author
selected specific alliance-keyword combinations (from among assigned alliances, illustrative
of each of the major types of media coverage variables) which the raters used to search for and
select relevant articles and then apply content codes and weights. For each selected keyword-
alliance pair, inter-rater agreement was calculated between the rater and the author separately for
the selection, coding, and weighting stages. The process of calculation of inter-rater agreement is
presented in Table 4-1.
The actual inter-rater agreements achieved are shown in Table A-9. For most keyword-
alliance combinations, a high degree of agreement was reached for the initial two stages of
selection and code assignment, possibly because these steps involved a lesser degree of
subjectivity in the relevant definitions and terms than the step involving valence weighting. In all
cases where agreement was in excess of 85%, the remaining disagreement was adjudicated by
discussion between the author and rater, and final assignment proceeded by mutual consensus. In
cases where agreement fell below 85% (one case for coding and five cases for weight assignment),
the author reviewed the output de novo and reassigned codes and weights.
Finally, it should be noted that inter-rater agreement was not calculated for prominence
weights since the process of assigning these weights was deemed to be objective, involving
straightforward entry of binary weights for whether the article was on the front page or elsewhere,
whether the article length exceeded 500 words, and whether or not the keyword appeared in
the article title.
Table 4-1 Calculation Of Inter-Rater Agreement

Stage | Numerator (A) | Denominator (B) | Inter-Rater Agreement
Selection | Number of articles assigned identical status (selection or rejection) by author and rater | Total number of articles | (100*A)/B
Coding | Number of codes applied identically by author and rater | Total number of codes applied | (100*A)/B
Valence Weighting | Number of weights assigned identically by author and rater | Total number of weights assigned | (100*A)/B
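The agreement statistic in Table 4-1 is a simple percent-agreement calculation; a minimal sketch:

```python
def percent_agreement(author_labels, rater_labels):
    """Inter-rater agreement as defined in Table 4-1:
    100 * (identical assignments A) / (total assignments B)."""
    if len(author_labels) != len(rater_labels):
        raise ValueError("label lists must be the same length")
    identical = sum(a == b for a, b in zip(author_labels, rater_labels))
    return 100 * identical / len(author_labels)
```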
Analytic Strategy
Model Specification and Identification Strategy
We used a linear fixed-effects regression (i.e., a linear probability model) to evaluate the impact
of our key variables on consumer outcomes. The fixed-effects model removes two distinct
sources of confounding variation: permanent differences between individuals, and common trends
across individuals over time, that may be correlated with both the key predictors (e.g., CQI dissemination)
and consumer awareness of and attitudes towards CQI. The effect of each independent variable is
therefore identified from the residual variation, i.e., variation within alliances (for variables that
vary only across regions and not individuals) or within individuals over time, which can be
treated as plausibly exogenous to consumer outcomes. This assumption breaks down if the
within-alliance (or within-individual) variation is correlated with factors that affect consumer
55
outcomes, in which case such factors may have to be explicitly controlled for in the specification
to avoid omitted variable bias.
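As a rough illustration of this within (fixed-effects) identification logic, here is a minimal numpy sketch on simulated data. The setup is entirely hypothetical (a continuous outcome for simplicity, and made-up variable names): a region-level confounder biases pooled OLS, while demeaning within regions recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 14 alliance regions, 200 respondents each
n_alliances, n_per = 14, 200
alliance = np.repeat(np.arange(n_alliances), n_per)

alpha = rng.normal(0.0, 1.0, n_alliances)                        # alliance fixed effects
x = 0.8 * alpha[alliance] + rng.normal(0.0, 1.0, alliance.size)  # e.g., CQI exposure
beta = 0.30                                                      # true effect
y = beta * x + alpha[alliance] + rng.normal(0.0, 1.0, alliance.size)

def within_demean(v, groups):
    """Within transformation: subtract each group's mean from its members."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

# Pooled OLS slope (ignores fixed effects; biased by the shared alpha)
xc, yc = x - x.mean(), y - y.mean()
b_pooled = (xc @ yc) / (xc @ xc)

# Within (fixed-effects) slope: identified off residual within-alliance variation
xd, yd = within_demean(x, alliance), within_demean(y, alliance)
b_within = (xd @ yd) / (xd @ xd)

print(f"pooled OLS slope:  {b_pooled:.2f}")   # inflated by confounding
print(f"within (FE) slope: {b_within:.2f}")   # close to the true 0.30
```

The same within estimator generalizes to individual fixed effects by demeaning over repeated observations of each person rather than each region.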
In all our specifications, we included a set of covariates related to consumers' socio-
demographic characteristics, health status, healthcare access, and healthcare utilization as
controls to account for this possibility. Specifically, we controlled for family income, college
education, employment status, health insurance status, self-rated health, patient activation score,
type of chronic condition, per-capita physician density by county, a measure of overall
satisfaction with healthcare received in the last 12 months, and the percent of the alliance
region's population that had access to a quality report card issued by a health plan. We used the
linear probability model (LPM) rather than logistic or probit regression in our primary
specification (even though our outcomes are binary) because the LPM generates parameter
estimates that are directly and conveniently interpreted as mean marginal effects of covariates on
the outcome, whereas logistic regression coefficients have a more complicated log-odds-ratio
interpretation. As the error terms for individuals within an alliance region are likely to be
correlated (violating the assumption of independent and identically distributed errors), the
standard errors were clustered at the level of the alliance region.
Oconomowoc Enterprise, Packer Plus, Superior Telegram,
Times Press, Washington County Daily News, Wisconsin State
Journal
Table A-3 Types Of Media Coverage And Corresponding Search Terms (Keywords)

Media Coverage Variable | Search Term(s)
Alliance-Sponsored CQI | "Quality Counts" (Maine)
CMS-Sponsored CQI | "Hospital Compare"; "HCAHPS"; "Nursing Home Compare"; "Home Health Compare"
General CQI | "Quality AND Physicians"; "Quality AND Hospitals"
Media Coverage of Patient Safety | "Patients AND Safety"; "Medical Errors"; "Medical Malpractice"
Figure A-1 Stepwise Algorithm To Guide Selection Of Articles Relevant To Comparative Quality Information

1. Does the article reference providers? No → Reject; Yes → continue.
2. Does the article reference quality of healthcare? No → Reject; Yes → continue.
3. Does the article reference comparison of quality of healthcare among providers? No → Reject; Yes → continue.
4. Does the article reference collecting (e.g., data) or disclosing information on quality of healthcare to consumers (patients, employers, insurers, or doctors)? No → Reject; Yes → Accept.
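The four screens in Figure A-1 can be sketched as a short sequential filter. The key names below are hypothetical shorthand for the figure's questions, not taken from the study's codebook:

```python
def select_article(article):
    """Sequential screens from Figure A-1: the first failed check rejects.

    `article` maps screen names (illustrative shorthand for the figure's
    four questions) to booleans.
    """
    screens = (
        "references_providers",
        "references_healthcare_quality",
        "references_quality_comparison",
        "references_data_collection_or_disclosure",
    )
    for screen in screens:
        if not article.get(screen, False):
            return "Reject"
    return "Accept"

# Clears the first two screens but never compares providers, so it is rejected
print(select_article({"references_providers": True,
                      "references_healthcare_quality": True}))  # Reject
```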
Table A-4 Definition Of Key Coding Categories For Media Coverage Of CQI

Coding category: Discussion of health quality transparency/disclosure
Definition: Any discussion of transparency or disclosure of quality, cost, or efficiency of healthcare provision by healthcare providers to patients and other stakeholders.
Definition of key terms: "Any" refers to the fact that the discussion may range from brief to extensive and may or may not be the focus of the news article. "Efficiency" means the provision of higher quality at lower cost. "Healthcare provision" refers to any form of health care provided by healthcare providers. "Healthcare provider" refers to any type of healthcare provider, including physicians, surgeons, nurses, dentists, physician assistants, nurse practitioners, and pharmacists. "Other stakeholders" refers to providers, insurers, drug and device manufacturers, consumer advocacy groups, public-sector payers like Medicare and Medicaid, employers, and non-profit groups involved in improving healthcare quality.

Coding category: Discussion of variation in quality across health providers
Definition: Any explicit discussion of variation in quality, cost, or efficiency of healthcare across providers at a local, state, or national level.
Definition of key terms: "Local" refers to the city or county of publication of the newspaper. "Explicit" means the article has to say (and not merely hint or imply) that quality of healthcare is variable or uneven across providers.

Coding category: Provides direct comparisons between healthcare providers
Definition: Provides a direct comparison between providers/groups of providers in terms of quality, cost, or efficiency of healthcare.
Definition of key terms: "Direct comparison" refers to a head-to-head comparison between specific providers/groups of providers. "Group of providers" may refer to providers collectively at the local, state, or national level, i.e., comparisons between cities, counties, states, or nations as a whole.

Coding category: Web linkage to CQI source
Definition: Provides the web address or a hyperlink to the public reporting website.
Definition of key terms: -
Table A-5 Definition Of Key Coding Categories For Media Coverage Of Patient Safety

Coding category: Discussion of issues related to patient safety in healthcare delivery
Definition: Provides a description of the patient safety practices of healthcare providers.
Definition of key terms: "Patient safety practices" means any actions taken by the healthcare provider that have implications for the safety of patients when providing healthcare. Exclude if the practice or action is linked to drug or device manufacturers (e.g., safety of drugs in clinical trials), except when it is directly linked in some way to actions of healthcare providers.

Coding category: Discussion of "sentinel" events
Definition: Provides a description of an egregious medical error by specific healthcare provider(s).
Definition of key terms: "Egregious" refers to an error in healthcare delivery that leads to serious bodily or mental harm.
Figure A-2 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Quality Transparency

For a discussion of healthcare quality transparency/disclosure:
1. Does it express any doubts about measurement? Yes → Negative valence; No → continue.
2. Does it express any doubts about relevance to consumers? Yes → Negative valence; No → continue.
3. Does it say something positive about healthcare quality transparency/disclosure? Yes → Positive valence; No → Neutral valence.
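The decision logic of Figure A-2 reduces to a few conditionals. A sketch with illustrative argument names:

```python
def transparency_valence(doubts_measurement, doubts_relevance, says_positive):
    """Valence weight per Figure A-2; argument names are illustrative.

    Any doubt (about measurement or relevance) dominates; otherwise a
    positive remark yields Positive, and silence yields Neutral.
    """
    if doubts_measurement or doubts_relevance:
        return "Negative"
    return "Positive" if says_positive else "Neutral"

# A story that praises disclosure but questions how quality is measured
print(transparency_valence(True, False, True))  # Negative
```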
Figure A-3 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Quality Variation

For a discussion of variation in quality across healthcare providers:
1. Does it say that disclosing variation in quality may be confusing to consumers? Yes → Negative valence; No → continue.
2. Does it say that consumers need to be aware of variation in quality? Yes → Positive valence; No → Neutral valence.
Figure A-4 Stepwise Algorithm To Guide Assignment Of Valence Weights To Discussion Of Patient Safety Practices Of Healthcare Providers

For a discussion of issues related to patient safety practices of healthcare providers:
1. Does it say anything negative about patient safety practices of providers? Yes → Negative valence; No → continue.
2. Does it say something positive about patient safety practices of providers? Yes → Positive valence.
Table A-6 Illustrative Examples Of Code Application And Valence Weight Assignment

Coding category: Discussion of health quality transparency/disclosure
Illustrative text: "Mainers now have an easy and reliable way to compare the quality of doctors and hospitals around the state, say the creators of a new website. It allows patients to enter medical conditions or procedures and see which are the highest-rated doctors and hospitals in and near their communities. The ratings are based on voluntarily reported data such as infection rates and protocols for preventing medication errors."
Valence weight applied: Positive

Coding category: Discussion of health quality transparency/disclosure
Illustrative text: "Agwunobi said he began the tour - Maine is the sixth stop - by pushing for electronic medical records and 'transparency' - the term used in health care circles to describe making information about the cost and quality of health care easily available to the public. But he described switching to a more passive role after hearing doctors' concerns. Members of the Maine group were particularly leery of the so-called 'pay-for-performance' programs that insurers use to reward doctors who meet certain standards. They expressed worries about physicians being unfairly assessed. What if Doctor X's patients are just a sicker bunch than Doctor Y's?"
Valence weight applied: Negative

Coding category: Discussion of variation in quality across health providers
Illustrative text: "'Unfortunately, we know all health care is not created equal. There is variation in quality,' said Elizabeth Mitchell, chief executive officer of the foundation. 'We need that information. We need it not only to make more informed choices, we need it to improve care.'"
Valence weight applied: Positive

Coding category: Discussion of variation in quality across health providers
Illustrative text: "Hospitals and clinics will post signs saying patients can request the data. Many people might find the information confusing: Making sense of it involves factoring in cost shifting, negotiated discounts, tiered co-pays and widely variable insurance plans. Also, fees for lab work, X-rays and other extras usually won't be included. An example of the complexity: Meriter Hospital had a median charge of $42,377 for a hip replacement in a recent 12-month period. That was much higher than St. Mary's Hospital's charge of $26,608 and UW Hospital's charge of $32,821. But insurers typically paid Meriter only about $3,800 more than they paid the other hospitals, and it's possible a patient's out-of-pocket cost was no higher or even lower at Meriter."
Valence weight applied: Negative
Table A-6 (Contd.) Illustrative Examples Of Code Application And Valence Weight Assignment

Coding category: Web linkage to CQI source
Illustrative text: "The Maine Health Management Coalition Foundation, made up of hospitals, medical practices, insurers and large employers, introduced its ratings website - www.getbettermaine.org - during a news conference at the State House on Tuesday."
Valence weight applied: -

Coding category: Provides direct comparisons between healthcare providers
Illustrative text: "According to private assessments and surveys completed by its very own patients, Down East Community Hospital has been named in the top 25 percent of hospitals in New England. The recognition comes from the Harvard Pilgrim Hospital Honor Roll and is an indication of how far the facility has come in the last three years."
Valence weight applied: -

Coding category: Discussion of patient safety in healthcare delivery
Illustrative text: "At least 34 patients died as a result of preventable mistakes in Oregon hospitals last year, the same number reported in 2009 to the Oregon Patient Safety Commission. While the number is small in comparison with the tens of thousands of people safely restored to health in hospitals each year, it is one of several indicators of stalled progress in reducing serious medical errors. 'The truth is, the culture of patient safety is not where it needs to be,' said Bethany Higgins, administrator of the Oregon Patient Safety Commission."
Valence weight applied: Negative

Coding category: Discussion of patient safety in healthcare delivery
Illustrative text: "At St. Agnes Hospital in Baltimore, heart attack patients are receiving faster treatment. Doctors, nurses and other hospital staff recently slashed by 22 percent the time it takes for an angioplasty to begin after the patient's arrival in the emergency room. Instead of 119 minutes, patients wait roughly 93 minutes. It's an important improvement, given studies that have linked faster treatment with a lower mortality rate, and the state requires 80 percent of patients to be treated within two hours. To be more efficient, St. Agnes staff members are employing a practice known as Lean Management to cut costs, reduce patient waiting times and improve safety."
Valence weight applied: Positive

Coding category: Discussion of "sentinel" events
Illustrative text: "The nursing supervisor who allowed a disoriented 61-year-old patient to leave the Down East Community Hospital during a severe snowstorm in January 2008 has lost his nursing license. The patient was found dead in a nearby snowbank the next day."
Valence weight applied: -
Table A-7 Calculation Of Normalized Unweighted And Weighted Scores For Each News Article

Article | Score type | A | B | C | D | E | F | G | H | I | Normalized score
Article 1 (CQI) | Unweighted | 1 | 1 | 1 | 1 | - | - | - | - | - | 4/4 = 1
Article 1 (CQI) | Weighted | 1 | 1 | 1 | 1 | - | - | 1 | 1 | 1 | 7/7 = 1
Article 2 (CQI) | Unweighted | 1 | 0 | 1 | 0 | - | - | - | - | - | 2/4 = 0.5
Article 2 (CQI) | Weighted | 1 | 0 | 1 | 0 | - | - | 1 | 0.5 | 0.5 | 4/7 = 0.57
Article 3 (Patient Safety) | Unweighted | - | - | - | - | 1 | 1 | - | - | - | 2/4 = 0.5
Article 3 (Patient Safety) | Weighted | - | - | - | - | 0.5 | 1 | 1 | 1 | 1 | 4.5/7 = 0.64

Key: A = discussion of health quality transparency/disclosure; B = discussion of variation in quality across health providers; C = web link for a CQI source; D = direct comparison between providers; E = discussion of issues related to patient safety in healthcare delivery; F = discussion of "sentinel" events; G = text of the article title included the keyword; H = location of article in the paper; I = space devoted to story. A, B, and E are the valence-weight-eligible codes.
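To make the arithmetic in Table A-7 concrete, a small sketch (the helper function and key names are illustrative; the denominators of 4 content codes and 7 total follow the table's worked rows):

```python
def normalized_scores(codes, prominence):
    """Normalized scores for one article, following Table A-7's worked rows.

    codes: the content codes applied (values 0, 0.5, or 1);
    prominence: the three prominence weights (G, H, I).
    The unweighted score divides the content codes by 4; the weighted
    score divides all weights by 7. Key names mirror Table A-7, but the
    function itself is an illustration, not the study's code.
    """
    unweighted = sum(codes.values()) / 4
    weighted = (sum(codes.values()) + sum(prominence.values())) / 7
    return unweighted, weighted

# Article 2 (CQI) from Table A-7: codes A=1, B=0, C=1, D=0; G=1, H=0.5, I=0.5
u, w = normalized_scores({"A": 1, "B": 0, "C": 1, "D": 0},
                         {"G": 1, "H": 0.5, "I": 0.5})
print(u, round(w, 2))  # 0.5 0.57
```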
Table A-8 Assignment Of Alliance Coding Among Author And Raters

Alliance | Author | Rater 1 | Rater 2
Cincinnati, OH | - | x | -
Cleveland, OH | x | - | -
Detroit, MI | - | - | x
Humboldt County, CA | x | - | -
Kansas City, MO | - | x | -
Maine | x | - | -
Memphis, TN | - | - | x
Minnesota | x | - | -
Puget Sound, WA | - | x | -
South Central, PA | x | - | -
West Michigan | - | - | x
Western New York | - | x | -
Willamette Valley, OR | - | - | x
Wisconsin | x | - | -
Table A-9 Results Of Inter-Rater Agreement For Selection, Coding, And Weighting Of Media Articles

Stage | Raters | Date | Alliance | Period | Keyword | Inter-Rater Agreement (%)
Selection For Full Text Review | Author/Rater 1 | 07/09/2015 | Kansas City, MO | 2010-11 | Quality AND Hospitals | 98
Selection For Full Text Review | Author/Rater 1 | 07/15/2015 | Puget Sound, WA | 2006-07 | Patients AND Safety | 91
Selection For Full Text Review | Author/Rater 1 | 08/11/2015 | Kansas City, MO | 2006-07 | Quality AND Physicians | 85
Selection For Full Text Review | Author/Rater 2 | 07/10/2015 | Detroit, MI | 2006-07 | Quality AND Physicians | 98
Selection For Full Text Review | Author/Rater 2 | 07/15/2015 | Memphis, TN | 2010-11 | Quality AND Hospitals | 87
Selection For Full Text Review | Author/Rater 2 | 06/09/2015 | West Michigan | 2006-07 | Patients AND Safety | 97
Coding | Author/Rater 1 | 08/08/2015 | Kansas City, MO | 2006-07 | Quality AND Physicians | 92
Coding | Author/Rater 1 | 08/19/2015 | Kansas City, MO | 2010-11 | Quality AND Hospitals | 91
Coding | Author/Rater 1 | 08/19/2015 | Kansas City, MO | 2010-11 | Patients AND Safety | 80
Coding | Author/Rater 2 | 08/19/2015 | Detroit, MI | 2010-11 | Quality AND Physicians | 95
Coding | Author/Rater 2 | 08/19/2015 | Memphis, TN | 2010-11 | Quality AND Hospitals | 89
Coding | Author/Rater 2 | 10/19/2015 | West Michigan | 2010-11 | Patients AND Safety | 90
Valence Weighting | Author/Rater 1 | 08/11/2015 | Kansas City, MO | 2006-07 | Quality AND Physicians | 85
Valence Weighting | Author/Rater 1 | 08/19/2015 | Kansas City, MO | 2010-11 | Quality AND Hospitals | 70
Valence Weighting | Author/Rater 1 | 08/19/2015 | Kansas City, MO | 2010-11 | Patients AND Safety | 60
Valence Weighting | Author/Rater 2 | 08/19/2015 | Detroit, MI | 2010-11 | Quality AND Physicians | 83
Valence Weighting | Author/Rater 2 | 08/19/2015 | Memphis, TN | 2010-11 | Quality AND Hospitals | 75
Valence Weighting | Author/Rater 2 | 10/19/2015 | West Michigan | 2010-11 | Patients AND Safety | 74
Appendix B: Results
Table B-1 Availability Of Quality Reports, By Type Of Measure And Alliance