Selection of indicators for Hospital Performance Measurement

A report on the 3rd and 4th Workshop

Barcelona, Spain, June and September 2003

This report has been prepared by:
Jérémy Veillard, Technical Officer
Ann-Lise Guisset, Technical Officer
Mila Garcia-Barbero, Head of WHO Office for Integrated Health Care Services, Division of


ABSTRACT

The current restructuring of health care services among European countries, the development of new common policy orientations, focusing on accountability and quality improvement strategies, and a growing interest in patient satisfaction assessment highlight the importance of efficient and high quality hospital organization throughout Europe. These orientations are strong incentives for raising the value of hospital performance assessment. The WHO Regional Office for Europe decided to run a project on hospital performance assessment. The aim of the project is to build and validate a flexible and comprehensive model of hospital performance assessment enhancing quality improvement and evidence-based management.

Since January 2003, the following outcomes were achieved: definition of the main concepts and identification of key dimensions of hospital performance assessment; design of the general architecture of a performance measurement tool enhancing evidence-based management and quality improvement through benchmarking; expansion of the theoretical work on the key dimensions of hospital performance; definition of a framework to select performance indicators on the basis of evidence and availability of data (through a survey in 11 European countries); review of 200 hospital performance indicators; selection of a draft core set of 25 performance indicators and of a broader tailored set; and design of a draft balanced dashboard enhancing quality improvement and evidence-based management.

During a final workshop dedicated to finalizing the balanced dashboard for future pilot implementation and possible expansion, the core balanced set of performance indicators was agreed on; the trade-offs between the measures were identified and highlighted; the presentation of the dashboard was discussed in order to maximize its educational value; a strategy was defined to maximize the chances of success of a pilot implementation; a strategy for future expansion of the model was discussed; and the orientations of the pilot implementation were agreed on.

Keywords
HOSPITALS – STANDARDS
QUALITY INDICATORS, HEALTH CARE – STANDARDS
QUALITY OF HEALTH CARE
DELIVERY OF HEALTH CARE
HEALTH POLICY – TRENDS
EUROPE

Address requests about publications of the WHO Regional Office to:
• by e-mail: [email protected] (for copies of publications); [email protected] (for permission to reproduce them); [email protected] (for permission to translate them)
• by post: Publications, WHO Regional Office for Europe, Scherfigsvej 8, DK-2100 Copenhagen Ø, Denmark

© World Health Organization 2004

All rights reserved. The Regional Office for Europe of the World Health Organization welcomes requests for permission to reproduce or translate its publications, in part or in full.

The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of the World Health Organization concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. Where the designation “country or area” appears in the headings of tables, it covers countries, territories, cities, or areas. Dotted lines on maps represent approximate border lines for which there may not yet be full agreement.

The mention of specific companies or of certain manufacturers’ products does not imply that they are endorsed or recommended by the World Health Organization in preference to others of a similar nature that are not mentioned. Errors and omissions excepted, the names of proprietary products are distinguished by initial capital letters.

The World Health Organization does not warrant that the information contained in this publication is complete and correct and shall not be liable for any damages incurred as a result of its use. The views expressed by authors or editors do not necessarily represent the decisions or the stated policy of the World Health Organization.


CONTENTS

1. Introduction
   1.1. Rationale
   1.2. Background
   1.3. Objectives
   1.4. Structure of the report
2. Material and methods
   2.1. Review of the literature
   2.2. Survey in 11 European countries
   2.3. Pre-selection of individual indicators
3. Results
   3.1. Sub-dimensions of the operational and conceptual models of hospital performance
        3.1.1. Prerequisites to the conceptual model
        3.1.2. Conceptual and operational model
   3.2. Selection of indicators
        3.2.2. Selection of indicators by dimension
   3.3. Feedback of results to participating hospitals
4. Conclusions
   4.1. Summary of products
   4.2. Recommendations on data-related issues
        4.2.1. Collection and quality control of hospital data
        4.2.2. Data aggregation
   4.3. Recommendations on roles and responsibilities for piloting the framework for hospital performance assessment
        4.3.1. Beneficiaries
        4.3.2. Project management/leadership
        4.3.3. Selection of indicators at local/national level
        4.3.6. Training implications
   4.4. General recommendations for implementation of the framework for hospital performance assessment
   4.5. The steps forward
Annex 1
Annex 2
Annex 3
Annex 4


1. Introduction

1.1. Rationale

The restructuring of health care services in several European countries aims at increasing accountability, cost-effectiveness and sustainability, at strengthening quality improvement strategies, and reflects a growing interest in patient satisfaction. These reforms highlight a major challenge throughout Europe: efficient and high quality hospitals. They demand evidence-based policies and management strategies for hospital performance assessment. In this context, the WHO Regional Office for Europe provides a flexible and comprehensive framework called the Performance Assessment Tool for quality improvement in Hospitals (PATH). It includes:

Product 1. A conceptual model of performance (dimensions, sub-dimensions and how they relate to each other)
Product 2. Criteria for selection of indicators
Product 3. Lists of indicators (e.g. including rationale, operational definition, data collection issues, support for interpretation)
Product 4. An operational model of performance (how indicators relate to each other, and also to explanatory variables and to standards)
Product 5. Strategies for feedback of results to hospitals, mainly through a “balanced dashboard”
Product 6. Strategies to foster benchmarking

1.2. Background

This report summarizes the last two workshops in a series of four dedicated to building a framework for hospital performance assessment. The first two workshops on hospital performance assessment led to an agreement on the objectives of the project, definitions of the main concepts (performance, quality, indicators, etc.), identification of six dimensions of performance (product 1) and criteria for indicator selection (product 2). The conceptual model encompasses six dimensions: clinical effectiveness, safety, patient centeredness, responsive governance, staff orientation and efficiency. The criteria for indicator selection are their importance and relevance to European hospitals, reliability and validity (of each individual indicator and of the set of indicators as a whole), and burden of data collection. A preliminary list of indicators was established, based on an extensive review of the literature and of current national and/or regional performance assessment projects (almost 300 indicators were reviewed). The indicators were tested against the selection criteria described above and a shortlist of indicators was drawn up.

The group decided to organize indicators into two “baskets”:

- a “core” basket gathering a limited number of generally available, applicable and valid indicators, relying on the best scientific evidence, for which data are available in most European countries and which are responsive in different contexts; and

- a “tailored” basket gathering indicators proposed for use only in specific situations because of variability of data availability, applicability to specific settings (e.g. teaching hospitals, rural hospitals, etc.) or validity (cultural, financial, organisational contexts).

1.3. Objectives

Based on this previous work, the objectives of the third and fourth workshops were:
• To refine the operational model by clarifying sub-dimensions of performance;
• To select a core basket of indicators and propose a tailored list;
• To build an operational model of performance;
• To discuss strategies for dissemination and follow-up of the framework and more specifically its pilot implementation.

1.4. Structure of the report

In this report we first describe the steps, material and methods used to achieve these objectives (section 2: material and methods), then present the main products of both workshops (section 3: results), and conclude with a discussion of the next steps of the project, focusing on the challenges and opportunities for implementation (section 4: conclusions).

2. Material and methods

2.1. Review of the literature

The first step was the identification of indicators for sub-dimensions of performance not covered, or only partly covered, by hospital performance assessment systems currently in use. In this way, the list of potential indicators pre-selected during the second workshop was enlarged to cover the sub-dimensions added during the further conceptualization phase, and included indicators used in research projects but not widely used by hospitals. Next, an extensive review of the grey and scientific literature was performed. Evidence was collected for each indicator on the rationale for use, prevalence, validity and reliability, current scope of use, supposed and demonstrated relationships with other performance indicators, exogenous factors and verification of standards. The review of the literature showed that some dimensions and indicators, such as clinical effectiveness, have been well researched and build on a scientific tradition of evaluation, but that others, such as responsive governance and efficiency, are less well represented in the literature and tend to be based primarily on empirical evidence or expert judgement.


A distinction between “reflective” and “formative” indicators was drawn. Formative (causal) indicators determine changes in the value of the latent variable, while reflective (effect) indicators work the other way around. For instance, length of stay is a formative indicator of efficiency, as efficiency is partly determined by length of stay; but at the same time clinical effectiveness affects length of stay, and hence length of stay is also a reflective indicator of clinical effectiveness. This distinction is important from a methodological point of view to evaluate indicator validity. During the implementation phase it will support the interpretation of indicator results.

2.2. Survey in 11 European countries

A survey was conducted in 11 European countries in May 2003. It aimed to define the hospital management’s scope for decision-making, the relevance of various indicators and the burden of data collection. Questions were circulated to volunteer members of the Health Promoting Hospitals network and to countries participating in the pilot project. Twelve responses came from the 11 countries; one questionnaire was sent to each country. Surveys were filled in either by individuals or by large multi-professional working groups. Each working group was asked to fill in the survey for a so-called “lay hospital” in the country.

Survey results have to be interpreted with great caution. Inference is limited because of sampling bias: recipients of the questionnaire were identified from a self-selected group (the Health Promoting Hospitals network). There may also be a “social desirability” bias, meaning that respondents answer the way they believe they are expected to answer and overrate the importance of socially desirable components of performance (e.g. health promotion, staff satisfaction). The survey captured only limited information on the content and quality of national data sets. Moreover, two questionnaires from one country showed intra-country discrepancies.

Although these factors limit the interpretation of the survey, they do not render it unhelpful. The empirical findings were considered crucial to reconcile theory with practice and to develop a strategy to monitor the integrity of the model and its application to different health care systems. Evaluating applicability was extremely important because indicators were drawn from a mainly Anglo-Saxon literature, and the applicability of tools and their extrapolation to other contexts is often questionable. The survey met its purpose of facilitating the selection of indicators with a first input from the countries; this will be complemented in the next steps by input from the countries that will pilot the balanced dashboard of performance indicators. Responses to the survey showed wide variations in data availability and data quality, including:

• continuing use of ICD-9 instead of ICD-10,
• relative or absolute lack of secondary diagnosis coding,
• over/under-recording reflecting funding and culture,
• delineation of episodes, readmissions, attribution to hospitals, and
• variable, usually limited, linkage between hospitals and primary health care.

In general, respondents supported the values and the measures proposed in the survey. Many of the issues, such as staff orientation, were considered to be very important, although few countries actually have systems to measure them.

2.3. Pre-selection of individual indicators

The pre-selection was based on evidence in the literature, the results of the survey in participating countries and expert judgement. Discussions took place at the third and fourth workshops. During the third workshop, four working groups composed of international experts (see appendix) in the dimensions selected (clinical effectiveness and patient safety, staff orientation and staff safety, efficiency and patient centeredness, responsive governance and environmental safety) were asked to select indicators using a modified nominal group technique. They first scored indicators individually on a scale from 1 to 10 according to importance, validity and burden of data collection. Individual scores were reported to the group and discussed. Indicators were then allocated to the “core” or “tailored” basket or excluded from the framework. During the fourth workshop, the list of indicators was reviewed to guarantee the content validity of the set of indicators as a whole.
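As a rough illustration of how such nominal-group scores might be aggregated, the Python sketch below averages each expert’s 1–10 scores per criterion and allocates indicators to a basket by threshold. The indicator names, scores, cutoffs and the unweighted averaging are illustrative assumptions, not the workshops’ actual protocol.

```python
from statistics import mean

# Hypothetical expert scores (1-10) per criterion for two candidate indicators.
# "burden" is the burden of data collection: higher means harder to collect.
scores = {
    "readmission within 28 days": {"importance": [9, 8, 9], "validity": [7, 8, 7], "burden": [3, 4, 3]},
    "door to needle time":        {"importance": [8, 7, 6], "validity": [8, 7, 8], "burden": [8, 9, 8]},
}

def allocate(criteria, core_cutoff=7.0, tailored_cutoff=5.0):
    """Average the three criteria (burden reverse-scored) and map to a basket."""
    overall = mean([
        mean(criteria["importance"]),
        mean(criteria["validity"]),
        10 - mean(criteria["burden"]),  # a high collection burden counts against the indicator
    ])
    if overall >= core_cutoff:
        return "core", overall
    return ("tailored", overall) if overall >= tailored_cutoff else ("excluded", overall)

for name, criteria in scores.items():
    basket, score = allocate(criteria)
    print(f"{name}: {basket} (mean score {score:.1f})")
```

In the workshops themselves, divergent individual scores were discussed in the group before allocation; the fixed cutoffs above merely stand in for that collective judgement.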

3. Results

3.1. Sub-dimensions of the operational and conceptual models of hospital performance

3.1.1. It was agreed that prerequisites to the conceptual model were:
- to be consistent with WHO policy and language;
- to share a common understanding of seemingly very similar concepts (e.g. dimensions/perspectives; indicators/standards/criteria; outputs/results/performance), whose complexity is increased by translation from English into other languages;
- to clearly define concepts included under each sub-dimension, e.g. emotional support, empowerment and autonomy;
- to develop a glossary of terms for the purpose of the project; and
- to design indicators around practical customers (hospitals).

3.1.2. Conceptual and operational model

The conceptual model encompasses four vertical dimensions (clinical effectiveness, efficiency, staff orientation and responsive governance) that cut across two horizontal perspectives (patient centeredness and safety) (see figure 1). Sub-dimensions for each of the six dimensions/perspectives are described in table 1.

Figure 1: The WHO Regional Office for Europe theoretical model for hospital performance
[Figure: the four vertical dimensions (clinical effectiveness, efficiency, staff orientation, responsive governance) crossed by the two horizontal perspectives (patient centeredness and safety)]

Table 1: Description of the dimensions and sub-dimensions of performance

Clinical effectiveness
- Conformity of processes of care
- Outcomes of processes of care
- Appropriateness of care

Efficiency
- Appropriateness (added after discussions during the workshop)
- Input related to outputs of care
- Use of available technology for best possible care

Staff orientation
- Practice environment
- Perspectives and recognition of individual needs
- Health promotion activities and safety initiatives
- Behavioural responses and health status

Responsive governance
- System / community integration
- Public health orientation

Safety
- Patient safety
- Staff safety
- Environment safety

Patient centeredness
- Client orientation
- Respect for patients

It was made clear that several issues deserve special consideration:

1. Highlight the central role of both patient centeredness and safety values in guiding health systems and hospital management: a patient’s perspective on clinical effectiveness, efficiency, staff orientation and responsive governance.

2. Make explicit the relationships between indicators. The difference between determinants and measures of performance is helpful in constructing and balancing the indicator set, as many measures (e.g. length of stay) may be seen as associated with a range of variables which may be characterised as formative drivers or reflective images.


3. The distinction between formative and reflective indicators is crucial. However, it may be difficult to understand and might cause confusion among potential users of the indicators. In the educational material, terms such as “the indicator reflects…” and “the indicator acts upon…” will therefore be preferred.

The overall structure for each dimension and the main points discussed are presented below.

a. Clinical effectiveness

Within clinical effectiveness, a focus on team working and on clinical conditions, rather than on individual specialties or professions, was recommended. Although it was acknowledged that several indicators have major limitations, they should nevertheless be considered to guarantee the content validity of the set of indicators as a whole. Some of the main problems are that:
- complications and sentinel events are seriously underreported, and
- indicators based on data extracted from the medical record, e.g. on appropriate and timely care, depend on content which is commonly not recorded, and represent a very high burden of data collection.

b. Efficiency

There are practical limitations to linking inputs to health care outputs or outcomes due to:
• lack of activity-based costing;
• inconsistency of case-mix classification; and
• difficulty in standardising costs in monetary terms between countries.

Despite these limitations, opportunities for measuring efficiency include optimal use of available technology (e.g. machine time), utilisation rates, staffing ratios and financial management. Following the discussion, appropriateness of health services utilization was added as a sub-dimension: efficiency without appropriateness is considered a meaningless dimension. Given the wide variations in the availability, training and functions of personnel between and even within countries, indicators based on staffing ratios would be difficult to interpret. Moreover, they are largely outside the hospital’s control. Hence, they are not treated as performance indicators but as background information, an important aid to understanding and interpreting other performance indicators. In market economies hospitals manage finances; in other contexts, hospitals only manage or even administer line budgets. Because of these wide disparities in financial responsibilities, indicators on financial performance and profitability appear only in the tailored set.

c. Staff orientation

Many potential indicators are sensitive to context. In this area there are wide variations between countries and priorities differ widely. In some countries the preoccupations are overstaffing, job security and timely payment, while in others the overarching challenges are overwork, turnover and vacancy rates, professional identity, self-regulation, team working and a heavy reliance on nurses (who are usually in short supply).


These conditions largely affect indicators on staff orientation. Staffing levels, team working and continuity of care also bear on patient safety and clinical effectiveness. Staff orientation should recognize knowledge management and its application, i.e. competence and practical skills.

d. Responsive governance

Responsive governance relates to the hospital’s role, responsibilities and influence within the health care system. It is also very sensitive to context and culture, and there is a general lack of literature on responsive governance indicators. Attention should focus on:
- continuity of care, focusing on patient perception (patient surveys) or factual issues (discharge letters);
- patient discharge planning (over which the hospital has control); and
- responsiveness to the health needs of the community served.

e. Patient centeredness

Patient centeredness is usually assessed through patient surveys, of which there are three broad approaches: they measure patient experience with care received, patient satisfaction, or the gap between patients’ expectations and perceived experience. The three approaches are complementary and none is advocated over the others. What really matters is that hospitals listen to patients, use survey results to improve services, and do so in a standardized way that allows comparisons across all major sub-dimensions. It is unrealistic, however, to expect the same standardized questionnaire to be used by all hospitals in Europe.

f. Safety

This transversal dimension is divided into patient, staff and environmental safety. It should link clusters of ideas such as:
- patient centeredness and continuity of care; and
- staff orientation and patient safety: training / adequacy.

Sub-dimensions of patient safety include issues such as quality monitoring, development and use of standardized guidelines, drug prescribing and delivery organization, infection control mechanisms, continuity of care, professional qualifications and job content.

3.2. Selection of indicators

The selection of indicators was a complicated process because of differences in understanding, systems and purposes. The methodological discussions summarized below clarified several issues and facilitated the selection of indicators.

- Indicators or standards?

Discussion centred on the definition of an indicator in relation to criteria, standards and norms. On the one hand, some members of the group considered that indicators have to be quantified, continuous variables related to a denominator. On the other hand, the Ontario definitions include qualitative 0/1 variables, which may present a confusing message to many European countries. It was concluded that assessment of structural characteristics might be more appropriate for periodic surveys than for continuous measurement purposes. It was decided that ultimately the selection of indicators should be based on functionality rather than academic classifications. This allows the inclusion of rate/ratio measures, supported by dichotomous questions, which may relate to internal or external organisational assessment. The final set of indicators does not include self-assessment against standards because of contextual validity (standards that proved useful in one setting may not be applicable to all other settings) and the burden of tool development.

- Evidence and usefulness

Evidence of validity may be relatively weak for some measures. For example, the indicator “return to ICU” is widely adopted by hospitals, but there is very little hard evidence that it is a construct-valid measure. Indicators such as waiting times and caesarean section rates may be affected by local values, practices and norms; there is no robust evidence that they inform about the quality of clinical practice. Furthermore, even where evidence exists, an indicator may not be useful in one country yet be very useful in others. For instance, the implications of “overtime” in the employment environment of the USA may not be directly transferable to Europe. Similarly, turnover is an indicator of staff satisfaction and morale only in contexts where nurses have the opportunity to move jobs and unemployment rates are very low. However, it was agreed that, despite this, some indicators might be valuable for individual hospitals to use as comparative measures between hospitals even if there is no agreement on best practices for clinical decision-making. When little or no evidence is available to support an indicator, but the indicator is considered useful and is used by many hospitals or included in many systems, it has strong face validity. It was agreed that, unless there is clear evidence to the contrary, it is acceptable to recommend measures based on usefulness rather than hard scientific evidence. Indicators included in the core set have been selected on the basis of the best available evidence and relevance to the European hospital context.

- Content validity of the set as a whole

Indicator lists aim to support a balanced and realistic view of hospital performance, progressing from a comprehensive list of known measures to a structured core set appropriate to most acute general hospitals, and a supplementary “tailored” set for more specific situations or where data are available. Whether the balance is correct depends on the agreed aims and use of the indicator set. The core set is more outcome-focused, the tailored set more process-focused, and structure is amenable to simple descriptive measures. But many outcomes, e.g. staff satisfaction, may also be seen as structural inputs to the care process. A good overall mix of input (or structure), output (or process) and outcome measures seems the best strategy for having an impact on quality improvement.

- Challenges with data collection and operational definitions

Ultimately the reliability of hospital performance indicators rests upon the quality of data from a variety of sources, such as:


• Clinical and administrative databases: need linkage, standardized definitions, coding procedures and clinical validation
• Self-assessment surveys: much used in Ontario but liable to inconsistent application and thus inconsistent results
• Patient surveys
• Staff surveys
• Abstraction of medical records: e.g. occurrence screening, retrospective clinical audit

A conclusion was that indicators (e.g. complications) should not be excluded merely because they require data that are regularly missing or inaccurate. On the contrary, they should be used as an opportunity to identify and respond to a need for education and improvement, leading to more effective information systems. Similarly, indicators based on data abstracted manually from records should not be excluded; the exercise is educational for staff and improves the quality of the clinical records. If indicators are to be used for international comparisons, operational definitions (and the underlying data) need to be standardised rather than left to local determination within national contexts. Although standardisation between countries should be the aim, its achievement will be gradual: a commitment to start working towards convergence is preferred to the unrealistic aim of immediate conformity. International comparisons are a secondary objective, to be pursued at a later stage of the project.

3.2.2. Selection of indicators by dimension

In this section, indicators are presented crossing vertical dimensions and horizontal perspectives.

Clinical effectiveness and patient safety

Initially 25 indicators were selected: 11 for the core basket and 14 for the tailored basket (see both baskets in table 2). Indicators using return home as an endpoint, and indicators merely describing volume of care, were excluded. The following indicators were selected:

- Sentinel events, especially related to surgery
- Mortality in hospital (core) and out of hospital (advanced), disease-specific at 30 days, e.g. neonatal, coronary artery bypass graft (CABG), hip fracture, acute myocardial infarction (AMI)
- Readmission within 28 days to the same hospital (core) or another hospital (tailored) for asthma and diabetes, separated for children and adults
- Return to ICU within 48 hours; admission after day surgery
- Appropriate use of services: core set caesarean section and prophylactic antibiotic use (by audit of indications rather than overall rate). An advanced-level questionnaire could be used to assess availability and application of hospital policies and clinical guidelines.

Table 2: Final list of indicators for clinical effectiveness and patient safety

Appropriateness of care
- Core: caesarean section rate; result of audit of indications for caesarean section
- Interpretative information: caesarean section rate in area

Conformity of processes of care
- Core: result of audit of medical records for prophylactic antibiotic use
- Tailored: door to needle time; percent of patients with CT scan (within 3 hours) after stroke; percent of AMI patients discharged on aspirin

Outcomes of care and safety processes
- Core: mortality rates for selected tracers; readmission rates for selected tracers; rate of admission after day surgery; rate of return to ICU for selected tracer conditions; prevalence of sentinel events
- Tailored: ditto core, with more advanced risk-adjustment procedures and follow-up of patients (e.g. different hospitals for readmission and fixed follow-up for mortality); post-tonsillectomy bleeding; rate of pressure ulcers for stroke and fracture patients; rate of nosocomial infections; rate of third degree perineal tear; rate of ureteric/bladder damage associated with hysterectomy
- Interpretative information: reporting procedures for sentinel events, surveillance systems

The tailored basket includes many of the core indicators, refined by record linkage and extended with other specific conditions, for instance readmission within 28 days after surgery, door to needle time, computer-assisted tomography scan within 3 hours of stroke, acute myocardial infarction patients discharged on aspirin, post-tonsillectomy bleeding, pressure ulcers on neurology (stroke) and orthopaedic wards (hip fracture), hospital-acquired infection (central venous percutaneous lines, artificial ventilation, total hip replacement), third degree perineal tear, ureteric/bladder damage associated with hysterectomy, and diabetes control (see COMAC guidelines). Technical problems arise with tracers due to diagnostic variability, low prevalence rates and small samples. As a result of a focus on clinical groups, many small hospitals with statistically small samples may be excluded; further work on the selection of tracer conditions is therefore needed. To support the interpretation of indicators, a preliminary questionnaire on safety structures and standards might include: guidelines development, committees, existence of an emergency trauma register, triage system, blood transfusion-related safety procedures (standard ordering list, haemovigilance), autopsies, and technical maintenance (e.g. lasers).
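To make the small-sample caveat concrete, the following sketch (an illustration with assumed numbers, not part of the report’s method) computes 95% Wilson score intervals for a tracer event rate; the same observed rate is far less informative when a hospital treats few cases.

```python
from math import sqrt

def wilson_interval(events: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for an event rate (e.g. tracer mortality)."""
    if n == 0:
        return (0.0, 1.0)
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical hospitals observing the same 5% rate with very different caseloads.
for n in (40, 400, 4000):
    events = round(0.05 * n)
    lo, hi = wilson_interval(events, n)
    print(f"n={n}: rate {events / n:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

With 40 cases the interval spans several percentage points either side of the estimate, which is why small hospitals may need pooled data or longer observation periods before tracer-based comparisons become meaningful.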

Other questions and comments need to be taken into consideration:

- Sentinel events: are the American Medical Association (AMA) list and definitions of incidents transferable to Europe? Should reporting mechanisms be standardised in order to make any results comparable? Does the prevailing culture promote “zero reporting” and the denial of adverse events? Sentinel events should be used as reflections of safety – by their analysis rather than their measurement.

- Appropriateness and conformity: these should be combined as “process of care”, assuming the availability of evidence-based or locally defined guidelines to define what is appropriate.

- Staff overtime: this is an unvalidated determinant of safety but is an important issue for many hospitals. It may be better focused on nursing care (where the evidence is clearer than for medical care) as a measure of staff welfare rather than patient safety.


Efficiency

Efficiency needs to be linked to complexity, as can be measured by DRGs if data are available. It should also include waste of resources, e.g. blood, operating rooms, CT scanning, clinical time, X-ray film and food. The use of the Appropriateness Evaluation Protocol (AEP) was discussed and postponed because its usefulness in the European context was still unclear; further analysis is needed before the AEP can be recommended in a wider context than the one it was designed for. The final selection of efficiency indicators is presented in table 3.

Table 3: Final list of indicators for efficiency

Appropriateness of services
- Core: ambulatory surgery rate (extension: medical acute care) for selected tracers
- Tailored: result of audit of Appropriateness Evaluation Protocol (AEP, European version)

Productivity (input related to output)
- Core: median length of stay for selected tracers; percent of patients admitted on day of surgery, for selected tracers
- Tailored: LOS case-mix adjusted; number of dosage units (or cost) of antibiotics per patient day; cost of corporate services per patient day
- Interpretative information: staff ratios (per professional category and per department)

Use of capacity
- Core: average inventory in stock, for pharmaceuticals, blood products, surgical disposable equipment; operating room unused sessions
- Tailored: operating room utilization rate
- Interpretative information: bed occupancy rate

Financial performance
- Tailored: cash-flow/debt

Patient centeredness

Patient centeredness is primarily assessed through patient surveys. Many hospitals will need help in introducing patient surveys; others may be encouraged to ensure broad coverage for internal benchmarking even if they do not use a standard instrument. Hospitals should include the results of their “home-made”, non-standardized surveys in the reporting scheme and monitor evolution over time. Although such results may only be used for internal comparisons, the introduction of standardized questionnaires tested for validity and reliability on a large scale is strongly preferred.

Table 4: Final list of indicators for patient centeredness

Patient centeredness (overall)
- Core: average score on overall perception/satisfaction items in patient surveys

Interpersonal aspects
- Core: average score on interpersonal aspects items in patient surveys

Client orientation: access
- Core: percent of one-day surgical procedures cancelled on day of surgery
- Tailored: average score on access items in patient surveys

Client orientation: amenities
- Tailored: average score on basic amenities items in patient surveys

Client orientation: comprehensiveness
- Core: ?
- Tailored: ?

Client orientation: information and empowerment
- Core: average score on information and empowerment items in patient surveys

Client orientation: continuity
- Core: average score on continuity of care items in patient surveys
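A minimal sketch of how average item scores per sub-dimension could be derived from survey responses follows; the item-to-sub-dimension mapping, field names and 1–5 scale are hypothetical, since the report deliberately does not prescribe a single questionnaire.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical mapping of questionnaire items to PATH sub-dimensions;
# a real instrument would define its own item groupings.
ITEM_MAP = {
    "q1_overall": "overall perception/satisfaction",
    "q2_respect": "interpersonal aspects",
    "q3_info": "information and empowerment",
    "q4_follow_up": "continuity",
}

def subdimension_scores(responses):
    """Average item scores (1-5 scale assumed) per sub-dimension across respondents."""
    pooled = defaultdict(list)
    for response in responses:
        for item, value in response.items():
            if item in ITEM_MAP and value is not None:  # skip unanswered items
                pooled[ITEM_MAP[item]].append(value)
    return {dim: round(mean(values), 2) for dim, values in pooled.items()}

# Two hypothetical respondents; None marks an unanswered item.
print(subdimension_scores([
    {"q1_overall": 4, "q2_respect": 5, "q3_info": 3, "q4_follow_up": None},
    {"q1_overall": 5, "q2_respect": 4, "q3_info": 4, "q4_follow_up": 4},
]))
```

Whatever the instrument, computing and reporting scores per sub-dimension in this standardized way is what allows a hospital to monitor its own evolution over time.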

Staff orientation and staff safety

The following issues around indicators were raised:

- Operational definition of training days: how are “training days” to be defined? How would in-service training be included? Are they measures of structure or of process? Would training budget as a percentage of staff budget be a better measure? Both training days and training budget as a percentage of staff budget were retained, being considered complementary tools, and will be tested through the pilot implementation.

- Staff surveys should be a priority for further development of indicators. However, no indicator based on staff survey results is included in the core set: even though standardised staff survey tools exist, they are not widely applicable in the participating countries and would have to be extensively adapted to local situations. Moreover, many countries do not have a culture of surveying either patients or staff and would be slow to adopt such a recommendation.

Table 5: Final list of indicators for staff orientation and staff safety

Economic factors
- Wages paid on time; salary and benefits; variation in workforce

Practice environment
- Tailored: results of staff survey on job content
- Interpretative information: HR survey on strategies to match staffing to needs

Perspective and recognition of individual needs
- Core: number of training hours over total number of working hours; training budget over total budget dedicated to staff

Health promotion and safety initiatives
- Core: budget dedicated to staff health promotion activities over total number of full-time equivalent staff
- Tailored: percent of job descriptions with risk assessment

Staff experience
- Tailored: result of staff survey on organizational climate

Behavioural responses
- Core: number of days of short-term absenteeism (1 to 3 days) over total number of days contracted (stratified by department and profession); number of days of long-term absenteeism (more than 42 days) over total number of days contracted (stratified by department and profession)
- Tailored: turnover rate

Staff safety
- Core: number of work-related injuries (stratified by type) over total number of staff
- Tailored: number of assaults on staff

Safety processes
- Staff excessive working hours

Responsive governance and environmental safety

Waiting time must be analysed by urgency and interpreted to discriminate between the efficiency of waiting-list management and the stewardship of health system resources. Medical conditions should be added to the surgical procedures covered by waiting time indicators.


Table 6: Final list of indicators for responsive governance and environmental safety

System integration and continuity
- Core: average score on items on perceived continuity in patient surveys; percent of discharge letters sent to GPs within 2 weeks
- Tailored: result of audit of discharge preparation; result of Appropriateness Evaluation Protocol for geriatric patients
- Interpretative information: description of roles and functions implemented to foster integration of care

Public health orientation: access
- Core: waiting time for selected tracers (median and variance)
- Tailored: score on items on perceived financial access in patient surveys
- Interpretative information: description of strategies implemented for the management of waiting lists

Public health orientation: health promotion
- Core: percent of women breastfeeding at discharge
- Tailored: percent of AMI and CHF patients with lifestyle counselling documented in record (audit)
- Interpretative information: self-assessment against WHO Baby Friendly standards

Equity and ethics
- Core: ?
- Tailored: ?

Environmental concerns
- Core: ?
- Tailored: ?

Summary definitions of the core set of indicators

The final set of indicators, with definitions, numerators and denominators, is presented in Table 7.

Table 7: Summary definitions of the core set of indicators

1. Clinical effectiveness

Primary caesarean section delivery
Numerator: number of cases within the denominator with caesarean section
Denominator: number of cases with first-time deliveries
Alternative definition: caesarean section delivery rate in primigravidae
Numerator: total number of caesarean section delivery cases
Denominator: total number of deliveries

Processes of care: Appropriateness of prophylactic antibiotic use for selected tracer procedures
Numerator: number of patients who receive prophylactic antibiotics in adherence to accepted guidelines for selected procedures
Denominator: total number of patients for selected procedures in the random sample of medical records audited

Outcomes of processes of care: Readmission for selected tracer conditions / procedures within the same hospital
Numerator: total number of patients readmitted to the emergency department of the same hospital within a fixed follow-up period relevant to the initial condition / procedure and with a readmission diagnosis relevant to the initial care
Denominator: total number of patients admitted for selected tracer conditions (e.g. asthma, diabetes, pneumonia, CABG)
Exclusion criteria: patients admitted for the same tracer condition who died during the first spell

Admission after one-day surgery
Numerator: number of patients transferred from the day procedure facility following an operation / procedure (by selected procedure, e.g. cardiac catheterization; digestive, respiratory or urinary system diagnostic endoscopy; laparoscopic cholecystectomy; one-day cataract surgery; curettage and dilatation of uterus) to an overnight facility, directly or within 12 hours
Denominator: total number of patients who have an operation / procedure performed in the day procedure facility
Exclusion criteria: readmission for a further planned operation is excluded from both numerator and denominator


Return to ICU
Numerator: total number of patients with selected conditions / procedures discharged from the intensive care unit who return to the ICU within 48 hours
Denominator: total number of patients with selected conditions / procedures discharged alive from the ICU

2. Efficiency

Ambulatory surgery use
Numerator: number of laparoscopic cholecystectomies, one-day cataract surgeries, curettage and dilatation of the uterus, and oncology procedures performed in the day procedure facility (no overnight stay expected) over a period
Denominator: total number of procedures over the same period

Appropriateness: Admissions on day of surgery
Numerator: total number of admissions on day of surgery
Denominator: total number of patients admitted for surgery

Input related to output: Median (or average) length of stay for specific procedures and conditions (hip replacement, CABG, diabetes, asthma, appendectomy)
Numerator: total number of days for the specific procedures and conditions
Denominator: total number of patients admitted for hip replacement, CABG, diabetes, asthma or appendectomy
Exclusion criteria: transfers to / from other hospitals

Inventory in stock
Numerator: total value of inventory at the end of the year for pharmaceuticals, blood products and surgical disposable equipment
Denominator: total expenditure for pharmaceuticals, blood products and surgical disposable equipment / 365 days

Use of capacity: Operating room unused sessions
Numerator: number of sessions used
Denominator: number of sessions staffed
Exclusion: night surgical sessions (8 PM – 8 AM?)

3. Staff orientation (or staff responsiveness)

Practice environment

Perspectives and recognition of individual needs: Staff training

Training days
Numerator: total number of training hours
Denominator: total number of working hours
Stratification proposed: by professional category

Training budget
Numerator: total amount of budget dedicated to staff training
Denominator: total amount of budget dedicated to staff

Health promotion activities: Health promotion budget
Numerator: total amount of budget dedicated to staff health promotion activities
Denominator: total number of full-time equivalent (FTE) staff


Consequences: Absenteeism

Short-term absenteeism
Numerator: total number of short-term absenteeism days (1 to 3 days)
Denominator: total number of working days
Stratification proposed: to be considered at hospital level but also stratified by department and professional category

Long-term absenteeism
Numerator: total number of long-term absenteeism days (> 42 days) over a period
Denominator: total number of working days over the same period
Stratification proposed: to be considered at hospital level but also stratified by department and professional category

4. Responsive governance

System / community integration

Perceived continuity through patient survey (see patient centeredness)

Discharge letters to general practitioners
Numerator: number of discharge letters sent to general practitioners within a maximum period of two weeks
Denominator: total number of discharges

Public health orientation

Waiting time for selected procedures and conditions
Variance of waiting time for specific surgical procedures and conditions: total hip replacement, hallux valgus, varicose vein surgery, breast cancer surgery, cataract surgery, cardiac surgery (differentiated by degree of emergency)

Breastfeeding at discharge
Numerator: total number of women breastfeeding at discharge
Denominator: total number of deliveries
Criteria for inclusion: singleton, born at or after 37 weeks, weight > 2,500 grams

Environmental safety

5. Patient centeredness

Client orientation: Score on patient experience/satisfaction questionnaire, including items on:
- overall perception / satisfaction
- interpersonal aspects
- information and empowerment
- continuity

Respect for patients: Cancelled one-day surgical procedures
Numerator: number of patients booked for a one-day surgical procedure that is cancelled on the day of the procedure or after admission
Denominator: total number of patients booked for a one-day surgical procedure
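As an illustration of how one core indicator could be computed from routine discharge data, the sketch below implements a simplified 28-day same-hospital readmission rate. The record schema is hypothetical, and the readmission test is simplified relative to the full definition above (it counts any readmission for the tracer condition, without the emergency-department and diagnosis-relevance criteria).

```python
from datetime import date, timedelta

# Hypothetical discharge abstracts; field names are illustrative, not a PATH schema.
discharges = [
    {"patient": "A", "condition": "asthma", "admit": date(2003, 3, 1),
     "discharge": date(2003, 3, 4), "died": False},
    {"patient": "A", "condition": "asthma", "admit": date(2003, 3, 20),
     "discharge": date(2003, 3, 22), "died": False},
    {"patient": "B", "condition": "asthma", "admit": date(2003, 5, 2),
     "discharge": date(2003, 5, 6), "died": False},
]

def readmission_rate(records, condition, window=timedelta(days=28)):
    """Share of index admissions for a tracer condition followed by a readmission
    to the same hospital within the follow-up window. Patients who died during
    the first spell are excluded, as in the core indicator definition."""
    spells_by_patient = {}
    for record in sorted(records, key=lambda r: r["admit"]):
        if record["condition"] == condition:
            spells_by_patient.setdefault(record["patient"], []).append(record)
    index_cases = readmitted = 0
    for spells in spells_by_patient.values():
        first = spells[0]
        if first["died"]:
            continue  # exclusion criterion: death during the first spell
        index_cases += 1
        if any(s["admit"] - first["discharge"] <= window for s in spells[1:]):
            readmitted += 1
    return readmitted / index_cases if index_cases else 0.0

print(f"28-day asthma readmission rate: {readmission_rate(discharges, 'asthma'):.0%}")
```

Even this toy version shows why the report insists on standardized operational definitions: the follow-up window, the exclusion rules and the linkage of spells to patients all change the resulting rate.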

3.3. Feedback of results to participating hospitals

The main message to convey is that indicators should not be interpreted in isolation, because:
- the six dimensions are interrelated;
- each dimension has its own conceptual model and sub-dimensions; and
- each indicator relates to other indicators within its dimension or in other dimensions.

Trade-offs between indicators need to be highlighted and taken into consideration when reporting. Reporting is a crucial step towards the interpretation of indicator results as part of a process of quality improvement.

The discussions on the reporting tool focused on its function as a retrospective, strategic summary (“scorecard”) or as a real-time operational system (“dashboard”). The second approach was chosen for this project. Moreover, the term “scorecard” implies a score, which is heavily value-loaded and implies a judgement, whereas the opposite message should be conveyed: indicators cannot be used as definitive judgements on a hospital’s quality; they should be used as flags and as a starting point in a quality improvement process.

The purpose of the balanced dashboard is to provide information to guide decision-making and quality improvement. The reporting scheme will therefore relate results to external references as well as internal comparisons over time, and give guidance on interpretation. A structure for a balanced dashboard to report results to participating hospitals was proposed. Indicators are organized in “embedded levels”: the first page gives a synthetic overview across all dimensions; the following pages focus on specific dimensions; and the dashboard ends with a detailed description of each individual indicator, with comparisons against different references, its evolution over past assessments, and identification of relevant variables and other indicators that it may influence or be influenced by (a rough data-structure sketch of these embedded levels follows at the end of this section). The specifications of the dashboard will initially be defined during the field implementation of the project in a limited number of countries. Constant feedback from the field will be incorporated to ensure that the tool is genuinely valuable and usable by future participating hospitals. The design of reports should comply with the structure of accountability and authority within the institution.

Implementation: the Danish experience

Two current projects (one national, one in Copenhagen hospitals) provide practical experience of the development and use of clinical indicators relying largely on routine clinical data held in disease-specific registers. These produce monthly and quarterly reports focused by clinical specialty, with comparative data and thresholds defined by peer-group providers. Evaluation showed that clinicians need to learn skills in the use and interpretation of data, as well as inducement to supervise the quality of clinical data abstraction and coding.
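The embedded-levels structure could be represented along the lines of the sketch below; the class and field names are illustrative assumptions, not the PATH dashboard specification.

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorResult:
    """Deepest level: one indicator with references, history and related measures."""
    name: str
    value: float
    external_reference: float                     # e.g. a peer-group median
    history: list = field(default_factory=list)   # past assessments, oldest first
    related: list = field(default_factory=list)   # indicators it may influence or be influenced by

    def flag(self) -> str:
        # Indicators are flags, not judgements: signal deviation from the reference
        # (direction assumed here to be "lower is better" for simplicity).
        return "review" if self.value > self.external_reference else "ok"

@dataclass
class DimensionPage:
    """Middle level: one page per dimension."""
    dimension: str
    indicators: list

@dataclass
class Dashboard:
    """First page: a synthetic overview across all dimensions, then detail pages."""
    overview: dict
    pages: list

los = IndicatorResult("median LOS, hip replacement (days)", 11.0, 9.5,
                      history=[12.0, 11.5], related=["clinical effectiveness: readmission"])
dashboard = Dashboard(overview={"efficiency": los.flag()},
                      pages=[DimensionPage("efficiency", [los])])
print(dashboard.overview, los.flag(), los.history)
```

The point of the layering is navigational: a manager starts from the overview, drills into a dimension, and only then reaches the individual indicator with its references and trend.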


4. Conclusions

4.1. Summary of products

A number of products were developed in the frame of this project:
- identification of WHO strategic orientations related to hospital performance;
- an emerging conceptual model of performance standards and measures, and identification of the key dimensions of hospital performance (Product 1);
- a framework for evidence-based indicator selection (Product 2);
- a growing catalogue of performance standards and indicators, with a review of the literature on their importance and usefulness, reliability and validity, contextual factors, current uses, etc. (Product 3);
- the definition of a core set of indicators that represent the different dimensions and sub-dimensions of performance in a balanced way (Product 3);
- the identification of the relationships between indicators within and between dimensions, and of the exogenous factors that affect them (Product 4);
- an insight into the importance, usefulness, impact on quality and general availability of potential indicators, through a survey in 11 European countries (May 2003);
- a framework for the design of a reporting instrument called the “balanced dashboard” (Product 5).

After the final selection of indicators was agreed, the following orientations were adopted:

- Indicators are not definitive measures but flags signalling potential problems; they call for a deeper analysis of variations and an understanding of the factors that influence them. Variations may reflect differences in the quality of care, in the quality of data, or in the context and exogenous variables.
- Evidence is often treated as an absolute value, but the discussions made clear that evidence may change over time, may depend on the context, and may not be of global value.
- Specific training is needed in skills for handling complex tools, while simple tools can be handled without advanced expertise. Some target countries do not have a tradition of using complex tools in quality management of health care, so the hospital performance model must be tailored to them.
- The aim of this project is to develop a model giving maximal value for quality improvement at hospital level, not primarily for international benchmarking.

A tool for performance measurement may deviate from its purpose; it must therefore be carefully assessed whether the tool developed has any unwanted incentives built in. This must be considered carefully in the pilot implementation period.

4.2. Recommendations on data-related issues

4.2.1. Collection and quality control of hospital data

Collection and quality control of hospital data (i.e. clinical data capture, coding and validation) should be the responsibility of each hospital. This could be monitored by meta-indicators, such as the average number of ICD codes per discharge for each hospital, or by demonstrating compliance with agreed internal processes and checks (such as “data accreditation”). If hard measures are not available for quality control of hospital data, a self-assessment of data quality will be performed by participating hospitals. Given its resources and overall objective, WHO’s role in concrete implementation support for collecting, validating, aggregating and presenting indicators might be limited; the role of WHO is to support its Member States in developing national capacities for assessing hospital performance. The indicators could be used (piloted initially) by individual hospitals and in aggregated databases at national level. This will support the simultaneous validation of the model, the indicators and the implementation.
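A sketch of the coding-depth meta-indicator mentioned above, computing the average number of ICD codes per discharge by hospital; the record layout, hospital identifiers and codes are hypothetical.

```python
from statistics import mean

# Hypothetical coded discharge abstracts; "codes" lists the ICD codes per discharge.
abstracts = [
    {"hospital": "H1", "codes": ["I21.0", "E11.9", "I10"]},
    {"hospital": "H1", "codes": ["J45.9"]},
    {"hospital": "H2", "codes": ["S72.0", "I10", "N39.0", "E87.6"]},
]

def mean_codes_per_discharge(records):
    """Coding-depth meta-indicator: a low average can flag under-recording of
    secondary diagnoses rather than genuinely less complex patients."""
    counts_by_hospital = {}
    for record in records:
        counts_by_hospital.setdefault(record["hospital"], []).append(len(record["codes"]))
    return {hospital: round(mean(counts), 2) for hospital, counts in counts_by_hospital.items()}

print(mean_codes_per_discharge(abstracts))  # {'H1': 2.0, 'H2': 4.0}
```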

4.2.2. Data aggregation

Data aggregation would ideally be done by an independent agency at national or international level. To maintain objectivity, this agency would audit data, standardize aggregation, make adjustments, and calculate distributions, norms and significance. The agency's needs for time and resources should be realistically projected and funded. Even if hospital performance networks are configured at national level, they should be linked at international level; WHO could support this linkage either directly or through other institutions or NGOs such as the International Hospital Federation. In addition to data aggregation, there will be a continuing need to revise guidance on the application and interpretation of the indicators. The demonstration of “success stories” will depend heavily on internal assessment and external benchmarking, which will have to be adjusted for risk and case mix and stratified to promote genuine peer comparison. In the short term, reference comparisons will be mostly internal, until results are available for aggregation and pooling.
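Since external benchmarking must be adjusted for risk and case mix, one common approach (offered here as an illustration, not as the method prescribed by the project) is indirect standardization: compare a hospital's observed events with the number expected given peer-group rates in each case-mix stratum. The strata and rates below are invented.

```python
# Invented stratum-specific peer-group rates (e.g. by severity band).
peer_rates = {"low": 0.02, "medium": 0.08, "high": 0.20}

# One hospital's volume per stratum and its total observed events.
volumes = {"low": 400, "medium": 250, "high": 50}
observed = 46

# Expected events if this hospital performed at peer-group rates.
expected = sum(volumes[s] * peer_rates[s] for s in volumes)

ratio = observed / expected  # O/E ratio; about 1.0 means "as expected"
print(f"Expected {expected:.1f} events, observed {observed}, O/E = {ratio:.2f}")
```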

4.3. Recommendations on roles and responsibilities for piloting the framework for hospital performance assessment

4.3.1. Beneficiaries

The main beneficiaries are the hospitals themselves. The first contact should be the chief executive or governing body of an institution, although it is important to ensure that clinicians are involved, at least enough to be committed to contributing to the accuracy of clinical data. This assumes that managers actually have the authority to manage and to improve performance.


For this reason, pilot sites need to be selected on the basis of shared objectives and timescales, and managers need to be able to take executive decisions in response to the issues flagged by the indicators. Governments could also support the project by funding the initial testing and validation of data and the development of norms and benchmarks. Even where government funding is not provided, hospitals may find it difficult to withhold the resulting data from their regulatory bodies; it is therefore important to make clear the limitations of the indicators when they are applied for purposes other than internal management. Hospital-specific results are not intended for public reporting. However, hospitals and national bodies could design a communication plan aimed at the public, describing the nature of the project and the subsequent quality improvement actions. Since the initiative fosters quality improvement, the public will benefit indirectly from the project. An agreement on explicit conditions (covering data quality, information management, use of indicators, etc.) should be reached with the WHO European Hospital Performance project, and mechanisms for monitoring and reporting back to WHO should be established.

4.3.2. Project management/leadership

Hospitals need to define terms of reference identifying the general requirements for skills and experience, including leadership and credible academic links. The leadership role may fall to a senior manager or clinician, provided he or she has the required technical skills. The project manager needs to be a charismatic leader and should be supported by a technical team. A national-level committee or group may be valuable at governmental level (especially if the project is centrally sponsored), or within an NGO such as a hospital or medical association where these exist (especially if private and public hospitals are to be involved).

4.3.3. Selection of indicators at local/national level

There should be no minimum number of indicators that individual hospitals must collect in order to participate in the WHO programme, but sites should be strongly encouraged to adopt as many indicators as possible and to avoid misuse of the data. There would be no upper limit to the number of indicators in the “tailored” basket, to which any existing local indicators could be added. In practice, hospitals are likely to adopt the indicators that can be derived at minimal cost and effort from their existing data, but they should be strongly advised to include some core measures – preferably all – in each dimension and to follow the strategy of the project, which is to assess interrelated indicators in the context of the hospital's overall performance. Each country should certainly be encouraged to identify some hospitals that will collect data for all of the core measures.


4.3.6. Training implications

Most potential users would not have the necessary skills to make effective use of the proposed indicators. Arrangements must be made for initial and continuing training for pilot sites, and for cascading this training at national level. WHO will prepare a user manual (in progress), but local induction will be essential for the principal users. An initial two-day workshop for hospital project leaders in each country would facilitate understanding and implementation. Further training of technical data staff and coding staff in technical procedures and in meeting data accreditation standards will also be needed; this could be integrated with formal management and clinical training, but can also be achieved informally through user networks.

4.4. General recommendations for implementation of the framework for hospital performance assessment

The following points were agreed on:

- It is important to identify the project leader (senior manager, clinician or other) in order to avoid political barriers and to seek local support from the authorities

- The validity of the model and indicators (the “product”) should be evaluated over a defined period of years, including uptake and impact on hospital performance (improvement as well as perverse behaviour)

- The pilot of the overall model should be referred to as “implementation” rather than “pilot testing”; this avoids the connotation of scientific validation and instead emphasizes the practical use and refinement of the model

- The implementation should be designed according to the national context (i.e. a country-by-country approach)

- Formal relationships between WHO and potential partners (e.g. NGOs such as the IHF) should be considered, with a view to hosting in the future a European database allowing benchmarking between hospitals in different European countries

- Expanded guidance should be developed on the collection and interpretation of individual indicators

- The resources (time, data, training, money) that hospitals need to implement the package should be identified

4.5. The steps forward

Hospital performance assessment is ongoing work, and indicators will need to be reviewed regularly in the light of new evidence. Remaining gaps that reduce the content validity of the core set were identified; in particular, indicators of comprehensiveness and internal continuity are lacking. When such indicators become available, they will need to be included in the set. In summary, the group has developed a prototype but has not yet tested it or defined “after-sales” service or longer-term development. Remaining actions will include:


- develop a user manual,
- define data quality standards and capacity needs for hospital data systems,
- define selection criteria for hospitals participating in the pilot implementation,
- define a strategy for cascade training of hospital project leaders,
- develop and maintain participant networks,
- identify agencies or mechanisms for data validation and aggregation,
- design the scope and schedule of a programme for the development and revision of indicators.

Within these actions, the next steps agreed upon are the following:

- Refinement of the indicator model and of the specifications of individual measures, using, where available, existing published guidance on numerators, denominators, sample frames, coding criteria and interpretation notes (January 2004 workshop); a sketch of such a specification follows this list.

- Development of an introductory manual describing the project (a work in progress), the conceptual model, some well-tested and agreed indicators, and examples of others that need further development.

- Further presentation of each indicator selected for the core or tailored basket, including prefatory guidance on each indicator identifying the values and related standards.

- Development of educational support as a primary product for pilot sites and then other users.

- Pilot implementation in six countries, in a limited number of hospitals, to evaluate the usefulness of the framework for hospitals and to identify the resources (time, data, training, money) that hospitals need to implement the package; the expected outcomes of the pilot implementation should be defined in advance.

- A three-day workshop for country representatives (January 2004 workshop).

- As participants and experience grow, a mechanism should be developed to evaluate the indicators (technical) and the impact of the project (behavioural); this could be channelled through periodic meetings of user hospitals to pool and exchange data and indicator results.
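As referenced in the first step above, an indicator specification bundles a numerator, denominator, sample frame, coding criteria and interpretation notes. The sketch below shows one possible structure for such a specification; the field names and example values are assumptions made for illustration, not the schema of the project's manual.

```python
from dataclasses import dataclass

@dataclass
class IndicatorSpec:
    """Illustrative structure for an indicator specification;
    the fields are assumptions, not the WHO manual's schema."""
    name: str
    dimension: str            # e.g. one of the six key dimensions
    numerator: str
    denominator: str
    sample_frame: str
    coding_criteria: str
    interpretation_notes: str = ""

example = IndicatorSpec(
    name="Unplanned readmission within 30 days",
    dimension="Clinical effectiveness",
    numerator="Discharges followed by an unplanned readmission within 30 days",
    denominator="All eligible discharges in the period",
    sample_frame="Acute inpatient discharges, excluding transfers",
    coding_criteria="ICD-coded index admission; same-hospital readmission",
    interpretation_notes="A flag only; review data quality and case mix first.",
)
print(example.name, "-", example.dimension)
```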

A number of WHO Member States showed interest in hospital performance management during 2003 and are potential users of the indicator set; how to maximize the use of this model for country-specific work was consequently addressed.


Annex 1

SCOPE AND PURPOSE

June 2003

The WHO Office for Integrated Health Care Services in Barcelona, Division of Country Support, is organizing a third workshop on hospital performance measurement on 13–14 June 2003. Following on from the second meeting, held in March 2003, this workshop is part of a WHO initiative to develop a Hospital Quality Improvement Strategy to support Member States in implementing hospital performance assessment strategies and using key indicators. The project has three main objectives:

1. Collect evidence on the use of hospital performance assessment models to support countries in their implementation.

2. Support Member States in producing benchmarking tools that allow hospitals to compare themselves with peer groups in order to improve the quality of care provided.

3. Build an experts’ network on hospital performance assessment to support country implementation and analyze outcomes.

The work will be done in three stages: analysis of existing models worldwide and definition of a model, congruent with WHO’s policy orientations, which could be used throughout Europe; piloting of the agreed model, validated by groups of experts, in a range of different countries (between six and nine countries); and development of guidelines to facilitate country implementation.

The conclusions of the first workshop were: the proposal of generic definitions adapted to the context of this project; definitions of key dimensions of hospital performance, promoting a comprehensive model of hospital performance measurement; and recommendations regarding the design of a benchmarking network allowing participants to compare their own performance with that of peer hospitals through relevant performance indicators. The group of experts agreed on six key dimensions for assessing hospital performance:

• Clinical effectiveness
• Safety
• Patient centeredness
• Production efficiency
• Staff orientation
• Responsive governance

During the second workshop, the expansion of the key dimensions of hospital performance, the design and testing of a framework for selecting evidence-based performance indicators, the review and first assessment of performance indicators, and the pilot test of the future set of indicators were discussed.


The following conclusions were reached: progress was made in the definition of the main concepts of hospital performance, and agreement was reached on the sub-dimensions of hospital performance, on a framework for selecting evidence-based indicators, and on the orientations of the pilot test. The purpose of the third workshop will be to:

i. discuss the results of the questionnaire on indicators conducted in 25 European countries in May 2003;
ii. select the set of indicators and performance tools which will be pilot tested in a range of European countries (October 2003 – April 2004);
iii. discuss the overall model of hospital performance, and agree on all definitions, dimensions and sub-dimensions and on the general architecture of the model; and
iv. discuss the orientations and agree on the principles of the pilot test.

September 2003

The WHO Office for Integrated Health Care Services in Barcelona, Division of Country Support, is organizing a fourth and final workshop on hospital performance measurement on 12–13 September 2003. Following on from the third meeting, held in June 2003, this workshop is part of a WHO initiative to develop a Hospital Quality Improvement Strategy to support Member States in implementing hospital performance assessment strategies and using key indicators. The project has three main objectives:

1. Collect evidence on the use of hospital performance assessment models to support countries in their implementation

2. Support Member States in producing benchmarking tools that allow hospitals to compare themselves with peer groups in order to improve the quality of care provided

3. Build an experts’ network on hospital performance assessment to support country implementation and analyse outcomes

The work will be done in three stages: analysis of existing models worldwide and definition of a model, congruent with WHO’s policy orientations, which could be used throughout Europe; piloting of the agreed model, validated by groups of experts, in a range of different countries (between six and nine countries); and development of guidelines to facilitate country implementation. The following outcomes were achieved between January 2003 and June 2003:

- Identification of the key dimensions of hospital performance assessment
- Identification of WHO strategic orientations related to the project
- Clarification of the key dimensions and definition of the general architecture of the model
- Review of the literature on hospital performance indicators and definition of a framework to pre-select evidence-based indicators
- Survey of hospital managers in different European countries on the importance, usefulness, impact on quality and general availability of potential indicators (May 2003, 12 European countries)
- Pre-selection of performance indicators on a scientific basis
- Selection of the sets of indicators and completion of the first draft of the model


The overall purpose of the fourth workshop is to finalize the design of a balanced scorecard model which could be pilot tested in six European countries. The three major objectives of the workshop will be to:

1. Finalize the core set of performance indicators included in the balanced scorecard and characterize the trade-offs between the selected indicators
2. Design a dashboard (balanced scorecard model) enhancing evidence-based management and address the related challenges
3. Consequently, define relevant strategies for the preparation of the pilot test

The expected detailed outcomes of the workshop are:

For the first objective
1.1. To reach agreement on the final core set of indicators to be included in the balanced scorecard
1.2. To agree on the trade-offs between the hospital performance indicators selected in the core set

For the second objective
2.1. To reach agreement on the design of a dashboard enhancing evidence-based management
2.2. To agree on the main challenges and strategies for facilitating the appropriation of the results

For the third objective
3.1. To agree on the strategies for preparing the pilot test in six European countries (implementation test of the balanced scorecard in Albania, Denmark, France, Germany, Georgia and Lithuania)


Annex 2

PROVISIONAL PROGRAMME

Friday, 13 June 2003

09.00 – 09.10 Introduction: Jeremy Veillard
09.10 – 09.20 Outline of the project: Jeremy Veillard
09.20 – 09.30 Discussion
09.30 – 09.50 Theoretical frame of hospital performance assessment: Niek Klazinga
09.50 – 10.30 Discussion. Chair: Vahé Kazandjian
10.30 – 11.00 COFFEE BREAK
11.00 – 11.30 Presentation of the pre-selected list of indicators: François Champagne and Ann-Lise Guisset
11.30 – 12.00 Discussion. Chair: Brian Collopy
12.00 – 12.15 Presentation of the results of the European questionnaire on indicators
12.15 – 12.45 Discussion. Chair: Brian Collopy
12.45 – 14.00 LUNCH BREAK
14.00 – 16.00 Sub-working groups – selection of indicators
16.00 – 16.30 COFFEE BREAK
16.30 – 18.00 Sub-working groups – selection of indicators (continued)
18.00 Closure of the first day

Saturday, 14 June 2003

09.00 – 10.30 Sub-working groups – selection of indicators (continued)
10.30 – 11.00 COFFEE BREAK
11.00 – 13.00 Plenary session – presentation of the indicators selected by the working groups. Chair: Anne Rooney
13.00 – 14.00 LUNCH BREAK
14.00 – 14.30 Synthesis and identification of major issues: François Champagne and Ann-Lise Guisset
14.30 – 15.30 Discussion. Chair: Niek Klazinga
15.30 – 16.00 COFFEE BREAK
16.00 – 16.15 First orientations for pilot testing the set of indicators: Jeremy Veillard
16.15 – 16.45 Discussion. Chair: Pierre Lombrail
16.45 – 16.55 Wrap-up of the meeting: Charles Shaw
16.55 – 17.00 Conclusions: Jeremy Veillard
17.00 Closure of the meeting

Friday, 12 September 2003


09.00 – 09.15 Opening and introduction: Jeremy Veillard

Selection and interrelations of indicators

09.15 – 10.00 Selection of the core set of indicators: principles, choices and main challenges: Ann-Lise Guisset and François Champagne
10.00 – 10.30 Discussion. Chair: Vahé Kazandjian
10.30 – 11.00 COFFEE BREAK
11.00 – 13.00 Discussion (continued). Chair: Vahé Kazandjian
13.00 – 14.00 LUNCH BREAK
14.00 – 14.30 Interrelations and trade-offs between performance indicators selected in the core set: Ann-Lise Guisset and François Champagne
14.30 – 15.30 Discussion. Chair: Niek Klazinga
15.30 – 16.00 COFFEE BREAK

Appropriation of the results and related challenges

16.00 – 16.45 Identification of main challenges: from indicators to interpretation to action: Ann-Lise Guisset and François Champagne
16.45 – 17.45 Discussion. Chair: Adalsteinn Brown
17.45 – 18.00 Wrap-up: Svend Juul Jørgensen
18.00 Closure of the first day: Jeremy Veillard

Saturday, 13 September 2003

09.00 – 09.30 Educational aspects of assessing hospital performance, part 1: presentation of the dashboard
09.30 – 10.30 Discussion. Chair: Adalsteinn Brown
10.30 – 11.00 COFFEE BREAK
11.00 – 11.30 Educational aspects of assessing hospital performance, part 2: tools for facilitating the use of the dashboard and preliminary steps for pilot testing the model: Ann-Lise Guisset and François Champagne
11.30 – 13.00 Discussion. Chair: Johan Kjaergaard
13.00 – 13.15 Wrap-up of the workshop: Charles Shaw
13.15 – 13.30 Conclusions and future steps: Jeremy Veillard
13.30 Closure of the workshop


Annex 3

LIST OF PARTICIPANTS

Temporary Advisers

Mr Onye Arah Health Services & Systems Research. Room K2-203, Department of Social Medicine Division of Clinical Methods and Public Health Academic Medical Center P.O. Box 22700 1100 DE Amsterdam NETHERLANDS

Telephone: +31 20 566 5049 Fax: +31206972316 E-mail: [email protected]

Dr Adalsteinn D. Brown Department of Health Policy, Management and Evaluation University of Toronto 150 College Street, Fitzgerald Bldg., Room 147A M5S 1A8 Toronto, ON CANADA

Telephone: +1 416 946 5023 Fax: +1 416 978 1466 E-mail: [email protected]

Dr François Champagne Professeur titulaire GRIS et Département d'administration de la santé Université de Montreal B.P. 6128, succursale Centre-ville H3C 3J7 Montreal, Quebec CANADA

Telephone: +15143432226 Fax: +15143432207 E-mail: [email protected]

Dr Brian T. Collopy Director CQM Consultants Level 4. 55 Victoria Pde Fitzroy, Victoria 3065 AUSTRALIA

Telephone: +61 3 9419 3377 Fax: +61 3 9416 1192 E-mail: [email protected]

Mr Thomas Custers Department of Social Medicine Academic Medical Center P.O. Box 22700 1100 DE Amsterdam NETHERLANDS

Telephone: +31205664786 Fax: +31206972316 E-mail: [email protected]

Ms Pilar Gavilán Responsible for the Projects Unit Directorate on Organization, Information Systems, Projects and Evaluation (DOSIPA) Catalan Institute for Health Gran via de les Corts Catalanes, 587 08007 Barcelona SPAIN

Telephone: +34 93 482 43 33 Fax: +34 93 482 45 27 E-mail: [email protected]


Dr Alicia Granados Navarrete c/ Alfons XII, 23-27, 3r 3a 08006 Barcelona SPAIN

Telephone: +34 93 200 22 53 Fax: E-mail: [email protected]

Dr Ann-Lise Guisset GRIS et Département d'administration de la santé Université de Montreal B.P. 6128, succursale Centre-ville H3C 3J7 Montreal, Quebec CANADA

Telephone: Fax: E-mail: [email protected]

Dr Svend Juul Jørgensen WHO Consultant WHO Office for Integrated Health Care Services Marc Aureli, 22-36 08006 Barcelona SPAIN

Telephone: +34 93 241 82 70 Fax: +34 93 241 82 71 E-mail: [email protected]

Mr Vytenis Kalibatas Deputy Managing Director Kaunas Medical University Hospital Eiveniu str. 2 LT-3007 Kaunas LITHUANIA

Telephone: +370 37 32 63 23 Fax: +370 37 32 66 01 E-mail: [email protected]

Dr Vahé Kazandjian President Center for Performance Sciences (CPS) 6820 Deerpath Road Elkridge, MD 21075-6234 UNITED STATES OF AMERICA

Telephone: +1 410 379 9540 Fax: +1 410 379 9558 E-mail: [email protected]

Dr Johan Kjaergaard Head of Unit for Clinical Quality Copenhagen Hospital Corporation Bispebjerg Bakke 20C 2400 København NV DENMARK

Telephone: +45 3531 2852 Fax: +45 3531 6317 E-mail: [email protected]

Dr Niek Klazinga Department of Social Medicine Academic Medical Center P.O.Box 22700 (Meibergdreef, 9) 1100 DD Amsterdam NETHERLANDS

Telephone: +31 20 5664892 Fax: +31 20 6972316 E-mail: [email protected]


Dr Pierre Lombrail Director Pôle Information Médicale, d'Evaluation & de Santé Publique (PIMESP) Centre Hospitalier Universitaire de Nantes. Hôpital Saint Jacques. 85, rue Saint Jacques 44 093 Nantes cedex 1 FRANCE

Telephone: +33 2 40 84 69 20 Fax: +33 2 40 84 69 21 E-mail: [email protected]

Ms Ehadu Mersini Chief of the Planning and Medical Programs Sector Hospitals Department Ministry of Health Tirana ALBANIA

Telephone: +355 4 364614 Fax: +355 4 364270 E-mail: [email protected]

Dr Etienne Minvielle INSERM 101 rue de Tolbiac 75654 Paris Cedex 13 FRANCE

Telephone: +33144236000 Fax: +33145856856 E-mail: [email protected]

Ms Anne L. Rooney Executive Director International Services Joint Commission Resources, Inc. One Lincoln Centre, Suite 1340 Oakbrook Terrace, IL 60181 UNITED STATES OF AMERICA

Telephone: +1-630-268-7445 Fax: +1-630-268-7405 E-mail: [email protected]

Dr Henner Schellschmidt Wissenschaftliches Institut der AOK Kortrijker Str. 1 53177 Bonn GERMANY

Telephone: +49 228 843 135 Fax: +49 228 843 144 E-mail: [email protected]

Dr Rosa Suñol Foundation Avedis Donabedian Provença 293 pral 08037 Barcelona SPAIN

Telephone: +34 93 207 66 08 Fax: +34 93 459 38 64 E-mail: [email protected]

Rapporteur


Dr Charles Shaw Director, Audit and Quality CASPE Research 11-13 Cavendish Square London W1G 0AN UNITED KINGDOM

Telephone: +44 20 7307 2879 Fax: +44 20 7307 2422 E-mail: [email protected]

Observer

Professor Mohammed Hoosen Cassimjee Head Department of Family Medicine Pietermaritzburg Metropolitan Hospital Complex and Midlands Region Northdale Hospital. Old Greytown Road. Private Bag X9006 Pietermaritzburg 3200 SOUTH AFRICA

Telephone: +27 33 3879000 Fax: +27 33 3979768 E-mail: [email protected]

World Health Organization Regional Office for Europe

Dr Manuela Brusoni Intern WHO Office for Integrated Health Care Services Division of Country Support c/ Marc Aureli, 22-36 08006 Barcelona SPAIN

Telephone: +34932418270 Fax: +34932418271 E-mail: [email protected]; [email protected]

Dr Mila Garcia-Barbero Head of the Office WHO Office for Integrated Health Care Services Division of Country Support c/ Marc Aureli, 22-36 08006 Barcelona SPAIN

Telephone: +34932418270 Fax: +34932418271 E-mail: [email protected]

Mr Oliver Gröne Technical Officer, Health Services WHO Office for Integrated Health Care Services Division of Country Support

Telephone: +34932418270 Fax: +34932418271 E-mail: [email protected]

Dr Isuf Kalo Regional Adviser for Quality of Health Systems Division of Country Support 8, Scherfigsvej DK 2100 Copenhagen Ø DENMARK

Telephone: +45 39 17 12 65 Fax: +45 39 17 18 64 E-mail: [email protected]


Mr Sergio Pracht STP- Patients Pathways WHO Office for Integrated Health Care Services Division of Country Support

Telephone: +34932418270 Fax: +34932418271 E-mail: [email protected]

Dr Carles Triginer Borrell Technical Officer, Emergency Medical Services WHO Office for Integrated Health Care Services Division of Country Support

Telephone: +34932418270 Fax: +34932418271 E-mail: [email protected]

Mr Jeremy Veillard Technical Officer, Hospital Management WHO Office for Integrated Health Care Services Division of Country Support

Telephone: +34 93 241 82 70 Fax: +34 93 241 82 71 E-mail: [email protected]
