Interpreting MBQIP Hospital Data Reports for Quality Improvement __________________________________________________________________________ Updated March 2018 This project is supported by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS) under grant number U1RRH29052, Rural Quality Improvement Technical Assistance Cooperative Agreement, $500,000 (0% financed with nongovernmental sources). This information or content and conclusions are those of the author and should not be construed as the official position or policy of, nor should any endorsements be inferred by HRSA, HHS, or the U.S. Government
Interpreting MBQIP Hospital Data Reports for Quality Improvement
Table of Contents
About MBQIP
Purpose of this Guide
Using MBQIP Patient Safety and Inpatient/Outpatient Care Quality Reports
N/A, D/E, 0 Patients on Patient Safety and Inpatient/Outpatient Care Quality Reports
Using Comparison Data for Patient Safety and Inpatient/Outpatient Measures
Interpreting Reports to Support Improvement
Using MBQIP Patient Engagement Quality Reports
Using Comparison Data for HCAHPS Measures
Interpreting Reports to Support Improvement
Using MBQIP Care Transitions Quality Reports for Emergency Department Transfer Communication (EDTC)
Using Comparison Data for the EDTC Measure
Interpreting Reports to Support Improvement
Using MBQIP Patient Safety and Inpatient/Outpatient Care Quality Reports
The MBQIP Patient Safety and Inpatient/Outpatient Care Quality Reports include data from CMS Hospital
Compare measures that are relevant for CAHs under the MBQIP domains of patient safety and outpatient care.
The reports include data from all CAHs that have signed an MBQIP Memorandum of Understanding (MOU) and
have submitted data. Thus, the reports include data from CAHs that have not agreed to publicly report on
Hospital Compare, in addition to data from CAHs that do not have enough cases to be publicly reported on
Hospital Compare, providing a more complete picture of performance across CAHs nationally.
N/A, D/E, 0 Patients on Patient Safety and Inpatient/Outpatient Care Quality Reports
The following are brief explanations and examples of why an MBQIP Patient Safety and Inpatient/Outpatient
Care Quality Report might show not available (N/A), zero (0) patients, or data excluded (D/E) for some
measures.
N/A can mean two different things:
Data was not submitted/reported by the CAH.
Data was submitted but was rejected/not accepted into the Quality Improvement Organization (QIO)
Clinical Warehouse.
Examples – N/A
A CAH did not enter a zero into the Population and Sampling grid for a given measure set and did not
submit/report any cases to QualityNet. (See the MBQIP Reporting Guide for more details on Population and
Sampling.)
A CAH may have submitted data to the QIO Clinical Warehouse, but the file could have had technical
issues or a case was missing data in some of the abstraction fields. The case(s) would be rejected from the
warehouse. CAHs are strongly encouraged to run a case status summary report each quarter after
submission (and before the deadline) to ensure cases have been accepted into the warehouse. Instructions
can be found here: https://www.ruralcenter.org/tasc/resources/get-your-data-accepted-qualitynet-warehouse
A CAH may have entered a number greater than zero into the Population and Sampling grid, but did not
submit/report any cases to QualityNet.
Zero (0) means that a CAH entered a zero into the Population and Sampling grid, indicating that it had no
eligible patients in a measure set population for the reporting quarter, and the CAH did not submit/report
anything further to QualityNet.
Example – 0 Patients
A CAH entered a zero into the Population and Sampling grid for a particular measure set. The CAH did not
submit/report anything further to QualityNet (because there were no eligible cases to submit). Because the CAH
entered a zero into the Population and Sampling grid to indicate that no cases were eligible, the CAH is
considered reporting, and a 0 is shown on the Patient Safety and Outpatient Report.
Data excluded (D/E) means that the CAH submitted eligible cases to QualityNet. Data was considered
submitted and accepted to the QIO Clinical Warehouse; however, case(s) were excluded from a particular measure.
Example – D/E
A CAH submits eligible cases in the population for a measure set, but the cases do not meet the inclusion
criteria for a specific measure. One way this might happen is: A CAH submits one outpatient acute
myocardial infarction (AMI) care case and the case is accepted into the QIO Clinical Warehouse. However,
since the patient was not given fibrinolytic therapy, the case does not meet the criteria for inclusion in OP-1
(median time to fibrinolytic therapy) and OP-2 (fibrinolytic therapy received within 20 minutes of ED
arrival). The OP-1 and OP-2 measures are excluded for this case, and, if no other AMI care cases are
submitted, the report would indicate D/E for the OP-1 and OP-2 measures.
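To make the exclusion logic concrete, here is a minimal Python sketch. The function and field names are hypothetical illustrations for this guide, not the actual QIO Clinical Warehouse logic:

```python
# Illustrative sketch only: hypothetical field names, not actual CMS/QIO logic.
def op1_op2_eligible(case):
    """An AMI case counts toward OP-1/OP-2 only if fibrinolytic
    therapy was actually given; otherwise it is excluded."""
    return case.get("fibrinolytic_given", False)

# One accepted outpatient AMI case in which no fibrinolytic therapy was given.
cases = [{"id": "AMI-001", "fibrinolytic_given": False}]

included = [c for c in cases if op1_op2_eligible(c)]
# No cases meet the inclusion criteria, so the report shows D/E for OP-1/OP-2.
report_value = "D/E" if not included else f"{len(included)} case(s)"
```

The same pattern applies to any measure with inclusion criteria: a case can be accepted into the warehouse yet still be excluded from individual measures.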
Using Comparison Data for Patient Safety and Inpatient/Outpatient Measures
MBQIP Patient Safety and Inpatient/Outpatient Care Quality Reports include state and national comparison
data for all reporting CAHs. The measures on these reports are process-based quality measures, which evaluate
implementation of clinically proven best practices of care. Hospitals should strive to provide these best practices
in clinical care to every patient, 100 percent of the time.
State and national comparison data are averages, calculated in two different ways. For the median measures
(OP-1, OP-3b, etc.), the state and national figures are medians of the hospital-level medians: the medians
reported by all hospitals for that measure are arranged from smallest to largest, and the middle value is
displayed on the report. For the percentage measures (OP-2, OP-4, etc.), the state and national figures are
averages in the more usual sense of the term: the sum of all hospitals' numerators for the measure is divided
by the sum of all their denominators. In short, the median measures are medians of medians, while the
percentage measures are conventional weighted averages.
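As a concrete illustration of the two calculations, here is a short Python sketch using made-up hospital-level numbers (not real report data):

```python
import statistics

# Median-type measure (e.g., OP-1): each hospital reports its own median
# time; the state "average" is the median of those hospital medians.
hospital_medians = [22, 25, 30, 31, 40]  # minutes, one value per CAH
state_median = statistics.median(hospital_medians)  # -> 30

# Percentage-type measure (e.g., OP-2): sum all numerators and divide by
# the sum of all denominators, so larger hospitals carry more weight.
numerators = [8, 3, 10]      # cases meeting the measure, per CAH
denominators = [10, 4, 12]   # eligible cases, per CAH
state_rate = 100 * sum(numerators) / sum(denominators)  # -> about 80.8%
```

Note that the weighted percentage is not the same as averaging each hospital's own rate; a hospital with more eligible cases contributes more to the state and national figures.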
Although it can be helpful to understand how you compare to those norms, averages represent the middle ground
for performance; every hospital should strive to achieve at least the 90th percentile for each measure. For
quality improvement purposes, such benchmarks are more useful than average comparison data. (Note:
Benchmarks for the top 10 percent by state and for the nation are included in your MBQIP Patient Safety and
Outpatient reports, but your state Flex Coordinator may be able to provide additional state-specific information.)
Interpreting Reports to Support Improvement
Examples of how to interpret the data for use in quality improvement efforts are listed below. Each example is
hyperlinked to the corresponding example in the sample reports found in Appendix A.
Example A: Lack of Consistent Process
Reports that show a measure routinely at low performance indicate that there is not a consistent process for
completion and documentation of that best practice of care. Hospitals in this situation are encouraged to
develop and implement standardized processes to ensure evidence-based care is being provided and
documented.
Example B: Process May Need Adjustment
Reports that show a measure routinely at high performance, but not at 100 percent indicate processes for
best practices are in place, but there is opportunity to ensure they are consistently followed. In this situation,
a hospital may want to consider reviewing records for the patient stays that did not meet the measure. These
reviews can help the hospital understand why those individual patients did not receive the evidence-based best
practice of care.
Using MBQIP Patient Engagement Quality Reports
There is typically more variation in this type of survey data than in process measures. Therefore, you
should look for trends that indicate consistent decline or improvement over time.
Looking at comparison data on the MBQIP Patient Engagement Quality Reports can help provide a
better understanding of how your hospital compares to other like facilities in your state and nationally. If
benchmark data from top performers is available (such as the top 10 percent), that can be helpful in
setting targets for improvement goals, particularly if your hospital is already above the state and national
averages. If your hospital is below the state or national average in an area, that also indicates an
improvement opportunity.
Note: State and national rates in the MBQIP Patient Engagement Quality Reports represent
all hospitals in the state and nation, not just CAHs.
Not all hospitals receive an HCAHPS Star Rating. A hospital must have at least 100 completed surveys in a
rolling four-quarter period for a Star Rating to be calculated, so hospitals that typically hover around 100
completed surveys may lack a Star Rating for periods that dip slightly below that threshold.
HCAHPS data are presented as a rolling four quarters (see the sample MBQIP Patient Engagement
Quality Report in Appendix A) and each report represents the most recent rolling four quarters available,
so it will take time to see improvements/changes in the data. To look at HCAHPS performance over
time, you can compare MBQIP Patient Engagement Quality Reports from different time periods. If
quarterly reports are available from the survey vendor (or through the internal processes if a vendor is
not used) those reports may be more useful for evaluating changes resulting from specific initiatives or
efforts that have been launched. Always use caution when interpreting data from individual quarters, as
the number of surveys completed in any individual quarter may be small.
Interpreting Reports to Support Improvement
Examples of how to interpret the data for use in quality improvement efforts are listed below. Each example is
hyperlinked to the corresponding example in the sample report found in Appendix A.
Note: There are two pages in the sample MBQIP Patient Engagement Quality Report: hospital-specific data on
the first page and average comparison data on the second page. Each example references both sets of data. The
hyperlink in each example will take you to the first page of the report.
Example F: Opportunity for Improvement
In this example of the HCAHPS composite scores for Composites 1 through 5:
The hospital’s percent “Always” response rate is consistently lower than the state and national CAH
averages for all of these composite indicators as shown on page two of the report.
The hospital could revisit earlier HCAHPS reports to see if any similar trends in these composite scores
are noticeable.
Example G: Translate to the Number of Patients
For many, talking about percentages of responses on a survey can be difficult to translate into impact on
individual patients. One strategy in using HCAHPS data to help staff understand the need for
improvement is to translate the percentages into numbers of actual patients. In the example circled, 70
percent of respondents indicated that their pain was always well-controlled. Here, 70 percent translates to 203
individual patients who always felt their pain was well-controlled. Consider taking this example one step
further and calculate the number of patients who did not answer “Always.” Subtract 203 from 290 to
learn that 87 patients did not answer that they always felt their pain was well-controlled. Considering the
number of patients may help make a more compelling appeal to staff to improve communication and/or
processes in this area.
Calculating the Number of Patients
By using information provided on the report we can compute how many patients answered a question in a
certain way. In this case, we want to know how many patients answered “Always” to the questions making up
the pain management composite. We know that 70 percent of patients said “Always”; this is represented as 70
divided by 100. We also know that 290 people completed the survey (as listed at the top of the report). So we
are solving for X where 70 divided by 100 equals X divided by 290.
70 / 100 = X / 290

X = (70 × 290) / 100 = 203 patients
We find that, in this example, 70 percent is equal to 203 patients. To calculate the number of patients who did
not answer “Always” to the pain management composite, subtract 203 from 290: 290 – 203 = 87 patients.
Note: Survey respondents can opt out of answering questions on the HCAHPS survey. If using an HCAHPS vendor,
CAHs can also identify the exact number of patients with specific responses by looking for that additional
information in their vendor reports.
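The calculation above can be wrapped in a small helper; this is a generic sketch using the report's example numbers, not an MBQIP-provided tool:

```python
def patients_from_percent(percent_always, surveys_completed):
    """Convert a survey percentage into an approximate patient count.
    Vendor reports may round percentages, so treat the result as an estimate."""
    return round(percent_always / 100 * surveys_completed)

always = patients_from_percent(70, 290)  # 203 patients answered "Always"
not_always = 290 - always                # 87 patients did not
```

The same helper works for any composite score: plug in the percent-“Always” figure and the number of completed surveys listed at the top of the report.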
Using MBQIP Care Transitions Quality Reports for Emergency Department Transfer Communication (EDTC)
A fundamental role of CAHs in the health care safety net for rural communities is stabilization and transfer of
patients in emergency situations. The Emergency Department Transfer Communication (EDTC) measure allows
CAHs to evaluate and demonstrate the effectiveness of that important role.
The EDTC measure evaluates the process of transfer communication through documentation of key information
(data elements) and the timeliness with which that information is communicated to the next setting of care.
Using Comparison Data for the EDTC Measure
Similar to the other reports, MBQIP Care Transitions Quality Reports for Emergency Department Transfer
Communication (EDTC) also include state and national comparison data for all reporting CAHs. State and
national comparison data are averages. Although it can be helpful to understand your comparison to those
norms, averages represent the middle ground for performance. Strive to achieve at least the 90th percentile for
each measure. For quality improvement purposes, such data benchmarks are more useful than average
comparison data. (Note: Benchmarks for the top 10 percent by state and for the nation are included in your
EDTC reports, but your state Flex Coordinator may be able to provide additional state-specific information.)
Although the EDTC measure has been utilized sporadically across the country for over 10 years, its inclusion
in MBQIP is the first systematic nationwide implementation of the measure. Since this
measure is newer to most CAHs compared to HCAHPS and OP measures, state and national averages are likely
to increase consistently over the first few quarters of data collection as CAHs across the country update
documentation and processes.
Interpreting Reports to Support Improvement
Examples of how to interpret the data for use in quality improvement efforts are listed below. Each example is
hyperlinked to the corresponding example in the MBQIP Care Transitions Quality Report for Emergency
Department Transfer Communication (EDTC) found in Appendix A.
Example H: Opportunity for Improvement
EDTC sub-measure 2 (Patient Information) is low-performing among the EDTC categories for this hospital,
with a range of percentages for each quarter and an aggregate performance for the year of 77 percent. It is
also lower than the state and national averages (90 percent and 94 percent respectively) and 90th percentiles
(both at 100 percent). Therefore, it may be a target for improvement efforts such as updating documentation
fields and processes to help ensure the data is captured and communicated. Depending on the tool a CAH is
using to collect the data, they may also be able to see results at the data element level, which can be even
more useful in targeting areas for improvement. For example, EDTC sub-measure 2 comprises several data
elements, including patient insurance information. If results are available at the data element level, it may help
target improvement opportunities for documentation and/or processes to address specific information that is
most commonly missing.
Example I: Documentation or Process?
EDTC sub-measure 7 (Procedures and Tests) also has room for improvement, with an aggregate
performance for the year at 79 percent. Although the hospital’s rate is closer to the state and national
average for this sub-measure (88 percent and 95 percent respectively), it may still represent a good
opportunity for improvement. The hospital may need to evaluate whether the lower score in this area is a
result of failure to document or an issue with the process. CAHs participating in an eight-state pilot on this
measure found that one common area for improvement was to ensure documentation of a plan for how test
results would be communicated to the next setting of care if they were not available at the time of transfer.
Additional Resources
CAHMPAS (Critical Access Hospital Measurement and Performance Assessment System)
Online data query tool from the Flex Monitoring Team which can be used to compare and visualize CAH
performance on financial, quality, and community-benefit measures between groups of hospitals defined by
users. Authorized users are state flex programs and CAH administrators. Contact
Emergency Department Transfer Communication Measure Resources
Data specifications manual, Excel-based data collection tool, recorded trainings, quality improvement toolkit
MBQIP Measures Fact Sheets
One-measure-per-page overview of the data collection and reporting processes for the required MBQIP measures.
MBQIP Reporting Guide
This guide is intended to help Flex Coordinators, critical access hospital staff, and others involved with MBQIP
understand the measure reporting process. For each reporting channel, information is included on how to
register for the site, which measures are reported to the site and how to submit those measures to the site.
Quality Improvement Implementation Guide and Toolkit for Critical Access Hospitals
Offers strategies and resources to help critical access hospital (CAH) staff organize and support efforts to
implement best practices for quality improvement. It includes:
A quality improvement implementation model for small, rural hospital settings
A 10-step guide to leading quality improvement efforts
Summaries of key national quality initiatives that align with MBQIP priorities
Best practices for improvement for current MBQIP measures
Simple, Excel-based tool to assist CAHs with tracking and displaying real time data for MBQIP and
other quality and patient safety measures to support internal improvement efforts