Quality Measurement Journey Self-Assessment

This self-assessment is designed to help quality facilitators gain a better understanding of where they personally stand with respect to the milestones in the Quality Measurement Journey (QMJ). What would your reaction be if you had to explain why using a run or control chart is preferable to computing only the mean, the standard deviation, or a p-value? Can you construct a run chart, or help a team decide which control chart is most appropriate for their data?
You may not be asked to do all of the things listed below today, or even next week. But if you are facilitating a QI team or advising a manager on how to evaluate a process improvement effort, sooner or later these questions will be posed. How will you deal with them?
The place to start is to be honest with yourself and see how much you know about the QMJ. Once you have had
this period of self-reflection, you will be ready to develop a learning plan for self-improvement and
advancement.
Use the following Response Scale. Select the one response which best captures your opinion.
1 I could teach this topic to others!
2 I could do this by myself right now but would not want to teach it!
3 I could do this but I would have to study first!
4 I could do this with a little help from my friends!
5 I'm not sure I could do this!
6 I'd have to call in an outside expert!

Source: R. Lloyd, Quality Health Care: A Guide to Developing and Using Indicators. Jones & Bartlett Publishers, 2004: 301-304.
Quality Measurement Journey Self-Assessment
Measurement Topic or Skill | Response Scale: 1 2 3 4 5 6
• Moving a team from concepts to a set of specific, quantifiable measures
• Building clear and unambiguous operational definitions
• Developing data collection plans (including frequency and duration of data collection)
• Helping a team figure out stratification strategies
• Explaining and designing probability and nonprobability sampling options
• Explaining why plotting data over time is preferable to using aggregated data and summary statistics
• Describing the differences between common and special causes of variation
• Constructing and interpreting run charts (including the run chart rules)
• Deciding which control chart is most appropriate for a particular measure
• Constructing and interpreting control charts (including the control chart rules)
• Linking measurement efforts to PDSA cycles
• Building measurement plans into implementation and spread activities
• Outcome Measures: Voice of the customer or patient. How is the system performing? What is the result?
• Process Measures: Voice of the workings of the system. Are the parts/steps in the system performing as planned?
• Balancing Measures: Looking at a system from different directions/dimensions. What happened to the system as we improved the outcome and process measures? (e.g. unanticipated consequences, other factors influencing outcome)
WHAT SPECIFIC MEASURE DID YOU SELECT FOR THIS PROCESS?
OPERATIONAL DEFINITION

Define the specific components of this measure. Specify the numerator and denominator if it is a percent or a rate. If it is an average, identify the calculation for deriving the average. Include any special equipment needed to capture the data. If it is a score (such as a patient satisfaction score), describe how the score is derived. When a measure reflects concepts such as accuracy, completeness, timeliness, or error, describe the criteria to be used to determine "accuracy."
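As a minimal illustration of the numerator/denominator arithmetic the operational definition asks for, here is a short sketch; all counts below are hypothetical placeholders, not data from the worksheet examples.

```python
# Sketch: computing a percent measure from its operational definition.
# The counts below are hypothetical placeholders.
numerator = 42     # e.g., patients meeting the measure's criterion
denominator = 120  # e.g., all patients eligible for the measure
percent = 100 * numerator / denominator
print(f"{percent:.1f}%")  # prints "35.0%"
```

The point of the exercise is that numerator and denominator must be defined unambiguously before anyone computes the percent.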
Operational Definition Worksheet
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett, 2004.
DATA COLLECTION PLAN

Who is responsible for actually collecting the data?
How often will the data be collected (e.g., hourly, daily, weekly, or monthly)?
What are the data sources (be specific)?
What is to be included or excluded (e.g., only inpatients are to be included in this measure, or only stat lab requests should be tracked)?
How will these data be collected?
Manually ______   From a log ______   From an automated system ______
BASELINE MEASUREMENT

What is the actual baseline number? ______________________________________________
What time period was used to collect the baseline? ___________________________________
TARGET(S) OR GOAL(S) FOR THIS MEASURE

Do you have target(s) or goal(s) for this measure? Yes ___ No ___
Specify the external target(s) or goal(s) (specify the number, rate, or volume, etc., as well as the source of the target/goal).
Specify the internal target(s) or goal(s) (specify the number, rate, or volume, etc., as well as the source of the target/goal).
Operational Definition Worksheet (cont'd)
Measure Name (provide a specific name, such as medication error rate)
Operational Definition (define the measure in very specific terms; provide the numerator and the denominator if a percentage or rate; indicate what is to be included and excluded; be as clear and unambiguous as possible)
Data Source(s) (indicate the sources of the data)
Example 1
Measure Name: Percent of patients who have MI or Unstable Angina as diagnosis
Operational Definition: Numerator = patients entered into the NSCP path who have Acute MI or Unstable Angina as the discharge diagnosis. Denominator = all patients entered into the NSCP path.
Data Source(s): 1. Medical Records; 2. Midas; 3. Variance Tracking Form
Data Collection Plan: 1. Discharge diagnosis will be identified for all patients entered into the NSCP pathway. 2. QA-UR will retrospectively review charts of all patients entered into the NSCP pathway; data will be entered into the MIDAS system.
Baseline: 1. Currently collecting baseline data. 2. Baseline will be completed by end of Q1 2010.
Target/Goal: Since this is essentially a descriptive indicator of process volume, goals are not appropriate.
Example 2
Measure Name: Number of patients who are admitted to the hospital or seen in an ED due to chest pain within one week of when we discharged them
Operational Definition: A patient whom we saw in our ED reports during the call-back interview that they have been admitted or seen in an ED (ours or some other ED) for chest pain during the past week. Includes all patients who have been managed within the NSCP protocol throughout their hospital stay.
Data Collection Plan: 1. Patients will be contacted by phone one week after discharge. 2. Call-back interview will be the method.
Baseline: 1. Currently collecting baseline data. 2. Baseline will be completed by end of Q1 2010.
Target/Goal: Ultimately the goal is to have no patients admitted or seen in the ED within a week after discharge. The baseline will be used to help establish initial goals.
Example 3
Measure Name: Total hospital costs per cardiac diagnosis
Operational Definition: Numerator = total costs per quarter for hospital care of NSCP pathway patients. Denominator = number of patients per quarter entered into the NSCP pathway with a discharge diagnosis of MI or Unstable Angina.
Data Source(s): 1. Finance; 2. Chart Review
Data Collection Plan: Can be calculated every three months from financial and clinical data already being collected.
Baseline: 1. Calendar year 2010. 2. Will be computed in June 2010.
Target/Goal: The initial goal will be to reduce the baseline by 5% within the first six months of initiating the project.
How often and for how long do you need to collect data?

• Frequency – the period of time in which you collect data (i.e., how often will you dip into the process to see the variation that exists?)
  • Moment by moment (continuous monitoring)?
  • Every hour?
  • Every day? Once a week? Once a month?
• Duration – how long you need to continue collecting data
  • Do you collect data on an ongoing basis, not ending until the measure is always at the specified target or goal?
  • Do you conduct periodic audits?
  • Do you just collect data at a single point in time to "check the pulse of the process"?
• Sampling – do you need to pull a sample, or do you take every occurrence of the data (i.e., collect data for the total population)?
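When you do decide to sample, two common probability-sampling options are a simple random sample and a systematic sample. A minimal sketch using only Python's standard library; the event list and sample size are hypothetical.

```python
import random

# Sketch: two probability-sampling options for process data.
events = list(range(1, 101))  # e.g., 100 lab requests in one day

# Simple random sample: every event has an equal chance of selection.
random_sample = random.sample(events, k=10)

# Systematic sample: every k-th event after a random starting point.
k = len(events) // 10         # sampling interval for a sample of 10
start = random.randrange(k)
systematic_sample = events[start::k]
```

Systematic sampling is often easier to execute on the floor (e.g., "review every tenth chart"), but it can be biased if the process has a cycle whose period matches the sampling interval.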
Exercise: Data Collection Strategies (frequency, duration, and sampling)

The need to know, the criticality of the measure, and the amount of data required to reach a conclusion should drive decisions about frequency, duration, and whether you need to sample.

Measure | Frequency and Duration | Pull a sample or take every occurrence?
Vital signs for a patient connected to full telemetry in the ICU
Blood pressure (systolic and diastolic) to determine if the newly prescribed medication and dosage are having the desired impact
Percent compliance with a hand hygiene protocol
Cholesterol levels (LDL, HDL, triglycerides) in a patient recently placed on new statin medication
Patient satisfaction scores on the inpatient units
Central line blood stream infection rate
Percent of inpatients readmitted within 30 days for the same diagnosis
Percent of surgical patients given prophylactic antibiotics within 1 hour prior to surgical incision
“Thin-slicing refers to the ability of our unconscious to find patterns in situations and behavior based on very narrow slices
of experience.” Malcolm Gladwell, blink, page 23
When most people look at data, they thin-slice it. That is, they basically use their unconscious to find patterns and trends in the data. They look for extremely high or low data points and then draw conclusions about performance based on limited data. R. Lloyd
The key to understanding quality performance, therefore, lies in understanding variation over time, not in preparing aggregated data and calculating summary statistics!
Common Cause does not mean “Good Variation.” It only means that the process is stable and predictable. For example, if a patient’s systolic blood pressure averaged around 165 and was usually between 160 and 170 mmHg, this might be stable and predictable but completely unacceptable.
Similarly Special Cause variation should not be viewed as “Bad Variation.” You could have a special cause that represents a very good result (e.g., a low turnaround time), which you would want to emulate. Special Cause merely means that the process is unstable and unpredictable.
You have to decide if the output of the process is acceptable!
Rule #3: Too few or too many runs

Use this table by first calculating the number of "useful observations" in your data set: subtract the number of data points on the median from the total number of data points. Then find this number in the first column. The lower number of runs is in the second column; the upper number of runs is in the third column. If the number of runs in your data falls below the lower limit or above the upper limit, this is a signal of a special cause.

# of Useful Observations   Lower Number of Runs   Upper Number of Runs
15                         5                      12
16                         5                      13
17                         5                      13
18                         6                      14
19                         6                      15
20                         6                      16
21                         7                      16
22                         7                      17
23                         7                      17
24                         8                      18
25                         8                      18
26                         9                      19
27                         10                     19
28                         10                     20
29                         10                     20
30                         11                     21
Because Control Charts…
1. Are more sensitive than run charts
   • A run chart cannot detect special causes that are due to point-to-point variation (median versus the mean)
   • Tests for detecting special causes can be used with control charts
2. Have the added feature of control limits, which allow us to determine if the process is stable (common cause variation) or not stable (special cause variation)
3. Can be used to define process capability
4. Allow us to more accurately predict process behavior and future performance
There are many rules to detect special causes. The following five rules are recommended for general use and will meet most applications of control charts in healthcare.

Rule #1: 1 point outside the +/- 3 sigma limits. A point exactly on a control limit is not considered outside the limit. When there is no lower or upper control limit, Rule 1 does not apply to the side missing the limit.

Rule #2: 8 consecutive points above (or below) the centerline. A point exactly on the centerline does not cancel or count towards a shift.

Rule #3: 6 or more consecutive points steadily increasing or decreasing. Ties between two consecutive points do not cancel or add to a trend. When control charts have varying limits due to varying numbers of measurements within subgroups, Rule 3 should not be applied.

Rule #4: 2 out of 3 successive points in Zone A or beyond. When there is no lower or upper control limit, Rule 4 does not apply to the side missing a limit.

Rule #5: 15 consecutive points in Zone C on either side of the centerline. This is known as "hugging the centerline."
Source: Carey, R. and Lloyd, R. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. ASQ Press, Milwaukee, WI, 2001.
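Rules #1 and #2 above can be sketched as simple checks. This is a minimal illustration, not a charting package: it assumes the centerline and 3-sigma limits have already been computed for the chart type in use (the limit formulas differ by chart type), and the function names are ours, not from the source.

```python
def rule1(data, lcl, ucl):
    """Rule #1: indices of points strictly outside the 3-sigma limits.
    A point exactly on a limit is not considered outside it."""
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

def rule2(data, centerline, n=8):
    """Rule #2: indices where a run of n consecutive points on the same
    side of the centerline completes (and each point while it continues).
    Points exactly on the centerline neither cancel nor extend a shift."""
    signals, side, count = [], 0, 0
    for i, x in enumerate(data):
        s = (x > centerline) - (x < centerline)  # +1 above, -1 below, 0 on line
        if s == 0:
            continue
        count = count + 1 if s == side else 1
        side = s
        if count >= n:
            signals.append(i)
    return signals
```

Zone-based rules (#4 and #5) follow the same pattern but additionally require the 1- and 2-sigma zone boundaries.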
*All three components MUST be viewed together. Focusing on one or even two of the components will guarantee suboptimized performance. Systems thinking lies at the heart of CQI!
Summary of Key Points
• Understand why you are measuring
  • Improvement, Accountability, Research
• Build skills in the area of data collection
  • Operational definitions
  • Stratification and sampling (probability versus non-probability)
• Build knowledge on the nature of variation
  • Understanding variation conceptually
  • Understanding variation statistically
• Know when and how to use run and control charts
  • Numerous types of charts (based on the type of data)
  • The mean is the centerline (not the median, which is used on a run chart)
  • Upper and lower control limits are sigma limits
  • Rules to identify special and common causes of variation
  • Linking the charts to improvement strategies
• Langley, G., Nolan, K., Nolan, T., Norman, C., and Provost, L. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. Jossey-Bass Publishers, San Francisco, 1996.
• Moen, R., Nolan, T., and Provost, L. Quality Improvement Through Planned Experimentation (2nd ed.). McGraw-Hill, New York, 1998.
• The Improvement Handbook. Associates in Process Improvement, Austin, TX, January 2005.
• Berwick, D.M. "A Primer on Leading the Improvement of Systems." BMJ 312 (1996): 619-622.
• "Accelerating the Pace of Improvement – An Interview with Thomas Nolan." Journal of Quality Improvement, Volume 23, No. 4. The Joint Commission, April 1997.
• Brook, R., et al. "Health System Reform and Quality." Journal of the American Medical Association 276, no. 6 (1996): 476-480.
• Carey, R., and Lloyd, R. Measuring Quality Improvement in Healthcare: A Guide to Statistical Process Control Applications. ASQ Press, Milwaukee, WI, 2001.
• Lloyd, R. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett Publishers, Sudbury, MA, 2004.
• Nelson, E., et al. "Report Cards or Instrument Panels: Who Needs What?" Journal of Quality Improvement, Volume 21, No. 4, April 1995.
• Solberg, L., et al. "The Three Faces of Performance Measurement: Improvement, Accountability and Research." Journal of Quality Improvement 23, no. 3 (1997): 135-147.
• Gladwell, M. The Tipping Point. Little, Brown and Company, Boston, 2000.
• Kreitner, R., and Kinicki, A. Organizational Behavior (2nd ed.). Irwin, Homewood, IL, 1978.
• Lomas, J., Enkin, M., and Anderson, G. "Opinion Leaders vs Audit and Feedback to Implement Practice Guidelines." JAMA 265, no. 17 (May 1, 1991): 2202-2207.
• Myers, D.G. Social Psychology (3rd ed.). McGraw-Hill, New York, 1990.
• Prochaska, J., Norcross, J., and DiClemente, C. "In Search of How People Change." American Psychologist, September 1992.
• Rogers, E. Diffusion of Innovations. The Free Press, New York, 1995.
• Wenger, E. Communities of Practice. Cambridge University Press, Cambridge, UK, 1998.
Appendix D: References for Rare Events Control Charts

• Benneyan, J.C., Lloyd, R.C., and Plsek, P.E. "Statistical process control as a tool for research and healthcare improvement." Quality and Safety in Health Care 12 (2003): 458-464.
• Benneyan, J.C., and Kaminsky, F.C. "The g and h Control Charts: Modeling Discrete Data in SPC." ASQC Annual Quality Congress Transactions (1994): 32-42.
• Jackson, J.E. "All Count Distributions Are Not Alike." Journal of Quality Technology 4, no. 2 (1972): 86-92.
• Kaminsky, F.C., Benneyan, J.C., Davis, R.B., and Burke, R.J. "Statistical Control Charts Based on a Geometric Distribution." Journal of Quality Technology 24, no. 2 (1992): 63-69.
• Nelson, L.S. "A Control Chart for Parts-Per-Million Nonconforming Items." Journal of Quality Technology 26, no. 3 (1994): 239-240.