IHI Expedition
Putting Your Patient Experience Data to Work
Session 2: Understanding Potential Pitfalls and How to Avoid Data Craziness: Formal Survey Data
October 20, 2015
These presenters have nothing to disclose.
Kevin Little, PhD; Kristine White, RN, BSN, MBA; Angela Zambeaux

Today's Host
Rebecca Goldberg, Project Coordinator, Institute for Healthcare Improvement (IHI), coordinates multiple projects focused on increasing value in health care by improving quality and reducing costs. Currently, Rebecca's primary responsibility is coordinating and hosting IHI's Expeditions, monthly virtual support programs focused on specific topic areas. Rebecca is a recent graduate of Georgetown University in Washington, D.C., where she obtained her Bachelor of Science degree in human science with a minor in public health.
Transcript
While "correlation does not imply causation," "causation does not exist without correlation."
Why correlations matter
– Correlations are a first step to making sense of relations among multiple survey questions.
– The lowest score on a panel of questions may not be strongly associated with the overall evaluation.
– Tackling the lowest score may not be a good use of organizational resources.

See the Data Tools Self-Assessment for more details about correlation.
What is correlation?
Correlation, based on either scores or ranks, measures strength of association. It ranges from +1 (perfect positive linear or rank-order relationship) through 0 (no linear or rank relationship) to -1 (perfect negative linear or reverse rank-order relationship).
Here's a picture that shows some invented data, with the correlation coefficient ranging from 0.96 to 0.55.
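Both kinds of correlation can be computed in a few lines. The sketch below uses invented unit-level scores (the variable names and data are hypothetical, chosen only to illustrate score-based Pearson versus rank-based Spearman correlation; the rank helper ignores ties):

```python
# Correlation of two survey items on invented data (illustration only).
# Pearson uses the scores themselves; Spearman uses ranks, so it is less
# sensitive to a few extreme scores.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    def ranks(v):  # simple ranking; ties are not averaged here
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical unit scores: "nurses listened carefully" vs. overall rating
listened = [78, 82, 85, 88, 90, 93, 95]
overall  = [70, 75, 83, 84, 91, 90, 96]
print(round(pearson(listened, overall), 2))   # strong positive, near +1
print(round(spearman(listened, overall), 2))  # rank version, also near +1
```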
Specific Case of General Pattern
http://circoutcomes.ahajournals.org/content/3/2/188.full.pdf+html accessed 15 Oct 2015
“…we found that hospitals that score high on questions such as “skill of nurses (physician),” “how well the nurses (physician) kept you informed,” “amount of attention paid to your special or personal needs,” “how well your pain was controlled,” “the degree to which the hospital staff addressed your emotional needs,” “physician’s concern for your questions and worries,” “time physician spent with you,” and “staff efforts to include you in decisions about your treatment” also tended to score high on patient overall satisfaction. In contrast, there was no association with scoring high on questions concerned with the room (eg, “room temperature and pleasantness of room decor”), meals (eg, “quality of food, temperature of food”), tests (eg, “waiting time for tests or treatment”), and discharge (eg, “speed of discharge process”) and the patient overall satisfaction score. Moreover, patient satisfaction with nursing care was the most important determinant of patient overall satisfaction, thus highlighting an important area for further quality improvement efforts and underscoring the role of the entire health care team in the in-hospital treatment of patients with AMI.” (p. 193, emphasis added)
– This is in addition to the variation that arises from different experiences of care, the variation you need to control.
– When you examine formal survey responses for departments or units within your organization, n can get "small."
– Knowledge of n should inform comparisons, month to month or across units or providers ("control chart thinking").
Web app: https://iecodesign.shinyapps.io/survey_simulator
Knowledge of the "n effect" should:
– Dampen or eliminate management cycles of despair or celebration based on a single reported percent.
– Cause you to interpret one-month unit-level results (really small n!) with caution.
– Help make the case for plotting survey results in time order.
– Inspire you to learn and use control charts; see Provost and Murray (2011).
Return to UnityPoint Example
For June 2015, 18 surveys were returned in the latest report for a med/surg acute care unit.
One of the questions in the nursing communication composite is "During this hospital stay, how often did nurses listen carefully to you?"
17 of 18 patients responded "Always" (Top Box) = 94.4%.
If the Top Box average for this question for the previous year was 82%, how unusual is it to observe 94.4%?
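One way to answer the question is a simple binomial tail calculation. This is a minimal sketch, assuming the 18 responses behave like independent draws with a constant 82% "Always" rate:

```python
# How unusual is 17 of 18 "Always" (94.4%) if the long-run Top Box rate
# is 82%? Probability of 17 or more successes out of 18 under a binomial
# model with p = 0.82.
from math import comb

n, p = 18, 0.82
prob_at_least_17 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (17, 18))
print(round(prob_at_least_17, 3))  # 0.139: roughly a 1-in-7 chance
```

So a result of 94.4% would occur by chance alone in about one month in seven even with no real improvement: not very unusual.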
How do you interpret these data? Table for 4 quarters; chart of Top Box values.

Phys ID | N surveys | Count of Top Box | TB %
4  |  2 |  1 |  50.0
13 |  2 |  2 | 100.0
16 |  3 |  1 |  33.3
17 |  4 |  4 | 100.0
3  |  5 |  5 | 100.0
5  |  7 |  6 |  85.7
9  |  8 |  7 |  87.5
6  |  9 |  7 |  77.8
11 | 11 |  9 |  81.8
2  | 11 | 10 |  90.9
19 | 13 | 11 |  84.6
14 | 15 | 11 |  73.3
8  | 21 | 16 |  76.2
1  | 22 | 18 |  81.8
7  | 22 | 20 |  90.9
12 | 23 | 15 |  65.2
18 | 23 | 18 |  78.3
15 | 25 | 21 |  84.0
10 | 27 | 19 |  70.4
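One way to apply "control chart thinking" to this table is a p-chart-style calculation: compare each physician's Top Box proportion against limits of p-bar ± 3·sqrt(p-bar·(1 − p-bar)/n), which widen as n shrinks. The sketch below uses the numbers from the table; it is an illustration, not the presenters' own analysis:

```python
# p-chart-style limits for the physician Top Box table. With small,
# unequal n, each physician's percentage has its own natural variation
# band around the overall rate.
data = {  # physician ID: (surveys, top-box count), from the table above
    4: (2, 1), 13: (2, 2), 16: (3, 1), 17: (4, 4), 3: (5, 5),
    5: (7, 6), 9: (8, 7), 6: (9, 7), 11: (11, 9), 2: (11, 10),
    19: (13, 11), 14: (15, 11), 8: (21, 16), 1: (22, 18), 7: (22, 20),
    12: (23, 15), 18: (23, 18), 15: (25, 21), 10: (27, 19),
}
p_bar = sum(tb for _, tb in data.values()) / sum(n for n, _ in data.values())
for phys, (n, tb) in sorted(data.items()):
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    lo, hi = max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)
    flag = "  <-- signal" if not (lo <= tb / n <= hi) else ""
    print(f"Phys {phys:2d}: n={n:2d}  TB%={100*tb/n:5.1f}  "
          f"limits=({100*lo:.1f}, {100*hi:.1f}){flag}")
```

Every point in this table falls inside its limits, so the spread from 33% to 100% looks like common-cause variation: ranking these physicians on raw TB% would mostly rank noise.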
Plot survey data in time order
Start with run charts, move on to control charts
Why plot data in time order?
– Single survey numbers provide very little useful guidance for improvement.
– You need time order to make "before and after" comparisons to assess progress.
– If n is about the same for each survey number in the series, you can look for striking patterns over time to signal improvement with run charts.
– See Perla et al. (2011) for the run chart shift and trend "rules," useful for fewer than 20 time periods.
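The shift rule from Perla et al. (2011) is easy to automate. Below is a minimal sketch with hypothetical monthly Top Box percentages (the data and the helper name are invented for illustration): six or more consecutive points on the same side of the baseline median signal a non-random change, and points on the median are skipped.

```python
# Run chart "shift" rule: six or more consecutive points on the same
# side of the baseline median signal a non-random change.
from statistics import median

def shift_signal(values, center, run_length=6):
    run, side = 0, 0
    for v in values:
        if v == center:
            continue  # points on the median neither extend nor break a run
        s = 1 if v > center else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

baseline = [72, 75, 71, 78, 74, 73, 76, 70]  # pre-change months
base_med = median(baseline)                  # 73.5
after = [77, 79, 80, 78, 81, 83]             # six straight points above median
print(shift_signal(after, base_med))         # True: evidence of a shift
print(shift_signal(baseline, base_med))      # False: common-cause bounce
```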
Chart: n = 20 each month, sampled from the adjusted model, 10% sampling fraction. No change in the underlying process; the points just bounce around!
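The "bounce" on that chart can be reproduced without the web app. A minimal simulation, assuming a stable process with a true 80% Top Box rate and n = 20 surveys per month:

```python
# Sample 20 responses a month from a process with a fixed 80% Top Box
# rate and watch the reported percentage vary with no real change.
import random

random.seed(1)  # fixed seed so the run is repeatable
true_p, n = 0.80, 20
monthly = [sum(random.random() < true_p for _ in range(n)) / n * 100
           for _ in range(12)]
print([round(m) for m in monthly])  # twelve percentages from an unchanged process
```

With n = 20, the standard deviation of a single month's percentage is about 9 points, so swings of 20 points or more between months are ordinary sampling noise.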
Plot survey data in time order. (Run chart annotated with collaborative start and baseline median.)
The compressed percentile scale is good news/bad news. Do you know which is which?
Return to UnityPoint Example

Strengths | Enhancements to consider
Showed three survey items plotted on one page | You can show many items plotted on one page; use medians.
Showed 12 monthly values | Show 24 months to help you understand seasonal effects.
Asked for survey data by month of service, not month of survey completion | Annotate the chart with notes of interventions.
Recorded the number of surveys, month by month | Construct a control chart to account for variation in sample sizes.
To learn more
Perla, R., Provost, L., and Murray, S. (2011), "The run chart: a simple analytical tool for learning from variation in healthcare processes," BMJ Quality & Safety, 20(1):46-51. Note: the shift and trend rules cited are appropriate only for data series no more than 20 points long. See http://www.iecodesign.com/index.php/217-run-charts-in-quality-improvement-work
Provost, L. and Murray, S., The Health Care Data Guide: Learning from Data for Improvement, Jossey-Bass, 2011, especially chapter 12.
Survey Results

Data source | Yes | No | I'm not sure
Government-defined patient experience survey data (e.g. CAHPS in the U.S.) | 80.00% (28) | 17.14% (6) | 2.86% (1)
Externally designed and managed patient experience data other than government-defined surveys (e.g. Press Ganey surveys) | 52.94% (18) | 38.24% (13) | 8.82% (3)
Internally designed and managed patient survey data (e.g. 90-day follow-up phone call on patient experience) | 68.57% (24) | 20.00% (7) | 11.43% (4)
Formal patient complaints | 100.00% (35) | 0.00% (0) | 0.00% (0)
Patient letters | 85.71% (30) | 2.86% (1) | 11.43% (4)
Leadership rounding/direct observation | 71.43% (25) | 17.14% (6) | 11.43% (4)
Spoken patient complaints or comments | 83.33% (30) | 2.78% (1) | 13.89% (5)
Survey Results: Is the information organized and accessible to you?

Data source | Yes | No | I'm not sure
Government-defined patient experience survey data (e.g. CAHPS in the U.S.) | 83.33% (25) | 6.67% (2) | 10.00% (3)
Externally designed and managed patient experience data other than government-defined surveys (e.g. Press Ganey surveys) | 60.00% (15) | 28.00% (7) | 12.00% (3)
Internally designed and managed patient survey data (e.g. 90-day follow-up phone call on patient experience) | 41.94% (13) | 45.16% (14) | 12.90% (4)
Formal patient complaints | 73.53% (25) | 20.59% (7) | 5.88% (2)
Patient letters | 63.64% (21) | 21.21% (7) | 15.15% (5)
Leadership rounding/direct observation | 39.39% (13) | 57.58% (19) | 3.03% (1)
Spoken patient complaints or comments | 61.76% (21) | 26.47% (9) | 11.76% (4)
Survey Results: Is the information summarized and understood by senior leaders?

Data source | Yes | No | I'm not sure
Government-defined patient experience survey data (e.g. CAHPS in the U.S.) | 63.33% (19) | 6.67% (2) | 30.00% (9)
Externally designed and managed patient experience data other than government-defined surveys (e.g. Press Ganey surveys) | 48.00% (12) | 28.00% (7) | 24.00% (6)
Internally designed and managed patient survey data (e.g. 90-day follow-up phone call on patient experience) | 32.26% (10) | 35.48% (11) | 32.26% (10)
Formal patient complaints | 67.65% (23) | 11.76% (4) | 20.59% (7)
Patient letters | 53.13% (17) | 21.88% (7) | 25.00% (8)
Leadership rounding/direct observation | 50.00% (16) | 25.00% (8) | 25.00% (8)
Spoken patient complaints or comments | 51.52% (17) | 18.18% (6) | 30.30% (10)
Survey Results: Is the information linked back to front-line staff?

Data source | Yes | No | I'm not sure
Government-defined patient experience survey data (e.g. CAHPS in the U.S.) | 58.06% (18) | 25.81% (8) | 16.13% (5)
Externally designed and managed patient experience data other than government-defined surveys (e.g. Press Ganey surveys) | 46.15% (12) | 34.62% (9) | 19.23% (5)
Internally designed and managed patient survey data (e.g. 90-day follow-up phone call on patient experience) | 31.25% (10) | 40.63% (13) | 28.13% (9)
Formal patient complaints | 34.29% (12) | 22.86% (8) | 42.86% (15)
Patient letters | 42.42% (14) | 27.27% (9) | 30.30% (10)
Leadership rounding/direct observation | 42.42% (14) | 30.30% (10) | 27.27% (9)
Spoken patient complaints or comments | 42.42% (14) | 24.24% (8) | 33.33% (11)
Additional Data Sources
– Press Ganey, leadership rounds; looking into a new administration-and-management patient experience rounding app
– NSQIP data for 30-day patient outcomes after surgery
– Community Health Assessment: informs programs and services regarding population health needs
– Patient satisfaction surveys
– Discharge phone calls 24-48 hours post discharge, including service issues and names of staff
– Patient focus groups run regularly with inpatients at the subacute site; information gathered is themed and fed back to frontline teams. Patient stories are also used in a similar way.
– A Patient Advisory Council that meets every other month and provides feedback on "focus areas"; consumer board members also provide feedback
– An incident-reporting system
– Additional mock CAHPS/HOS survey data
– Comments (complaints and commendations) accepted via a web application from all patients and families, managed similarly to patient letters
– Automated discharge phone calls to all inpatients
– Facebook comments, local press
– The information may be inconsistently shared among front-line staff.
Survey Results: How frequently are these surveys reviewed by the groups/people in the left column?

Group | Annual | Quarterly | Monthly
Whole organization | 29.03% (9) | 22.58% (7) | 48.39% (15)
Unit or department | 9.38% (3) | 15.63% (5) | 75.00% (24)
Care team or individual provider | 7.41% (2) | 29.63% (8) | 62.96% (17)
Survey Results: Typical number of surveys completed and analyzed in each time period

Group | 1-10 | 11-20 | 21-30 | More than 30
Whole organization | 12.90% (4) | 3.23% (1) | 0.00% (0) | 83.87% (26)
Unit or department | 16.13% (5) | 6.45% (2) | 12.90% (4) | 64.52% (20)
Care team or individual provider | 30.77% (8) | 7.69% (2) | 11.54% (3) | 50.00% (13)