www.jcrinc.com
Implementing and Evaluating Team Training

Features

Teamwork and Communication
Team Training
Editorial: Team Training Can Enhance Patient Safety: The Data, the Challenge Ahead
Twelve Best Practices for Team Training Evaluation in Health Care
On the Front Lines of Patient Safety: Implementation and Evaluation of Team Training in Iraq
Didactic and Simulation Nontechnical Skills Team Training to Improve Perinatal Patient Outcomes in a Community Hospital
Evaluating Efforts to Optimize TeamSTEPPS Implementation in Surgical and Pediatric Intensive Care Units

Operations Management
Investigating a Pediatric Hospital's Response to an Inpatient Census Surge During the 2009 H1N1 Influenza Pandemic

August 2011, Volume 37, Number 8
Improvement from Front Office to Front Line
[Getting] the wounded to a facility with the appropriate capabilities for providing definitive care . . . often involves rapid and frequent transitions of care for critically injured patients, which requires high degrees of communication and coordination among team members within as well as between levels of care. As in civilian health care, effective teamwork is crucial for success.
Deering et al. (p. 350)
Photo courtesy of Col. Peter G. Napolitano, M.D., Madigan Army Medical Center.
Features

TEAMWORK AND COMMUNICATION

TEAM TRAINING

339 Editorial: Team Training Can Enhance Patient Safety: The Data, the Challenge Ahead
Eduardo Salas, Ph.D.; Megan E. Gregory, B.S.; Heidi B. King, M.S.
341 Twelve Best Practices for Team Training Evaluation in Health Care
Sallie J. Weaver, M.S.; Eduardo Salas, Ph.D.; Heidi B. King, M.S.
Twelve best practices, extrapolated from the science of evaluation and measurement, can serve as a roadmap for health care organizations in developing, implementing, and evaluating their own team training interventions.
350 On the Front Lines of Patient Safety: Implementation and Evaluation of Team Training in Iraq
Shad Deering, M.D.; Michael A. Rosen, Ph.D.; Vivian Ludi, R.N.; Michelle Munroe, C.N.M.; Amber Pocrnich, R.N.C.; Christine Laky, M.D.; Peter G. Napolitano, M.D.
In the first evaluation of team training in a combat theater of operations, a review of patient safety reports indicated significant decreases in the rates of communication-related errors (medication/transfusion errors and needlestick incidents).
357 Didactic and Simulation Nontechnical Skills Team Training to Improve Perinatal Patient Outcomes in a Community Hospital
William Riley, Ph.D.; Stanley Davis, M.D.; Kristi Miller, R.N., M.S.; Helen Hansen, Ph.D., R.N.; Francois Sainfort, Ph.D.; Robert Sweet, M.D.
Nontechnical skills (NTS), such as communication and teamwork, are the cognitive and interpersonal skills that supplement clinical and technical skills and are necessary to ensure safe patient care. At one community hospital, only when didactic NTS training (in a customized TeamSTEPPS program) was combined with practice in simulated scenarios was there any improvement in outcome data: a significant and persistent improvement of 37% in perinatal morbidity.
365 Evaluating Efforts to Optimize TeamSTEPPS Implementation in Surgical and Pediatric Intensive Care Units
Celeste M. Mayer, Ph.D., R.N.; Laurie Cluff, Ph.D.; Wei-Ting Lin, R.N., Ph.D.; Tina Schade Willis, M.D.; Renae E. Stafford, M.D., M.P.H.; Christa Williams, R.N., B.S.N.; Roger Saunders, R.N., M.S.N., N.E.A.-B.C.; Kathy A. Short, R.R.T., R.N.; Nancy Lenfestey, M.H.A.; Heather L. Kane, Ph.D.; Jacqueline B. Amoozegar, M.S.P.H.
At an academic medical center, a multidisciplinary change team of unit- and department-based leaders deployed implementation strategies tailored to seven team training success factors, ranging from aligning team training objectives and safety aims with organizational goals to measuring the team training program's success. Observed team performance significantly improved for all core areas of competency, with improvements largely sustained at 6 and 12 months.
OPERATIONS MANAGEMENT

376 Investigating a Pediatric Hospital's Response to an Inpatient Census Surge During the 2009 H1N1 Influenza Pandemic
William C. Van Cleve, M.D., M.P.H.; Pat Hagan, M.H.S.A.; Paula Lozano, M.D., M.P.H.; Rita Mangione-Smith, M.D., M.P.H.
On November 4, 2009, during a pandemic of H1N1 influenza, a pediatric hospital experienced a brief and intense surge in inpatient census that rapidly reached 98% of its total capacity. The hospital activated its surge plan, which included the discharge of hospitalized patients. A review of this reverse triage process raised questions about the success of this effort and led the hospital to reevaluate the ways it manages patient flow and responds to inpatient surges.
Reader Services

384 Information for Authors and Readers

The blog of The Joint Commission Journal on Quality and Patient Safety features news and comments: http://www.jcrinc.com/Blogs-All-By-Category/Journal-Blog/
337 August 2011, Volume 37, Number 8
The Joint Commission Journal on Quality and Patient Safety
Table of Contents
338
Elizabeth H. Bradley, Ph.D., Yale University, New Haven, Connecticut
Dale W. Bratzler, D.O., M.P.H., University of Oklahoma Health Sciences Center, Oklahoma City
Marcy Gleit Carty, M.D., M.P.H., Brigham and Women's Hospital, Boston
John Degelau, M.D., M.S., Partners Medical Group, Bloomington, Minnesota
Kelly J. Devers, Ph.D., Urban Institute, Washington, DC
Nancy C. Elder, M.D., M.S.P.H., University of Cincinnati College of Medicine, Cincinnati
Rhona Flin, B.Sc., Ph.D., C.Psychol., University of Aberdeen, Aberdeen, Scotland, United Kingdom
Richard C. Hermann, M.D., M.S., Tufts–New England Medical Center, Boston
Tanya Huehns, D.M., M.R.C.P., National Patient Safety Agency, London
Rainu Kaushal, M.D., M.P.H., NewYork–Presbyterian Hospital, New York City
Janne Lehman Knudsen, M.D., Ph.D., M.H.M., Danish Cancer Society, Copenhagen
Peter Kyle Lindenauer, M.D., M.Sc., Baystate Medical Center, Springfield, Massachusetts
Jorge César Martínez, M.D., Mother and Infant Hospital Ramón Sardá, Buenos Aires
Ziad Memish, M.D., F.R.C.P.C., F.A.C.P., King Fahad National Guard Hospital, Riyadh, Kingdom of Saudi Arabia
Peter D. Mills, Ph.D., M.S., Veterans Health Affairs National Center for Patient Safety, White River Junction, Vermont
Janet M. Nagamine, R.N., M.D., Safe and Reliable Healthcare, Aptos, California
Susan M. Noaker, Ph.D., L.P., University of Minnesota Medical Center, Fairview, Minneapolis
John Øvretveit, B.Sc. (Hons), M.Phil., Ph.D., C.Psychol., C.Sci., M.I.H.M., Karolinska Institutet Medical Management Centre, Stockholm
Wilson D. Pace, M.D., University of Colorado Denver
Emily S. Patterson, Ph.D., M.S., Ohio State University, Columbus, Ohio
Peter J. Pronovost, M.D., Ph.D., Johns Hopkins Center for Innovations in Quality Patient Care, Baltimore
Roger Resar, M.D., Institute for Healthcare Improvement, Cambridge, Massachusetts
Matthew Scanlon, M.D., Children's Hospital of Wisconsin, Milwaukee
Lisa Schilling, R.N., M.P.H., Kaiser Permanente, Oakland, California
James G. Stevenson, Pharm.D., University of Michigan Hospitals, Ann Arbor, Michigan
Nancy L. Szaflarski, Ph.D., R.N., F.C.C.M., Stanford Hospital & Clinics, Stanford, California
Mark Van Kooy, M.D., Aspen Advisors, L.L.C., Pittsburgh
Brook Watts, M.D., M.S., Louis Stokes Cleveland VA Medical Center, Cleveland
Saul N. Weingart, M.D., Ph.D., Dana-Farber Cancer Institute for Patient Safety, Boston
Albert W. Wu, M.D., M.P.H., Johns Hopkins Bloomberg School of Public Health, Baltimore
Executive Editor: Steven Berman
Executive Director, Publications: Catherine Chopp Hinckley, Ph.D.
Senior Project Manager: Cheryl Firestone
Manager, Publications: Paul Reis
Statistical Consultant: Stephen Schmaltz, Ph.D.
The Joint Commission Journal on Quality and Patient Safety
August 2011, Volume 37, Number 8
2011 Editorial Advisory Board
The Joint Commission Journal on Quality and Patient Safety serves as a peer-reviewed forum for practical approaches to improving quality and safety in health care. For more information about The Joint Commission, visit http://www.jointcommission.org. For more information about Joint Commission Resources, visit http://www.jcrinc.com.
Journal content. Published monthly, The Joint Commission Journal on Quality and Patient Safety is a peer-reviewed publication dedicated to providing health professionals with the information they need to promote the quality and safety of health care. The Joint Commission Journal on Quality and Patient Safety invites original manuscripts on the development, adaptation, and/or implementation of innovative thinking, strategies, and practices in improving quality and safety in health care. Case studies, program or project reports, reports of new methodologies or new applications of methodologies, research studies on the effectiveness of improvement interventions, and commentaries on issues and practices are all considered.
No statement in The Joint Commission Journal on Quality and Patient Safety should be construed as an official position of The Joint Commission or Joint Commission Resources unless otherwise stated. In particular, there has been no official review with regard to matters of standards interpretation or compliance.
August 2011. The Joint Commission Journal on Quality and Patient Safety (ISSN 1553-7250) is published monthly (12 issues per year, 1 volume per year) by Joint Commission Resources, One Renaissance Boulevard, Oakbrook Terrace, IL 60181. Third-class nonprofit postage paid at Oakbrook Terrace, IL, and at additional mailing offices. POSTMASTER: Send address changes to The Joint Commission Journal on Quality and Patient Safety, Superior Fulfillment, 131 W. 1st Street, Duluth, MN 55802-2065. Annual subscription rates for 2010: United States/Canada, $319 for print and online, $299 for online only; ROW, $410 for print and online, $299 for online only. For more information, visit our Web site at http://www.jcrinc.com/journal. Printed in the USA. Copyright 2011 by the Joint Commission on Accreditation of Healthcare Organizations.
Joint Commission Resources, Inc. (JCR), an affiliate of The Joint Commission, has been designated by The Joint Commission to publish publications and multimedia products. JCR reproduces and distributes these materials under license from The Joint Commission.
The mission of JCR is to continuously improve the safety and quality of care in the United States and in the international community through the provision of education and consultation services and international accreditation.
Teamwork and Communication

Team Training Can Enhance Patient Safety: The Data, the Challenge Ahead
Eduardo Salas, Ph.D.; Megan E. Gregory, B.S.; Heidi B. King, M.S.
Teamwork has become a recurrent theme in health care, and rightfully so, since patients' lives depend on it. As health care delivery systems strive to be high-reliability organizations,1 team training and other quality improvement interventions (for example, coaching, checklists) are fundamental to achieving a culture that is mindful of safety.2

Team training strategies are now being considered more and more in hospitals and learning institutions, a giant step forward. But do they work? What are the team training evaluations telling us? Are the evaluations robust, credible? Does team training result in the behaviors, cognitions, and attitudes needed? Do they last? These are the questions that CEOs, chief financial officers, and deans, among others, are asking, and the good news is that some answers are emerging.
The Data

In this issue of The Joint Commission Journal on Quality and Patient Safety, three articles directly address the question of whether team training works. In each article, training took the form of the TeamSTEPPS implementation system, which focuses on training health care professionals on team competencies such as leadership, situation monitoring, mutual support, and communication.3 Deering et al. found, for example, that TeamSTEPPS in U.S. military combat support hospitals significantly decreased rates of medication and transfusion errors, as well as needlestick injuries and exposures.4 However, this study was a pre-post design, with no control group, making it difficult to fully attribute results to the training intervention.
The remaining two articles, by Riley et al.5 and Mayer et al.,6 both used their own customized versions of the TeamSTEPPS program (rather than the program as it was designed). Mayer et al. found that observations of team performance were significantly higher at 1 and 12 months postimplementation.6 Observations at 6 months postimplementation, however, trended back toward baseline observations and served as an impetus for reinforcement of the training principles. In addition, clinical outcome data showed significant improvements on two dimensions (nosocomial infections and time for placing patients on extracorporeal membrane oxygenation). Improvements were also seen through surveys and interviews. Riley et al., who conducted a small-cluster randomized clinical trial with three hospitals to evaluate the effects of TeamSTEPPS training on perinatal harm and a culture of safety, found that only the group that received both the didactic training program and a series of simulation training exercises showed a significant decrease in perinatal morbidity.5 The article provides the first evidence that adding simulation to a TeamSTEPPS intervention (albeit customized) improves outcomes.
These three team training evaluations are consistent with other recent evaluations7,8 and, furthermore, support findings from a recent meta-analysis of team training conducted by Salas et al.9 The meta-analysis indicated that, in general, across several domains, team training accounted for 20% of the variance in team performance, which was judged to be a decent effect size.
Moreover, we are seeing more evaluations in health care, a good sign.10 We acknowledge, of course, that no evaluation is perfect. They all have their strengths and weaknesses. However, the (credible) evaluations give us a glimpse of what works and why. Also in this issue of the Journal, Weaver, Salas, and King describe 12 best practices to improve the implementation and evaluation of team training efforts in health care.11 Following these practices may help health care providers to design and deliver effective team training strategies. For example, the best practices refer to including frontline staff in the training design phase (Best Practice 3), gaining support from socially powerful individuals, such as physician leaders (Best Practice 8), and giving feedback to and coaching trainees (Best Practice 11). The three team training articles in this issue of the Journal each followed some of the best practices. For example, Deering et al. followed Best Practice 3 by including the patient safety officer on their development team. Mayer et al. aligned with Best Practice 4 (do not reinvent the wheel; leverage existing data relevant to training objectives) by using hospitalwide surveys, independent of the
training, as part of their evaluation. Riley et al. followed Best Practice 11 by conducting debriefings after training simulations, but, by failing to evaluate for the effects of turnover, fell short on Best Practice 9.
Although the three team training evaluations, like many others, suggest that team training works, more, and better, evaluations are needed. These will be reported (we hope) as more team training, adjusted to the conditions and culture of the particular setting (for example, ICU or operating room), is implemented. The concern remains that relevant and credible metrics for Kirkpatrick's four levels of training evaluation, not just Level 1 (trainee reactions) and Level 2 (trainee learning) but also Level 3 (behavior on the job) and Level 4 (results),12 need to be deployed. A related concern is that progress be made in developing reliable, valid, relevant, and quantifiable measures of teamwork in the field: knowledge, skills, and attitudes (often termed KSAs), such as team leadership, shared mental models, and backup behavior.13 In addition, directly correlating team training to clinical outcomes remains challenging.
In summary, data from the three articles provide some encouragement that a well-designed, scientifically rooted team training intervention can positively affect clinical outcomes and patient safety. In general, we know that health care providers like the team training: they have positive reactions and attitudes toward it. They learn the concepts. They exhibit behaviors, cognitions, and attitudes back on the job. And it has some impact on patient safety.7,8 As noted in the previously cited meta-analysis, team training alone cannot do it. The organization must be ready and able to facilitate the infusion of teamwork. Therein lies the challenge.
The Challenge Ahead

If team training accounts for about 20% of the team performance variance, as stated,9 then we also know that 80% must be addressed through other organizational interventions. Perhaps the key challenge now is organizational sustainment; that is, how can health care organizations sustain the desired effects of team training over time? The greatest contributor to the long-term success of team training (or of any human resource intervention) is what the organization does. The organizational system matters. What the top leadership does matters. What policies and procedures are in place to support teamwork matters. The formal and informal signs and symbols of what is important in the organization, as conveyed through the norms, conditions, policies, procedures, metrics in place, and the messages that top leadership sends, make or break transformational culture change. One cannot forget that organizations tend to obtain the behaviors, cognitions, and attitudes that they measure and reinforce. We need to shift from thinking about a team training intervention to creating and sustaining an organizational system that supports teamwork. The best team training in the world will not yield the desired outcomes unless the organization is aligned to support it. The next frontier lies in making effective teamwork, as seen in high-performance teams, an essential element in high-reliability organizations.2,14
The views expressed in this editorial do not necessarily represent the views of the U.S. Department of Defense or the University of Central Florida.
References
1. Chassin M.R., Loeb J.M.: The ongoing quality improvement journey: Next stop, high reliability. Health Aff (Millwood) 30(4):559–568, Apr. 2011.
2. Baker D.P., Day R., Salas E.: Teamwork as an essential component of high-reliability organizations. Health Serv Res 41(4 pt. 2):1576–1598, Aug. 2006.
3. Alonso A., et al.: Reducing medical error in the Military Health System: How can team training help? Human Resource Management Review 16:396–415, 2006.
4. Deering S., et al.: On the front lines of patient safety: Implementation and evaluation of team training in Iraq. Jt Comm J Qual Patient Saf 37:350–356, Aug. 2011.
5. Riley W., et al.: Didactic and simulation nontechnical skills team training to improve perinatal patient outcomes in a community hospital. Jt Comm J Qual Patient Saf 37:357–364, Aug. 2011.
6. Mayer C.M., et al.: Evaluating efforts to optimize TeamSTEPPS implementation in surgical and pediatric intensive care units. Jt Comm J Qual Patient Saf 37:365–374, Aug. 2011.
7. Weaver S.J., et al.: Does teamwork improve performance in the operating room? A multilevel evaluation. Jt Comm J Qual Patient Saf 36:133–142, Mar. 2010.
8. Neily J., et al.: Association between implementation of a medical team training program and surgical mortality. JAMA 304:1693–1700, Oct. 20, 2010.
9. Salas E., et al.: Does team training improve team performance? A meta-analysis. Hum Factors 50:903–933, Dec. 2008.
10. Weaver S.J., et al.: The anatomy of health care team training and the state of practice: A critical review. Acad Med 85:1746–1760, Nov. 2010.
11. Weaver S.J., Salas E., King H.B.: Twelve best practices for team training evaluation in health care. Jt Comm J Qual Patient Saf 37:341–349, Aug. 2011.
12. Kirkpatrick D.L.: Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler, 1994.
13. Baker D.P., et al.: The role of teamwork in the professional education of physicians: Current status and assessment recommendations. Jt Comm J Qual Patient Saf 31:185–202, Apr. 2005.
14. Wilson K.A., et al.: Promoting health care safety through training high-reliability teams. Qual Saf Health Care 14:303–309, Aug. 2005.
Eduardo Salas, Ph.D., is Pegasus Professor and Trustee Chair, and Megan E. Gregory, B.S., is Research Assistant, Department of Psychology and Institute for Simulation & Training, University of Central Florida, Orlando, Florida. Heidi B. King, M.S., is Deputy Director, U.S. Department of Defense (DoD) Patient Safety Program, and Director, Patient Safety Solutions Center, Office of the Assistant Secretary of Defense (Health Affairs) TRICARE Management Activity, Falls Church, Virginia. Please address correspondence to Eduardo Salas, [email protected].
Teamwork and Communication

Twelve Best Practices for Team Training Evaluation in Health Care
Sallie J. Weaver, M.S.; Eduardo Salas, Ph.D.; Heidi B. King, M.S.
Improving communication, a critical component of effective teamwork among caregivers, is the only dimension of teamwork explicitly targeted in the current Joint Commission National Patient Safety Goals (Goal 2, Improve the effectiveness of communication among caregivers).1 Yet dimensions of teamwork underlie nearly every other National Patient Safety Goal in some form. For example, improving the safe use of medications (Goal 3), reducing the risk of hospital infections (Goal 7), and accurately reconciling medication (Goal 8) all require much more than communication. To achieve these goals, providers across the continuum of care must engage in mutual performance monitoring and backup behaviors to maintain vigilant situational awareness. They must speak up with proper assertiveness if they notice inconsistencies or potentially undesirable interactions, and they must engage the patient and his or her family to do the same. They must share complementary mental models about how procedures will be accomplished, the roles and competencies of their teammates, and the environment in which they are functioning. There must be leadership to guide and align strategic processes both within and across teams in order for care to be streamlined, efficient, and effective. In addition, providers, administrators, and patients and their families must want to work with a collective orientation, recognizing that they are all ultimately playing for the same team: that of the patient.

Thanks to the expanding wealth of evidence dedicated to developing our understanding of the role teamwork plays in patient care quality2–6 and provider well-being,7 strategies to develop these skills, such as team training, have been integrated into the vocabulary of health care in the 21st century. Considerable effort and resources have been dedicated to developing and implementing team training programs across a broad spectrum of clinical arenas and expertise levels. For example, anesthesia Crew Resource Management8–10 and TeamSTEPPS11,12 represent the culmination of more than 10 years of direct research and development built on nearly 30 years of science dedicated to the study of team performance and training.13
Article-at-a-Glance

Background: Evaluation and measurement are the building blocks of effective skill development, transfer of training, maintenance and sustainment of effective team performance, and continuous improvement. Evaluation efforts have varied in their methods, time frame, measures, and design. On the basis of the existing body of work, 12 best practice principles were extrapolated from the science of evaluation and measurement into the practice of team training evaluation. Team training evaluation refers to efforts dedicated to enumerating the impact of training (1) across multiple dimensions, (2) across multiple settings, and (3) over time. Evaluations of efforts to optimize teamwork are often afterthoughts in an industry that is grounded in evidence-based practice. The best practices regarding team training evaluation are provided as practical reminders and guidance for continuing to build a balanced and robust body of evidence regarding the impact of team training in health care.

The 12 Best Practices: The best practices are organized around three phases of training: planning, implementation, and follow-up. Rooted in the science of team training evaluation and performance measurement, they range from Best Practice 1 ("Before designing training, start backwards: think about traditional frameworks for evaluation in reverse") to Best Practice 7 ("Consider organizational, team, or other factors that may help (or hinder) the effects of training") and then to Best Practice 12 ("Report evaluation results in a meaningful way, both internally and externally").

Conclusions: Although the 12 best practices may be perceived as intuitive, they are intended to serve as reminders that the notion of evidence-based practice applies to quality improvement initiatives such as team training and team development as equally as it does to clinical intervention and improvement efforts.
Overall, evaluation studies in health care suggest that team training can have a positive impact on provider behavior and attitudes, the use of evidence-based clinical practices, patient outcomes, and organizational outcomes.9,14–21 Such evaluations have begun to build the critical base of evidence necessary to answer questions regarding the overall effectiveness of team training in health care, as well as questions of intra-organizational validity (that is, would the strategy achieve similar, better, or worse outcomes in other units in the same organization?) and inter-organizational validity (that is, would similar, better, or worse outcomes be achieved using the strategy in other organizations?).
Evaluation and measurement are the building blocks of effective skill development, transfer of training, maintenance and sustainment of effective team performance, and continuous improvement.22 Evaluation efforts have varied greatly in their methods, time frame, measures, and design.23,24 The evidence to date surrounding team training evaluation underscores the need to approach the development and maintenance of expert team performance from a holistic systems perspective that explicitly addresses training development, implementation, and sustainment through the lens of continuous evaluation.25 This in turn requires early consideration of the factors, from the individual team member level to the organizational system level, that will help (or hinder) the transfer, generalization, and sustainment of the targeted competencies addressed in training. Human factors models of error underscore that significant events are rarely the result of a single individual acting alone.26,27 This same systems perspective must be applied to evaluating the interventions dedicated to developing the knowledge, skills, and attitudes (KSAs) that are the hallmarks of effective teams.28
This article builds on the existing body of work dedicated to team training evaluation in health care by extrapolating principles from the science of evaluation and measurement into the practice of team training evaluation. Our goal is not to present a new methodology for evaluation but to distill principles from the science and temper them with the practical considerations faced on the front lines, where evaluation efforts compete with limited human, financial, and time resources. We provide guidance for expanding our definition of evidence-based practice to team-based training interventions that have been designed to support and maintain patient safety.
What is Team Training Evaluation?

At the simplest level, team training evaluation refers to assessment and measurement activities designed to provide information that answers the question, Does team training work?29 The purpose of evaluation is to determine the impact of a given training experience on both learning and retention, as well as how well learners can (and actually do) generalize the KSAs developed in training to novel environments and situations over time.28 Transfer of training is the critical mechanism through which training can affect patient, provider, and/or organizational outcomes.
The Science of Team Performance Measurement

There is a science of evaluation and measurement available to guide evaluation, both in terms of what to evaluate and how to carry out evaluation efforts. Although a comprehensive review of team performance measurement is outside the scope of this article (see, for example, Salas, Rosen, and Weaver30 and Jeffcott and Mackenzie31), we briefly summarize several of the critical theoretical considerations found to underlie effective measurement and evaluation to provide a background for the 12 best practices presented in this article. For example, conceptual models of team performance measurement differentiate between two broad dimensions: levels of analysis (individual task work versus teamwork) and type of measure (process versus outcome).32
In terms of levels of analysis, task work refers to the individual-level technical requirements and processes of a given task that are usually specific to a given position, such as how to read and interpret an EKG (electrocardiogram) readout. Teamwork refers to the specific knowledge, behaviors, and attitudes (for example, communication, backup behavior, and cross-monitoring33) that individuals use to coordinate their efforts toward a shared goal. In terms of evaluation, measuring teamwork and task work can support instructional processes by allowing for a more fine-grained distinction regarding opportunities for improvement and can support a just culture of safety.27,34 Within health care, recent studies of near misses and recovered errors also highlight the role that communication, backup behavior, and cross-checking (core components of teamwork) play in mitigating and managing unintentional technical errors.35
In terms of types of measures, process measures capture the specific behaviors, steps, and procedures that a team uses to complete a particular task. For example, evaluations of health care team training have examined behavioral measures of information sharing, information seeking, assertion, backup behavior, and other behavioral aspects of teamwork.5,9,19,20 Such metrics capture the human factor involved in complex care systems.34 Conversely, outcome measures capture the results of these behaviors, often in the form of an evaluative judgment regarding the
effectiveness or quality of a given outcome. Thus, outcome measures are additionally influenced by the context in which a team is performing. Whereas process measures are highly controllable by team members, outcomes are often the product of a constellation of factors, rendering them much less under the team's direct control.36 In health care, patient outcomes or indicators of care quality are undoubtedly the gold standard for measurement in quality improvement (QI) evaluations. Although patient outcomes have been considered the ultimate criteria for evaluation of team training in health care, empirical evidence on the scale necessary to draw statistical conclusions regarding the impact of team training on patient outcomes is only beginning to emerge.4,15,17,18 Although it is critical to measure such outcomes to ascertain the validity of team training effectiveness, they are deficient indicators for diagnosing team training needs or for providing developmental feedback to care team members. Thus, if the purpose of evaluation efforts is to support continuous improvement, it is important for outcome measures to be paired with process measures.
Within health care, the science of measurement and evaluation is also integrated into the disciplines of implementation science and improvement science. These disciplines underscore a need for the science of teamwork and training evaluation to take a systems view of teams and team training.
A Systems View of Team Training Evaluation

As understanding of complex systems has evolved, the definition of teams has also evolved, as reflected in the following definition:

Complex and dynamic systems that affect, and are affected by, a host of individual, task, situational, environmental, and organizational factors that exist both internal and external to the team.37(p. 604)

As such, the systems perspective advocates that training is but a single component in a broader constellation of organizational, task, and individual factors that affect team performance.38 Therefore, to provide valid and reliable indicators of the effectiveness of team training, evaluation must also strive to account for factors that can moderate the effects of team training before, during, and after the actual training event(s). This notion of a systems approach to evaluation is depicted in Figure 1.
The Practice of Team Training Evaluation

A systems perspective on developing expert teams assumes that effective training does not exist without effective evaluation. The complexities of practice, however, can present hurdles to gathering the data, support, and buy-in necessary for effective evaluation. Table 1 summarizes some of the pitfalls and warning signs related to team training evaluation, although there are undoubtedly many more.
In an attempt to provide some mechanisms for mitigating and managing these pitfalls, we present 12 best practices (Table 2), organized under the categories of planning, implementation, and follow-up, regarding the evaluation of team training in health care. Although we recognize that many other best practices could be added to this list, we have attempted to specifically target issues vital for consideration before, during, and after training that facilitate transfer and sustainment. Many of these best practices are generalizable across a multitude of training strategies and may be intuitive to experts in training and adult learning. However, we specifically offer the best practices as reminders oriented toward team training. The insights reflected in the best practices are built on the nearly 30 years of science dedicated to understanding the measurement and assessment of team performance and adult learning,39,40 as well as the work during the last decade or so specifically dedicated to evaluating the impact of team training in health care.
A Systems-Oriented Approach to Evaluation

Figure 1. Effective evaluation demands a systems-oriented approach, with evaluation objectives and specific training objectives aligned across multiple levels of analysis.
PLANNING

To design a training program that meets the organizational definition of effectiveness means first defining what effectiveness means to you, in your organization and for your providers and patients. That means beginning to think about evaluation long before the first slide or simulation scenario is designed. Traditional models of training evaluation such as Kirkpatrick's41 multilevel evaluation framework have been framed from a bottom-up perspective that begins with participant reactions and moves upward through the various levels of learning, behavior, and outcomes. However, clearly linking training objectives to desired outcomes requires a reverse approach that begins by first defining the desired outcomes and the specific behaviors that would enable these outcomes to occur. That means first operationally defining return on investment (ROI). Would a team training program be considered viable if patient falls decreased by 10%; if central lines were replaced every 48 hours reliably; or if providers began to reliably engage in discussions regarding near misses that were observably open, honest, and framed as learning experiences? In this sense, ROI must be approached from a perspective that extends traditional financial indicators to consider both human and patient safety capital. The defining aspect of the human capital perspective is the view of the people and teams who comprise the organization as the ultimate organizational asset. This underscores the principle that investing resources into their development can positively affect quality of care and organizational outcomes. Therefore, when considering team training evaluation, ROI should be conceptualized in a manner consistent with this perspective.

For evidence regarding the effectiveness of a training program to be meaningfully related to outcomes, it is also critical that all core stakeholders, from frontline providers to managers to patients, have ownership in both training design and evaluation. These stakeholders should be asked to complete the following sentence during the earliest stages of training development: "I would consider this training program a success if . . ." This process will help to not only map out specific evaluation metrics and processes for data collection but also to define and refine the ultimate objectives of the training program itself.

This also means evaluating along the way during the training design process; that is, applying the principles of measurement and assessment to the actual training development and planning process. For example, several critical questions should be addressed throughout planning and development, including: Are desired outcomes really a training problem? Is content mapped directly to training objectives? Do training strategies and methods teach learners how to mimic behavior or actively apply new knowledge, skills, and attitudes (KSAs) in novel situations?
Best Practice 1. Before Designing Training, Start Backwards: Think About Traditional Frameworks for Evaluation in Reverse.

Imagine trying to describe an evaluation of a new, experimental drug to the U.S. Food and Drug Administration (FDA) on the basis of a small field study with no control group and none of the other hallmarks of basic experimental design. We would not dare to use anything less than the most robust experimental designs and scientific protocols when evaluating pharmaceutical or surgical treatments for patients. So why would we accept less when evaluating a training program that directly affects how providers and administrators interact with and care for patients?

Table 1. Some Team Training Evaluation Pitfalls and Warning Signs*

Pitfall: Evaluation efforts do not account for or are not aligned with other events or QI initiatives.
Warning sign: The training program is well received, but indicators of training outcomes are not meaningfully changing.

Pitfall: If surveys are used, protected time is not provided for training participants to complete evaluation measures.
Warning sign: Evaluation data collected from providers are coming back incomplete or have been rushed through.

Pitfall: Evaluation planning occurs after training has been designed and/or implemented.
Warning sign: Administrators, training team members, and providers assume evaluation requires a great amount of time and monetary resources to be useful.

Pitfall: Learning measures only measure declarative knowledge or attitudes toward teamwork.
Warning sign: Measures collected after training suggest that training content has been learned; however, behavior on the job remains the same.

Pitfall: Transfer of training is not supported beyond the initial learning experience, that is, beyond the classroom.
Warning sign: Evaluation data show an increase in performance immediately after training but decline relatively quickly back to baseline levels.

Pitfall: Evaluation results are not reported back to the front line in a meaningful way.
Warning sign: Providers express a sense that nothing is done with the evaluation data once collected; they do not know the results of the evaluation they participated in or what actions were implemented as a result.

* QI, quality improvement.
Poor evaluation designs can either make it harder to detect the true effects of training on important outcomes or can skew evaluation results, suggesting that training had an impact on important outcomes when in reality it did not. However, the most common issues associated with training evaluation in the field relate to small samples, inconsistencies, and poorly mapped outcomes, all of which make it more difficult to detect the true effect of training. Integrating as many elements of robust experimental design as possible (when tempered with practical constraints) strengthens the inferences that can be drawn from evaluation results. Although the evidence to date generally demonstrates that team training strategies are effective, such conclusions are muddled by extreme variation across studies, a lack of comparative approaches, uncontrolled sampling variation, and confounding, as pinpointed in reviews of team training22,23 and simulation-based training.42,43

The need for robust evaluation efforts must undoubtedly be tempered with realistic constraints of both monetary and human resources. Thus, while calling for robust evaluation, our goal is not to oversimplify the "how" of implementing such efforts. For example, one of the most robust evaluations of team training in the surgical service line to date found that the reduction in risk-adjusted surgical mortality was nearly 50% greater in U.S. Department of Veterans Affairs (VA) facilities that participated in team training (18% reduction) than in a nontrained control group (7% reduction; risk ratio [RR], 1.49; p = .01).17 However, this study included a sample of more than 182,000 surgical procedures; 108 facilities (74 treatment, 34 control); and a comprehensive training program that included quarterly coaching support and checklists to support transfer of trained teamwork skills to the operational environment. As noted by Pronovost and Freischlag,44 the study was possible only because of substantial investment by the VA system, both monetarily and in terms of leadership and the human resources to conduct training and analyze the data.
Undoubtedly, more studies of this caliber are needed. Given calls for quality and safety improvement at the federal level, there is support available for facilities to engage in robust evaluation efforts. For example, the Agency for Healthcare Research and Quality has funding programs for both research and demonstration projects dedicated to improving team functioning and interdisciplinary care. Similarly, many organizations and some private foundations offer mechanisms to support evaluation efforts. Partnering with local academic institutions can also provide a mechanism for finding manpower resources to collect, analyze, and report evaluation data. Nonetheless, although we encourage comprehensive approaches to evaluation, we are not so naïve as to believe or advocate that all efforts to optimize team performance can or should be the target of large-scale evaluation efforts. What we do argue is that all evaluation efforts, no matter their size or scope, can be and should be based in the tenets of good experimental inquiry. At a local level, QI leaders can invoke the Plan-Do-Study-Act (PDSA) Model for Improvement,45 which, at its core, is a model of evaluation. It provides questions that consider both program implementation and evaluation simultaneously, as follows:

1. What are we trying to accomplish? For example, what behaviors, attitudes, knowledge, patient outcomes, or provider outcomes are we hoping to change?
Table 2. 12 Best Practices for Team Training Evaluation*

Planning
Best Practice 1. Before designing training, start backwards: Think about traditional frameworks for evaluation in reverse.
Best Practice 2. Strive for robust, experimental design in your evaluation: It is worth the headache.
Best Practice 3. When designing evaluation plans and metrics, ask the experts: your frontline staff.
Best Practice 4. Do not reinvent the wheel; leverage existing data relevant to training objectives.
Best Practice 5. When developing measures, consider multiple aspects of performance.
Best Practice 6. When developing measures, design for variance.
Best Practice 7. Evaluation is affected by more than just training itself. Consider organizational, team, or other factors that may help (or hinder) the effects of training (and thus evaluation outcomes).

Implementation
Best Practice 8. Engage socially powerful players early. Physician, nursing, and executive engagement is crucial to evaluation success.
Best Practice 9. Ensure evaluation continuity: Have a plan for employee turnover at both the participant and evaluation administration team levels.
Best Practice 10. Environmental signals before, during, and after training must indicate that the trained KSAs and the evaluation itself are valued by the organization.

Follow-up
Best Practice 11. Get in the game, coach! Feed evaluation results back to frontline providers and facilitate continual improvement through constructive coaching.
Best Practice 12. Report evaluation results in a meaningful way, both internally and externally.

* KSAs, knowledge, skills, and attitudes.
2. How will we know that a change is an improvement? What indicators will tell us that team training is having the desired effect?

3. What change can we make that will result in an improvement? What training strategies, methods, and tools to support transfer will affect the indicators identified in Question 2?

4. How will we know that improvement is related to the implemented changes? Do the improvements we see occur after training is implemented? If we were to vary our training, would we likely see variation in our outcomes? Do the processes or outcomes of trained providers differ from those of untrained providers? What factors outside of training could have caused the improvement or lack of improvement?
Training programs and evaluation efforts in any capacity in health care are an investment in patient care quality and in the quality of the working environment for providers. Considering that resources are finite, organizational initiatives must be evaluated to make data-based decisions. Why expend these resources, on either the training itself and/or on the evaluation, unless the results of these efforts are a valid and reliable indication of true effects? To garner valid, defensible data regarding the impact of team training, we must strive to apply the principles of experimental design that underlie our most basic clinical studies, within the constraints inherent in the field context. As Berwick noted, measurement helps "to know whether innovations should be kept, changed, or rejected; to understand causes; and to clarify aims."46(p. 312) The cost of not having this information arguably outweighs the effort and cost invested to obtain good data on the effects of quality and process improvement efforts such as team training.
Best Practice 2. Strive for Robust, Experimental Design in Evaluation Efforts: It Is Worth the Headache.

To create valid and reliable indicators of effectiveness, it is important to build evaluation procedures and measures based on the science of training evaluation; however, the procedures and measures must be integrated with relevant contextual expertise. Frontline staff know the intricacies of daily work on the floor; they know what will be used and what will not be used, when certain measures can or should be collected and when they should not, as well as what will motivate participation in the evaluation efforts and what will hinder it. So ask them, and do it early in the training development phase. The evaluation design team should represent a mix of administrators at multiple levels, frontline providers of multiple levels (who work multiple shifts), and system-level (or external) individuals well versed in measurement and QI.
Best Practice 3. When Designing Evaluation Plans and Metrics, Ask the Experts: Your Frontline Staff.

Robust training evaluation, however, does not mean starting from scratch. Hospitals and other health care environments are virtual data gold mines, considering the breadth and depth of metrics already calculated and reported for accreditation, external monitoring, and QI. If existing data points align directly with targeted training objectives, leverage them as indicators in the battery of relevant evaluation metrics. If a relevant measure has been tracked for a preceding period of time, retrospective analysis allows for longitudinal analyses that quantify the degree of change attributable to training.
Best Practice 4. Do Not Reinvent the Wheel; Leverage Existing Data Relevant to Training Objectives.

Best Practice 3 must be tempered with the fact that perhaps the most extensive mistake in training evaluation relates to efforts that measure only those indicators for which the data are the easiest to collect and track. Team training is not a single-dose drug whose effect can be immediately identified through one or two patient outcome indicators. Although teamwork has been related to patient outcomes,17 as stated, teamwork also affects patient safety through indirect pathways, such as creating the psychological safety that is necessary for providers to speak up when they notice an inconsistency. Evaluation protocols must be designed to assess the impact of training by using multiple indicators across multiple levels of analysis. For example, assessments of trainee reactions should capture satisfaction (for example, with trainer and materials/exercises), perceived utility, and perceived viability of both the strategies and methods used in training. Measures of learning should go beyond declarative knowledge to evaluate changes in knowledge structure (that is, mental models) and procedural knowledge. Measures of behavior should assess both analogue transfer (transferring learned KSAs into situations highly similar to those encountered in training) and adaptive transfer (the degree to which KSAs are generalized to novel situations). This includes analyses of the barriers and challenges that providers encounter on the job which inhibit transfer of desirable skills. Finally, outcomes of training should be represented by indicators at the level of the patient (for example, safety, care quality, satisfaction), provider (satisfaction, turnover intentions), and organization (quality and safety, turnover, financial).
Best Practice 5. When Developing Measures, Consider Multiple Aspects of Performance.

It has undoubtedly been difficult to quantify the relationship between teamwork, team training, and critical outcomes.24 The base rate for outcomes such as adverse events is low. Many outcome measures collected as indicators in team training evaluation may show little to no variance, which limits the power of traditional statistical tests used to assess change. The very nature of statistical testing requires variance in both predictors and outcomes. Therefore, evaluation metrics must be designed with variance in mind.
An innovative approach that helps in creating this much-needed variance is the Adverse Outcome Index (AOI).47,48 The index combines several key outcomes, assigns a weight to each outcome, and then combines them into scores (usually out of 1,000) to track performance over time. It simultaneously captures multiple, important outcomes and helps create the variance necessary for statistical testing.
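A weighted composite of this kind can be sketched as follows. This is an illustration of the idea described above, not the published AOI: the outcome names and weights are hypothetical.

```python
# Hypothetical outcome weights on a 0-1,000 severity scale; the real
# AOI uses a published, validated set of outcomes and weights.
WEIGHTS = {
    "maternal_death": 750,
    "uterine_rupture": 100,
    "unplanned_icu_admission": 65,
    "birth_trauma": 60,
}

def weighted_outcome_score(cases):
    """Mean weighted severity per case, on a 0-1,000 scale.

    `cases` is a list of dicts mapping an outcome name to whether it
    occurred for that case. Rare events of differing severity are
    folded into one continuous score, creating month-to-month variance
    that a single low-base-rate indicator would lack.
    """
    if not cases:
        return 0.0
    total = sum(
        WEIGHTS[outcome]
        for case in cases
        for outcome, occurred in case.items()
        if occurred
    )
    return total / len(cases)
```

Tracking this score over time yields a continuous series suitable for the statistical comparisons discussed above.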
Best Practice 6. When Developing Measures, Design for Variance.

Because training does not occur in a vacuum, consider how learning climate, leadership and staff support, opportunities to practice, reinforcement, feedback systems, sustainment plans, and resources will affect the degree to which trained KSAs are actually used on the job. Consider how these factors are reflected in your evaluation measures. If such confounding factors are measured, they can be accounted for statistically, improving the power of your statistical tests and heightening the validity of conclusions.

Best Practice 7. Evaluation Is Affected by More Than Just Training Itself: Consider Organizational, Team, or Other Factors That May Help (or Hinder) the Effects of Training (and Thus Evaluation Outcomes).
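The statistical payoff of measuring a confounder can be sketched with simulated data: regressing the outcome on the training indicator alone versus on training plus the measured factor. All variable names and effect sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated evaluation data: `trained` indicates training participation;
# `support` is a measured confounder (e.g., a unit-level leadership-
# support score). Both influence the observed teamwork score.
n = 200
trained = rng.integers(0, 2, n).astype(float)
support = rng.normal(0.0, 1.0, n)
teamwork = 2.0 * trained + 1.5 * support + rng.normal(0.0, 1.0, n)

def fit(X, y):
    """Ordinary least squares; returns coefficients and residual variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid.var()

ones = np.ones(n)
# Model 1: training indicator only.
_, var_unadjusted = fit(np.column_stack([ones, trained]), teamwork)
# Model 2: training indicator plus the measured confounder.
_, var_adjusted = fit(np.column_stack([ones, trained, support]), teamwork)

# Modeling the confounder absorbs variance that would otherwise be
# noise, sharpening the estimate of the training effect.
assert var_adjusted < var_unadjusted
```

The same logic holds for any measured moderator: unmeasured, it inflates the error term; measured, it becomes a covariate that increases power.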
IMPLEMENTATION

A structured approach to training design built on early consideration of evaluation lays a foundation for successful training implementation. However, even the most well-planned team training programs using the most advanced training methods will fail if a systems-oriented approach is lacking during implementation. Organizational, leader, and peer support for training significantly affects trainee motivation, the degree to which training is transferred into daily practice, and participation in evaluation efforts. Socially powerful individuals (respected official and unofficial leaders viewed as positive role models) are vital mechanisms for creating trainee investment and ownership in both the training itself and related evaluation processes. Even if staff have and want to use targeted teamwork skills, they will hesitate to use these skills if their doing so is not supported by their immediate physician leaders and peers. Similarly, they will hesitate to participate in evaluation efforts if a climate of support and learning is not adopted. Staff must be able to trust that data collected for training evaluation efforts will be used for that purpose alone: not to judge them personally, judge their competence, or for reporting purposes. Training evaluation is about just that: training, not evaluating individuals or teams.
Best Practice 8. Engage Socially Powerful Players Early; Physician, Nursing, and Executive Engagement Is Crucial to Evaluation Success.

Turnover can be high for frontline providers and members of the evaluation planning team, especially as administrative members get pulled onto other projects. This lack of continuity creates inherent problems for training evaluation efforts. It is important to consider contingency plans early that explicitly deal with turnover at both the trainee and planning team level. In the planning stages, it is vital to decide how new, untrained individuals' needs will be addressed and how refresher training for staff and physicians will play out. Furthermore, it is important to consider how turnover will be accounted for in statistical evaluation analyses. Although a traditional intent-to-treat approach can be used,49 metrics such as team training load, an index of the proportion of trained team members,20,50 can also be used to account for turnover of trained team members.
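The team training load index cited above can be sketched as a simple proportion. The function and member identifiers below are illustrative, not the published operationalization in references 20 and 50.

```python
def team_training_load(team, trained_ids):
    """Proportion of a team's current members who completed training.

    As trained members leave and untrained members join, the load
    drops, giving evaluation analyses a way to account for turnover.
    """
    if not team:
        return 0.0
    return len(set(team) & set(trained_ids)) / len(team)

# Example: two of the four current members were trained; one trained
# clinician (rn_15) has since left the team.
load = team_training_load(
    ["rn_01", "rn_02", "md_07", "md_09"],
    {"rn_01", "md_07", "rn_15"},
)
```

Computed per team per measurement period, the index can enter statistical models as a continuous predictor alongside the training indicator.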
To preserve continuity at the evaluation planning team level, create an evaluation briefing book that details the purpose, aims, and value of the evaluation; the explicit data collection protocol; measures collected; and time line, to bring new members up to speed. This also creates a historical record of final evaluation efforts, which can help in developing future briefings and publications, as well as offering a template for future training projects.
Best Practice 9. Ensure Evaluation Continuity: Have a Plan for Employee Turnover at Both the Participant and Evaluation Administration Team Levels.

Given the ultimately precious resource of time, evaluation efforts, including filling out measures, observation, and providing feedback, can easily be seen as low priorities and hassles. This can lead to measures being filled out quickly without much thought (or not completed at all), thus limiting the integrity of evaluation data. Measures filled out carelessly can be more detrimental to generalization and sustainment than conducting no training evaluation at all.

To optimize the integrity of the evaluation data collected, dedicated time and resources must be provided for participating in evaluation efforts. In addition, evaluation should be explicitly considered to be part of the training program itself. The systems approach means that participation in training is really only just beginning when trainees walk out of the training environment and into their daily practices. The experiential learning necessary for generalizing and sustaining trained KSAs in the actual care environment is arguably more influential on training success than what actually happens in the classroom or simulation laboratory.
Best Practice 10. Environmental Signals Before, During, and After Training Must Indicate That the Trained KSAs and the Evaluation Itself Are Valued by the Organization.
FOLLOW-UP

Spreading and sustaining QI initiatives (such as team training) have been identified as two of the greatest challenges faced by health care leadership.51,52 The science of training and adult learning underscores the principle that team training is not simply a place where clinicians go or necessarily a single program or intervention.13 Therefore, what happens after training in the actual practice environment is more important than what happens in the classroom. Developing and implementing a strategic sustainment plan is critical for both valid evaluation and spread.

Inherent in the definition of evaluation is the importance of using what was learned from evaluation data in a meaningful way.24 Feeding data back to frontline providers and mapping actionable changes that result from evaluation findings can be important catalysts for sustainment and maintenance of teamwork skills developed in training. In addition, coaching is one mechanism for implementing direct support for trainees as they attempt to generalize and sustain trained KSAs in their daily practice environment. Constructive on-the-floor coaching demonstrates supervisory and peer support for appropriate use of the trained KSAs and can also cue providers as to when and where it is appropriate to use trained KSAs in their actual daily work. Furthermore, simple recognition and reinforcement for using effective teamwork skills on the job can be a powerful motivator for integrating training concepts into daily practice.
Best Practice 11. Get in the Game, Coach! Feed Evaluation Results Back to Frontline Providers and Facilitate Continual Improvement Through Constructive Coaching.

As evaluation data are collected, it is important to recognize that statistical significance may not capture practical significance; therefore, it is important to report the results of evaluation efforts in multiple ways that are practically meaningful in terms of the training objectives. This may mean including traditional statistical analysis of targeted indicators, a more qualitative approach, or a method that mixes both quantitative and qualitative analyses. For example, statistical results can be combined with explicit stories about the effects of training compiled directly from trainees.

Most importantly, evaluation efforts must be reported with thoroughness and rigor. This means adhering to the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines for QI reporting.53 These guidelines are also helpful to consider during early planning and development phases to ensure that critically important elements of evaluation design and analysis are addressed and planned for.
Best Practice 12. Report Evaluation Results in a Meaningful Way, Both Internally and Externally.
Conclusions

Although the 12 best practices may be perceived as intuitive to those working in quality development and improvement on a daily basis, they are intended to serve as reminders that the notion of evidence-based practice applies to QI initiatives such as team training and team development as equally as it does to clinical intervention and treatment. Robust evaluation designs and assessment metrics are the critical foundation for valid, effective QI efforts and are necessary components for continuing to build the body of evidence regarding what works (and what does not) to optimize patient safety within complex care delivery systems.
This work was supported by funding from the Department of Defense (Award Number W81XWH-05-1-0372). All opinions expressed in this paper are those of the authors and do not necessarily reflect the official opinion or position of the University of Central Florida, the University of Miami, TRICARE Management, or the U.S. Department of Defense. A portion of this work was presented at the U.S. Agency for Healthcare Research and Quality Annual Conference, Bethesda, Maryland, Sep. 15, 2009: "Patient Safety Training Evaluations: Reflections on Level 4 and More." http://www.ahrq.gov/about/annualconf09/salas.htm (accessed Jun. 21, 2011).
References
1. The Joint Commission: 2011 Comprehensive Accreditation Manual for Hospitals: The Official Handbook. Oak Brook, IL: Joint Commission Resources, 2010.
2. Klein K.J., et al.: Dynamic delegation: Shared, hierarchical, and deindividualized leadership in extreme action teams. Administrative Science Quarterly 51:590–621, Dec. 2006.
3. Manser T.: Teamwork and patient safety in dynamic domains of healthcare: A review of the literature. Acta Anaesthesiol Scand 53:143–151, Feb. 2009.
4. Sorbero M.E., et al.: Outcome Measures for Effective Teamwork in Inpatient Care. RAND technical report TR-462-AHRQ. Arlington, VA: RAND Corporation, 2008.
5. Thomas E.J., et al.: Teamwork and quality during neonatal care in the delivery room. J Perinatol 26:163–169, Mar. 2006.
6. Williams A.L., et al.: Teamwork behaviours and errors during neonatal resuscitation. Qual Saf Health Care 19:60–64, Feb. 2010.
7. Fassier T., Azoulay E.: Conflicts and communication gaps in the intensive care unit. Curr Opin Crit Care 16:654–665, Oct. 2010.
8. Gaba D.M.: Crisis resource management and teamwork training in anesthesia. Br J Anaesth 105:3–6, Jul. 2010.
9. Howard S.K., et al.: Anesthesia crisis resource management training: Teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 63:763–770, Sep. 1992.
10. Holzman R.S., et al.: Anesthesia crisis resource management: Real-life simulation training in operating room crises. J Clin Anesth 7:675–687, Dec. 1995.
Sallie J. Weaver, M.S., is Doctoral Candidate and Eduardo Salas, Ph.D., is Pegasus Professor and Trustee Chair, Department of Psychology and Institute for Simulation & Training, University of Central Florida, Orlando, Florida. Heidi B. King, M.S., is Deputy Director, U.S. Department of Defense (DoD) Patient Safety Program, and Director, Patient Safety Solutions Center, Office of the Assistant Secretary of Defense (Health Affairs) TRICARE Management Activity, Falls Church, Virginia. Please address correspondence to Sallie J. Weaver, [email protected].
Copyright 2011 The Joint Commission
http://www.ingentaconnect.com/content/external-references?article=0952-8180(1995)7L.675[aid=9308524]http://www.ingentaconnect.com/content/external-references?article=0095-6562(1992)63L.763[aid=1337429]http://www.ingentaconnect.com/content/external-references?article=0095-6562(1992)63L.763[aid=1337429]http://www.ingentaconnect.com/content/external-references?article=0007-0912(2010)105L.3[aid=9613391]http://www.ingentaconnect.com/content/external-references?article=0743-8346(2006)26L.163[aid=8595363]http://www.ingentaconnect.com/content/external-references?article=0001-5172(2009)53L.143[aid=9199165]http://www.ahrq.gov/about/annualconf09/salas.htmhttp://www.ahrq.gov/about/annualconf09/salas.htm
August 2011 Volume 37 Number 8
The Joint Commission Journal on Quality and Patient Safety
Shad Deering, M.D.; Michael A. Rosen, Ph.D.; Vivian Ludi, R.N.; Michelle Munroe, C.N.M.; Amber Pocrnich, R.N.C.; Christine Laky, M.D.; Peter G. Napolitano, M.D.
Changes to the processes of delivering care to wounded soldiers in the modern military health care system have drastically improved patient outcomes in the wars in Iraq and Afghanistan when compared to other major conflicts.1 A fundamental change contributing to this improvement has been a focus on moving patients quickly through levels (or echelons) of care to get the wounded to a facility with the appropriate capabilities for definitive care. This often involves rapid and frequent transitions of care for critically injured patients and consequently requires high degrees of communication and coordination among team members within as well as between levels of care. As in civilian health care, effective teamwork is crucial for success.
In the decade since the Institute of Medicine's (IOM) groundbreaking report To Err Is Human,2 a wide variety of teamwork-based interventions have been implemented.3 This article documents the implementation of the TeamSTEPPS program throughout medical facilities in Iraq between November 2007 and December 2008, one of the most intense phases of the conflict. It also reports on the intervention's impact on the rate of different types of patient safety events at the initial location of implementation, a combat support hospital (CSH) in Baghdad.
TeamSTEPPS, the Military Healthcare System, and the TeamSTEPPS Implementation
In the following sections, background information on the TeamSTEPPS program, the organization of the deployed Military Healthcare System (MHS), and the TeamSTEPPS implementation initiative in Iraq is provided.
TEAMSTEPPS
The TeamSTEPPS program is an evidence-based teamwork system aimed at optimizing patient outcomes by improving communication and other teamwork skills among health care professionals.4 An intervention designed to develop a culture of safety through training teamwork skills, TeamSTEPPS was developed by the U.S. Department of Defense Patient Safety Pro-
Teamwork and Communication
On the Front Lines of Patient Safety: Implementation and
Evaluation of Team Training in Iraq
Article-at-a-Glance
Background: Team training has been identified as a key strategy for reducing medical errors and building a culture of safety in health care. Communication and coordination skills can serve as barriers to potential errors, as in the modern deployed U.S. Military Healthcare System (MHS), which emphasizes rapid movement of critically injured patients to facilities capable of providing definitive care. A team training intervention, TeamSTEPPS, was implemented on a large scale during one of the most intense phases of the conflict in Iraq. This evaluation of the program constituted the first undertaken in a combat theater of operations.
Implementing TeamSTEPPS in Iraq: The Baghdad combat support hospital (CSH) conducted continuous operations from a fixed facility for a 13-month deployment between November 2007 and December 2008. The TeamSTEPPS implementation in Iraq began at this facility and spread throughout the combat theater of operations. Teamwork training was implemented in two primary training sessions, followed up with reinforcement of team behaviors on the unit by hospital leadership.
Results: A total of 153 patient safety reports were submitted during the 13 months reviewed, 94 before TeamSTEPPS implementation and 59 afterwards. After training, there were significant decreases in the rates of communication-related errors, medication and transfusion errors, and needlestick incidents. There was also a significant decrease in the rate of incidents for which communication was coded as the primary teamwork skill that could have potentially prevented the event.
Conclusions: Process improvement programs such as TeamSTEPPS implementation can be conducted under the extremely austere conditions of a CSH in a combat zone. Teamwork training decreased medical errors in the CSH while deployed in the combat theater in Iraq.
gram, in collaboration with the Agency for Healthcare Research and Quality. Although local applications of this program have been evaluated in the United States,5,6 its potential to affect patient safety has not previously been investigated in a combat theater of operations.
THE MILITARY HEALTHCARE SYSTEM IN A COMBAT THEATER OF OPERATIONS
In the Iraq theater of operations during 2007–2009 there were four echelons (or levels) of care, as summarized in Table 1. This organization and process is intended to move wounded soldiers as quickly as possible to facilities that have the capabilities to provide definitive care.
When the Baghdad CSH began the TeamSTEPPS implementation, there were three CSHs in the combat theater. Each CSH is a flexible collection of people, equipment, and other resources that can be divided into multiple task force (TF) sites as the needs demand and conditions permit in the combat theater. These TF sites were the workhorses of surgical support in combat zones, with wounded flowing to them directly from the point of injury on the battlefield. This often resulted in rapid escalation of patient census, followed by rapid de-escalation within a matter of hours. Each CSH typically had around 500 individuals assigned before being split into separate TF sites that would function at different locations. Local Iraqi patients who were admitted to the CSH were discharged or transferred to civilian Iraqi care, which was very limited, within 1 to 30 days when stable. U.S. soldiers were transferred to Level IV echelon care within 6 to 48 hours unless delayed by operational or environmental conditions.
IMPLEMENTING TEAMSTEPPS IN IRAQThe Baghdad CSH conducted
continuous operations from a
fixed facility for a 13-month deploymentbetween November2007 and
December 2008. The TeamSTEPPS implementationin Iraq began at this
facility and spread throughout the combattheater of operations. We
now describe the implementation,which began at the Baghdad CSH and
proceeded to other loca-tions within the combat theater of
operations.
Implementation at the Baghdad CSH. The patient safety officer [V.L.], along with several TeamSTEPPS Master Trainers* assigned to the Baghdad CSH, developed a two-phase approach to implementing TeamSTEPPS. The basic content of the training was not altered from what is delivered in civilian facilities, but examples from the CSH context were used. In the first phase, as many staff as possible were exposed to the TeamSTEPPS concepts and tools. The second phase focused on providing more comprehensive training as scheduling allowed.
The first phase of implementation began with two TeamSTEPPS fundamentals courses attended by one to three individuals from every unit and section of the hospital, for a total of 50 people. This initial cadre of trained staff served as leaders in their sections and were each responsible for implementing one TeamSTEPPS concept in their area twice a week. Staff members were trained on these tools in several ways, including morning reports, posting on a community whiteboard, daily announcements by the deputy in charge of physicians, a shared intranet-based calendar, and direct e-mails to all CSH staff.
In addition, the TeamSTEPPS modules were accessible to all staff via the intranet. Although simulation is the preferred method of delivery for team training, the needed resources were not available. However, delivering the teamwork content to individuals through Web-based methods does not differ signifi-
Table 1. Overview of Echelons of Care and the Units Included in the TeamSTEPPS Implementation

Level I: Battalion Aide Stations (BASs). BASs are embedded within the troops and serve as the first line of care for wounded soldiers. They consist of the unit medics and usually a general medical officer.

Level II: Forward Surgical Teams (FSTs). FSTs are small, 20-member teams consisting of 3 general surgeons, 1 orthopedic surgeon, 2 nurse anesthetists, 3 nurses, medics, and other support staff. FSTs are designed to be rapidly deployable (setup time of 1 hour) and to move close to the front lines. The goal of the FST is to stabilize and evacuate patients to higher levels of care.

Level III: Combat Support Hospitals (CSHs). CSHs are 200-plus-bed hospitals with operating rooms and radiology and laboratory services. The goal is to have patient stays no longer than 3 days before the patient is either released or transferred to the next level of care if further treatment is needed.

Level IV: Definitive Care.* Level IV facilities, located outside the theater of operations, are where definitive treatment is provided for patients needing more than 30 days of care.

* Not a part of this TeamSTEPPS implementation.

* The TeamSTEPPS Master Trainer course includes content on both teamwork and implementation and improvement.
cantly from typical delivery of TeamSTEPPS through didactic methods, which have proven effective.6 Section leaders also trained, modeled, and coached these behaviors in their area of the hospital. All staff (including, for example, patient administrators and laboratory personnel), not just the providers, received TeamSTEPPS training.
In the second phase of implementation, a four-hour TeamSTEPPS fundamentals course was given one to three times a week. In a three-month period, all 330 individuals (providers and general staff) at the Baghdad component of the CSH, including the Iraqi translators, received the fundamentals course. Typical deployment length for the CSH was 12 to 15 months. There was some turnover of physicians and nurses, depending on their specialty (for example, surgeons typically deployed for 6 months, emergency department physicians for 12 months), but in general, turnover was minimal.
Expansion Throughout the Combat Theater of Operations. The TeamSTEPPS implementation efforts at the Baghdad CSH were recognized early by leadership at the medical brigade level (the command for all medical services in the combat theater of operations), and a plan for spreading this initiative to all medical facilities in Iraq (Levels I through III) was requested soon after the implementation began. TeamSTEPPS was viewed as a potential solution to problems that led to a sentinel event* that the CSH experienced before implementation, and early successes and "good catches" helped to solidify support for the broader implementation.
The implementation strategy included two general methods:
1. Level I and Level II facilities used Web-based training; these were small units, and travel within a combat zone was difficult and dangerous.
2. Level III facilities sent a champion or change team, which typically consisted of a physician, a nurse, and a noncommissioned officer, to the CSH in Baghdad for a 2.5-day session covering TeamSTEPPS fundamentals, trainer, and culture change training. These teams then returned to their facilities and repeated the two-phase implementation method, as described previously. In some instances, instructors from the Baghdad CSH traveled to other CSH sites to assist the change team with training sessions.
In total, more than 3,000 personnel were trained in TeamSTEPPS concepts across the three levels of care in Iraq.
Methods
PRE- AND POSTIMPLEMENTATION PERIODS
For the purposes of evaluation, the 13-month deployment period for the CSH in Baghdad was divided into a 7-month preimplementation period and a 6-month postimplementation period (after TeamSTEPPS training was implemented and the majority of providers/staff were trained).
DATA COLLECTION: PATIENT SAFETY INCIDENT REPORTS
During the period of this process improvement project, patient safety/incident reports were filled out for any unusual occurrence or near miss/"good catch." Examples of such events included, but were not limited to, missed or incorrect medication doses, delay-in-care episodes, and missed physician orders. This standard patient safety report, the patient safety event (PSE) form, was modified by adding items that were appropriate to the environment of a deployed CSH. In addition, the form was changed to include the steps that the staff were expected to go through to debrief after each patient safety event, to guide the debrief process. Specifically, items were added to report communication and handoff clarity, role and responsibility clarity, maintenance of situation awareness, distribution of workload, and task assistance requests and provision, as well as an assessment of errors and lessons learned (Appendix 1, available in online article; see SUPERVISOR/Team Leader section for specific items). We now discuss the use of these data in the process improvement initiative and data coding for analysis.
INCIDENT REPORT CODING
The patient safety reports were analyzed in two steps. In the first step, the event was analyzed as it happened for the purpose of immediate process improvement. In the second step, all reports for the 13-month period were retrospectively reviewed.
Step 1. The CSH patient safety committee, which consisted of the patient safety officer, CSH leadership, and representatives from each clinical area, held monthly meetings to review PSE reports. This committee addressed precipitating factors leading to PSEs. After TeamSTEPPS training, the PSE form included TeamSTEPPS tools, such as a Brief or Huddle (Table 2), in the analysis process for each incident. Breakdowns in specific teamwork behaviors were considered as factors contributing to the event. In addition, other features of the event, including general type of event (for example, medication and transfusion errors, needlestick exposures), were tracked.
Step 2. At the end of the CSH deployment, a group