Evidence Based Practice

This page offers a starting point for finding information on Evidence Based Practice [EBP].

There are many definitions of EBP with differing emphases. A survey of social work faculty even showed that faculty hold differing ideas about just what makes up EBP, which can be a source of confusion for students and newcomers to this topic of study. Perhaps the best known is Sackett et al.'s (1996, pp. 71-72) now dated definition from evidence based medicine: "Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice. Increased expertise is reflected in many ways, but especially in more effective and efficient diagnosis and in the more thoughtful identification and compassionate use of individual patients' predicaments, rights, and preferences in making clinical decisions about their care. By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centered clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens."

This early definition, however, proved to have some important limitations in practice. Haynes et al. (2002) - Sackett's colleagues in the McMaster Group of physicians in Canada - pointed out that the definition did not pay enough attention to the traditional determinants of clinical decisions. That is, it purposefully emphasized research knowledge but did not equally emphasize the client's needs and situation, the client's stated wishes and goals, or the clinician's expertise in assessing and integrating all these elements into a plan of intervention.

The contemporary definition of EBP is simply "the integration of the best research evidence with clinical expertise and patient values" (Sackett et al., 2000, p. x). This simpler, current definition gives equal emphasis to 1) the patient's situation, 2) the patient's goals, values and wishes, 3) the best available research evidence, and 4) the clinical expertise of the practitioner. The difference is that a patient may refuse interventions with strong research support due to differences in beliefs and values. Similarly, the clinician may be aware of factors in the situation (co-occurring disorders, lack of resources, lack of funding, etc.) that indicate interventions with the best research support may not be practical to offer. The clinician may also notice that the best research was done on a population different from the current client, making its relevance questionable even though its rigor is strong. Such differences may include age, medical conditions, gender, race or culture, and many others.

This contemporary definition of EBP has been endorsed by many social workers. Gibbs and Gambrill (2002), Mullen and Shlonsky (2004), Rubin (2008), and Drisko and Grady (2012) all apply it in their publications. Social workers often add emphasis to client values and views as a key part of intervention planning. Many social workers also argue that clients should be active participants in intervention planning, not merely recipients of a summary of "what works" from an "expert" (Drisko & Grady, 2012). Actively involving clients in intervention planning may also be a useful way to enhance client motivation and to empower clients.

Some in social work view EBP as a mix of a) learning what treatments "work" based on the best available research (whether experimental or not), b) discussing client views about the treatment to consider cultural and other differences, and to honor client self-determination and autonomy, c) considering the professional's "clinical wisdom" based on work with similar and dissimilar cases, which may provide a context for understanding the research evidence, and d) considering what the professional can, and cannot, provide fully and ethically (Gambrill, 2003; Gilgun, 2005). With much similarity but some differences, the American Psychological Association (2006, p. 273) defines EBP as "the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences." Gilgun (2005) notes that while research is widely discussed, the meanings of "clinical expertise" and "client values and preferences" have not been widely discussed and have no common definition.

Drisko & Grady (2012) argue that the EBP practice decision-making process defined by Sackett and colleagues fits poorly with the way health care payers enact EBP at a macro, policy level. Clinical social workers point to lists of approved treatments that will be funded for specific disorders - and note that this application of EBP does not include specific client values and preferences and ignores situational clinical expertise. Drisko & Grady point out that there is a conflict between the EBP model and how it is implemented administratively to save costs in health care. While cost savings are very important, this use of "EBP" is not consistent with the Sackett model. Further, the criteria used to develop lists of approved treatments are generally not clear or transparent - or even stated. Payers very often appear to apply standards that differ from those of multidisciplinary sources of systematic reviews of research, like the Cochrane Collaboration. Clinical expertise and client values too often drop out of administrative applications of EBP.

Evidence based practice is one useful approach to improving the impact of practice in medicine, psychology, social work, nursing and allied fields. Of course, professions have directed considerable attention to "evidence" for many years (if not for as long as they have existed!). They have also honored many different kinds of evidence. EBP advocates put particular emphasis on the results of large-scale experimental comparisons to document the efficacy of treatments against untreated control groups, against other treatments, or both. (See, for example, the University of Oxford's Hierarchy of Evidence for EBM.) They do this because well conceptualized and completed experiments (also called randomized controlled trials, or RCTs) are a great way to show that a treatment caused a specific change. The ability to make cause-and-effect determinations is the great strength of experiments. Note that this frames "evidence" in a very specific and delimited manner. Scholars in social work and other professions have argued for "Many Ways of Knowing" (Hartman, 1990). They seek to honor the knowledge developed by many different kinds of research - and to remind clinicians, researchers and the public that the conceptualization underlying research may be too narrow and limited. Thus Drisko & Grady (2012) argue that EBP, as summarized by researchers, may devalue non-experimental research. Experiments are only as good as the exploratory research that discovers new concepts, and the descriptive research that helps in the development of tests and measures. Emphasizing only experiments ignores the very premises on which they rest. Finally, note that EBM/EBP hierarchies of research evidence include many non-experimental forms of research, since experiments for some populations may be unethical or impractical - or simply don't address the kinds of knowledge needed in practice.

All the "underpinnings" of experimental research: the quality of conceptualizations, the quality of

measures, the clarity and specificity of treatments used, the quality of samples studied and of the

quality and completeness of collected data are assumed to be sound and fully adequate when used to

determine "what works." There is also an assumption that the questions framing the research allow

for critical perspectives and are fully ethical. Social workers would argue they should also include

social diversity samples well - since diverse kinds of people show up at real world clinics.

International standards affirm basic ethical principles supporting respect for persons, beneficence and

social justice (see The Belmont Report.)

Is EBP only about Intervention or Treatment Planning?

No. This may be the most common application of EBP for clinical social workers, but the EBP process can also be applied to a) making choices about diagnostic tests and protocols to ensure thorough and accurate diagnosis, b) selecting preventive or harm-reduction interventions or programs, c) determining the etiology of a disorder or illness, d) determining the course or progression of a disorder or illness, e) determining the prevalence of symptoms as part of establishing or refining diagnostic criteria, f) completing economic decision-making about medical and social service programs (University of Oxford Centre for Evidence-based Medicine, 2011), and even g) understanding how a client experiences a problem or disorder (Rubin, 2008).

EBP is also not the same as defining empirically supported treatments (ESTs), empirically supported interventions (ESIs), or 'best practices.' These are different ideas based on different models. Unlike EBP, these models do not include client values and preferences or clinical expertise.

EBP as a Social Movement

While EBP is most often described in terms of a practice decision-making process, it is also useful to think of it as a much larger social movement. Drisko and Grady (2012) argue that at a macro level, EBP is actively used by policy makers to shape service delivery and funding. At a mezzo level, EBP is impacting the kinds of interventions that agencies offer, and even shaping how supervision is done. Drisko and Grady (2012) also argue that EBP is establishing a hierarchy of research evidence that privileges experimental research over other ways of knowing. Experimental evidence has many merits, but it is not the only way of knowing of use and importance in social work practice. Finally, the impact of EBP may alter how both practice and research courses are taught in social work. There are other aspects of EBP beyond the core practice decision-making process that are re-shaping social work practice, social work education, and our clients' lives. As such, EBP may be viewed as a public idea or a social movement at a macro level.

Why Evidence Based Practice or EBP?

It is one step toward making sure each client gets the best service possible.

Some argue it helps keep your knowledge up to date, supplements clinical judgment, can save time and, most important, can improve care and even save lives. It's a way to balance your own views with large-scale research evidence.

Some say it's unethical to use treatments that aren't known to work. (Of course, services may need to be so individualized in unique circumstances that knowing "what works" in general may not be the most salient factor in helping any particular client. Still, using the best available research knowledge is always beneficial.)

Several web sites serve as portals to bodies of research useful to EBP. The focus of these organizations varies, but the emphasis remains on (mainly) experimental demonstration of the efficacy of treatments.

How is EBP Implemented in Practice?

Profiling research that informs professionals and clients about what works is where evidence based practice starts. These summaries tell us what we know about treatment and program efficacy based on experimental work - as well as what we don't know or aren't really sure about.

Having access to information on what works allows professionals, in conjunction with clients, to select treatments that are most likely to be helpful (and least likely to be harmful) before intervention begins. Practice evaluation is quite different in that it takes place at the start of treatment, during treatment and after treatment. Practice evaluation also uses single case methods rather than large sample, experimental research designs. EBP and practice evaluation work together very well, but they have different purposes and use very different methods.

The creation of "User's Guides" is one way to make the results of research more available to

practitioners. In medicine, the idea is to get research results to the practitioner in an easy to

assimilate fashion, though this often has a price.

Funding is being offered to support EBP from governments and private/insurance sources.

However, to understand and critically appraise this material, a lot of methodological knowledge is needed. Sites offering introductions to the technology of EBP are growing.

How is EBP Taught?

There are some useful resources for Teaching and Learning about EBP. One fine example is offered by Middlesex University in the United Kingdom, which includes good information on critical appraisal of information in EBP.

The State University of New York's Downstate Medical Center offers a (medically oriented) online course in EBP, including a brief but useful glossary.

The Major Sources of Research for use in EBP:

The Cochrane Collaboration [ www.cochrane.org ] sets standards for reviews of medical, health and mental health treatments and offers "systematic reviews" of related research by disorder. The Cochrane Reviews offer a summary of international published and sometimes pre-publication research. Cochrane also offers Methodological Abstracts to orient researchers and research consumers alike.

The Campbell Collaboration [ www.campbellcollaboration.org ] offers reviews of the impact of social service programs. "The Campbell Collaboration (C2) is an organization that aims to help people make well-informed decisions about the effects of interventions in the social, behavioral and educational arenas. C2's objectives are to prepare, maintain and disseminate systematic reviews of studies of interventions. C2 acquires and promotes access to information about trials of interventions. C2 builds summaries and electronic brochures of reviews and reports of trials for policy makers, practitioners, researchers and the public."

C2 SPECTR is a registry of over 10,000 randomized and possibly randomized trials in education, social work and welfare, and criminal justice.

C2 RIPE [Register of Interventions and Policy Evaluation] offers researchers, policymakers, practitioners, and the public free access to reviews and review-related documents. These materials cover four content areas: Education, Crime and Justice, Social Welfare, and Methods.

The United States government also offers treatment guidelines based on EBP principles at the National Guideline Clearinghouse [ http://www.guideline.gov/ ]. This site includes very good information on medication, as well as very clear statements of concern about medications indicated in guidelines which later prove to have limitations.

The U.S. government provides information on ongoing, government sponsored, clinical trials.

Other Online Resources for EBP and Treatment Guidelines Derived from EBP Criteria and Procedures:

The American Psychiatric Association offers Practice Guidelines. Please be aware that the number of practice guidelines is small, and existing guidelines may be up to 50 pages in length. If you are not allowed to enter via this hyperlink, paste the following URL into your browser: http://www.psych.org/psych_pract/treatg/pg/prac_guide.cfm

The Agency for Healthcare Research and Quality (AHRQ) also offers outcome research information, including an alphabetical listing of outcome studies.

Note that there are a growing number of commercial [.com] sites that offer consultation regarding EBP. It is not always easy to determine their organizational structure and purposes, the basis of their recommendations, and any potential conflicts of interest. In this regard, the sites of government and of professional organizations are "better" resources, as their purposes, missions and funding sources are generally clearer and publicly stated.

References:

American Psychological Association. (2006). APA presidential task force on evidence-based practice. Washington, DC: Author.

Dobson, K., & Craig, K. (1998). Empirically supported therapies: Best practice in professional psychology. Thousand Oaks, CA: Sage.

Drisko, J., & Grady, M. (2012). Evidence-based practice in clinical social work. New York: Springer-Verlag.

Elwood, J. M. (2007). Critical appraisal of epidemiological studies and clinical trials (3rd ed.). New York: Oxford University Press.

Gambrill, E. (2003). Evidence-based practice: Implications for knowledge development and use in social work. In A. Rosen & E. Proctor (Eds.), Developing practice guidelines for social work intervention (pp. 37-58). New York: Columbia University Press.

Gibbs, L. (2003). Evidence-based practice for the helping professions. New York: Wadsworth.

Gilgun, J. (2005). The four cornerstones of qualitative research. Qualitative Health Research, 16(3), 436-443.

Howard, M., McMillen, C., & Pollio, D. (2003). Teaching evidence-based practice: Toward a new paradigm for social work education. Research on Social Work Practice, 13, 234-259.

Mace, C., Moorey, S., & Roberts, B. (Eds.). (2001). Evidence in the psychological therapies: A critical guide for practitioners. Philadelphia, PA: Taylor & Francis.

Mantzoukas, S. (2008). A review of evidence-based practice, nursing research and reflection: Levelling the hierarchy. Journal of Clinical Nursing, 17(2), 214-223.

Roberts, A., & Yeager, K. (Eds.). (2004). Evidence-based practice manual: Research and outcome measures in health and human services. New York: Oxford University Press.

Sackett, D., Rosenberg, W., Muir Gray, J., Haynes, R., & Richardson, W. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312, 71-72. http://cebm.jr2.ox.ac.uk/ebmisisnt.html

Sackett, D., Richardson, W., Rosenberg, W., & Haynes, R. (1997). Evidence-based medicine: How to practice and teach EBM. New York: Churchill Livingstone.

Simpson, G., Segall, A., & Williams, J. (2007). Social work education and clinical learning: Reply to Goldstein and Thyer. Clinical Social Work Journal, 35, 33-36.

Smith, S., Daunic, A., & Taylor, G. (2007). Treatment fidelity in applied educational research: Expanding the adoption and application of measures to ensure evidence-based practice. Education & Treatment of Children, 30(4), 121-134.

Stout, C., & Hayes, R. (Eds.). (2005). The evidence-based practice: Methods, models, and tools for mental health professionals. Hoboken, NJ: Wiley.

Stuart, R., & Lilienfeld, S. (2007). The evidence missing from evidence-based practice. American Psychologist, 62(6), 615-616.

Trinder, L., & Reynolds, S. (2000). Evidence-based practice: A critical appraisal. New York: Blackwell.

Wampold, B. (2007). Psychotherapy: The humanistic (and effective) treatment. American Psychologist, 62(8), 857-873.

WHAT IS EBP

Evidence based practice (EBP) might best be viewed as an ideology or a "public idea" in Robert Reich's (1988) terminology. The movement began with the work of Scottish physician Archie Cochrane, who sought to identify "treatments that work" using the results of experimental research. Another important aspect of Cochrane's interest was to identify and end treatments that do harm or are not effective. In practice, the idea was to supplement professional decision making with the latest research knowledge. [It is worth noting that some critics might argue "replace" professional decision making would be the more accurate term.] The goal was to enhance the scientific base of professional practice in several disciplines - medicine, nursing, psychology, social work, etc. In turn, educational efforts in these disciplines could be oriented to provide beginning professionals with effective tools and a model for the continuing improvement and renewal of their professional practices.

EBP as described here is not the same as an empirically supported treatment [EST]. ESTs are variously defined, but are basically treatments or services that have been empirically studied and found to be helpful or effective - in one or more settings. The difference is that ESTs may be treatments or services that are not fully replicable in other settings, and most have not been studied in multiple contexts. ESTs may not have been replicated - tested more than once - to ensure the result will be the same or similar in another setting. The efforts of the Cochrane Collaboration include setting rigorous standards for examining the methods by which outcome research is done as well as for reporting research results. This is to ensure transparency in their findings [so that others can know exactly how the conclusions were drawn] and not to exaggerate the results of a single test of any treatment.

Why EBP? Some Say Any Other Approach Is Unethical

Some advocates argue that to treat anyone using treatments without known efficacy is unethical. That is, if we know a given medicine, substance abuse program, or treatment for attachment problems works better than another treatment, it is an ethical obligation to use it in order to best serve clients or patients. This is an argument that is hard to challenge - at least in an ideal world. Given strong and unambiguous research evidence that is clearly useful to a given practice situation, and consistent with the client's world view and values, using the EBP "best treatment" is the best way to go.

Policy and Funding Issues

In social work and psychology, advocates have also argued that only interventions with demonstrated efficacy should be supported financially. Such an argument links demonstrations of efficacy with the funding structure of the current managed care environment. It may be seen as either a way to best use limited dollars or yet another method to curtail funding for costly services. Without provision of adequate funds to do thorough research on the great variety of treatments in use, the requirement of proven efficacy may be used as a tool to limit treatment services.

Assessing EBP -- Some Key Issues

In psychology, the initial unveiling of "empirically validated treatments" by an American Psychological Association Task Force brought forth both interest and criticism. It also brought out differences regarding interpretations of the existing research literature and regarding the merits of certain research methods. One key concern was the over-reliance on randomized controlled trials [RCTs]. An RCT is an experiment in which participants are randomly assigned to either a treatment or a control group. Ideally, neither participant nor treating clinician knows which group is which. After a course of treatment (or control), improvement is determined by comparing pre-treatment status with post-treatment status. If the treated group improves significantly more than the controls, we can say the treatment caused the change and that the treatment works (better than no treatment). In another form of RCT, the best known treatment is compared to a new treatment using random assignment. If the new treatment produces better results than does the standard treatment, it is viewed as empirically supported and "more efficacious."

Efficacy versus Effectiveness

Some practitioners argued that RCTs don't always reflect "real world" conditions well, so the results of such studies may not be the same as what is found in real clinics. The core of the concern is that RCTs often use carefully assessed participants who have only a single disorder and often have relatively strong social supports. Real world clinics are rarely able to undertake similarly detailed assessments and, even if they could, would often have to treat people with co-existing (co-morbid) conditions, less persistence, perhaps fewer social supports and perhaps lower motivation to be in treatment. Thus carefully run RCTs reflect laboratory conditions rather than real world conditions. The distinction is known as "effectiveness" versus "efficacy." Laboratory RCTs produce knowledge about the "efficacy" of a treatment - that it works under ideal conditions. Experimental studies done under less carefully defined conditions, reflecting the variation in real world clinics, are known as "effectiveness" studies.

Conceptualizations of Disorders

It should be noted that most researchers undertaking RCTs assume that the problems or disorders they are studying are completely and adequately defined. In mental health, the definition of problems most often follows the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) or the World Health Organization's ICD manual. These definitions vary in clarity and in their own empirical validation.

Social workers adopt a world view that suggests problems are best understood by viewing "persons in situations." That is, both external environmental and social factors, as well as internal health and psychological factors, are important in understanding the whole person. This perspective was partially incorporated in DSM-IV's Axes IV and V, but only in a summary form.

Operational Definitions of Problems

Simply put, EBP generally applies operational definitions of problems in RCT reviews of treatment effects. This is consistent with the medical model of research and with general use in psychology and social work research. The potential limitation is that such definitions of target problems locate the problem within the individual and (mostly) ignore social circumstances, supportive and/or oppressive. This may represent a limited definition of the target problem or a flaw in conceptualization.

In much organic medical treatment, causes or etiologies may be more clearly identified than is possible (or at least currently possible) in the world of mental health and social problems. Thus applying an outcome model that assumes a single, clearly identified "cause" with symptoms that directly reflect it may, or may not, be optimal. Further, different "doses" of treatment may be identifiable for organic medical conditions but may be less clear cut in the functional, mental health and social world. Both conceptual and operational diagnoses in mental health pose some challenges, and multiple, comorbid disorders are commonplace -- making real world practice quite different from tightly controlled and extensively tested experimental studies. (This ties back into the efficacy versus effectiveness issue described above.)

Common Factors versus Specific Techniques

Some argue that treatment effects are due more to "common factors" shared by therapies than to specific treatment techniques. The level of client motivation, the strength and quality of the therapeutic relationship, a shared vision of what treatment will include, a shared sense of hope or expectancy of improvement, and even placebo effects are elements of treatment common across differences in theory and technique -- especially in psychotherapy and social services. RCTs are often designed to test differences of technique, but ignore or limit the role of common factors.

Several meta-analytic studies of psychotherapy for adults demonstrate empirically that several types of therapy for depression and anxiety are effective, and roughly equally so. This suggests that common factors, rather than differences in treatment technique, generate the roughly equivalent change (at least for these disorders).
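
The pooling that such meta-analyses rely on can be sketched briefly. Below is a minimal, illustrative Python example of fixed-effect, inverse-variance pooling, one common approach; the study names, effect sizes and variances are all invented.

    # Hypothetical sketch of fixed-effect, inverse-variance meta-analytic pooling.
    # All study names and numbers are invented for illustration.
    import math

    studies = [
        {"name": "Trial A", "d": 0.45, "var": 0.04},  # d = effect size (Cohen's d)
        {"name": "Trial B", "d": 0.60, "var": 0.09},  # var = its sampling variance
        {"name": "Trial C", "d": 0.30, "var": 0.02},
    ]

    # Each study is weighted by the inverse of its variance, so larger and more
    # precise studies count for more in the pooled estimate.
    weights = [1.0 / s["var"] for s in studies]
    pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"Pooled d = {pooled_d:.2f} (SE = {pooled_se:.2f})")

Comparing pooled effects of this kind across therapy types is what underlies the rough-equivalence claim above.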

On the other hand, Reid (1997) did a meta-analysis of social work interventions for several quite different problems (mental retardation, smoking cessation, substance abuse, etc.). He found many types of treatments were helpful, but behavioral and cognitive approaches appeared to work better than did the other techniques. Note, however, that the study compares "apples and oranges," aggregating dissimilar problems.

The common factors versus specific techniques question is as yet unresolved. Some weigh it heavily; others believe it is not particularly important.

Variation Among Clients, Clinicians and the Treatments They Deliver

Since most quantitative experimental studies are based on group means, we know that "on average" treatments generate a certain effect. This is valuable information. Yet it does not help the clinician distinguish which specific client is like the mean responder and who may differ. With medication, some people respond to a smaller-than-average dose; others need more than the average to be helped. We might assume the same is true in mental health - some people respond with less effort (or are better able to use opportunities and resources) while others will need much more help (or are less able to use their resources and opportunities) to improve. Thus the clinician is left to think critically and fit aggregate treatment results to the specific, unique reality of a given client.

We must also assume that clinicians vary in their ability to deliver any given treatment. Referral may be indicated when the best practice is a treatment in which one is not fully trained.

We can also assume that there is variation in effectiveness even among well trained clinicians. Unlike pills, mental health treatments appear heavily influenced by relationship factors and expectancy factors.

The Client's Views About the Treatment

In a profession that supports autonomous decision making by the client or client system, clinical social workers must ask clients about their views of what EBP suggests is the most likely effective treatment. If the client has concerns about the treatment, these views must be honored. Eileen Gambrill has developed this idea in several published articles.

Critical thinking and efforts to find knowledge are needed, along with efforts to individualize treatment to the person and environment (including culture) of the client.

Practice Wisdom and What Professionals Can Do

It would seem wise to allow professionals to use their knowledge, skills, training and experience to determine if the best available research knowledge fits well with the circumstances at hand. This may take some wisdom and judgment; it is not automatic. Of course, some supporters of EBP believe the purpose of EBP is in large part to limit such application of practice wisdom.

It may also be the case that what research shows is most likely to help, based on others' experiences, may not be something the professional is trained to provide or comfortable providing. Referrals may be made in such instances.

Racial, Ethnic and Social Diversity

Many scholars note that there is very little research on services and treatments for populations of color, immigrant populations (who may have culturally different ideas about mental health and its treatment), class differences in treatment effectiveness, differences in sexual orientation, and sometimes gender differences. Research on children, teens and the elderly is also often minimal. EBP, like much of medicine, assumes people are people and that treatments are universally effective. This may often be so for organic disorders, but it is less certain for socially complex concerns such as mental disorders. Research on the effectiveness of many treatments with diverse populations is lacking. This is a major shortcoming of EBP at this time.

Reference:

Reich, R. (Ed.). (1988). The power of public ideas. Cambridge, MA: Ballinger Publishing.

THE STEPS OF EBP

The Steps of the EBP Practice Decision-Making Process

There are several steps in doing EBP, but the number varies a bit by author. Still, the key content is essentially the same in them all.

Drisko & Grady (2012) have worked carefully to honor client values and preferences along with research evidence and clinical expertise in formulating these six steps of EBP:

1) Drawing on client needs and circumstances learned in a thorough assessment, identify answerable practice questions and related research information needs;
2) Efficiently locate relevant research knowledge;
3) Critically appraise the quality and applicability of this knowledge to the client's needs and situation;
4) Discuss the research results with the client to determine how likely effective options fit with the client's values and goals;
5) Synthesizing the client's clinical needs and circumstances with the relevant research, develop a shared plan of intervention collaboratively with the client;
6) Implement the intervention.

Our care in wording the steps of EBP starts with the fact that doing EBP rests first on a well done and thorough clinical assessment. This is not directly stated in the EBP practice decision-making model, but it is the foundation on which all good intervention planning rests (Drisko & Grady, 2011). We also view intervention or treatment planning as participatory and collaborative between client and clinician - not a top-down process (as it appears in many EBM/EBP textbooks). Client values and preferences are key parts of EBP. Finally, clinical expertise is needed to ensure that the best research evidence really fits the views and needs of this client in this situation.
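
As a purely illustrative aid, the sketch below encodes the six steps above as a simple Python case-record checklist that an agency might adapt for documentation. The field names are hypothetical and are not part of any published EBP model.

    # Illustrative only: the six EBP steps above as a documentation checklist.
    # Field names are hypothetical, not part of any published EBP model.
    from dataclasses import dataclass, field

    @dataclass
    class EBPCaseRecord:
        practice_question: str = ""   # Step 1: answerable question from assessment
        evidence_located: list = field(default_factory=list)  # Step 2: sources found
        appraisal_notes: str = ""     # Step 3: quality and fit to this client
        client_preferences: str = ""  # Step 4: client's view of the options
        shared_plan: str = ""         # Step 5: collaboratively developed plan
        implemented: bool = False     # Step 6: intervention under way

        def ready_to_implement(self) -> bool:
            """True only when steps 1-5 are all documented."""
            return all([self.practice_question, self.evidence_located,
                        self.appraisal_notes, self.client_preferences,
                        self.shared_plan])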

Additional Steps?

Step 7. A few authors (Gibbs, for one) appear to make practice evaluation an aspect of EBP. That is, the professional should audit the intervention (to verify it was done appropriately) and evaluate its yield. This makes some sense, but note that the practice evaluation of a single case would be done using methods quite different from those used in EBP. Single case or single system designs can help identify progress, but they are based on replication logic rather than the sampling logic underlying experimental research. That is, the case studies one would use in practice evaluation are not highly valued in EBP research summaries. Still, practice evaluation is a key part of all good practice.
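
To show how different that single-case logic is from the RCT sketch earlier, here is a minimal, hypothetical Python example of an AB single-case design: repeated measures of one client during a baseline phase (A) and an intervention phase (B). The scores and the simple non-overlap index are illustrative only.

    # Hypothetical sketch of an AB single-case (practice evaluation) design.
    # All scores are invented; a real evaluation would use a validated scale.
    baseline = [22, 21, 23, 22, 24]      # weekly symptom scores before intervention (A)
    intervention = [20, 18, 15, 13, 12]  # weekly symptom scores during intervention (B)

    baseline_mean = sum(baseline) / len(baseline)
    intervention_mean = sum(intervention) / len(intervention)

    # One simple proxy for visual analysis: how many intervention-phase points
    # fall below the best (lowest) baseline point? (A crude non-overlap index.)
    non_overlap = sum(1 for score in intervention if score < min(baseline))

    print(f"Baseline mean: {baseline_mean:.1f}, intervention mean: {intervention_mean:.1f}")
    print(f"{non_overlap} of {len(intervention)} intervention points fall below every baseline point")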

Step 8. A few authors (Gibbs, for one) also add sharing your results with others and working toward improving the quality of available evidence. This would be useful, but again it does not necessarily draw on the same core logic of experimental research that EBP emphasizes. In fact, case studies are often viewed as the least useful source of evidence in many EBP "evidence hierarchies." Note, however, that such work may be very helpful in identifying for whom, and in what circumstances, the best research evidence does not work or is not appropriate. Ironically, very small scale research may be very useful in shaping how, when and where to use large scale experimental evidence to best advantage. Clinicians should publish about their work, but individual case outcomes pose ethical challenges and may not be much valued within EBP hierarchies of evidence.

The University of Oxford offers a fine page on the Steps of EBM.

Note that all steps are meant to be transparent and replicable by others. That is, the steps should be so clear that you could re-do them yourself with enough time and access. It also means many things are accepted at face value (or as face valid), such as definitions of mental and social disorders (usually defined via DSM or ICD), though these categories do change over time. Measures of treatments are assumed to be adequate, valid, reliable and complete. Treatments, though often only broadly described, are assumed to be replicable by others in different settings, with different training and with different backgrounds.

Note, too, that EBP focuses on the outcome of treatment, not the processes by which change occurs. Understanding both outcome and change process is the cornerstone of science.

References

Drisko, J. & Grady, M. (2012). Evidence-based practice in clinical social work. New York: Springer-Verlag.

Gibbs, L. (2003). Evidence-based practice for the helping professions. New York: Wadsworth.


RATING THE EVIDENCE

While assessing research evidence on any given topic can be very complex, EBP reviews are categorized in a manner designed to convey quality in a simple format. (Note, however, there are a lot of assumptions built into -- and omitted from -- such ratings!)

Here is an example of the use of evidence hierarchies in EBP. It is taken from the U.S. Department of Health and Human Services' National Guideline Clearinghouse, which (as the name implies) collects and sets standards for treatment guidelines. The categories themselves are clearly imported from an uncited source used in the United Kingdom.

Evidence Categories

I: Evidence obtained from a single randomised [sic - British spelling from the original] controlled trial or a meta-analysis of randomised controlled trials

IIa: Evidence obtained from at least one well-designed controlled study without randomisation

IIb: Evidence obtained from at least one well-designed quasi-experimental study [i.e., no randomization and use of existing groups]

III: Evidence obtained from well-designed non-experimental descriptive studies, such as comparative studies, correlation studies, and case-control studies

IV: Evidence obtained from expert committee reports or opinions and/or clinical experience of respected authorities

Recommendation Grades

Grade A - At least one randomised controlled trial as part of a body of literature of overall good quality and consistency addressing the specific recommendation (evidence level I), without extrapolation

Grade B - Well-conducted clinical studies but no randomised clinical trials on the topic of the recommendation (evidence levels II or III); or extrapolated from level I evidence

Grade C - Expert committee reports or opinions and/or clinical experiences of respected authorities (evidence level IV), or extrapolated from level I or II evidence. This grading indicates that directly applicable clinical studies of good quality are absent or not readily available.

[Retrieved Feb 20, 2007 from http://www.guideline.gov/summary/summary.aspx?doc_id=5066&nbr=003550&string=eating+AND+disorders ]

One can plainly see that the Evidence Categories, or hierarchies, are used to "grade" the Recommendations that constitute practice guidelines. Note too that research evidence based on multiple RCTs is privileged, while work based on quasi-experiments, correlational studies, case studies or qualitative research is viewed as not particularly useful. Quality of conceptualization (of disorder and of treatment) is assumed; sample sizes and composition are not mentioned beyond randomization; generalization from prior work is assumed to be non-problematic; analyses are presumed to be done appropriately; and issues of diversity and context are not considered.
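
To underline how mechanical such a hierarchy is, here is a small, purely illustrative Python encoding of the categories quoted above. The design labels are hypothetical shorthand for this sketch; note everything the lookup ignores.

    # Illustrative only: a literal encoding of the evidence categories above.
    # The design labels are hypothetical shorthand for this sketch.
    EVIDENCE_LEVELS = {
        "randomised controlled trial": "I",
        "meta-analysis of randomised controlled trials": "I",
        "controlled study without randomisation": "IIa",
        "quasi-experimental study": "IIb",
        "non-experimental descriptive study": "III",
        "expert opinion or clinical experience": "IV",
    }

    def evidence_level(design: str) -> str:
        """Map a study design to its category; everything else is unrated."""
        return EVIDENCE_LEVELS.get(design, "unrated")

    # The lookup ignores sample size, measurement quality, conceptualization of
    # the disorder, and the population actually studied - the very assumptions
    # criticized above.
    print(evidence_level("quasi-experimental study"))               # IIb
    print(evidence_level("expert opinion or clinical experience"))  # IV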

_______________________

Another, quite similar, rating system is used by the United States Preventive Services Task Force. They use the following system:

Strength of Recommendations

The U.S. Preventive Services Task Force (USPSTF) grades its recommendations according to one of five classifications (A, B, C, D, I) reflecting the strength of evidence and magnitude of net benefit (benefits minus harms).

A - The USPSTF strongly recommends that clinicians provide [the service] to eligible patients. The USPSTF found good evidence that [the service] improves important health outcomes and concludes that benefits substantially outweigh harms.

B - The USPSTF recommends that clinicians provide [this service] to eligible patients. The USPSTF found at least fair evidence that [the service] improves important health outcomes and concludes that benefits outweigh harms.

C - The USPSTF makes no recommendation for or against routine provision of [the service]. The USPSTF found at least fair evidence that [the service] can improve health outcomes but concludes that the balance of benefits and harms is too close to justify a general recommendation.

D - The USPSTF recommends against routinely providing [the service] to asymptomatic patients. The USPSTF found at least fair evidence that [the service] is ineffective or that harms outweigh benefits.

I - The USPSTF concludes that the evidence is insufficient to recommend for or against routinely providing [the service]. Evidence that the [service] is effective is lacking, of poor quality, or conflicting, and the balance of benefits and harms cannot be determined.

Quality of Evidence

The USPSTF grades the quality of the overall evidence for a service on a 3-point scale (good, fair, poor):

Good: Evidence includes consistent results from well-designed, well-conducted studies in representative populations that directly assess effects on health outcomes.

Fair: Evidence is sufficient to determine effects on health outcomes, but the strength of the evidence is limited by the number, quality, or consistency of the individual studies, generalizability to routine practice, or indirect nature of the evidence on health outcomes.

Poor: Evidence is insufficient to assess the effects on health outcomes because of the limited number or power of studies, important flaws in their design or conduct, gaps in the chain of evidence, or lack of information on important health outcomes.

Once again, research evidence based on multiple RCTs is privileged, while work based on quasi-experiments, correlational studies, case studies or qualitative research is viewed as not particularly useful. Quality of conceptualization (of disorder and of treatment) is assumed; sample sizes and composition are not mentioned beyond randomization; generalization from prior work is assumed to be non-problematic; analyses are presumed to be done appropriately; and issues of diversity and context are not considered. Still, the clarity is valuable - and useful if you understand the rating system and its underlying logic.

This said, note that guidelines which lack any clear and explicit linkage to a research-based evidence base are labeled as such in the Guideline Clearinghouse materials. An example is a search for "Asperger's Syndrome," which yields Assessment and Screening Guidelines from the California Department of Developmental Services [as of Feb 2007]. These guidelines lack any clear linkage to a specific research evidence base. Specifically, they state:

"TYPE OF EVIDENCE SUPPORTING THE RECOMMENDATIONS

The type of evidence supporting the recommendations is not specifically stated."

[Retrieved February 20, 2007 from http://www.guideline.gov/summary/summary.aspx?doc_id=8269&nbr=004601&string=asperger''s+AND+syndrome ]

Such labeling does make plain to practitioners that there is no obvious research base for the guidelines presented. Still, the absence of any stated evidence is very unhelpful and leaves the criteria by which judgments were made wholly unclear.